457 Commits (2d9d3ca38be4b6060b7553ca5266675fdf121ad1)

Author SHA1 Message Date
comfyanonymous 1938f5c5fe Add a force argument to soft_empty_cache to force emptying the cache. 1 year ago
comfyanonymous 7746bdf7b0 Merge branch 'generalize_fixes' of https://github.com/simonlui/ComfyUI 1 year ago
Simon Lui 2da73b7073 Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused. 1 year ago
comfyanonymous a74c5dbf37 Move some functions to utils.py 1 year ago
Simon Lui 4a0c4ce4ef Some fixes to generalize CUDA-specific functionality to Intel or other GPUs. 1 year ago
comfyanonymous 77a176f9e0 Use a common function to reshape the batch. 2 years ago
comfyanonymous 7931ff0fd9 Support SDXL inpaint models. 2 years ago
comfyanonymous 0e3b641172 Remove xformers related print. 2 years ago
comfyanonymous 5c363a9d86 Fix controlnet bug. 2 years ago
comfyanonymous cfe1c54de8 Fix controlnet issue. 2 years ago
comfyanonymous 1c012d69af It doesn't make sense for c_crossattn and c_concat to be lists. 2 years ago
comfyanonymous 7e941f9f24 Clean up DiffusersLoader node. 2 years ago
Simon Lui 18617967e5 Fix error message in model_patcher.py. Found while tinkering. 2 years ago
comfyanonymous fe4c07400c Fix "Load Checkpoint with config" node. 2 years ago
comfyanonymous f2f5e5dcbb Support SDXL t2i adapters with 3-channel input. 2 years ago
comfyanonymous 15adc3699f Move beta_schedule to model_config and allow disabling unet creation. 2 years ago
comfyanonymous bed116a1f9 Remove optimization that caused border artifacts. 2 years ago
comfyanonymous 65cae62c71 No need to check filename extensions to detect shuffle controlnet. 2 years ago
comfyanonymous 4e89b2c25a Put clip vision outputs on the CPU. 2 years ago
comfyanonymous a094b45c93 Load clipvision model to GPU for faster performance. 2 years ago
comfyanonymous 1300a1bb4c Text encoder should initially load on the offload_device, not the regular load device. 2 years ago
comfyanonymous f92074b84f Move ModelPatcher to model_patcher.py 2 years ago
comfyanonymous 4798cf5a62 Implement loras with norm keys. 2 years ago
comfyanonymous b8c7c770d3 Enable bf16-vae by default on Ampere and up. 2 years ago
comfyanonymous 1c794a2161 Fall back to slice attention if xformers doesn't support the operation. 2 years ago
comfyanonymous d935ba50c4 Make --bf16-vae work on torch 2.0 2 years ago
comfyanonymous a57b0c797b Fix lowvram model merging. 2 years ago
comfyanonymous f72780a7e3 The new smart memory management makes this unnecessary. 2 years ago
comfyanonymous c77f02e1c6 Move controlnet code to comfy/controlnet.py 2 years ago
comfyanonymous 15a7716fa6 Move lora code to comfy/lora.py 2 years ago
comfyanonymous ec96f6d03a Move text_projection to base clip model. 2 years ago
comfyanonymous 30eb92c3cb Code cleanups. 2 years ago
comfyanonymous 51dde87e97 Try to free enough VRAM for control lora inference. 2 years ago
comfyanonymous e3d0a9a490 Fix potential issue with text projection matrix multiplication. 2 years ago
comfyanonymous cc44ade79e Always shift text encoder to GPU when the device supports fp16. 2 years ago
comfyanonymous a6ef08a46a Even with forced fp16, the CPU device should never use it. 2 years ago
comfyanonymous 00c0b2c507 Initialize text encoder to target dtype. 2 years ago
comfyanonymous f081017c1a Save memory by storing text encoder weights in fp16 in most situations. Do inference in fp32 to make sure quality stays the exact same. 2 years ago
comfyanonymous afcb9cb1df All resolutions now work with t2i adapter for SDXL. 2 years ago
comfyanonymous 85fde89d7f T2I adapter SDXL. 2 years ago
comfyanonymous cf5ae46928 Controlnet/t2iadapter cleanup. 2 years ago
comfyanonymous 763b0cf024 Fix control lora not working in fp32. 2 years ago
comfyanonymous 199d73364a Fix ControlLora on lowvram. 2 years ago
comfyanonymous d08e53de2e Remove autocast from controlnet code. 2 years ago
comfyanonymous 0d7b0a4dc7 Small cleanups. 2 years ago
Simon Lui 9225465975 Further tuning and fix mem_free_total. 2 years ago
Simon Lui 2c096e4260 Add ipex optimize and other enhancements for Intel GPUs based on recent memory changes. 2 years ago
comfyanonymous e9469e732d --disable-smart-memory now disables loading models directly to VRAM. 2 years ago
comfyanonymous c9b562aed1 Free more memory before VAE encode/decode. 2 years ago
comfyanonymous b80c3276dc Fix issue with gligen. 2 years ago