988 Commits (925fff26fd6e7e313751a9873964d9cbfde70e6b)

Author SHA1 Message Date
ljleb 925fff26fd alternative to `load_checkpoint_guess_config` that accepts a loaded state dict (#4249) 7 months ago
  * make alternative fn
  * add back ckpt path as 2nd argument?
comfyanonymous 75b9b55b22 Fix issues with #4302 and support loading diffusers format flux. 7 months ago
Jaret Burkett 1765f1c60c FLUX: Added full diffusers mapping for FLUX.1 schnell and dev. Adds full LoRA support from diffusers LoRAs. (#4302) 7 months ago
comfyanonymous 1de69fe4d5 Fix some issues with inference slowing down. 7 months ago
comfyanonymous ae197f651b Speed up hunyuan dit inference a bit. 7 months ago
comfyanonymous 1b5b8ca81a Fix regression. 7 months ago
comfyanonymous 6678d5cf65 Fix regression. 7 months ago
TTPlanetPig e172564eea Update controlnet.py to fix the default controlnet weight as constant (#4285) 7 months ago
comfyanonymous a3cc326748 Better fix for lowvram issue. 7 months ago
comfyanonymous 86a97e91fc Fix controlnet regression. 7 months ago
comfyanonymous 5acdadc9f3 Fix issue with some lowvram weights. 7 months ago
comfyanonymous 55ad9d5f8c Fix regression. 7 months ago
comfyanonymous a9f04edc58 Implement text encoder part of HunyuanDiT loras. 7 months ago
comfyanonymous a475ec2300 Cleanup HunyuanDit controlnets. 7 months ago
  Use the ControlNetApply SD3 and HunyuanDiT node.
来新璐 06eb9fb426 feat: add support for HunYuanDit ControlNet (#4245) 7 months ago
  * add support for HunYuanDit ControlNet
  * fix hunyuandit controlnet
  * fix typo in hunyuandit controlnet
  * fix typo in hunyuandit controlnet
  * fix code format style
  * add control_weight support for HunyuanDit Controlnet
  * use control_weights in HunyuanDit Controlnet
  * fix typo
comfyanonymous 413322645e Raw torch is faster than einops? 7 months ago
comfyanonymous 11200de970 Cleaner code. 7 months ago
comfyanonymous 037c38eb0f Try to improve inference speed on some machines. 7 months ago
comfyanonymous 1e11d2d1f5 Better prints. 7 months ago
comfyanonymous 66d4233210 Fix. 7 months ago
comfyanonymous 591010b7ef Support diffusers text attention flux loras. 7 months ago
comfyanonymous 08f92d55e9 Partial model shift support. 7 months ago
comfyanonymous 8115d8cce9 Add Flux fp16 support hack. 7 months ago
comfyanonymous 6969fc9ba4 Make supported_dtypes a priority list. 7 months ago
comfyanonymous cb7c4b4be3 Workaround for lora OOM on lowvram mode. 7 months ago
comfyanonymous 1208863eca Fix "Comfy" lora keys. 7 months ago
  They are now in this format: diffusion_model.full.model.key.name.lora_up.weight
comfyanonymous e1c528196e Fix bundled embed. 7 months ago
comfyanonymous 17030fd4c0 Support for "Comfy" lora format. 7 months ago
  The keys are just: model.full.model.key.name.lora_up.weight
  It is supported by all models that ComfyUI supports.
  Now people can convert loras to this format themselves instead of having to ask me to implement them.
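The commit above describes the "Comfy" lora key convention: a lora key is simply the full dotted model weight name, prefixed with `model.`, with `.weight` replaced by `.lora_up.weight` / `.lora_down.weight`. A minimal sketch of that mapping, assuming the convention as stated (the function name and example weight path are illustrative, not ComfyUI API):

```python
def comfy_lora_keys(model_weight_name: str) -> tuple[str, str]:
    """Return the (lora_up, lora_down) key names for a model weight,
    following the "Comfy" lora format described in commit 17030fd4c0.

    `model_weight_name` is the full dotted path of a weight inside the
    model's state dict, e.g. "diffusion_model.input_blocks.0.0.weight".
    """
    base = model_weight_name
    # Drop a trailing ".weight" so the lora suffixes replace it.
    if base.endswith(".weight"):
        base = base[: -len(".weight")]
    prefix = "model." + base
    return prefix + ".lora_up.weight", prefix + ".lora_down.weight"

up, down = comfy_lora_keys("diffusion_model.input_blocks.0.0.weight")
# up   == "model.diffusion_model.input_blocks.0.0.lora_up.weight"
# down == "model.diffusion_model.input_blocks.0.0.lora_down.weight"
```

Note that the later fix in commit 1208863eca drops the `model.` prefix, so keys start directly with the model path (e.g. `diffusion_model.…`).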
comfyanonymous c19dcd362f Controlnet code refactor. 7 months ago
comfyanonymous 1c08bf35b4 Support format for embeddings bundled in loras. 7 months ago
comfyanonymous b334605a66 Fix OOMs happening in some cases. 7 months ago
  A cloned model patcher sometimes reported a model was loaded on a device when it wasn't.
comfyanonymous c14ac98fed Unload models and load them back in lowvram mode when there is no free vram. 7 months ago
comfyanonymous 2d75df45e6 Flux tweak memory usage. 7 months ago
comfyanonymous 8edbcf5209 Improve performance on some lowend GPUs. 7 months ago
a-One-Fan a178e25912 Fix Flux FP64 math on XPU (#4210) 7 months ago
comfyanonymous 78e133d041 Support simple diffusers Flux loras. 7 months ago
Silver 7afa985fba Correct spelling 'token_weight_pars_t5' to 'token_weight_pairs_t5' (#4200) 7 months ago
comfyanonymous 3b71f84b50 ONNX tracing fixes. 7 months ago
comfyanonymous 0a6b008117 Fix issue with some custom nodes. 7 months ago
comfyanonymous f7a5107784 Fix crash. 7 months ago
comfyanonymous 91be9c2867 Tweak lowvram memory formula. 7 months ago
comfyanonymous 03c5018c98 Lower lowvram memory to 1/3 of free memory. 7 months ago
comfyanonymous 2ba5cc8b86 Fix some issues. 7 months ago
comfyanonymous 1e68002b87 Cap lowvram to half of free memory. 7 months ago
comfyanonymous ba9095e5bd Automatically use fp8 for diffusion model weights if: 7 months ago
  * the checkpoint contains weights in fp8;
  * there isn't enough memory to load the diffusion model in GPU vram.
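The commit above describes a simple rule: keep fp8 weights only when the checkpoint already stores them and the model would not otherwise fit in free VRAM. A minimal sketch of that decision, assuming both conditions must hold; the function name, byte-count inputs, and the fp16 fallback are illustrative (the real fallback goes through the supported_dtypes priority list from commit 6969fc9ba4), not ComfyUI's actual API:

```python
def pick_diffusion_dtype(checkpoint_has_fp8: bool,
                         model_size_bytes: int,
                         free_vram_bytes: int) -> str:
    """Choose the dtype for diffusion model weights per commit ba9095e5bd:
    use fp8 only when the checkpoint already stores fp8 AND the model
    would not fit in the currently free VRAM at full size."""
    if checkpoint_has_fp8 and model_size_bytes > free_vram_bytes:
        return "fp8"
    # Otherwise fall back to a higher-precision dtype (illustrative;
    # the real choice comes from a per-model supported_dtypes priority list).
    return "fp16"

# 10 GiB model, 8 GiB free, checkpoint stores fp8 -> keep fp8.
dtype = pick_diffusion_dtype(True, 10 << 30, 8 << 30)
# dtype == "fp8"
```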
comfyanonymous f123328b82 Load T5 in fp8 if it's in fp8 in the Flux checkpoint. 7 months ago
comfyanonymous 63a7e8edba More aggressive batch splitting. 7 months ago
comfyanonymous ea03c9dcd2 Better per model memory usage estimations. 7 months ago
comfyanonymous 3a9ee995cf Tweak regular SD memory formula. 7 months ago
comfyanonymous 47da42d928 Better Flux vram estimation. 7 months ago