comfyanonymous
11200de970
Cleaner code.
7 months ago
comfyanonymous
037c38eb0f
Try to improve inference speed on some machines.
7 months ago
comfyanonymous
1e11d2d1f5
Better prints.
7 months ago
comfyanonymous
66d4233210
Fix.
7 months ago
comfyanonymous
591010b7ef
Support diffusers text attention flux loras.
7 months ago
comfyanonymous
08f92d55e9
Partial model shift support.
7 months ago
comfyanonymous
8115d8cce9
Add Flux fp16 support hack.
7 months ago
comfyanonymous
6969fc9ba4
Make supported_dtypes a priority list.
7 months ago
comfyanonymous
cb7c4b4be3
Workaround for lora OOM in lowvram mode.
7 months ago
comfyanonymous
1208863eca
Fix "Comfy" lora keys.
...
They are in this format now:
diffusion_model.full.model.key.name.lora_up.weight
7 months ago
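For reference, a minimal sketch of converting a diffusers-style lora to the "Comfy" key format named in this commit and the "Support for 'Comfy' lora format" commit below (target pattern: diffusion_model.full.model.key.name.lora_up.weight). The source key layout (lora_A/lora_B) and the helper name are assumptions for illustration; only the target key pattern comes from the commit messages.

```python
import safetensors.torch

def to_comfy_lora_keys(lora_sd):
    # Hypothetical source layout: "<full.model.key.name>.lora_A.weight" (down projection)
    # and "<full.model.key.name>.lora_B.weight" (up projection).
    out = {}
    for key, weight in lora_sd.items():
        if key.endswith(".lora_A.weight"):
            base = key[: -len(".lora_A.weight")]
            out[f"diffusion_model.{base}.lora_down.weight"] = weight
        elif key.endswith(".lora_B.weight"):
            base = key[: -len(".lora_B.weight")]
            out[f"diffusion_model.{base}.lora_up.weight"] = weight
        else:
            out[key] = weight
    return out

# Usage (paths are placeholders):
# lora_sd = safetensors.torch.load_file("my_lora.safetensors")
# safetensors.torch.save_file(to_comfy_lora_keys(lora_sd), "my_lora_comfy.safetensors")
```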
comfyanonymous
e1c528196e
Fix bundled embed.
7 months ago
comfyanonymous
17030fd4c0
Support for "Comfy" lora format.
...
The keys are just: model.full.model.key.name.lora_up.weight
It is supported by all models that ComfyUI supports.
Now people can just convert loras to this format instead of having to ask me
to implement them.
7 months ago
comfyanonymous
c19dcd362f
Controlnet code refactor.
7 months ago
comfyanonymous
1c08bf35b4
Support format for embeddings bundled in loras.
7 months ago
comfyanonymous
b334605a66
Fix OOMs happening in some cases.
...
A cloned model patcher sometimes reported a model was loaded on a device
when it wasn't.
7 months ago
comfyanonymous
c14ac98fed
Unload models and load them back in lowvram mode when there is no free vram.
7 months ago
comfyanonymous
2d75df45e6
Tweak Flux memory usage.
7 months ago
comfyanonymous
8edbcf5209
Improve performance on some low-end GPUs.
7 months ago
a-One-Fan
a178e25912
Fix Flux FP64 math on XPU ( #4210 )
7 months ago
comfyanonymous
78e133d041
Support simple diffusers Flux loras.
7 months ago
Silver
7afa985fba
Correct spelling 'token_weight_pars_t5' to 'token_weight_pairs_t5' ( #4200 )
7 months ago
comfyanonymous
3b71f84b50
ONNX tracing fixes.
7 months ago
comfyanonymous
0a6b008117
Fix issue with some custom nodes.
7 months ago
comfyanonymous
f7a5107784
Fix crash.
7 months ago
comfyanonymous
91be9c2867
Tweak lowvram memory formula.
7 months ago
comfyanonymous
03c5018c98
Lower lowvram memory to 1/3 of free memory.
7 months ago
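A minimal sketch of the cap this commit and the "Cap lowvram to half of free memory" commit below describe (this one lowers the fraction from 1/2 to 1/3). The function and argument names are hypothetical, not ComfyUI's actual memory-management code; only the fractions come from the commit messages.

```python
def lowvram_model_budget(requested_bytes: int, free_vram_bytes: int) -> int:
    # Never let the partially loaded model claim more than a third of free VRAM.
    return min(requested_bytes, free_vram_bytes // 3)
```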
comfyanonymous
2ba5cc8b86
Fix some issues.
7 months ago
comfyanonymous
1e68002b87
Cap lowvram to half of free memory.
7 months ago
comfyanonymous
ba9095e5bd
Automatically use fp8 for diffusion model weights if:
...
Checkpoint contains weights in fp8.
There isn't enough memory to load the diffusion model in GPU vram.
7 months ago
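A minimal sketch of the selection rule this commit describes, not ComfyUI's actual code: the arguments stand in for ComfyUI's internal bookkeeping, and the specific fp8 dtype chosen is an assumption.

```python
import torch

def pick_diffusion_dtype(checkpoint_has_fp8_weights: bool,
                         model_size_bytes: int,
                         free_vram_bytes: int,
                         preferred_dtype=torch.float16):
    # Keep fp8 weights only when both conditions from the commit message hold:
    # the checkpoint already stores fp8, and the model would not fit in free VRAM
    # at the preferred precision. The chosen fp8 dtype here is an assumption.
    if checkpoint_has_fp8_weights and model_size_bytes > free_vram_bytes:
        return torch.float8_e4m3fn
    return preferred_dtype
```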
comfyanonymous
f123328b82
Load T5 in fp8 if it's in fp8 in the Flux checkpoint.
7 months ago
comfyanonymous
63a7e8edba
More aggressive batch splitting.
7 months ago
comfyanonymous
ea03c9dcd2
Better per model memory usage estimations.
7 months ago
comfyanonymous
3a9ee995cf
Tweak regular SD memory formula.
7 months ago
comfyanonymous
47da42d928
Better Flux vram estimation.
7 months ago
Alexander Brown
ce9ac2fe05
Fix clip_g/clip_l mixup ( #4168 )
7 months ago
comfyanonymous
e638f2858a
Hack to make all resolutions work on Flux models.
7 months ago
comfyanonymous
d420bc792a
Tweak the memory usage formulas for Flux and SD.
7 months ago
comfyanonymous
d965474aaa
Make ComfyUI split batches a higher priority than weight offload.
7 months ago
comfyanonymous
1c61361fd2
Fast preview support for Flux.
7 months ago
comfyanonymous
a6decf1e62
Fix bfloat16 potentially not being enabled on mps.
7 months ago
comfyanonymous
48eb1399c0
Try to fix mac issue.
7 months ago
comfyanonymous
d7430a1651
Add a way to load the diffusion model in fp8 with UNETLoader node.
7 months ago
comfyanonymous
f2b80f95d2
Better Mac support on flux model.
7 months ago
comfyanonymous
1aa9cf3292
Make lowvram more aggressive on low memory machines.
7 months ago
comfyanonymous
eb96c3bd82
Fix .sft file loading (they are safetensors files).
7 months ago
comfyanonymous
5f98de7697
Load flux t5 in fp8 if weights are in fp8.
7 months ago
comfyanonymous
8d34211a7a
Fix old python versions no longer working.
7 months ago
comfyanonymous
1589b58d3e
Basic Flux Schnell and Flux Dev model implementation.
7 months ago
comfyanonymous
7ad574bffd
Mac supports bf16; just make sure you are using the latest pytorch.
7 months ago
comfyanonymous
e2382b6adb
Make lowvram less aggressive when there are large amounts of free memory.
7 months ago