1062 Commits (ca4b8f30e0bf40cf58dcb3f3e6118832a60348c8)

Author SHA1 Message Date
comfyanonymous 2ca8f6e23d Make the stochastic fp8 rounding reproducible. 6 months ago
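Several commits in this log touch stochastic fp8 rounding. As a conceptual aside: instead of always rounding to the nearest representable value, stochastic rounding rounds down or up at random with probability proportional to the remainder, so it is unbiased in expectation. The sketch below illustrates the principle on integers (for fp8 the step size varies and subnormals need special handling, per the commits further down) and shows how a seeded generator makes the rounding reproducible; it is not ComfyUI's actual implementation.

```python
import torch

def stochastic_round(x: torch.Tensor, generator=None) -> torch.Tensor:
    # Round down, then round up with probability equal to the fractional
    # part, so the result is unbiased: E[stochastic_round(x)] == x.
    floor = torch.floor(x)
    frac = x - floor
    noise = torch.rand(x.shape, generator=generator, device=x.device)
    return floor + (noise < frac).to(x.dtype)

# Seeding the generator makes the random rounding decisions reproducible.
gen = torch.Generator().manual_seed(0)
print(stochastic_round(torch.tensor([0.25, 0.75]), generator=gen))
```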
comfyanonymous 7985ff88b9 Use less memory in float8 lora patching by doing calculations in fp16. 6 months ago
comfyanonymous c6812947e9 Fix potential memory leak. 6 months ago
comfyanonymous 9230f65823 Fix some controlnets OOMing when loading. 6 months ago
comfyanonymous 8ae23d8e80 Fix onnx export. 6 months ago
comfyanonymous 7df42b9a23 Fix dora. 6 months ago
comfyanonymous 5d8bbb7281 Cleanup. 6 months ago
comfyanonymous 2c1d2375d6 Fix. 6 months ago
Simon Lui 64ccb3c7e3 Rework IPEX check for future inclusion of XPU into PyTorch upstream and do a bit more optimization of ipex.optimize(). (#4562) 6 months ago
Scorpinaus 9465b23432 Added SD15_Inpaint_Diffusers model support for unet_config_from_diffusers_unet function (#4565) 6 months ago
comfyanonymous c0b0da264b Missing imports. 6 months ago
comfyanonymous c26ca27207 Move calculate function to comfy.lora 6 months ago
comfyanonymous 7c6bb84016 Code cleanups. 6 months ago
comfyanonymous c54d3ed5e6 Fix issue with models staying loaded in memory. 6 months ago
comfyanonymous c7ee4b37a1 Try to fix some lora issues. 6 months ago
David 7b70b266d8 Generalize macOS version check for force-upcast-attention (#4548)
The existing check forces upcast attention only for the exact macOS version strings "14.5" and "14.6". My computer returns the string "14.6.1" for `platform.mac_ver()[0]`, so this generalizes the comparison to catch more versions.

I am running macOS Sonoma 14.6.1 (the latest version) and was seeing black image generation on previously functional workflows after recent software updates. This PR solved the issue for me.

See comfyanonymous/ComfyUI#3521
6 months ago
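For illustration, a check generalized along the lines this PR describes might parse the version string into a tuple instead of comparing exact strings. The helper name and the exact version bound below are assumptions, not the merged code:

```python
import platform

def macos_needs_attention_upcast() -> bool:  # hypothetical helper name
    ver = platform.mac_ver()[0]  # e.g. "14.6.1"; empty string off macOS
    if not ver:
        return False
    parts = tuple(int(p) for p in ver.split("."))
    # Catches 14.5, 14.6, 14.6.1, 14.7, ... rather than exact strings.
    return parts >= (14, 5)
```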
comfyanonymous 8f60d093ba Fix issue. 6 months ago
comfyanonymous 843a7ff70c fp16 is actually faster than fp32 on a GTX 1080. 6 months ago
comfyanonymous a60620dcea Fix slow performance on 10 series Nvidia GPUs. 6 months ago
comfyanonymous 015f73dc49 Try a different type of flux fp16 fix. 6 months ago
comfyanonymous 904bf58e7d Make --fast work on pytorch nightly. 6 months ago
Svein Ove Aas 5f50263088 Replace use of .view with .reshape (#4522)
When generating images with fp8_e4_m3 Flux and batch size >1, using --fast, ComfyUI throws a "view size is not compatible with input tensor's size and stride" error pointing at the first of these two calls to view.

As reshape is semantically equivalent to view except for working on a broader set of inputs, there should be no downside to changing this. The only difference is that it clones the underlying data in cases where .view would error out. I have confirmed that the output still looks as expected, but cannot confirm that no mutable use is made of the tensors anywhere.

Note that --fast is only marginally faster than the default.
6 months ago
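A minimal reproduction of the distinction: .view requires the requested shape to be expressible over the tensor's existing strides, while .reshape silently falls back to a copy when it is not.

```python
import torch

t = torch.randn(4, 8).t()  # transposed view: same storage, non-contiguous

try:
    t.view(-1)             # fails: shape not compatible with size and stride
except RuntimeError as e:
    print("view failed:", e)

flat = t.reshape(-1)       # succeeds by cloning the underlying data
print(flat.shape)          # torch.Size([32])
```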
comfyanonymous 03ec517afb Remove useless line, adjust windows default reserved vram. 6 months ago
comfyanonymous 510f3438c1 Speed up fp8 matrix mult by using better code. 6 months ago
comfyanonymous ea63b1c092 Support the Simpletrainer LyCORIS format. 6 months ago
comfyanonymous 9953f22fce Add --fast argument to enable experimental optimizations.
Optimizations that might break things/lower quality will be put behind
this flag first and might be enabled by default in the future.

Currently the only optimization is float8_e4m3fn matrix multiplication on
4000/ADA series Nvidia cards or later. If you have one of these cards you
will see a speed boost when using fp8_e4m3fn flux for example.
6 months ago
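As background, float8_e4m3fn is a storage dtype in PyTorch. The sketch below shows only the dtype round-trip such a path builds on; the actual speed boost comes from hardware fp8 matrix-multiply kernels on 4000/ADA-series GPUs, which this fallback-style example does not reproduce.

```python
import torch

w = torch.randn(64, 64)
w_fp8 = w.to(torch.float8_e4m3fn)   # store weights in fp8 (lossy)
x = torch.randn(8, 64)

# Ordinary matmul is not defined for fp8 tensors on most backends; a
# fallback path upcasts, while --fast uses native fp8 kernels on Ada GPUs.
y = x @ w_fp8.to(torch.float32).t()
print(y.shape, w_fp8.dtype)
```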
comfyanonymous d1a6bd6845 Support loading long CLIP-L models with the CLIP loader node. 6 months ago
comfyanonymous 83dbac28eb Properly set whether the clip text model has a pooled projection instead of using a hack. 6 months ago
comfyanonymous 538cb068bc Make cast_to a nop if weight is already good. 6 months ago
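The idea behind that commit is a simple early return: if the weight already has the requested dtype and device, hand it back untouched instead of paying for an allocation and a copy on every access. A sketch under assumed names:

```python
import torch

def cast_to(weight: torch.Tensor, dtype=None, device=None) -> torch.Tensor:
    # No-op when the tensor is already "good": skip the copy entirely.
    if (dtype is None or weight.dtype == dtype) and \
            (device is None or weight.device == device):
        return weight
    return weight.to(device=device, dtype=dtype)
```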
comfyanonymous 1b3eee672c Fix potential issue with multi devices. 6 months ago
comfyanonymous 9eee470244 New load_text_encoder_state_dicts function.
Now you can load text encoders straight from a list of state dicts.
6 months ago
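A hypothetical usage sketch: the function name comes from the commit message, but the module path and call shape here are assumptions.

```python
import torch
import comfy.sd  # assumed location of the new function

# Build the list of state dicts yourself, from any source you like.
sds = [torch.load(p, map_location="cpu") for p in ("clip_l.pt", "t5xxl.pt")]
clip = comfy.sd.load_text_encoder_state_dicts(sds)
```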
comfyanonymous 045377ea89 Add a --reserve-vram argument if you don't want comfy to use all of it.
--reserve-vram 1.0 for example will make ComfyUI try to keep 1GB vram free.

This can also be useful if workflows are failing because of OOM errors but
in that case please report it if --reserve-vram improves your situation.
6 months ago
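A minimal sketch of how such a flag can be wired up; the flag name is from the commit, while the parsing and unit conversion are illustrative.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--reserve-vram", type=float, default=None, metavar="GB",
                    help="Amount of VRAM (in GB) to keep free.")
args = parser.parse_args()

# Convert GB to bytes; the memory manager subtracts this from usable VRAM.
reserved_vram = int(args.reserve_vram * 1024**3) if args.reserve_vram else 0
```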
comfyanonymous 4d341b78e8 Bug fixes. 6 months ago
comfyanonymous 6138f92084 Use better dtype for the lowvram lora system. 6 months ago
comfyanonymous be0726c1ed Remove duplication. 6 months ago
comfyanonymous 4506ddc86a Better subnormal fp8 stochastic rounding. Thanks Ashen. 6 months ago
comfyanonymous 20ace7c853 Code cleanup. 6 months ago
comfyanonymous 22ec02afc0 Handle subnormal numbers in float8 rounding. 6 months ago
comfyanonymous 39f114c44b Less broken non-blocking? 6 months ago
comfyanonymous 6730f3e1a3 Disable non-blocking.
It fixed some perf issues but caused other issues that need to be debugged.
6 months ago
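For context, this is the kind of transfer being toggled: non_blocking=True lets a host-to-device copy overlap with compute when the source is in pinned memory, but reading the destination tensor before the copy has finished is exactly the sort of subtle bug that can force a revert like this one.

```python
import torch

if torch.cuda.is_available():
    cpu_t = torch.randn(1024, 1024).pin_memory()   # pinned memory required
    gpu_t = cpu_t.to("cuda", non_blocking=True)    # returns before copy ends
    torch.cuda.synchronize()                       # wait before trusting gpu_t
    print(gpu_t.sum())
```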
comfyanonymous 73332160c8 Enable non-blocking transfers in lowvram mode. 6 months ago
comfyanonymous 2622c55aff Automatically use the RF variant of dpmpp_2s_ancestral for rectified flow (RF) models. 6 months ago
Ashen 1beb348ee2 dpmpp_2s_ancestral_RF for rectified flow (Flux, SD3 and Auraflow). 6 months ago
comfyanonymous d31df04c8a Indentation. 6 months ago
Xrvk e68763f40c Add Flux model support for InstantX style controlnet residuals (#4444)
* Add Flux model support for InstantX style controlnet residuals

* Refactor Flux controlnet residual step to a separate method

* Rollback minor change

* New format for applying controlnet residuals: input->double_blocks, output->single_blocks

* Adjust XLabs Flux controlnet to fit new syntax of applying Flux controlnet residuals

* Remove unnecessary import and minor style change
6 months ago
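A conceptual sketch of the residual format described above, with "input" residuals added after the double-stream blocks and "output" residuals after the single-stream blocks. All names and block signatures here are illustrative, not the merged ComfyUI code.

```python
def apply_flux_controlnet_residuals(img, txt, double_blocks, single_blocks,
                                    control):
    # control is assumed to be {"input": [...], "output": [...]} residuals.
    inputs = control.get("input", [])
    for i, block in enumerate(double_blocks):
        img, txt = block(img, txt)              # double-stream block (assumed)
        if i < len(inputs) and inputs[i] is not None:
            img = img + inputs[i]               # add InstantX-style residual

    outputs = control.get("output", [])
    for i, block in enumerate(single_blocks):
        img = block(img, txt)                   # single-stream block (assumed)
        if i < len(outputs) and outputs[i] is not None:
            img = img + outputs[i]
    return img
```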
comfyanonymous 4f7a3cb6fb unet -> diffusion_models. 6 months ago
comfyanonymous bb222ceddb Fix loras having a weak effect when applied on fp8. 6 months ago
comfyanonymous fca42836f2 Add model_options for text encoder. 6 months ago
comfyanonymous cd5017c1c9 Allow the calculate_weight function to use a different dtype. 6 months ago
comfyanonymous 83f343146a Fix potential lowvram issue. 6 months ago