2596 Commits (5f84ea63e824debfbe36efc6773e6190e8da324c)
 

Author SHA1 Message Date
Alex "mcmonkey" Goodwin 65ea6be38f
PullRequest CI Run: use pull_request_target to allow the CI Dashboard to work (#4277)
'_target' allows secrets to pass through; the only secret used here is the one for uploading to the dashboard, and PRs are manually vetted before this workflow is run anyway
7 months ago
Alex "mcmonkey" Goodwin 5df6f57b5d
minor fix on copypasta action name (#4276)
my bad sorry
7 months ago
Alex "mcmonkey" Goodwin 6588bfdef9
add GitHub workflow for CI tests of PRs (#4275)
When the 'Run-CI-Test' label is added to a PR, it will be tested by the CI, on a small matrix of stable versions.
7 months ago
Alex "mcmonkey" Goodwin 50ed2879ef
Add full CI test matrix GitHub Workflow (#4274)
automatically runs a matrix of full GPU-enabled tests on all new commits to the ComfyUI master branch
7 months ago
comfyanonymous 66d4233210 Fix. 7 months ago
comfyanonymous 591010b7ef Support diffusers text attention flux loras. 7 months ago
comfyanonymous 08f92d55e9 Partial model shift support. 7 months ago
comfyanonymous 8115d8cce9 Add Flux fp16 support hack. 7 months ago
comfyanonymous 6969fc9ba4 Make supported_dtypes a priority list. 7 months ago
comfyanonymous cb7c4b4be3 Workaround for lora OOM on lowvram mode. 7 months ago
comfyanonymous 1208863eca Fix "Comfy" lora keys.
They are in this format now:
diffusion_model.full.model.key.name.lora_up.weight
7 months ago
comfyanonymous e1c528196e Fix bundled embed. 7 months ago
comfyanonymous 17030fd4c0 Support for "Comfy" lora format.
The keys are just: model.full.model.key.name.lora_up.weight

It is supported by all models that ComfyUI supports.

Now people can just convert loras to this format instead of having to ask
me to implement them.
7 months ago
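The two commits above describe the "Comfy" lora key scheme: take the full model weight key and replace the trailing `.weight` with `.lora_up.weight` / `.lora_down.weight`. A minimal sketch of building such a key; the helper name and example keys are hypothetical, only the naming scheme comes from the commit messages:

```python
# Hypothetical helper illustrating the "Comfy" lora key scheme described above:
# model.full.model.key.name.lora_up.weight
def comfy_lora_key(model_key: str, direction: str = "up") -> str:
    """Build a Comfy-format lora key from a full model weight key.

    model_key: full key of the weight being patched,
               e.g. "model.diffusion_model.input_blocks.0.0.weight" (example only)
    direction: "up" or "down" for the two lora matrices
    """
    base = model_key.removesuffix(".weight")  # drop the trailing ".weight"
    return f"{base}.lora_{direction}.weight"

print(comfy_lora_key("model.foo.bar.weight"))          # model.foo.bar.lora_up.weight
print(comfy_lora_key("model.foo.bar.weight", "down"))  # model.foo.bar.lora_down.weight
```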
comfyanonymous c19dcd362f Controlnet code refactor. 7 months ago
comfyanonymous 1c08bf35b4 Support format for embeddings bundled in loras. 7 months ago
PhilWun 2a02546e20
Add type hints to folder_paths.py (#4191)
* add type hints to folder_paths.py

* replace deprecated standard collections type hints

* fix type error when using Python 3.8
7 months ago
comfyanonymous b334605a66 Fix OOMs happening in some cases.
A cloned model patcher sometimes reported a model was loaded on a device
when it wasn't.
7 months ago
comfyanonymous de17a9755e Unload all models if there's an OOM error. 7 months ago
comfyanonymous c14ac98fed Unload models and load them back in lowvram mode when there is no free vram. 7 months ago
Robin Huang 2894511893
Clone taesd with depth of 1 to reduce download size. (#4232) 7 months ago
Silver f3bc40223a
Add format metadata to CLIP save to make compatible with diffusers safetensors loading (#4233) 7 months ago
Chenlei Hu 841e74ac40
Change browser test CI python to 3.8 (#4234) 7 months ago
comfyanonymous 2d75df45e6 Flux tweak memory usage. 7 months ago
Robin Huang 1abc9c8703
Stable release uses cached dependencies (#4231)
* Release stable based on existing tag.

* Update default cuda to 12.1.
7 months ago
comfyanonymous 8edbcf5209 Improve performance on some lowend GPUs. 7 months ago
comfyanonymous e545a636ba This probably doesn't work anymore. 7 months ago
bymyself 33e5203a2a
Don't cache index.html (#4211) 7 months ago
a-One-Fan a178e25912
Fix Flux FP64 math on XPU (#4210) 7 months ago
comfyanonymous 78e133d041 Support simple diffusers Flux loras. 7 months ago
Silver 7afa985fba
Correct spelling 'token_weight_pars_t5' to 'token_weight_pairs_t5' (#4200) 7 months ago
comfyanonymous ddb6a9f47c Set the step in EmptySD3LatentImage to 16.
These models work better when the resolution is a multiple of 16.
7 months ago
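Snapping a requested resolution to a multiple of 16, as the commit above recommends for these models, can be sketched as follows (the function name is hypothetical; only the multiple-of-16 constraint comes from the commit):

```python
# Hypothetical sketch: snap a requested dimension to the nearest multiple of 16,
# since the commit notes these models work better at multiple-of-16 resolutions.
def snap_to_multiple(value: int, multiple: int = 16) -> int:
    """Round value to the nearest multiple of `multiple`, never below `multiple`."""
    snapped = ((value + multiple // 2) // multiple) * multiple
    return max(multiple, snapped)

print(snap_to_multiple(1000))  # 1008
print(snap_to_multiple(1024))  # 1024
```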
comfyanonymous 3b71f84b50 ONNX tracing fixes. 7 months ago
comfyanonymous 0a6b008117 Fix issue with some custom nodes. 7 months ago
comfyanonymous 56f3c660bf ModelSamplingFlux now takes a resolution and adjusts the shift with it.
If you want to sample Flux dev exactly how the reference code does, use
the same resolution as your image in this node.
7 months ago
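A resolution-dependent shift of the kind ModelSamplingFlux exposes can be sketched as a linear interpolation in latent sequence length, in the style of the Flux reference code. Treat the constants (256, 4096, 0.5, 1.15) and the function name as assumptions; this is an illustration, not the node's actual implementation:

```python
import math

# Hedged sketch of a resolution-dependent shift, assuming the linear-in-sequence-
# length scheme from the Flux reference code. All constants are assumptions.
def resolution_shift(width: int, height: int,
                     base_shift: float = 0.5, max_shift: float = 1.15) -> float:
    # Latent tokens: the image is downscaled 8x by the VAE, then 2x2-patched,
    # so each 16x16 pixel block becomes one token.
    seq_len = (width // 16) * (height // 16)
    m = (max_shift - base_shift) / (4096 - 256)  # slope between the two anchors
    b = base_shift - m * 256                     # intercept at seq_len = 256
    return math.exp(m * seq_len + b)             # shift applied to the sigma schedule
```

At 1024x1024 (4096 tokens) this yields exp(1.15); smaller resolutions interpolate down toward exp(0.5), which is why using the image's own resolution matters when matching the reference sampler.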
comfyanonymous f7a5107784 Fix crash. 7 months ago
comfyanonymous 91be9c2867 Tweak lowvram memory formula. 7 months ago
comfyanonymous 03c5018c98 Lower lowvram memory to 1/3 of free memory. 7 months ago
comfyanonymous 2ba5cc8b86 Fix some issues. 7 months ago
comfyanonymous 1e68002b87 Cap lowvram to half of free memory. 7 months ago
comfyanonymous ba9095e5bd Automatically use fp8 for diffusion model weights if:
* the checkpoint contains weights in fp8, and
* there isn't enough memory to load the diffusion model in GPU vram.
7 months ago
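The fp8 auto-selection described above is a two-condition check. A minimal sketch of that decision, with hypothetical names and a placeholder fallback dtype (the actual fallback logic is not specified in the commit):

```python
# Hypothetical sketch of the decision in the commit above: keep fp8 weights only
# when the checkpoint is already fp8 AND the model would not fit in free VRAM.
def pick_weight_dtype(checkpoint_dtype: str,
                      model_bytes: int,
                      free_vram_bytes: int) -> str:
    if checkpoint_dtype == "fp8" and model_bytes > free_vram_bytes:
        return "fp8"   # load as-is to halve the memory footprint
    return "fp16"      # placeholder fallback; real selection is more involved
```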
comfyanonymous f123328b82 Load T5 in fp8 if it's in fp8 in the Flux checkpoint. 7 months ago
comfyanonymous 63a7e8edba More aggressive batch splitting. 7 months ago
comfyanonymous 0eea47d580 Add ModelSamplingFlux to experiment with the shift value.
Default shift on Flux Schnell is 0.0
7 months ago
comfyanonymous 7cd0cdfce6 Add advanced model merge node for Flux model. 7 months ago
comfyanonymous ea03c9dcd2 Better per model memory usage estimations. 7 months ago
comfyanonymous 3a9ee995cf Tweak regular SD memory formula. 7 months ago
comfyanonymous 47da42d928 Better Flux vram estimation. 7 months ago
comfyanonymous 17bbd83176 Fix bug loading flac workflow when it contains = character. 7 months ago
fgdfgfthgr-fox bfb52de866
Lower SAG scale step for finer control (#4158)
* Lower SAG step for finer control

Since the introduction of cfg++, which uses very low cfg values, a step of 0.1 in SAG might be too coarse for fine control. Even a SAG scale of 0.1 can be too high when cfg is only 0.6, so the step is changed to 0.01.

* Lower PAG step as well.

* Update nodes_sag.py
7 months ago
comfyanonymous eca962c6da Add FluxGuidance node.
This lets you adjust the guidance on the dev model, which is a parameter
passed to the diffusion model.
7 months ago