comfyanonymous
75b9b55b22
Fix issues with #4302 and support loading diffusers format flux.
7 months ago
Jaret Burkett
1765f1c60c
FLUX: Added full diffusers mapping for FLUX.1 schnell and dev. Adds full LoRA support from diffusers LoRAs. ( #4302 )
7 months ago
comfyanonymous
1de69fe4d5
Fix some issues with inference slowing down.
7 months ago
comfyanonymous
ae197f651b
Speed up hunyuan dit inference a bit.
7 months ago
comfyanonymous
1b5b8ca81a
Fix regression.
7 months ago
comfyanonymous
6678d5cf65
Fix regression.
7 months ago
TTPlanetPig
e172564eea
Update controlnet.py to fix the default controlnet weight as constant ( #4285 )
7 months ago
comfyanonymous
a3cc326748
Better fix for lowvram issue.
7 months ago
comfyanonymous
86a97e91fc
Fix controlnet regression.
7 months ago
comfyanonymous
5acdadc9f3
Fix issue with some lowvram weights.
7 months ago
comfyanonymous
55ad9d5f8c
Fix regression.
7 months ago
comfyanonymous
a9f04edc58
Implement text encoder part of HunyuanDiT loras.
7 months ago
comfyanonymous
a475ec2300
Cleanup HunyuanDit controlnets.
Use the ControlNetApply SD3 and HunyuanDiT node.
7 months ago
来新璐
06eb9fb426
feat: add support for HunYuanDit ControlNet ( #4245 )
* add support for HunYuanDit ControlNet
* fix hunyuandit controlnet
* fix typo in hunyuandit controlnet
* fix typo in hunyuandit controlnet
* fix code format style
* add control_weight support for HunyuanDit Controlnet
* use control_weights in HunyuanDit Controlnet
* fix typo
7 months ago
comfyanonymous
413322645e
Raw torch is faster than einops?
7 months ago
comfyanonymous
11200de970
Cleaner code.
7 months ago
comfyanonymous
037c38eb0f
Try to improve inference speed on some machines.
7 months ago
comfyanonymous
1e11d2d1f5
Better prints.
7 months ago
Alex "mcmonkey" Goodwin
65ea6be38f
PullRequest CI Run: use pull_request_target to allow the CI Dashboard to work ( #4277 )
'_target' allows secrets to pass through; we only use the secret that allows uploading to the dashboard, and we manually vet PRs before running this workflow anyway.
7 months ago
Alex "mcmonkey" Goodwin
5df6f57b5d
minor fix on copypasta action name ( #4276 )
my bad sorry
7 months ago
Alex "mcmonkey" Goodwin
6588bfdef9
add GitHub workflow for CI tests of PRs ( #4275 )
When the 'Run-CI-Test' label is added to a PR, it will be tested by the CI, on a small matrix of stable versions.
7 months ago
Alex "mcmonkey" Goodwin
50ed2879ef
Add full CI test matrix GitHub Workflow ( #4274 )
automatically runs a matrix of full GPU-enabled tests on all new commits to the ComfyUI master branch
7 months ago
comfyanonymous
66d4233210
Fix.
7 months ago
comfyanonymous
591010b7ef
Support diffusers text attention flux loras.
7 months ago
comfyanonymous
08f92d55e9
Partial model shift support.
7 months ago
comfyanonymous
8115d8cce9
Add Flux fp16 support hack.
7 months ago
comfyanonymous
6969fc9ba4
Make supported_dtypes a priority list.
7 months ago
comfyanonymous
cb7c4b4be3
Workaround for lora OOM on lowvram mode.
7 months ago
comfyanonymous
1208863eca
Fix "Comfy" lora keys.
They are in this format now:
diffusion_model.full.model.key.name.lora_up.weight
7 months ago
comfyanonymous
e1c528196e
Fix bundled embed.
7 months ago
comfyanonymous
17030fd4c0
Support for "Comfy" lora format.
The keys are just: model.full.model.key.name.lora_up.weight
It is supported by all models that ComfyUI supports.
Now people can just convert loras to this format instead of having to ask me to implement them.
7 months ago
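The "Comfy" lora naming described above (the key is just the full model weight name with a `lora_up`/`lora_down` suffix) can be sketched as a small helper. This is a hedged illustration, not code from the repo; the function name and the example model key below are hypothetical:

```python
def to_comfy_lora_key(model_key: str, direction: str) -> str:
    """Build a "Comfy" format lora key from a full model weight name.

    The format described in the commit is:
        model.full.model.key.name.lora_up.weight
    i.e. the model key with its trailing ".weight" replaced by
    ".lora_up.weight" or ".lora_down.weight".
    """
    if direction not in ("lora_up", "lora_down"):
        raise ValueError("direction must be 'lora_up' or 'lora_down'")
    # Strip a trailing ".weight" if present, then append the lora suffix.
    suffix = ".weight"
    base = model_key[: -len(suffix)] if model_key.endswith(suffix) else model_key
    return f"{base}.{direction}.weight"
```

With a (hypothetical) model key, `to_comfy_lora_key("model.diffusion_model.input_blocks.0.0.weight", "lora_up")` yields `"model.diffusion_model.input_blocks.0.0.lora_up.weight"`, matching the pattern in the commit message.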
comfyanonymous
c19dcd362f
Controlnet code refactor.
7 months ago
comfyanonymous
1c08bf35b4
Support format for embeddings bundled in loras.
7 months ago
PhilWun
2a02546e20
Add type hints to folder_paths.py ( #4191 )
* add type hints to folder_paths.py
* replace deprecated standard collections type hints
* fix type error when using Python 3.8
7 months ago
comfyanonymous
b334605a66
Fix OOMs happening in some cases.
A cloned model patcher sometimes reported a model was loaded on a device
when it wasn't.
7 months ago
comfyanonymous
de17a9755e
Unload all models if there's an OOM error.
7 months ago
comfyanonymous
c14ac98fed
Unload models and load them back in lowvram mode when there is no free vram.
7 months ago
Robin Huang
2894511893
Clone taesd with depth of 1 to reduce download size. ( #4232 )
7 months ago
Silver
f3bc40223a
Add format metadata to CLIP save to make compatible with diffusers safetensors loading ( #4233 )
7 months ago
Chenlei Hu
841e74ac40
Change browser test CI python to 3.8 ( #4234 )
7 months ago
comfyanonymous
2d75df45e6
Flux tweak memory usage.
7 months ago
Robin Huang
1abc9c8703
Stable release uses cached dependencies ( #4231 )
* Release stable based on existing tag.
* Update default cuda to 12.1.
7 months ago
comfyanonymous
8edbcf5209
Improve performance on some lowend GPUs.
7 months ago
comfyanonymous
e545a636ba
This probably doesn't work anymore.
7 months ago
bymyself
33e5203a2a
Don't cache index.html ( #4211 )
7 months ago
a-One-Fan
a178e25912
Fix Flux FP64 math on XPU ( #4210 )
7 months ago
comfyanonymous
78e133d041
Support simple diffusers Flux loras.
7 months ago
Silver
7afa985fba
Correct spelling 'token_weight_pars_t5' to 'token_weight_pairs_t5' ( #4200 )
7 months ago
comfyanonymous
ddb6a9f47c
Set the step in EmptySD3LatentImage to 16.
These models work better when the res is a multiple of 16.
7 months ago
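The step-of-16 change above amounts to snapping a requested latent resolution to a multiple of 16, since these models work better at such resolutions. A minimal sketch of that arithmetic (the helper name is hypothetical, not from the repo):

```python
def snap_to_multiple(value: int, step: int = 16) -> int:
    """Round a requested resolution down to the nearest multiple of `step`,
    never going below one full step."""
    return max(step, (value // step) * step)
```

For example, 1024 is already a multiple of 16 and passes through unchanged, while 1000 snaps down to 992.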
comfyanonymous
3b71f84b50
ONNX tracing fixes.
7 months ago