comfyanonymous
cb7c4b4be3
Workaround for lora OOM on lowvram mode.
7 months ago
comfyanonymous
1208863eca
Fix "Comfy" lora keys.
...
They are in this format now:
diffusion_model.full.model.key.name.lora_up.weight
7 months ago
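The key format described above can be sketched as a small rename helper. This is an illustrative sketch, not ComfyUI's actual API: the function name is made up, and the `lora_down` counterpart is assumed by ordinary LoRA convention rather than stated in the commit.

```python
# Hedged sketch: building "Comfy" format LoRA key names, i.e.
# diffusion_model.<full.model.key.name>.lora_up.weight, from a model
# weight key. The lora_down key is assumed by LoRA convention.

def to_comfy_lora_keys(model_key: str) -> tuple[str, str]:
    base = model_key.removesuffix(".weight")
    prefix = f"diffusion_model.{base}"
    return f"{prefix}.lora_up.weight", f"{prefix}.lora_down.weight"
```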
comfyanonymous
e1c528196e
Fix bundled embed.
7 months ago
comfyanonymous
17030fd4c0
Support for "Comfy" lora format.
...
The keys are just: model.full.model.key.name.lora_up.weight
It is supported by all models that ComfyUI supports.
Now people can just convert loras to this format instead of having to ask
me to implement them.
7 months ago
comfyanonymous
c19dcd362f
Controlnet code refactor.
7 months ago
comfyanonymous
1c08bf35b4
Support format for embeddings bundled in loras.
7 months ago
comfyanonymous
b334605a66
Fix OOMs happening in some cases.
...
A cloned model patcher sometimes reported a model was loaded on a device
when it wasn't.
7 months ago
comfyanonymous
c14ac98fed
Unload models and load them back in lowvram mode when there is no free vram.
7 months ago
comfyanonymous
2d75df45e6
Tweak Flux memory usage.
7 months ago
comfyanonymous
8edbcf5209
Improve performance on some lowend GPUs.
7 months ago
a-One-Fan
a178e25912
Fix Flux FP64 math on XPU ( #4210 )
7 months ago
comfyanonymous
78e133d041
Support simple diffusers Flux loras.
7 months ago
Silver
7afa985fba
Correct spelling 'token_weight_pars_t5' to 'token_weight_pairs_t5' ( #4200 )
7 months ago
comfyanonymous
3b71f84b50
ONNX tracing fixes.
7 months ago
comfyanonymous
0a6b008117
Fix issue with some custom nodes.
7 months ago
comfyanonymous
f7a5107784
Fix crash.
7 months ago
comfyanonymous
91be9c2867
Tweak lowvram memory formula.
7 months ago
comfyanonymous
03c5018c98
Lower lowvram memory to 1/3 of free memory.
7 months ago
comfyanonymous
2ba5cc8b86
Fix some issues.
7 months ago
comfyanonymous
1e68002b87
Cap lowvram to half of free memory.
7 months ago
comfyanonymous
ba9095e5bd
Automatically use fp8 for diffusion model weights if:
...
Checkpoint contains weights in fp8.
There isn't enough memory to load the diffusion model in GPU vram.
7 months ago
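The two conditions above can be sketched as a small dtype-selection helper. Everything here is illustrative: the function name, the string dtype labels, and the fp16 fallback are assumptions, not ComfyUI's real defaults.

```python
# Hedged sketch of the auto-fp8 decision described above: keep the diffusion
# model weights in fp8 only when the checkpoint already stores fp8 weights
# AND the model would not fit in free GPU vram. Dtypes are plain strings.

FP8_DTYPES = {"float8_e4m3fn", "float8_e5m2"}

def pick_weight_dtype(ckpt_dtype: str, model_bytes: int, free_vram: int) -> str:
    if ckpt_dtype in FP8_DTYPES and model_bytes > free_vram:
        return ckpt_dtype  # reuse the checkpoint's fp8 dtype to fit in vram
    return "float16"       # assumed fallback; the real default may differ
```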
comfyanonymous
f123328b82
Load T5 in fp8 if it's in fp8 in the Flux checkpoint.
7 months ago
comfyanonymous
63a7e8edba
More aggressive batch splitting.
7 months ago
comfyanonymous
ea03c9dcd2
Better per model memory usage estimations.
7 months ago
comfyanonymous
3a9ee995cf
Tweak regular SD memory formula.
7 months ago
comfyanonymous
47da42d928
Better Flux vram estimation.
7 months ago
Alexander Brown
ce9ac2fe05
Fix clip_g/clip_l mixup ( #4168 )
7 months ago
comfyanonymous
e638f2858a
Hack to make all resolutions work on Flux models.
7 months ago
comfyanonymous
d420bc792a
Tweak the memory usage formulas for Flux and SD.
7 months ago
comfyanonymous
d965474aaa
Make ComfyUI split batches a higher priority than weight offload.
7 months ago
comfyanonymous
1c61361fd2
Fast preview support for Flux.
7 months ago
comfyanonymous
a6decf1e62
Fix bfloat16 potentially not being enabled on mps.
7 months ago
comfyanonymous
48eb1399c0
Try to fix mac issue.
7 months ago
comfyanonymous
d7430a1651
Add a way to load the diffusion model in fp8 with UNETLoader node.
7 months ago
comfyanonymous
f2b80f95d2
Better Mac support on flux model.
7 months ago
comfyanonymous
1aa9cf3292
Make lowvram more aggressive on low memory machines.
7 months ago
comfyanonymous
eb96c3bd82
Fix .sft file loading (they are safetensors files).
7 months ago
comfyanonymous
5f98de7697
Load flux t5 in fp8 if weights are in fp8.
7 months ago
comfyanonymous
8d34211a7a
Fix old python versions no longer working.
7 months ago
comfyanonymous
1589b58d3e
Basic Flux Schnell and Flux Dev model implementation.
7 months ago
comfyanonymous
7ad574bffd
Mac supports bf16; just make sure you are using the latest pytorch.
7 months ago
comfyanonymous
e2382b6adb
Make lowvram less aggressive when there are large amounts of free memory.
7 months ago
comfyanonymous
c24f897352
Fix to get fp8 working on T5 base.
7 months ago
comfyanonymous
a5991a7aa6
Fix hunyuan dit text encoder weights always being in fp32.
7 months ago
comfyanonymous
2c038ccef0
Lower CLIP memory usage by a bit.
7 months ago
comfyanonymous
b85216a3c0
Lower T5 memory usage by a few hundred MB.
7 months ago
comfyanonymous
82cae45d44
Fix potential issue with non clip text embeddings.
7 months ago
comfyanonymous
25853d0be8
Use common function for casting weights to input.
7 months ago
comfyanonymous
79040635da
Remove unnecessary code.
7 months ago
comfyanonymous
66d35c07ce
Improve artifacts on hydit, auraflow and SD3 on specific resolutions.
...
This breaks seeds for resolutions that are not a multiple of 16 in pixel
resolution, because it uses circular padding instead of reflection padding,
but it should lower the amount of artifacts when doing img2img at those
resolutions.
7 months ago
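The two padding modes mentioned above differ in what values they borrow at the edges. A minimal 1-D sketch (the real code pads 2-D feature maps; these helper names are made up):

```python
# Hedged sketch: circular padding wraps values around the ends, while
# reflection padding mirrors them (without repeating the edge value).
# Shown in 1-D for clarity; n is the pad width on each side.

def pad_circular(xs: list[int], n: int) -> list[int]:
    return xs[-n:] + xs + xs[:n]

def pad_reflect(xs: list[int], n: int) -> list[int]:
    return xs[1:n + 1][::-1] + xs + xs[-n - 1:-1][::-1]
```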
comfyanonymous
4ba7fa0244
Refactor: Move sd2_clip.py to text_encoders folder.
7 months ago
comfyanonymous
cf4418b806
Don't treat Bert model like CLIP.
...
Bert can accept up to 512 tokens, so any prompt with more than 77 tokens
should just be passed to it as-is instead of being split up like it is for CLIP.
7 months ago
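The behavior described above can be sketched as a pass-through versus chunking decision. The chunking logic and function names are illustrative; only the 77 and 512 token limits come from the commit body.

```python
# Hedged sketch: CLIP-style handling splits a long prompt into 77-token
# chunks, while Bert accepts up to 512 tokens and gets the prompt as-is.

def chunk_tokens(tokens: list[int], max_len: int) -> list[list[int]]:
    return [tokens[i:i + max_len] for i in range(0, len(tokens), max_len)]

def batch_for_encoder(tokens: list[int], is_bert: bool) -> list[list[int]]:
    if is_bert and len(tokens) <= 512:
        return [tokens]  # pass through untouched, no CLIP-style splitting
    max_len = 512 if is_bert else 77
    return chunk_tokens(tokens, max_len)
```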
comfyanonymous
8328a2d8cd
Let hunyuan dit work with all prompt lengths.
7 months ago
comfyanonymous
afe732bef9
Hunyuan dit can now accept longer prompts.
7 months ago
comfyanonymous
a9ac56fc0d
Own BertModel implementation that works with lowvram.
7 months ago
comfyanonymous
25b51b1a8b
Hunyuan DiT lora support.
7 months ago
comfyanonymous
a5f4292f9f
Basic hunyuan dit implementation. ( #4102 )
...
* Let tokenizers return weights to be stored in the saved checkpoint.
* Basic hunyuan dit implementation.
* Fix some resolutions not working.
* Support hydit checkpoint save.
* Init with right dtype.
* Switch to optimized attention in pooler.
* Fix black images on hunyuan dit.
7 months ago
comfyanonymous
f87810cd3e
Let tokenizers return weights to be stored in the saved checkpoint.
7 months ago
comfyanonymous
10c919f4c7
Make it possible to load tokenizer data from checkpoints.
7 months ago
comfyanonymous
10b43ceea5
Remove duplicate code.
7 months ago
comfyanonymous
0a4c49c57c
Support MT5.
7 months ago
comfyanonymous
88ed893034
Allow SPieceTokenizer to load model from a byte string.
7 months ago
comfyanonymous
334ba48cea
More generic unet prefix detection code.
7 months ago
comfyanonymous
14764aa2e2
Rename LLAMATokenizer to SPieceTokenizer.
7 months ago
comfyanonymous
b2c995f623
"auto" type is only relevant to the SetUnionControlNetType node.
7 months ago
Chenlei Hu
4151fbfa8a
Add error message on union controlnet ( #4081 )
7 months ago
comfyanonymous
95fa9545f1
Only append zero to noise schedule if last sigma isn't zero.
7 months ago
comfyanonymous
6ab8cad22e
Implement beta sampling scheduler.
...
It is based on: https://arxiv.org/abs/2407.12173
Add "beta" to the list of schedulers and the BetaSamplingScheduler node.
7 months ago
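The idea of a beta scheduler, as referenced above, can be sketched as drawing timestep quantiles from a Beta(alpha, beta) distribution instead of uniformly. This sketch is an assumption-laden approximation: the parameter defaults, the crude numerical inverse CDF, and the timestep mapping are illustrative, not ComfyUI's implementation.

```python
# Hedged sketch of a beta sampling scheduler in the spirit of the commit
# above: timesteps come from Beta-distribution quantiles. The inverse CDF
# is a crude numeric bisection; defaults are illustrative assumptions.

def beta_cdf(x: float, a: float, b: float, bins: int = 2000) -> float:
    # Regularized incomplete beta via simple midpoint integration.
    if x <= 0.0:
        return 0.0
    dx = 1.0 / bins
    total = below = 0.0
    for i in range(bins):
        t = (i + 0.5) * dx
        w = t ** (a - 1.0) * (1.0 - t) ** (b - 1.0)
        total += w
        if t < x:
            below += w
    return below / total

def beta_ppf(q: float, a: float, b: float) -> float:
    # Invert the CDF by bisection on [0, 1].
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = (lo + hi) / 2.0
        if beta_cdf(mid, a, b) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def beta_timesteps(steps: int, a: float = 0.6, b: float = 0.6,
                   total_timesteps: int = 1000) -> list[int]:
    qs = [1.0 - i / steps for i in range(steps)]  # descending quantiles
    return [round(beta_ppf(q, a, b) * total_timesteps) for q in qs]
```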
喵哩个咪
855789403b
support clip-vit-large-patch14-336 ( #4042 )
...
* support clip-vit-large-patch14-336
7 months ago
comfyanonymous
6f7869f365
Get clip vision image size from config.
7 months ago
comfyanonymous
281ad42df4
Fix lowvram union controlnet bug.
7 months ago
Thomas Ward
c5a48b15bd
Make default hash lib configurable without code changes via CLI argument ( #3947 )
...
* cli_args: Add --duplicate-check-hash-function.
* server.py: compare_image_hash configurable hash function
Uses an argument added in cli_args to specify the type of hashing to default to for duplicate hash checking. Uses an `eval()` to identify the specific hashlib class to utilize, but ultimately safely operates because we have specific options and only those options/choices in the arg parser. So we don't have any unsafe input there.
* Add hasher() to node_helpers
* hashlib selection moved to node_helpers
* default-hashing-function instead of dupe checking hasher
This makes a default-hashing-function option instead of previous selected option.
* Use args.default_hashing_function
* Use safer handling for node_helpers.hasher()
Uses a safer handling method than `eval` to evaluate default hashing function.
* Stray parentheses are evil.
* Indentation fix.
Somehow when I hit save I didn't notice I missed a space needed to make the indentation work properly. Oops!
7 months ago
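The "safer handling" step in the PR above can be sketched with `getattr()` on `hashlib` plus an explicit allow-list instead of `eval()`. The allow-list contents and function name are illustrative; the argument name in the comment mirrors the PR's `--default-hashing-function`.

```python
# Hedged sketch: resolve a hash function by name via getattr() on hashlib,
# restricted to an allow-list, instead of eval(). A CLI flag like
# --default-hashing-function would supply `name`; names are illustrative.
import hashlib

ALLOWED_HASHES = {"md5", "sha1", "sha256", "sha512"}

def hasher(name: str = "sha256"):
    if name not in ALLOWED_HASHES:
        raise ValueError(f"unsupported hash function: {name}")
    return getattr(hashlib, name)  # the constructor, e.g. hashlib.sha256
```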
comfyanonymous
8270c62530
Add SetUnionControlNetType to set the type of the union controlnet model.
7 months ago
comfyanonymous
821f93872e
Allow model sampling to set number of timesteps.
7 months ago
Chenlei Hu
99458e8aca
Add `FrontendManager` to manage non-default front-end impl ( #3897 )
...
* Add frontend manager
* Add tests
* nit
* Add unit test to github CI
* Fix path
* nit
* ignore
* Add logging
* Install test deps
* Remove 'stable' keyword support
* Update test
* Add web-root arg
* Rename web-root to front-end-root
* Add test on non-exist version number
* Use repo owner/name to replace hard coded provider list
* Inline cmd args
* nit
* Fix unit test
7 months ago
comfyanonymous
1305fb294c
Refactor: Move some code to the comfy/text_encoders folder.
7 months ago
comfyanonymous
7914c47d5a
Quick fix for the promax controlnet.
7 months ago
comfyanonymous
a3dffc447a
Support AuraFlow Lora and loading model weights in diffusers format.
...
You can load model weights in diffusers format using the UNETLoader node.
8 months ago
comfyanonymous
29c2e26724
Better tokenizing code for AuraFlow.
8 months ago
comfyanonymous
8e012043a9
Add a ModelSamplingAuraFlow node to change the shift value.
...
Set the default AuraFlow shift value to 1.73 (sqrt(3)).
8 months ago
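The shift value noted above (sqrt(3) ≈ 1.73) can be illustrated with the common discrete-flow timestep shift formula. That AuraFlow's ModelSamplingAuraFlow uses exactly this form is an assumption, not something the log confirms.

```python
# Hedged sketch: the common flow-matching "shift" remapping of a timestep,
# with the default of sqrt(3) from the commit above. Whether AuraFlow uses
# exactly this formula is an assumption.
import math

def shift_sigma(t: float, shift: float = math.sqrt(3)) -> float:
    return shift * t / (1 + (shift - 1) * t)
```

A shift of 1 leaves timesteps unchanged; shifts above 1 push intermediate sigmas higher.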
comfyanonymous
9f291d75b3
AuraFlow model implementation.
8 months ago
comfyanonymous
f45157e3ac
Fix error message never being shown.
8 months ago
comfyanonymous
5e1fced639
Cleaner support for loading different diffusion model types.
8 months ago
comfyanonymous
ffe0bb0a33
Remove useless code.
8 months ago
comfyanonymous
391c1046cf
More flexibility with text encoder return values.
...
Text encoders can now return values other than the cond and pooled output
to the CONDITIONING.
8 months ago
comfyanonymous
e44fa5667f
Support returning text encoder attention masks.
8 months ago
Extraltodeus
f1a01c2c7e
Add sampler_pre_cfg_function ( #3979 )
...
* Update samplers.py
* Update model_patcher.py
8 months ago
comfyanonymous
ade7aa1b0c
Remove useless import.
8 months ago
comfyanonymous
faa57430b0
Controlnet union model basic implementation.
...
This is only the model code itself; it currently defaults to an empty
embedding ([0] * 6), which seems to work better than treating it like a
regular controlnet.
TODO: Add nodes to select the image type.
8 months ago
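The default above can be sketched as a six-slot control-type embedding of zeros. Setting one slot to 1 for a chosen image type is an assumption about what the TODO'd selection nodes would later do; names are made up.

```python
# Hedged sketch: the union controlnet's "empty" control-type embedding is
# six zeros, one slot per image type. Marking a selected type with a 1 is
# an assumed behavior for the (then-unwritten) selection nodes.

NUM_CONTROL_TYPES = 6

def control_type_embedding(selected=None):
    emb = [0] * NUM_CONTROL_TYPES  # the empty embedding used by default
    if selected is not None:
        emb[selected] = 1
    return emb
```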
comfyanonymous
bb663bcd6c
Rename clip_t5base to t5base for stable audio text encoder.
8 months ago
comfyanonymous
2dc84d1444
Add a way to set the timestep multiplier in the flow sampling.
8 months ago
comfyanonymous
ff63893d10
Support other types of T5 models.
8 months ago
comfyanonymous
4040491149
Better T5xxl detection.
8 months ago
comfyanonymous
b8e58a9394
Cleanup T5 code a bit.
8 months ago
comfyanonymous
80c4590998
Allow specifying the padding token for the tokenizer.
8 months ago
comfyanonymous
ce649d61c0
Allow zeroing out of embeds with unused attention mask.
8 months ago
comfyanonymous
739b76630e
Remove useless code.
8 months ago
comfyanonymous
d7484ef30c
Support loading checkpoints with the UNETLoader node.
8 months ago
comfyanonymous
537f35c7bc
Don't update dict if contiguous.
8 months ago
Alex "mcmonkey" Goodwin
3f46362d22
fix non-contiguous tensor saving (from channels-last) ( #3932 )
8 months ago