comfyanonymous
1ddf512fdc
Don't auto-convert clip and vae weights to fp16 when saving a checkpoint.
9 months ago
comfyanonymous
694e0b48e0
SD3 better memory usage estimation.
9 months ago
comfyanonymous
69c8d6d8a6
Single and dual clip loader nodes support SD3.
...
You can use the CLIPLoader to load only the t5xxl text encoder, or the
DualCLIPLoader to load only CLIP-L and CLIP-G, for SD3.
9 months ago
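A minimal sketch of what the commit above enables, assuming the built-in CLIPLoader and DualCLIPLoader nodes; the checkpoint filenames are placeholders and the exact node signatures may differ between ComfyUI versions.

```python
# Hedged sketch: loading SD3 text encoders through the loader nodes.
# Filenames are placeholders; the signatures are assumptions based on the
# built-in CLIPLoader / DualCLIPLoader nodes.
from nodes import CLIPLoader, DualCLIPLoader

# t5xxl only
clip_t5 = CLIPLoader().load_clip("t5xxl_fp16.safetensors", type="sd3")[0]

# CLIP-L and CLIP-G only
clip_lg = DualCLIPLoader().load_clip("clip_l.safetensors", "clip_g.safetensors", type="sd3")[0]
```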
comfyanonymous
0e49211a11
Load the SD3 T5xxl model in the same dtype stored in the checkpoint.
9 months ago
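The idea behind the commit above, sketched with a hypothetical helper (not the actual ComfyUI function): read the dtype straight from the stored tensors instead of forcing a conversion.

```python
import torch

# Hypothetical helper: pick the text encoder dtype from what the checkpoint
# actually stores instead of converting it.
def stored_dtype(state_dict: dict, prefix: str = "") -> torch.dtype:
    for name, tensor in state_dict.items():
        if name.startswith(prefix) and tensor.is_floating_point():
            return tensor.dtype
    return torch.float16  # assumption: fallback when nothing matches
```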
comfyanonymous
5889b7ca0a
Support multiple text encoder configurations on SD3.
9 months ago
comfyanonymous
9424522ead
Reuse code.
9 months ago
Dango233
73ce178021
Remove redundancy in mmdit.py (#3685)
9 months ago
comfyanonymous
a82fae2375
Fix bug with cosxl edit model.
9 months ago
comfyanonymous
8c4a9befa7
SD3 Support.
9 months ago
comfyanonymous
a5e6a632f9
Support sampling non-2D latents.
9 months ago
comfyanonymous
742d5720d1
Support zeroing out text embeddings with the attention mask.
9 months ago
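A hedged illustration of the commit above, using a hypothetical helper rather than the actual ComfyUI code.

```python
import torch

# Hypothetical helper: zero the embedding positions the attention mask marks
# as padding so masked tokens contribute nothing downstream.
def zero_masked_embeddings(embeds: torch.Tensor, attn_mask: torch.Tensor) -> torch.Tensor:
    # embeds: [batch, tokens, dim]; attn_mask: [batch, tokens] with 1 = keep, 0 = pad
    return embeds * attn_mask.unsqueeze(-1).to(embeds.dtype)
```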
comfyanonymous
6cd8ffc465
Reshape the empty latent image to the right number of channels if needed.
9 months ago
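A hedged sketch of the idea in the commit above; the helper name and the zero-check are assumptions for illustration.

```python
import torch

# Hypothetical helper: if the model expects a different latent channel count
# (e.g. 16 for SD3 instead of 4 for SD1/SDXL) and the latent is all zeros,
# rebuild it with the right number of channels.
def fix_empty_latent_channels(latent: torch.Tensor, expected_channels: int) -> torch.Tensor:
    b, c, h, w = latent.shape
    if c != expected_channels and torch.count_nonzero(latent) == 0:
        latent = torch.zeros((b, expected_channels, h, w),
                             dtype=latent.dtype, device=latent.device)
    return latent
```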
comfyanonymous
56333d4850
Use the end token for the text encoder attention mask.
9 months ago
comfyanonymous
104fcea0c8
Add function to get the list of currently loaded models.
9 months ago
comfyanonymous
b1fd26fe9e
Should pytorch xpu use flash or mem-efficient attention?
9 months ago
comfyanonymous
809cc85a8e
Remove useless code.
9 months ago
comfyanonymous
b249862080
Add an annoying print to a function I want to remove.
9 months ago
comfyanonymous
bf3e334d46
Disable non_blocking when using --deterministic or directml.
9 months ago
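A hypothetical sketch of the gating logic behind the commit above; the flag plumbing is an assumption, not the actual code.

```python
import torch

# Skip non_blocking transfers when determinism is requested or the backend
# (e.g. DirectML) doesn't handle them reliably.
def use_non_blocking(device: torch.device, deterministic: bool, is_directml: bool) -> bool:
    if deterministic or is_directml:
        return False
    return device.type == "cuda"  # assumption: only enable for CUDA here
```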
JettHu
b26da2245f
Fix UnetParams annotation typo (#3589)
9 months ago
comfyanonymous
0920e0e5fe
Remove some unused imports.
9 months ago
comfyanonymous
ffc4b7c30e
Fix DORA strength.
...
This is a different version of #3298 with more correct behavior.
9 months ago
comfyanonymous
efa5a711b2
Reduce memory usage when applying DORA: #3557
9 months ago
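A hedged sketch of the DoRA math behind the two commits above, for a 2D (linear) weight, following the formulation W' = m * (W + BA) / ||W + BA||. The per-row norm and the way `strength` blends the result back toward the original weight are assumptions for illustration, not necessarily how ComfyUI applies it internally.

```python
import torch

def apply_dora_2d(weight: torch.Tensor, lora_diff: torch.Tensor,
                  dora_scale: torch.Tensor, strength: float) -> torch.Tensor:
    merged = weight + lora_diff                       # W + BA
    norm = merged.norm(dim=1, keepdim=True)           # per-output-row norm
    dora_weight = dora_scale.reshape(-1, 1) * merged / norm
    # blend toward the DoRA weight by `strength` (assumption for illustration)
    return weight + strength * (dora_weight - weight)
```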
comfyanonymous
6c23854f54
Fix OSX latent2rgb previews.
9 months ago
Chenlei Hu
7718ada4ed
Add type annotation UnetWrapperFunction (#3531)
...
* Add type annotation UnetWrapperFunction
* nit
* Add types.py
9 months ago
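A hedged sketch of what such an annotation can look like; the real definition in comfy/types.py may differ in names and parameters.

```python
from typing import Any, Callable, Dict, Protocol
import torch

UnetApplyFunction = Callable[..., torch.Tensor]

class UnetWrapperFunction(Protocol):
    # A wrapper receives the model's apply function plus its parameters and
    # returns the model output.
    def __call__(self, apply_model: UnetApplyFunction, params: Dict[str, Any]) -> torch.Tensor:
        ...
```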
comfyanonymous
8508df2569
Work around black image bug on macOS 14.5 by forcing attention upcasting.
9 months ago
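A hedged sketch of what "attention upcasting" means here: run the softmax(QK^T)V math in fp32 even when the model runs in fp16, which avoids numerical issues such as black images on some backends. This is an illustration, not ComfyUI's actual attention code.

```python
import torch

def attention_upcast(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    out_dtype = q.dtype
    q, k, v = q.float(), k.float(), v.float()        # upcast to fp32
    scale = q.shape[-1] ** -0.5
    attn = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
    return (attn @ v).to(out_dtype)                   # cast back for the rest of the model
```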
comfyanonymous
83d969e397
Disable xformers when tracing model.
9 months ago
comfyanonymous
1900e5119f
Fix potential issue.
9 months ago
comfyanonymous
09e069ae6c
Log the pytorch version.
9 months ago
comfyanonymous
11a2ad5110
Fix controlnet not upcasting on models that have it enabled.
9 months ago
comfyanonymous
0bdc2b15c7
Cleanup.
9 months ago
comfyanonymous
98f828fad9
Remove unnecessary code.
9 months ago
comfyanonymous
19300655dd
Don't automatically switch to lowvram mode on GPUs with low memory.
9 months ago
comfyanonymous
46daf0a9a7
Add debug options to force on and off attention upcasting.
9 months ago
comfyanonymous
2d41642716
Fix lowvram dora issue.
9 months ago
comfyanonymous
ec6f16adb6
Fix SAG.
9 months ago
comfyanonymous
bb4940d837
Only enable attention upcasting on models that actually need it.
9 months ago
comfyanonymous
b0ab31d06c
Refactor attention upcasting code part 1.
10 months ago
Simon Lui
f509c6fe21
Fix Intel GPU memory allocation accuracy and update documentation. (#3459)
...
* Change the memory total calculation to be more accurate; allocated is actually smaller than reserved.
* Update README.md install documentation for Intel GPUs.
10 months ago
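A hedged sketch of the idea behind the fix above, assuming an Intel XPU build of PyTorch/IPEX exposes these counters: "reserved" memory (held by the caching allocator) is a better estimate of used VRAM than "allocated", since allocated <= reserved.

```python
import torch

def xpu_free_memory(device: torch.device) -> int:
    # Assumed API: torch.xpu.* counters provided by an Intel XPU build.
    total = torch.xpu.get_device_properties(device).total_memory
    reserved = torch.xpu.memory_reserved(device)
    return total - reserved
```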
comfyanonymous
fa6dd7e5bb
Fix lowvram issue with saving checkpoints.
...
The previous fix didn't cover the case where the model was loaded in
lowvram mode right before.
10 months ago
comfyanonymous
49c20cdc70
No longer necessary.
10 months ago
comfyanonymous
e1489ad257
Fix issue with lowvram mode breaking model saving.
10 months ago
comfyanonymous
93e876a3be
Remove warnings that confuse people.
10 months ago
comfyanonymous
cd07340d96
Typo fix.
10 months ago
comfyanonymous
c61eadf69a
Make the load checkpoint with config function call the regular one.
...
I was going to completely remove this function because it is unmaintainable,
but I think this is the best compromise.
The clip skip and v_prediction parts of the configs should still work, but
not the fp16 vs fp32 ones.
10 months ago
Simon Lui
a56d02efc7
Change torch.xpu to ipex.optimize, update xpu device initialization, and remove the workaround for a text node issue from older IPEX. (#3388)
10 months ago
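A hedged sketch of what using ipex.optimize can look like; the dtype and inplace choices are illustrative, not the exact settings used in the change above.

```python
import torch
import intel_extension_for_pytorch as ipex  # assumption: IPEX is installed

def prepare_xpu(model: torch.nn.Module) -> torch.nn.Module:
    # Move the model to the XPU device, then let IPEX apply its optimizations.
    model = model.to("xpu")
    return ipex.optimize(model, dtype=torch.float16, inplace=True)
```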
comfyanonymous
f81a6fade8
Fix some edge cases with samplers and arrays with a single sigma.
10 months ago
comfyanonymous
2aed53c4ac
Workaround xformers bug.
10 months ago
Garrett Sutula
bacce529fb
Add TLS support (#3312)
...
* Add TLS support
* Add to README
* Add guidance for Windows users on generating certificates
* Fix typo
10 months ago
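A hedged sketch of how TLS can be wired into an aiohttp server like the one ComfyUI uses; the certificate/key file names and the port are placeholders, and the actual flag handling in #3312 may differ.

```python
import ssl
from aiohttp import web

async def serve_tls(app: web.Application) -> None:
    # Build an SSL context from a certificate/key pair (placeholder paths).
    ssl_ctx = ssl.SSLContext(protocol=ssl.PROTOCOL_TLS_SERVER)
    ssl_ctx.load_cert_chain(certfile="cert.pem", keyfile="key.pem")

    runner = web.AppRunner(app)
    await runner.setup()
    site = web.TCPSite(runner, "0.0.0.0", 8443, ssl_context=ssl_ctx)
    await site.start()
```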
Jedrzej Kosinski
7990ae18c1
Fix error when more cond masks are passed in than the batch size (#3353)
10 months ago
comfyanonymous
8dc19e40d1
Don't init a VAE model when there are no VAE weights.
10 months ago