comfyanonymous
a7b5eaa7e3
Forgot to commit this.
1 year ago
comfyanonymous
3b2e579926
Support loading the Stable Cascade effnet and previewer as a VAE.
The effnet can be used to encode images for img2img with Stage C.
1 year ago
comfyanonymous
dccca1daa5
Fix gligen lowvram mode.
1 year ago
comfyanonymous
8b60d33bb7
Add ModelSamplingStableCascade to control the shift sampling parameter.
shift is 2.0 by default on Stage C and 1.0 by default on Stage B.
1 year ago
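A shift parameter like this typically reshapes a normalized sampling schedule so more (or fewer) steps are spent at high noise levels. A minimal sketch of one common shift remapping, purely illustrative (not necessarily the exact formula ModelSamplingStableCascade uses):

```python
def shift_timestep(t, shift=2.0):
    """Remap a normalized timestep t in [0, 1] by a shift factor.

    shift=1.0 is the identity; shift>1 biases the schedule toward
    high-noise timesteps. Illustrative formula only.
    """
    return shift * t / (1.0 + (shift - 1.0) * t)
```

The endpoints stay fixed (0 maps to 0, 1 maps to 1); only the interior of the schedule moves.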
comfyanonymous
6bcf57ff10
Fix attention masks properly for multiple batches.
1 year ago
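Batch-related mask fixes like this one usually come down to making the mask's batch dimension match the attention input's. A sketch of that repeat-to-match logic with a hypothetical helper, plain lists standing in for tensors:

```python
def match_mask_batch(masks, batch_size):
    """Tile a list of per-item masks so there is one mask per batch row.

    Mirrors the common fix of repeating a smaller mask batch (e.g. one
    mask shared by several images) up to the attention batch size.
    """
    if len(masks) >= batch_size:
        return masks[:batch_size]
    reps = -(-batch_size // len(masks))  # ceil division
    return (masks * reps)[:batch_size]
```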
comfyanonymous
11e3221f1f
fp8 weight support for Stable Cascade.
1 year ago
comfyanonymous
f8706546f3
Fix attention mask batch size in some attention functions.
1 year ago
comfyanonymous
3b9969c1c5
Properly fix attention masks in CLIP with batches.
1 year ago
comfyanonymous
5b40e7a5ed
Implement shift schedule for cascade stage C.
1 year ago
comfyanonymous
929e266f3e
Manual cast for bf16 on older GPUs.
1 year ago
comfyanonymous
6c875d846b
Fix clip attention mask issues on some hardware.
1 year ago
comfyanonymous
805c36ac9c
Make Stable Cascade work on old pytorch 2.0
1 year ago
comfyanonymous
f2d1d16f4f
Support Stable Cascade Stage B lite.
1 year ago
comfyanonymous
0b3c50480c
Make --force-fp32 disable loading models in bf16.
1 year ago
comfyanonymous
97d03ae04a
StableCascade CLIP model support.
1 year ago
comfyanonymous
667c92814e
Stable Cascade Stage B.
1 year ago
comfyanonymous
f83109f09b
Stable Cascade Stage C.
1 year ago
comfyanonymous
5e06baf112
Stable Cascade Stage A.
1 year ago
comfyanonymous
aeaeca10bd
Small refactor of is_device_* functions.
1 year ago
comfyanonymous
38b7ac6e26
Don't init the CLIP model when the checkpoint has no CLIP weights.
1 year ago
comfyanonymous
7dd352cbd7
Merge branch 'feature_expose_discard_penultimate_sigma' of https://github.com/blepping/ComfyUI
1 year ago
Jedrzej Kosinski
f44225fd5f
Fix infinite while loop being possible in ddim_scheduler
1 year ago
comfyanonymous
25a4805e51
Add a way to set different conditioning for the controlnet.
1 year ago
blepping
a352c021ec
Allow custom samplers to request discard penultimate sigma
1 year ago
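"Discard penultimate sigma" drops the second-to-last value of the sigma schedule while keeping the trailing zero, an adjustment some samplers expect. A sketch of the operation:

```python
def discard_penultimate_sigma(sigmas):
    """Drop the second-to-last sigma, keeping the final one (usually 0.0)."""
    if len(sigmas) < 2:
        return list(sigmas)
    return list(sigmas[:-2]) + [sigmas[-1]]
```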
comfyanonymous
c661a8b118
Don't use numpy for calculating sigmas.
1 year ago
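Sigma schedules are simple enough to compute without numpy. As an example, the well-known Karras et al. ramp in pure Python (the default sigma range here is illustrative, not the repo's values):

```python
def karras_sigmas(n, sigma_min=0.03, sigma_max=14.6, rho=7.0):
    """Karras et al. sigma schedule, computed without numpy.

    Interpolates linearly in sigma^(1/rho) space and appends a final 0.0.
    """
    min_inv = sigma_min ** (1.0 / rho)
    max_inv = sigma_max ** (1.0 / rho)
    ramp = [i / (n - 1) for i in range(n)]
    sigmas = [(max_inv + t * (min_inv - max_inv)) ** rho for t in ramp]
    return sigmas + [0.0]
```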
comfyanonymous
236bda2683
Make minimum tile size the size of the overlap.
1 year ago
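Clamping the tile size to at least the overlap keeps the effective stride (tile minus overlap) from going negative, so tiled processing can't stall. A sketch with hypothetical names, also flooring the stride at 1 so the loop always advances:

```python
def tile_positions(length, tile, overlap):
    """Start offsets for tiles covering a 1-D extent of `length`.

    Clamps the tile to at least the overlap and the stride to at least 1,
    so the loop always terminates.
    """
    tile = max(min(tile, length), overlap)
    stride = max(tile - overlap, 1)
    positions = [0]
    x = 0
    while x + tile < length:
        x = min(x + stride, length - tile)
        positions.append(x)
    return positions
```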
comfyanonymous
66e28ef45c
Don't use is_bf16_supported to check for fp16 support.
1 year ago
comfyanonymous
24129d78e6
Speed up SDXL on 16xx series with fp16 weights and manual cast.
1 year ago
comfyanonymous
4b0239066d
Always use fp16 for the text encoders.
1 year ago
comfyanonymous
da7a8df0d2
Put VAE key name in model config.
1 year ago
comfyanonymous
89507f8adf
Remove some unused imports.
1 year ago
Dr.Lt.Data
05cd00695a
typo fix - calculate_sigmas_scheduler (#2619)
self.scheduler -> scheduler_name
Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>
1 year ago
comfyanonymous
4871a36458
Cleanup some unused imports.
1 year ago
comfyanonymous
78a70fda87
Remove useless import.
1 year ago
comfyanonymous
d76a04b6ea
Add unfinished ImageOnlyCheckpointSave node to save a SVD checkpoint.
This node is unfinished; SVD checkpoints saved with it will
work with ComfyUI but not with anything else.
1 year ago
comfyanonymous
f9e55d8463
Only auto enable bf16 VAE on nvidia GPUs that actually support it.
1 year ago
comfyanonymous
2395ae740a
Make unclip more deterministic.
Pass a seed argument; note that this might make old unclip images different.
1 year ago
comfyanonymous
53c8a99e6c
Make server storage the default.
Remove --server-storage argument.
1 year ago
comfyanonymous
977eda19a6
Don't round noise mask.
1 year ago
comfyanonymous
10f2609fdd
Add InpaintModelConditioning node.
This is an alternative to VAE Encode for inpaint that should work with
lower denoise.
This is a different take on #2501
1 year ago
comfyanonymous
1a57423d30
Fix issue when using multiple t2i adapters with batched images.
1 year ago
comfyanonymous
6a7bc35db8
Use basic attention implementation for small inputs on old pytorch.
1 year ago
pythongosssss
235727fed7
Store user settings/data on the server and multi user support (#2160)
* wip per user data
* Rename, hide menu
* better error
rework default user
* store pretty
* Add userdata endpoints
Change nodetemplates to userdata
* add multi user message
* make normal arg
* Fix tests
* Ignore user dir
* user tests
* Changed to default to browser storage and add server-storage arg
* fix crash on empty templates
* fix settings added before load
* ignore parse errors
1 year ago
comfyanonymous
c6951548cf
Update optimized_attention_for_device function for new functions that support masked attention.
1 year ago
comfyanonymous
aaa9017302
Add attention mask support to sub quad attention.
1 year ago
comfyanonymous
0c2c9fbdfa
Support attention mask in split attention.
1 year ago
comfyanonymous
3ad0191bfb
Implement attention mask on xformers.
1 year ago
comfyanonymous
8c6493578b
Implement noise augmentation for SD 4X upscale model.
1 year ago
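Noise augmentation for an upscale model blends the conditioning image with Gaussian noise at a chosen level before it is fed to the model. A rough sketch of the idea using a variance-preserving mix (the exact schedule the SD 4X model uses is not shown here):

```python
import math
import random

def noise_augment(pixels, level, seed=0):
    """Blend values with Gaussian noise.

    level=0.0 leaves the input unchanged, level=1.0 is pure noise;
    the sqrt weighting keeps the mix variance-preserving.
    """
    rng = random.Random(seed)
    a = math.sqrt(max(1.0 - level * level, 0.0))
    return [a * p + level * rng.gauss(0.0, 1.0) for p in pixels]
```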
comfyanonymous
ef4f6037cb
Fix model patches not working in custom sampling scheduler nodes.
1 year ago
comfyanonymous
a7874d1a8b
Add support for the stable diffusion x4 upscaling model.
This is an old model.
Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.
1 year ago
comfyanonymous
2c4e92a98b
Fix regression.
1 year ago
comfyanonymous
5eddfdd80c
Refactor VAE code.
Replace constants with downscale_ratio and latent_channels.
1 year ago
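With the hardcoded constants replaced by downscale_ratio and latent_channels, the latent shape for an image follows directly from the config. A minimal sketch (the defaults shown are SD's usual 8x spatial downscale and 4 latent channels):

```python
def latent_shape(height, width, downscale_ratio=8, latent_channels=4):
    """Latent tensor shape (C, H, W) for a given image size."""
    return (latent_channels, height // downscale_ratio, width // downscale_ratio)
```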
comfyanonymous
a47f609f90
Auto detect out_channels from model.
1 year ago
comfyanonymous
79f73a4b33
Remove useless code.
1 year ago
comfyanonymous
1b103e0cb2
Add argument to run the VAE on the CPU.
1 year ago
comfyanonymous
12e822c6c8
Use function to calculate model size in model patcher.
1 year ago
comfyanonymous
e1e322cf69
Load weights that can't be lowvramed to target device.
1 year ago
comfyanonymous
c782144433
Fix clip vision lowvram mode not working.
1 year ago
comfyanonymous
f21bb41787
Fix taesd VAE in lowvram mode.
1 year ago
comfyanonymous
61b3f15f8f
Fix lowvram mode not working with unCLIP and Revision code.
1 year ago
comfyanonymous
d0165d819a
Fix SVD lowvram mode.
1 year ago
comfyanonymous
a252963f95
--disable-smart-memory now unloads everything like it did originally.
1 year ago
comfyanonymous
36a7953142
Greatly improve lowvram sampling speed by getting rid of accelerate.
Let me know if this breaks anything.
1 year ago
comfyanonymous
261bcbb0d9
A few missing comfy ops in the VAE.
1 year ago
comfyanonymous
9a7619b72d
Fix regression with inpaint model.
1 year ago
comfyanonymous
571ea8cdcc
Fix SAG not working with cfg 1.0
1 year ago
comfyanonymous
8cf1daa108
Fix SDXL area composition sometimes not using the right pooled output.
1 year ago
comfyanonymous
2258f85159
Support stable zero 123 model.
To use it use the ImageOnlyCheckpointLoader to load the checkpoint and
the new Stable_Zero123 node.
1 year ago
comfyanonymous
2f9d6a97ec
Add --deterministic option to make pytorch use deterministic algorithms.
1 year ago
comfyanonymous
e45d920ae3
Don't resize clip vision image when the size is already good.
1 year ago
comfyanonymous
13e6d5366e
Switch clip vision to manual cast.
Make it use the same dtype as the text encoder.
1 year ago
comfyanonymous
719fa0866f
Set clip vision model in eval mode so it works without inference mode.
1 year ago
Hari
574363a8a6
Implement Perp-Neg
1 year ago
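Perp-Neg's core operation removes, from the negative guidance direction, its projection onto the positive direction, keeping only the perpendicular component. A sketch on plain vectors (lists standing in for noise-prediction tensors):

```python
def perpendicular_component(neg, pos):
    """Component of `neg` perpendicular to `pos` (vectors as lists)."""
    dot = sum(a * b for a, b in zip(neg, pos))
    norm_sq = sum(a * a for a in pos)
    if norm_sq == 0.0:
        return list(neg)
    scale = dot / norm_sq
    return [n - scale * p for n, p in zip(neg, pos)]
```

The result is orthogonal to the positive direction, so subtracting it can suppress the negative concept without fighting the positive guidance.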
comfyanonymous
a5056cfb1f
Remove useless code.
1 year ago
comfyanonymous
329c571993
Improve code legibility.
1 year ago
comfyanonymous
6c5990f7db
Fix cfg being calculated more than once if sampler_cfg_function.
1 year ago
comfyanonymous
ba04a87d10
Refactor and improve the sag node.
Moved all the sag related code to comfy_extras/nodes_sag.py
1 year ago
Rafie Walker
6761233e9d
Implement Self-Attention Guidance (#2201)
* First SAG test
* need to put extra options on the model instead of patcher
* no errors and results seem not-broken
* Use @ashen-uncensored formula, which works better!!!
* Fix a crash when using weird resolutions. Remove an unnecessary UNet call
* Improve comments, optimize memory in blur routine
* SAG works with sampler_cfg_function
1 year ago
comfyanonymous
b454a67bb9
Support segmind vega model.
1 year ago
comfyanonymous
824e4935f5
Add dtype parameter to VAE object.
1 year ago
comfyanonymous
32b7e7e769
Add manual cast to controlnet.
1 year ago
comfyanonymous
3152023fbc
Use inference dtype for unet memory usage estimation.
1 year ago
comfyanonymous
77755ab8db
Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init
This should make it clearer what they actually do.
Some unused code has also been removed.
1 year ago
comfyanonymous
b0aab1e4ea
Add an option --fp16-unet to force using fp16 for the unet.
1 year ago
comfyanonymous
ba07cb748e
Use faster manual cast for fp8 in unet.
1 year ago
comfyanonymous
57926635e8
Switch text encoder to manual cast.
Use fp16 text encoder weights for CPU inference to lower memory usage.
1 year ago
comfyanonymous
340177e6e8
Disable non blocking on mps.
1 year ago
comfyanonymous
614b7e731f
Implement GLora.
1 year ago
comfyanonymous
cb63e230b4
Make lora code a bit cleaner.
1 year ago
comfyanonymous
174eba8e95
Use own clip vision model implementation.
1 year ago
comfyanonymous
97015b6b38
Cleanup.
1 year ago
comfyanonymous
a4ec54a40d
Add linear_start and linear_end to model_config.sampling_settings
1 year ago
comfyanonymous
9ac0b487ac
Make --gpu-only put intermediate values in GPU memory instead of cpu.
1 year ago
comfyanonymous
efb704c758
Support attention masking in CLIP implementation.
1 year ago
comfyanonymous
fbdb14d4c4
Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to
implement using the transformers implementation.
1 year ago
comfyanonymous
2db86b4676
Slightly faster lora applying.
1 year ago
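Applying a LoRA folds the low-rank product into the base weight, W += alpha * (up @ down); doing this as a single in-place update per weight is what keeps it fast. A tensor-free sketch with nested lists standing in for matrices:

```python
def apply_lora(weight, up, down, alpha=1.0):
    """In-place W += alpha * (up @ down) with nested-list matrices.

    up is (out, rank), down is (rank, in), weight is (out, in).
    """
    rank = len(down)
    for i in range(len(weight)):
        for j in range(len(weight[0])):
            delta = sum(up[i][r] * down[r][j] for r in range(rank))
            weight[i][j] += alpha * delta
    return weight
```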
comfyanonymous
1bbd65ab30
Missed this one.
1 year ago
comfyanonymous
9b655d4fd7
Fix memory issue with control loras.
1 year ago
comfyanonymous
26b1c0a771
Fix control lora on fp8.
1 year ago
comfyanonymous
be3468ddd5
Less useless downcasting.
1 year ago