827 Commits (0eaa34ec5b22fd305159a78ea92b2ef00105ab18)

Author SHA1 Message Date
comfyanonymous 00425563c0 Cleanup: Use sampling noise scaling function for inpainting. 12 months ago
comfyanonymous c62e836167 Move noise scaling to object with sampling math. 12 months ago
comfyanonymous cb7c3a2921 Allow image_only_indicator to be None. 1 year ago
comfyanonymous b3e97fc714 Koala 700M and 1B support.
Use the UNET Loader node to load the unet file to use them.
1 year ago
comfyanonymous 37a86e4618 Remove duplicate text_projection key from some saved models. 1 year ago
comfyanonymous 8daedc5bf2 Auto detect playground v2.5 model. 1 year ago
comfyanonymous d46583ecec Playground V2.5 support with ModelSamplingContinuousEDM node.
Use ModelSamplingContinuousEDM with edm_playground_v2.5 selected.
1 year ago
comfyanonymous 1e0fcc9a65 Make XL checkpoints save in a more standard format. 1 year ago
comfyanonymous b416be7d78 Make the text projection saved in the checkpoint the right format. 1 year ago
comfyanonymous 03c47fc0f2 Add a min_length property to tokenizer class. 1 year ago
comfyanonymous 8ac69f62e5 Make return_projected_pooled settable from the __init__. 1 year ago
comfyanonymous ca7c310a0e Support loading old CLIP models saved with CLIPSave. 1 year ago
comfyanonymous c2cb8e889b Always return unprojected pooled output for gligen. 1 year ago
comfyanonymous 1cb3f6a83b Move text projection into the CLIP model code.
Fix issue with not loading the SSD1B clip correctly.
1 year ago
comfyanonymous 6533b172c1 Support text encoder text_projection in lora. 1 year ago
comfyanonymous 1e5f0f66be Support lora keys with lora_prior_unet_ and lora_prior_te_ 1 year ago
logtd e1cb93c383 Fix model and cond transformer options merge 1 year ago
comfyanonymous 10847dfafe Cleanup uni_pc inpainting.
This causes some small changes to the uni pc inpainting behavior but it
seems to improve results slightly.
1 year ago
comfyanonymous 18c151b3e3 Add some latent2rgb matrices for previews. 1 year ago
comfyanonymous 0d0fbabd1d Pass pooled CLIP to stage b. 1 year ago
comfyanonymous c6b7a157ed Align simple scheduling closer to official stable cascade scheduler. 1 year ago
comfyanonymous 88f300401c Enable fp16 by default on mps. 1 year ago
comfyanonymous e93cdd0ad0 Remove print. 1 year ago
comfyanonymous 3711b31dff Support Stable Cascade in checkpoint format. 1 year ago
comfyanonymous d91f45ef28 Some cleanups to how the text encoders are loaded. 1 year ago
comfyanonymous a7b5eaa7e3 Forgot to commit this. 1 year ago
comfyanonymous 3b2e579926 Support loading the Stable Cascade effnet and previewer as a VAE.
The effnet can be used to encode images for img2img with Stage C.
1 year ago
comfyanonymous dccca1daa5 Fix gligen lowvram mode. 1 year ago
comfyanonymous 8b60d33bb7 Add ModelSamplingStableCascade to control the shift sampling parameter.
shift is 2.0 by default on Stage C and 1.0 by default on Stage B.
1 year ago
comfyanonymous 6bcf57ff10 Fix attention masks properly for multiple batches. 1 year ago
comfyanonymous 11e3221f1f fp8 weight support for Stable Cascade. 1 year ago
comfyanonymous f8706546f3 Fix attention mask batch size in some attention functions. 1 year ago
comfyanonymous 3b9969c1c5 Properly fix attention masks in CLIP with batches. 1 year ago
comfyanonymous 5b40e7a5ed Implement shift schedule for cascade stage C. 1 year ago
comfyanonymous 929e266f3e Manual cast for bf16 on older GPUs. 1 year ago
comfyanonymous 6c875d846b Fix clip attention mask issues on some hardware. 1 year ago
comfyanonymous 805c36ac9c Make Stable Cascade work on old pytorch 2.0 1 year ago
comfyanonymous f2d1d16f4f Support Stable Cascade Stage B lite. 1 year ago
comfyanonymous 0b3c50480c Make --force-fp32 disable loading models in bf16. 1 year ago
comfyanonymous 97d03ae04a StableCascade CLIP model support. 1 year ago
comfyanonymous 667c92814e Stable Cascade Stage B. 1 year ago
comfyanonymous f83109f09b Stable Cascade Stage C. 1 year ago
comfyanonymous 5e06baf112 Stable Cascade Stage A. 1 year ago
comfyanonymous aeaeca10bd Small refactor of is_device_* functions. 1 year ago
comfyanonymous 38b7ac6e26 Don't init the CLIP model when the checkpoint has no CLIP weights. 1 year ago
comfyanonymous 7dd352cbd7 Merge branch 'feature_expose_discard_penultimate_sigma' of https://github.com/blepping/ComfyUI 1 year ago
Jedrzej Kosinski f44225fd5f Fix infinite while loop being possible in ddim_scheduler 1 year ago
comfyanonymous 25a4805e51 Add a way to set different conditioning for the controlnet. 1 year ago
blepping a352c021ec Allow custom samplers to request discard penultimate sigma 1 year ago
comfyanonymous c661a8b118 Don't use numpy for calculating sigmas. 1 year ago
comfyanonymous 236bda2683 Make minimum tile size the size of the overlap. 1 year ago
comfyanonymous 66e28ef45c Don't use is_bf16_supported to check for fp16 support. 1 year ago
comfyanonymous 24129d78e6 Speed up SDXL on 16xx series with fp16 weights and manual cast. 1 year ago
comfyanonymous 4b0239066d Always use fp16 for the text encoders. 1 year ago
comfyanonymous da7a8df0d2 Put VAE key name in model config. 1 year ago
comfyanonymous 89507f8adf Remove some unused imports. 1 year ago
Dr.Lt.Data 05cd00695a typo fix - calculate_sigmas_scheduler (#2619)
self.scheduler -> scheduler_name

Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>
1 year ago
comfyanonymous 4871a36458 Cleanup some unused imports. 1 year ago
comfyanonymous 78a70fda87 Remove useless import. 1 year ago
comfyanonymous d76a04b6ea Add unfinished ImageOnlyCheckpointSave node to save a SVD checkpoint.
This node is unfinished; SVD checkpoints saved with it will
work with ComfyUI but not with anything else.
1 year ago
comfyanonymous f9e55d8463 Only auto enable bf16 VAE on nvidia GPUs that actually support it. 1 year ago
comfyanonymous 2395ae740a Make unclip more deterministic.
Pass a seed argument; note that this might make old unclip images different.
1 year ago
comfyanonymous 53c8a99e6c Make server storage the default.
Remove --server-storage argument.
1 year ago
comfyanonymous 977eda19a6 Don't round noise mask. 1 year ago
comfyanonymous 10f2609fdd Add InpaintModelConditioning node.
This is an alternative to VAE Encode for inpaint that should work with
lower denoise.

This is a different take on #2501
1 year ago
comfyanonymous 1a57423d30 Fix issue when using multiple t2i adapters with batched images. 1 year ago
comfyanonymous 6a7bc35db8 Use basic attention implementation for small inputs on old pytorch. 1 year ago
pythongosssss 235727fed7 Store user settings/data on the server and multi user support (#2160)
* wip per user data

* Rename, hide menu

* better error
rework default user

* store pretty

* Add userdata endpoints
Change nodetemplates to userdata

* add multi user message

* make normal arg

* Fix tests

* Ignore user dir

* user tests

* Changed to default to browser storage and add server-storage arg

* fix crash on empty templates

* fix settings added before load

* ignore parse errors
1 year ago
comfyanonymous c6951548cf Update optimized_attention_for_device function for new functions that
support masked attention.
1 year ago
comfyanonymous aaa9017302 Add attention mask support to sub quad attention. 1 year ago
comfyanonymous 0c2c9fbdfa Support attention mask in split attention. 1 year ago
comfyanonymous 3ad0191bfb Implement attention mask on xformers. 1 year ago
comfyanonymous 8c6493578b Implement noise augmentation for SD 4X upscale model. 1 year ago
comfyanonymous ef4f6037cb Fix model patches not working in custom sampling scheduler nodes. 1 year ago
comfyanonymous a7874d1a8b Add support for the stable diffusion x4 upscaling model.
This is an old model.

Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.
1 year ago
comfyanonymous 2c4e92a98b Fix regression. 1 year ago
comfyanonymous 5eddfdd80c Refactor VAE code.
Replace constants with downscale_ratio and latent_channels.
1 year ago
comfyanonymous a47f609f90 Auto detect out_channels from model. 1 year ago
comfyanonymous 79f73a4b33 Remove useless code. 1 year ago
comfyanonymous 1b103e0cb2 Add argument to run the VAE on the CPU. 1 year ago
comfyanonymous 12e822c6c8 Use function to calculate model size in model patcher. 1 year ago
comfyanonymous e1e322cf69 Load weights that can't be lowvramed to target device. 1 year ago
comfyanonymous c782144433 Fix clip vision lowvram mode not working. 1 year ago
comfyanonymous f21bb41787 Fix taesd VAE in lowvram mode. 1 year ago
comfyanonymous 61b3f15f8f Fix lowvram mode not working with unCLIP and Revision code. 1 year ago
comfyanonymous d0165d819a Fix SVD lowvram mode. 1 year ago
comfyanonymous a252963f95 --disable-smart-memory now unloads everything like it did originally. 1 year ago
comfyanonymous 36a7953142 Greatly improve lowvram sampling speed by getting rid of accelerate.
Let me know if this breaks anything.
1 year ago
comfyanonymous 261bcbb0d9 A few missing comfy ops in the VAE. 1 year ago
comfyanonymous 9a7619b72d Fix regression with inpaint model. 1 year ago
comfyanonymous 571ea8cdcc Fix SAG not working with cfg 1.0 1 year ago
comfyanonymous 8cf1daa108 Fix SDXL area composition sometimes not using the right pooled output. 1 year ago
comfyanonymous 2258f85159 Support stable zero 123 model.
To use it use the ImageOnlyCheckpointLoader to load the checkpoint and
the new Stable_Zero123 node.
1 year ago
comfyanonymous 2f9d6a97ec Add --deterministic option to make pytorch use deterministic algorithms. 1 year ago
comfyanonymous e45d920ae3 Don't resize clip vision image when the size is already good. 1 year ago
comfyanonymous 13e6d5366e Switch clip vision to manual cast.
Make it use the same dtype as the text encoder.
1 year ago
comfyanonymous 719fa0866f Set clip vision model in eval mode so it works without inference mode. 1 year ago
Hari 574363a8a6 Implement Perp-Neg 1 year ago
comfyanonymous a5056cfb1f Remove useless code. 1 year ago
comfyanonymous 329c571993 Improve code legibility. 1 year ago
comfyanonymous 6c5990f7db Fix cfg being calculated more than once if sampler_cfg_function. 1 year ago
comfyanonymous ba04a87d10 Refactor and improve the sag node.
Moved all the sag related code to comfy_extras/nodes_sag.py
1 year ago
Rafie Walker 6761233e9d Implement Self-Attention Guidance (#2201)
* First SAG test

* need to put extra options on the model instead of patcher

* no errors and results seem not-broken

* Use @ashen-uncensored formula, which works better!!!

* Fix a crash when using weird resolutions. Remove an unnecessary UNet call

* Improve comments, optimize memory in blur routine

* SAG works with sampler_cfg_function
1 year ago
comfyanonymous b454a67bb9 Support segmind vega model. 1 year ago
comfyanonymous 824e4935f5 Add dtype parameter to VAE object. 1 year ago
comfyanonymous 32b7e7e769 Add manual cast to controlnet. 1 year ago
comfyanonymous 3152023fbc Use inference dtype for unet memory usage estimation. 1 year ago
comfyanonymous 77755ab8db Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init

This should make it more clear what they actually do.

Some unused code has also been removed.
1 year ago
comfyanonymous b0aab1e4ea Add an option --fp16-unet to force using fp16 for the unet. 1 year ago
comfyanonymous ba07cb748e Use faster manual cast for fp8 in unet. 1 year ago
comfyanonymous 57926635e8 Switch text encoder to manual cast.
Use fp16 text encoder weights for CPU inference to lower memory usage.
1 year ago
comfyanonymous 340177e6e8 Disable non blocking on mps. 1 year ago
comfyanonymous 614b7e731f Implement GLora. 1 year ago
comfyanonymous cb63e230b4 Make lora code a bit cleaner. 1 year ago
comfyanonymous 174eba8e95 Use own clip vision model implementation. 1 year ago
comfyanonymous 97015b6b38 Cleanup. 1 year ago
comfyanonymous a4ec54a40d Add linear_start and linear_end to model_config.sampling_settings 1 year ago
comfyanonymous 9ac0b487ac Make --gpu-only put intermediate values in GPU memory instead of cpu. 1 year ago
comfyanonymous efb704c758 Support attention masking in CLIP implementation. 1 year ago
comfyanonymous fbdb14d4c4 Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.

This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
1 year ago
comfyanonymous 2db86b4676 Slightly faster lora applying. 1 year ago
comfyanonymous 1bbd65ab30 Missed this one. 1 year ago
comfyanonymous 9b655d4fd7 Fix memory issue with control loras. 1 year ago
comfyanonymous 26b1c0a771 Fix control lora on fp8. 1 year ago
comfyanonymous be3468ddd5 Less useless downcasting. 1 year ago
comfyanonymous ca82ade765 Use .itemsize to get dtype size for fp8. 1 year ago
comfyanonymous 31b0f6f3d8 UNET weights can now be stored in fp8.
--fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats
supported by pytorch.
1 year ago
comfyanonymous af365e4dd1 All the unet ops with weights are now handled by comfy.ops 1 year ago
comfyanonymous 61a123a1e0 A different way of handling multiple images passed to SVD.
Previously, when a list of 3 images [0, 1, 2] was used for a 6 frame video
they were concatenated like this:
[0, 1, 2, 0, 1, 2]

now they are concatenated like this:
[0, 0, 1, 1, 2, 2]
1 year ago
comfyanonymous c97be4db91 Support SD2.1 turbo checkpoint. 1 year ago
comfyanonymous 983ebc5792 Use smart model management for VAE to decrease latency. 1 year ago
comfyanonymous c45d1b9b67 Add a function to load a unet from a state dict. 1 year ago
comfyanonymous f30b992b18 .sigma and .timestep now return tensors on the same device as the input. 1 year ago
comfyanonymous 13fdee6abf Try to free memory for both cond+uncond before inference. 1 year ago
comfyanonymous be71bb5e13 Tweak memory inference calculations a bit. 1 year ago
comfyanonymous 39e75862b2 Fix regression from last commit. 1 year ago
comfyanonymous 50dc39d6ec Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
1 year ago
comfyanonymous 5d6dfce548 Fix importing diffusers unets. 1 year ago
comfyanonymous 3e5ea74ad3 Make buggy xformers fall back on pytorch attention. 1 year ago
comfyanonymous 871cc20e13 Support SVD img2vid model. 1 year ago
comfyanonymous 410bf07771 Make VAE memory estimation take dtype into account. 1 year ago
comfyanonymous 32447f0c39 Add sampling_settings so models can specify specific sampling settings. 1 year ago
comfyanonymous c3ae99a749 Allow controlling downscale and upscale methods in PatchModelAddDownscale. 1 year ago
comfyanonymous 72741105a6 Remove useless code. 1 year ago
comfyanonymous 6a491ebe27 Allow model config to preprocess the vae state dict on load. 1 year ago
comfyanonymous cd4fc77d5f Add taesd and taesdxl to VAELoader node.
They will show up if both the taesd_encoder and taesd_decoder or taesdxl
model files are present in the models/vae_approx directory.
1 year ago
comfyanonymous ce67dcbcda Make it easy for models to process the unet state dict on load. 1 year ago
comfyanonymous d9d8702d8d percent_to_sigma now returns a float instead of a tensor. 1 year ago
comfyanonymous 0cf4e86939 Add some command line arguments to store text encoder weights in fp8.
Pytorch supports two variants of fp8:
--fp8_e4m3fn-text-enc (the one that seems to give better results)
--fp8_e5m2-text-enc
1 year ago
comfyanonymous 107e78b1cb Add support for loading SSD1B diffusers unet version.
Improve diffusers model detection.
1 year ago