552 Commits (0a6fd49a3ef730741fc5f43ca89f3fadd3401129)

Author SHA1 Message Date
comfyanonymous 0a6fd49a3e Print leftover keys when using the UNETLoader. 1 year ago
comfyanonymous fe40109b57 Fix issue with object patches not being copied with patcher. 1 year ago
comfyanonymous a527d0c795 Code refactor. 1 year ago
comfyanonymous 2a23ba0b8c Fix unet ops not entirely on GPU. 1 year ago
comfyanonymous 844dbf97a7 Add: advanced->model->ModelSamplingDiscrete node.
This allows changing the sampling parameters of the model (eps or vpred)
or set the model to use zsnr.
1 year ago
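The eps and vpred parameterizations named here differ in what the network predicts; a sketch of the standard conversions back to the clean-image estimate x0, assuming the variance-preserving identity alpha_t^2 + sigma_t^2 = 1 and using scalars as stand-ins for latent tensors:

```python
def x0_from_eps(x_t, eps, alpha_t, sigma_t):
    # eps-parameterization: the model predicts the noise, so from
    # x_t = alpha_t * x0 + sigma_t * eps  we get
    # x0 = (x_t - sigma_t * eps) / alpha_t
    return (x_t - sigma_t * eps) / alpha_t

def x0_from_v(x_t, v, alpha_t, sigma_t):
    # v-parameterization: the model predicts v = alpha_t * eps - sigma_t * x0,
    # which (given alpha_t^2 + sigma_t^2 = 1) gives
    # x0 = alpha_t * x_t - sigma_t * v
    return alpha_t * x_t - sigma_t * v
```

Both recover the same x0 from the same noisy x_t; only the quantity the network outputs changes.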
comfyanonymous 656c0b5d90 CLIP code refactor and improvements.
More generic clip model class that can be used on more types of text
encoders.

Don't apply weighting algorithm when weight is 1.0

Don't compute an empty token output when it's not needed.
1 year ago
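One common weighting scheme — a hypothetical sketch, not necessarily the exact algorithm this refactor touches — interpolates a token's embedding toward the embedding produced by an empty prompt, which makes both optimizations above natural: weight 1.0 is an exact no-op, and the empty-prompt encode is only needed when some weight differs from 1.0.

```python
def weight_token(emb, empty_emb, weight):
    # Interpolate between the empty-prompt embedding and the token embedding.
    # (Scalars stand in for embedding vectors; names are illustrative.)
    if weight == 1.0:
        return emb  # exact no-op: skip the weighting math and the empty encode
    return empty_emb + weight * (emb - empty_emb)
```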
comfyanonymous b3fcd64c6c Make SDTokenizer class work with more types of tokenizers. 1 year ago
gameltb 7e455adc07 Fix unet_wrapper_function name in ModelPatcher. 1 year ago
comfyanonymous 1ffa8858e7 Move model sampling code to comfy/model_sampling.py 1 year ago
comfyanonymous ae2acfc21b Don't convert NaN to zero.
Converting NaN to zero is a bad idea because it makes it hard to tell when
something went wrong.
1 year ago
comfyanonymous d2e27b48f1 sampler_cfg_function now gets the noisy output as argument again.
This should make things that use sampler_cfg_function behave like before.

Added an input argument for those that want the denoised output.

This means you can calculate the x0 prediction of the model by doing:
(input - cond) for example.
1 year ago
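A custom CFG function of the kind described here might look like the following sketch. The args-dict key names ("cond", "uncond", "cond_scale") are assumptions inferred from this message, not taken from the actual API:

```python
def my_cfg_function(args):
    # Standard classifier-free guidance computed on the model outputs,
    # which per this commit are the noisy outputs again.
    cond = args["cond"]          # conditioned model output
    uncond = args["uncond"]      # unconditioned model output
    scale = args["cond_scale"]   # guidance scale
    return uncond + scale * (cond - uncond)
```

Per the commit, the arguments would also carry the noisy input (so something like input - cond recovers an x0-style prediction) and, optionally, the denoised output for those who want it.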
comfyanonymous 2455aaed8a Allow model or clip to be None in load_lora_for_models. 1 year ago
comfyanonymous ecb80abb58 Allow ModelSamplingDiscrete to be instantiated without a model config. 1 year ago
comfyanonymous e73ec8c4da Not used anymore. 1 year ago
comfyanonymous 111f1b5255 Fix some issues with sampling precision. 1 year ago
comfyanonymous 7c0f255de1 Clean up percent start/end and make controlnets work with sigmas. 1 year ago
comfyanonymous a268a574fa Remove a bunch of useless code.
DDIM is the same as euler with a small difference in the inpaint code.
DDIM uses randn_like but I set a fixed seed instead.

I'm keeping it in because I'm sure if I remove it people are going to
complain.
1 year ago
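The Euler step that DDIM is equivalent to — in the k-diffusion-style sigma parameterization, sketched here with scalars standing in for latents — is:

```python
def euler_step(x, denoised, sigma, sigma_next):
    # Derivative pointing from the current sample toward the model's
    # denoised prediction at the current noise level.
    d = (x - denoised) / sigma
    # Step from sigma to sigma_next along that derivative.
    return x + (sigma_next - sigma) * d
```

A full step to sigma 0 lands exactly on the denoised prediction, which is the DDIM (eta = 0) behavior.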
comfyanonymous 1777b54d02 Sampling code changes.
apply_model in model_base now returns the denoised output.

This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
1 year ago
comfyanonymous c837a173fa Fix some memory issues in sub quad attention. 1 year ago
comfyanonymous 125b03eead Fix some OOM issues with split attention. 1 year ago
comfyanonymous a12cc05323 Add --max-upload-size argument, the default is 100MB. 1 year ago
comfyanonymous 2a134bfab9 Fix checkpoint loader with config. 1 year ago
comfyanonymous e60ca6929a SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one. 1 year ago
comfyanonymous 6ec3f12c6e Support SSD1B model and make it easier to support asymmetric unets. 1 year ago
comfyanonymous 434ce25ec0 Restrict loading embeddings from embedding folders. 1 year ago
comfyanonymous 723847f6b3 Faster clip image processing. 1 year ago
comfyanonymous a373367b0c Fix some OOM issues with split and sub quad attention. 1 year ago
comfyanonymous 7fbb217d3a Fix uni_pc returning noisy image when steps <= 3. 1 year ago
Jedrzej Kosinski 3783cb8bfd Change 'c_adm' to 'y' in ControlNet.get_control. 1 year ago
comfyanonymous d1d2fea806 Pass extra conds directly to unet. 1 year ago
comfyanonymous 036f88c621 Refactor to make it easier to add custom conds to models. 1 year ago
comfyanonymous 3fce8881ca Sampling code refactor to make it easier to add more conds. 1 year ago
comfyanonymous 8594c8be4d Empty the cache when torch cache is more than 25% free mem. 1 year ago
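The 25% heuristic could be sketched as a pure decision function (hypothetical names; in practice the numbers would come from torch.cuda.memory_reserved() and torch.cuda.memory_allocated(), and the release would be torch.cuda.empty_cache()):

```python
def should_empty_cache(reserved_bytes, allocated_bytes, threshold=0.25):
    # Release cached allocator blocks when the unused ("free") share of the
    # reserved pool exceeds the threshold (25% per this commit).
    if reserved_bytes <= 0:
        return False
    free_cached = reserved_bytes - allocated_bytes
    return free_cached / reserved_bytes > threshold
```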
comfyanonymous 8b65f5de54 attention_basic now works with hypertile. 1 year ago
comfyanonymous e6bc42df46 Make sub_quad and split work with hypertile. 1 year ago
comfyanonymous a0690f9df9 Fix t2i adapter issue. 1 year ago
comfyanonymous 9906e3efe3 Make xformers work with hypertile. 1 year ago
comfyanonymous 4185324a1d Fix uni_pc sampler math. This changes the images this sampler produces. 1 year ago
comfyanonymous e6962120c6 Make sure cond_concat is on the right device. 1 year ago
comfyanonymous 45c972aba8 Refactor cond_concat into conditioning. 1 year ago
comfyanonymous 430a8334c5 Fix some potential issues. 1 year ago
comfyanonymous 782a24fce6 Refactor cond_concat into model object. 1 year ago
comfyanonymous 0d45a565da Fix memory issue related to control loras.
The cleanup function was not getting called.
1 year ago
comfyanonymous d44a2de49f Make VAE code closer to sgm. 1 year ago
comfyanonymous 23680a9155 Refactor the attention stuff in the VAE. 1 year ago
comfyanonymous c8013f73e5 Add some Quadro cards to the list of cards with broken fp16. 1 year ago
comfyanonymous bb064c9796 Add a separate optimized_attention_masked function. 1 year ago
comfyanonymous fd4c5f07e7 Add a --bf16-unet to test running the unet in bf16. 1 year ago
comfyanonymous 9a55dadb4c Refactor code so model can be a dtype other than fp32 or fp16. 1 year ago
comfyanonymous 88733c997f pytorch_attention_enabled can now return True when xformers is enabled. 1 year ago