170 Commits (ffe0bb0a33a8a8d9a999dafb540558646127d443)

Author SHA1 Message Date
comfyanonymous 77755ab8db Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init

This should make it clearer what they actually do.

Some unused code has also been removed.
1 year ago
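The disable_weight_init idea can be sketched generically (a hypothetical stand-in, not the actual comfy.ops code): subclass the layer and make reset_parameters a no-op, since loading a checkpoint overwrites the weights anyway.

```python
# Hypothetical sketch of the disable_weight_init pattern. SlowLinear is a
# stand-in for torch.nn.Linear, whose __init__ calls reset_parameters().
class SlowLinear:
    def __init__(self, n):
        self.n = n
        self.reset_parameters()

    def reset_parameters(self):
        # In real torch this is a costly random initialization.
        self.weight = [0.0] * self.n


class Linear(SlowLinear):
    def reset_parameters(self):
        # Skip the init entirely: the checkpoint loader fills the weights.
        self.weight = None
```

The subclass keeps the layer's interface intact, so it can be swapped in wherever the base layer is constructed.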
comfyanonymous fbdb14d4c4 Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.

This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
1 year ago
comfyanonymous 1bbd65ab30 Missed this one. 1 year ago
comfyanonymous 31b0f6f3d8 UNET weights can now be stored in fp8.
--fp8_e4m3fn-unet and --fp8_e5m2-unet select the two fp8 formats
supported by pytorch.
1 year ago
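The two formats differ only in how the 8 bits are split between exponent and mantissa: e4m3fn uses 4 exponent / 3 mantissa bits (bias 7), e5m2 uses 5 / 2 (bias 15). A minimal decoder, ignoring the NaN/inf special cases, illustrates the layouts (illustrative only, not ComfyUI or pytorch code):

```python
def decode_fp8(byte, exp_bits, man_bits, bias):
    # Decode one sign bit, exp_bits exponent bits, man_bits mantissa bits.
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man = byte & ((1 << man_bits) - 1)
    if exp == 0:  # subnormal: no implicit leading 1
        return sign * (man / (1 << man_bits)) * 2.0 ** (1 - bias)
    return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

def e4m3fn(byte):
    return decode_fp8(byte, exp_bits=4, man_bits=3, bias=7)

def e5m2(byte):
    return decode_fp8(byte, exp_bits=5, man_bits=2, bias=15)
```

With only 2 or 3 mantissa bits, fp8 halves the storage of fp16 at a substantial precision cost, which is why it is offered for weight storage rather than computation.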
comfyanonymous af365e4dd1 All the unet ops with weights are now handled by comfy.ops 1 year ago
comfyanonymous 39e75862b2 Fix regression from last commit. 1 year ago
comfyanonymous 50dc39d6ec Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
1 year ago
comfyanonymous 3e5ea74ad3 Make buggy xformers fall back on pytorch attention. 1 year ago
comfyanonymous 871cc20e13 Support SVD img2vid model. 1 year ago
comfyanonymous 72741105a6 Remove useless code. 1 year ago
comfyanonymous 7e3fe3ad28 Make deep shrink behave like it should. 1 year ago
comfyanonymous 7ea6bb038c Print warning when controlnet can't be applied instead of crashing. 1 year ago
comfyanonymous 94cc718e9c Add a way to add patches to the input block. 1 year ago
comfyanonymous 794dd2064d Fix typo. 1 year ago
comfyanonymous a527d0c795 Code refactor. 1 year ago
comfyanonymous 2a23ba0b8c Fix unet ops not entirely on GPU. 1 year ago
comfyanonymous a268a574fa Remove a bunch of useless code.
DDIM is the same as euler with a small difference in the inpaint code.
DDIM uses randn_like but I set a fixed seed instead.

I'm keeping it in because I'm sure if I remove it people are going to
complain.
1 year ago
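For context, a single Euler step in the sigma parameterization these samplers use looks like this (a minimal sketch, not the repository's sampling code); the commit's point is that deterministic DDIM performs the same update, so the two only diverge in the inpaint path:

```python
def euler_step(x, denoised, sigma, sigma_next):
    """One Euler step of the diffusion ODE in sigma space.

    x: current noisy latent, denoised: the model's denoised prediction,
    sigma / sigma_next: current and next noise levels.
    """
    d = (x - denoised) / sigma           # derivative estimate dx/dsigma
    return x + (sigma_next - sigma) * d
```

At sigma_next = 0 the step lands exactly on the denoised prediction, as expected for the final sampling step.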
comfyanonymous c837a173fa Fix some memory issues in sub quad attention. 1 year ago
comfyanonymous 125b03eead Fix some OOM issues with split attention. 1 year ago
comfyanonymous 6ec3f12c6e Support SSD1B model and make it easier to support asymmetric unets. 1 year ago
comfyanonymous a373367b0c Fix some OOM issues with split and sub quad attention. 1 year ago
comfyanonymous 8b65f5de54 attention_basic now works with hypertile. 1 year ago
comfyanonymous e6bc42df46 Make sub_quad and split work with hypertile. 1 year ago
comfyanonymous 9906e3efe3 Make xformers work with hypertile. 1 year ago
comfyanonymous d44a2de49f Make VAE code closer to sgm. 1 year ago
comfyanonymous 23680a9155 Refactor the attention stuff in the VAE. 1 year ago
comfyanonymous bb064c9796 Add a separate optimized_attention_masked function. 1 year ago
comfyanonymous 9a55dadb4c Refactor code so model can be a dtype other than fp32 or fp16. 1 year ago
comfyanonymous 88733c997f pytorch_attention_enabled can now return True when xformers is enabled. 1 year ago
comfyanonymous ac7d8cfa87 Allow attn_mask in attention_pytorch. 1 year ago
comfyanonymous 1a4bd9e9a6 Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
1 year ago
comfyanonymous fff491b032 Model patches can now know which batch is positive and negative. 1 year ago
comfyanonymous 446caf711c Sampling code refactor. 1 year ago
comfyanonymous afa2399f79 Add a way to set output block patches to modify the h and hsp. 1 year ago
comfyanonymous 94e4fe39d8 This isn't used anywhere. 1 year ago
comfyanonymous 1938f5c5fe Add a force argument to soft_empty_cache to force a cache empty. 1 year ago
Simon Lui 2da73b7073 Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused. 2 years ago
Simon Lui 4a0c4ce4ef Some fixes to generalize CUDA specific functionality to Intel or other GPUs. 2 years ago
comfyanonymous 0e3b641172 Remove xformers related print. 2 years ago
comfyanonymous bed116a1f9 Remove optimization that caused border. 2 years ago
comfyanonymous 1c794a2161 Fallback to slice attention if xformers doesn't support the operation. 2 years ago
comfyanonymous d935ba50c4 Make --bf16-vae work on torch 2.0 2 years ago
comfyanonymous cf5ae46928 Controlnet/t2iadapter cleanup. 2 years ago
comfyanonymous b80c3276dc Fix issue with gligen. 2 years ago
comfyanonymous d6e4b342e6 Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.

This allows a much smaller memory footprint depending on the rank of the
matrices.

These controlnets are used just like regular ones.
2 years ago
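The reconstruction the commit describes can be sketched with plain lists (illustrative only; the real code uses torch tensors): the stored up (m×r) and down (r×n) matrices are multiplied and added to the unet weight to recover the controlnet weight, so storage is r·(m+n) values instead of m·n.

```python
def matmul(a, b):
    # Naive matrix multiply over nested lists.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def control_weight(unet_w, up, down):
    # controlnet weight = unet weight + up @ down (the low-rank delta).
    delta = matmul(up, down)
    return [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(unet_w, delta)]
```

For a 2×2 weight with rank 1, only 4 delta values are stored instead of 4 full ones here, but for large layers the saving scales with the rank.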
comfyanonymous 2b13939044 Remove some useless code. 2 years ago
comfyanonymous 95d796fc85 Faster VAE loading. 2 years ago
comfyanonymous 4b957a0010 Initialize the unet directly on the target device. 2 years ago
comfyanonymous 9ba440995a It's actually possible to torch.compile the unet now. 2 years ago
comfyanonymous 3ded1a3a04 Refactor of sampler code to deal more easily with different model types. 2 years ago
comfyanonymous ddc6f12ad5 Disable autocast in unet for increased speed. 2 years ago
comfyanonymous 103c487a89 Cleanup. 2 years ago
comfyanonymous c71a7e6b20 Fix ddim + inpainting not working. 2 years ago
comfyanonymous 78d8035f73 Fix bug with controlnet. 2 years ago
comfyanonymous 05676942b7 Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2 years ago
comfyanonymous fa28d7334b Remove useless code. 2 years ago
comfyanonymous f87ec10a97 Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2 years ago
comfyanonymous 9fccf4aa03 Add original_shape parameter to transformer patch extra_options. 2 years ago
comfyanonymous 8883cb0f67 Add a way to set patches that modify the attn2 output.
Change the transformer patches function format to be more future proof.
2 years ago
comfyanonymous 45be2e92c1 Fix DDIM v-prediction. 2 years ago
comfyanonymous ae43f09ef7 All the unet weights should now be initialized with the right dtype. 2 years ago
comfyanonymous 7bf89ba923 Initialize more unet weights as the right dtype. 2 years ago
comfyanonymous e21d9ad445 Initialize transformer unet block weights in right dtype at the start. 2 years ago
comfyanonymous 21f04fe632 Disable default weight values in unet conv2d for faster loading. 2 years ago
comfyanonymous 9d54066ebc This isn't needed for inference. 2 years ago
comfyanonymous 6971646b8b Speed up model loading a bit.
Default pytorch Linear initializes the weights, which is useless and slow.
2 years ago
comfyanonymous 274dff3257 Remove more useless files. 2 years ago
comfyanonymous f0a2b81cd0 Cleanup: Remove a bunch of useless files. 2 years ago
comfyanonymous b8636a44aa Make scaled_dot_product switch to sliced attention on OOM. 2 years ago
comfyanonymous 797c4e8d3b Simplify and improve some vae attention code. 2 years ago
BlenderNeko d9e088ddfd minor changes for tiled sampler 2 years ago
comfyanonymous cb1551b819 Lowvram mode for gligen and fix some lowvram issues. 2 years ago
comfyanonymous bae4fb4a9d Fix imports. 2 years ago
comfyanonymous ba8a4c3667 Change latent resolution step to 8. 2 years ago
comfyanonymous 66c8aa5c3e Make unet work with any input shape. 2 years ago
comfyanonymous d3293c8339 Properly disable all progress bars when disable_pbar=True 2 years ago
comfyanonymous 5282f56434 Implement Linear hypernetworks.
Add a HypernetworkLoader node to use hypernetworks.
2 years ago
comfyanonymous 6908f9c949 This makes pytorch2.0 attention perform a bit faster. 2 years ago
comfyanonymous 3696d1699a Add support for GLIGEN textbox model. 2 years ago
comfyanonymous 73c3e11e83 Fix model_management import so it doesn't get executed twice. 2 years ago
EllangoK e5e587b1c0 separates out arg parser and imports args 2 years ago
comfyanonymous e46b1c3034 Disable xformers in VAE when xformers == 0.0.18 2 years ago
comfyanonymous 539ff487a8 Pull latest tomesd code from upstream. 2 years ago
comfyanonymous 809bcc8ceb Add support for unCLIP SD2.x models.
See _for_testing/unclip in the UI for the new nodes.

unCLIPCheckpointLoader is used to load them.

unCLIPConditioning is used to add the image cond and takes as input a
CLIPVisionEncode output which has been moved to the conditioning section.
2 years ago
comfyanonymous 0d972b85e6 This seems to give better quality in tome. 2 years ago
comfyanonymous 18a6c1db33 Add a TomePatchModel node to the _for_testing section.
Tome increases sampling speed at the expense of quality.
2 years ago
comfyanonymous 61ec3c9d5d Add a way to pass options to the transformers blocks. 2 years ago
comfyanonymous f5365c9c81 Fix ddim for Mac: #264 2 years ago
comfyanonymous 3ed4a4e4e6 Try again with vae tiled decoding if regular fails because of OOM. 2 years ago
comfyanonymous c692509c2b Try to improve VAEEncode memory usage a bit. 2 years ago
comfyanonymous 54dbfaf2ec Remove omegaconf dependency and some ci changes. 2 years ago
comfyanonymous 83f23f82b8 Add pytorch attention support to VAE. 2 years ago
comfyanonymous a256a2abde --disable-xformers should not even try to import xformers. 2 years ago
comfyanonymous 0f3ba7482f Xformers is now properly disabled when --cpu used.
Added --windows-standalone-build option; currently it only makes the
code open comfyui in the browser.
2 years ago
comfyanonymous 1de86851b1 Try to fix memory issue. 2 years ago
edikius 165be5828a
Fixed import (#44)
* fixed import error

I had an
ImportError: cannot import name 'Protocol' from 'typing'
while trying to update, so I fixed it so the app starts.

* Update main.py

* deleted example files
2 years ago
comfyanonymous cc8baf1080 Make VAE use common function to get free memory. 2 years ago
comfyanonymous 798c90e1c0 Fix pytorch 2.0 cross attention not working. 2 years ago
comfyanonymous 4215206281 Add a node to set CLIP skip.
Use a simpler way to detect if the model is v-prediction.
2 years ago
comfyanonymous 94bb0375b0 New CheckpointLoaderSimple to load checkpoints without a config. 2 years ago