129 Commits (40e124c6be01195eada95e8c319ca6ddf4fd1a17)

Author SHA1 Message Date
comfyanonymous 2a813c3b09 Switch some more prints to logging. 12 months ago
comfyanonymous cb7c3a2921 Allow image_only_indicator to be None. 1 year ago
comfyanonymous b3e97fc714 Koala 700M and 1B support.
To use them, load the unet file with the UNET Loader node.
1 year ago
comfyanonymous 6bcf57ff10 Fix attention masks properly for multiple batches. 1 year ago
comfyanonymous f8706546f3 Fix attention mask batch size in some attention functions. 1 year ago
comfyanonymous 3b9969c1c5 Properly fix attention masks in CLIP with batches. 1 year ago
comfyanonymous c661a8b118 Don't use numpy for calculating sigmas. 1 year ago
comfyanonymous 89507f8adf Remove some unused imports. 1 year ago
comfyanonymous 2395ae740a Make unclip more deterministic.
Pass a seed argument; note that this might make old unclip images different.
1 year ago
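
A hedged reading of the commit above: the noise augmentation used by unCLIP previously drew from the global RNG, so the same inputs could produce different images across runs. A minimal sketch of seeding that noise, with a hypothetical function name and an illustrative blend (the real augmentation in the repo may differ):

```python
import torch

def noise_augment_embeds(image_embeds: torch.Tensor, noise_level: int, seed: int):
    # Draw the augmentation noise from a dedicated, seeded generator
    # instead of the global RNG, so results are reproducible.
    gen = torch.Generator(device="cpu").manual_seed(seed)
    noise = torch.randn(image_embeds.shape, generator=gen).to(image_embeds.device)
    # Illustrative blend only: mix in noise proportional to noise_level.
    alpha = noise_level / 1000.0
    return (1.0 - alpha) * image_embeds + alpha * noise
```
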
comfyanonymous 6a7bc35db8 Use basic attention implementation for small inputs on old pytorch. 1 year ago
comfyanonymous c6951548cf Update optimized_attention_for_device to use the new functions that support masked attention.
1 year ago
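
The commit above implies a device-aware dispatcher. A minimal sketch of what such a selector might look like, under the assumption that each backend function now takes an additive mask (names are illustrative; the real signatures in the repo may differ):

```python
import torch
import torch.nn.functional as F

def attention_basic(q, k, v, heads, mask=None):
    # Plain matmul+softmax attention: slowest, but works everywhere.
    b, seq, dim = q.shape
    q, k, v = (t.reshape(b, -1, heads, dim // heads).transpose(1, 2) for t in (q, k, v))
    scores = q @ k.transpose(-2, -1) * (dim // heads) ** -0.5
    if mask is not None:
        scores = scores + mask  # additive attention mask
    out = scores.softmax(dim=-1) @ v
    return out.transpose(1, 2).reshape(b, seq, dim)

def attention_pytorch(q, k, v, heads, mask=None):
    # Fused torch 2.x kernel; also accepts an additive attn_mask.
    b, seq, dim = q.shape
    q, k, v = (t.reshape(b, -1, heads, dim // heads).transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
    return out.transpose(1, 2).reshape(b, seq, dim)

def optimized_attention_for_device(device: torch.device, mask: bool = False):
    # Hypothetical selector: basic attention on CPU, the fused kernel
    # elsewhere. Since both now support masking, the caller no longer
    # needs to special-case masked attention.
    if device.type == "cpu":
        return attention_basic
    return attention_pytorch
```
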
comfyanonymous aaa9017302 Add attention mask support to sub quad attention. 1 year ago
comfyanonymous 0c2c9fbdfa Support attention mask in split attention. 1 year ago
comfyanonymous 3ad0191bfb Implement attention mask on xformers. 1 year ago
comfyanonymous 8c6493578b Implement noise augmentation for SD 4X upscale model. 1 year ago
comfyanonymous 79f73a4b33 Remove useless code. 1 year ago
comfyanonymous 61b3f15f8f Fix lowvram mode not working with unCLIP and Revision code. 1 year ago
comfyanonymous d0165d819a Fix SVD lowvram mode. 1 year ago
comfyanonymous 261bcbb0d9 A few missing comfy ops in the VAE. 1 year ago
comfyanonymous a5056cfb1f Remove useless code. 1 year ago
comfyanonymous 77755ab8db Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init

This should make it clearer what these ops actually do.

Some unused code has also been removed.
1 year ago
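
The rename describes what these ops are for: building layers without running weight initialization, since checkpoint weights overwrite them immediately anyway. A minimal sketch of the pattern, assuming subclassed torch layers with reset_parameters stubbed out:

```python
import torch

class disable_weight_init:
    # Namespace of drop-in torch layers whose (costly) random weight
    # initialization is skipped; a checkpoint will overwrite the
    # weights right after construction anyway.
    class Linear(torch.nn.Linear):
        def reset_parameters(self):
            return None  # skip init

    class Conv2d(torch.nn.Conv2d):
        def reset_parameters(self):
            return None  # skip init

    class GroupNorm(torch.nn.GroupNorm):
        def reset_parameters(self):
            return None  # skip init

# Usage sketch: modules take an `operations` argument and build layers
# from it, e.g. operations.Linear(in_dim, out_dim) instead of
# torch.nn.Linear(in_dim, out_dim).
```
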
comfyanonymous fbdb14d4c4 Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.

This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
1 year ago
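
A rough sketch of what a simple, self-owned CLIP text encoder can look like (hypothetical class name and default dimensions; the point is that owning the model makes internals easy to patch, e.g. returning hidden states from any layer):

```python
import torch
import torch.nn as nn

class SimpleCLIPTextModel(nn.Module):
    # Minimal CLIP-style text encoder: token + position embeddings,
    # a causal transformer stack, and a final layer norm.
    def __init__(self, vocab_size=49408, max_len=77, dim=768, heads=12, layers=12):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, dim)
        self.pos_embed = nn.Parameter(torch.zeros(max_len, dim))
        block = nn.TransformerEncoderLayer(dim, heads, dim * 4,
                                           activation="gelu", batch_first=True)
        self.blocks = nn.TransformerEncoder(block, layers)
        self.final_norm = nn.LayerNorm(dim)

    def forward(self, tokens):
        seq = tokens.shape[1]
        x = self.token_embed(tokens) + self.pos_embed[:seq]
        causal = nn.Transformer.generate_square_subsequent_mask(seq)
        x = self.blocks(x, mask=causal)
        return self.final_norm(x)

tokens = torch.randint(0, 49408, (1, 77))
hidden = SimpleCLIPTextModel()(tokens)  # (1, 77, 768)
```
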
comfyanonymous 1bbd65ab30 Missed this one. 1 year ago
comfyanonymous 31b0f6f3d8 UNET weights can now be stored in fp8.
--fp8_e4m3fn-unet and --fp8_e5m2-unet select between the two fp8 formats
supported by pytorch.
1 year ago
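
For context, PyTorch exposes these as torch.float8_e4m3fn (more mantissa, less range) and torch.float8_e5m2 (the reverse). A hedged sketch of the storage idea: keep weights in fp8 to halve memory versus fp16, and upcast per layer at compute time:

```python
import torch

weight = torch.randn(320, 320)               # checkpoint weight (fp32 here)
weight_fp8 = weight.to(torch.float8_e4m3fn)  # stored form: 1 byte per element

def linear_fp8(x: torch.Tensor, w_fp8: torch.Tensor) -> torch.Tensor:
    # fp8 tensors can't be used directly in most matmuls, so the weight
    # is cast back to the compute dtype just-in-time, per layer.
    return x @ w_fp8.to(x.dtype).t()

out = linear_fp8(torch.randn(2, 320), weight_fp8)
```
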
comfyanonymous af365e4dd1 All the unet ops with weights are now handled by comfy.ops 1 year ago
comfyanonymous 39e75862b2 Fix regression from last commit. 1 year ago
comfyanonymous 50dc39d6ec Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
1 year ago
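
A small sketch of the described cleanup, with hypothetical surrounding code: extra_options starts as a copy of transformer_options instead of being rebuilt key by key, so every patch sees everything without new plumbing per key:

```python
def apply_transformer_patches(x, patches, transformer_options, n_heads, dim_head):
    # Fold the whole transformer_options dict into extra_options,
    # then add the per-call values on top.
    extra_options = dict(transformer_options)
    extra_options["n_heads"] = n_heads
    extra_options["dim_head"] = dim_head
    for patch in patches:
        x = patch(x, extra_options)
    return x
```
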
comfyanonymous 3e5ea74ad3 Make buggy xformers fall back on pytorch attention. 1 year ago
comfyanonymous 871cc20e13 Support SVD img2vid model. 1 year ago
comfyanonymous 72741105a6 Remove useless code. 1 year ago
comfyanonymous 7e3fe3ad28 Make deep shrink behave like it should. 1 year ago
comfyanonymous 7ea6bb038c Print warning when controlnet can't be applied instead of crashing. 1 year ago
comfyanonymous 94cc718e9c Add a way to add patches to the input block. 1 year ago
comfyanonymous 794dd2064d Fix typo. 1 year ago
comfyanonymous a527d0c795 Code refactor. 1 year ago
comfyanonymous 2a23ba0b8c Fix unet ops not entirely on GPU. 1 year ago
comfyanonymous c837a173fa Fix some memory issues in sub quad attention. 1 year ago
comfyanonymous 125b03eead Fix some OOM issues with split attention. 1 year ago
comfyanonymous 6ec3f12c6e Support SSD1B model and make it easier to support asymmetric unets. 1 year ago
comfyanonymous a373367b0c Fix some OOM issues with split and sub quad attention. 1 year ago
comfyanonymous 8b65f5de54 attention_basic now works with hypertile. 1 year ago
comfyanonymous e6bc42df46 Make sub_quad and split work with hypertile. 1 year ago
comfyanonymous 9906e3efe3 Make xformers work with hypertile. 1 year ago
comfyanonymous d44a2de49f Make VAE code closer to sgm. 1 year ago
comfyanonymous 23680a9155 Refactor the attention stuff in the VAE. 1 year ago
comfyanonymous bb064c9796 Add a separate optimized_attention_masked function. 1 year ago
comfyanonymous 9a55dadb4c Refactor code so model can be a dtype other than fp32 or fp16. 1 year ago
comfyanonymous 88733c997f pytorch_attention_enabled can now return True when xformers is enabled. 1 year ago
comfyanonymous ac7d8cfa87 Allow attn_mask in attention_pytorch. 1 year ago
comfyanonymous 1a4bd9e9a6 Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
1 year ago
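
The rationale above suggests the shape of the refactor: one CrossAttention module that owns the projections, with the backend-specific operation injected in the middle. A minimal sketch (hypothetical names, mirroring the reasoning rather than the exact code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attention_op(q, k, v, heads, mask=None):
    # Stand-in for any backend: pytorch SDPA, xformers, split, sub-quad.
    b, seq, dim = q.shape
    q, k, v = (t.reshape(b, -1, heads, dim // heads).transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
    return out.transpose(1, 2).reshape(b, seq, dim)

class CrossAttention(nn.Module):
    # The q/k/v/out projections never change between backends, so they
    # live here once; only the middle operation differs.
    def __init__(self, dim, heads=8, attn_op=attention_op):
        super().__init__()
        self.heads, self.attn_op = heads, attn_op
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x, context=None, mask=None):
        context = x if context is None else context
        q, k, v = self.to_q(x), self.to_k(context), self.to_v(context)
        return self.to_out(self.attn_op(q, k, v, self.heads, mask=mask))
```
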