comfyanonymous
3b71f84b50
ONNX tracing fixes.
7 months ago
comfyanonymous
25853d0be8
Use common function for casting weights to input.
7 months ago
comfyanonymous
79040635da
Remove unnecessary code.
7 months ago
comfyanonymous
66d35c07ce
Reduce artifacts on hydit, auraflow and SD3 at specific resolutions.
This breaks seeds for resolutions whose pixel dimensions are not a
multiple of 16, because circular padding is used instead of reflection
padding, but it should reduce the amount of artifacts when doing
img2img at those resolutions.
7 months ago
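The padding change described above can be sketched with NumPy (a hedged illustration; the model code itself pads latent tensors with torch, and these array values are made up — NumPy's "wrap" mode corresponds to circular padding):

```python
import numpy as np

# A 1-D "row of latent values" near an image edge.
row = np.array([1, 2, 3, 4])

# Reflection padding mirrors the values next to the edge
# (the edge value itself is not repeated).
reflected = np.pad(row, 2, mode="reflect")

# Circular padding wraps around to the opposite edge instead.
circular = np.pad(row, 2, mode="wrap")

print(reflected.tolist())  # [3, 2, 1, 2, 3, 4, 3, 2]
print(circular.tolist())   # [3, 4, 1, 2, 3, 4, 1, 2]
```

The padded values differ at the borders, which is why seeds change for resolutions where padding is actually applied.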
comfyanonymous
10b43ceea5
Remove duplicate code.
7 months ago
comfyanonymous
f8f7568d03
Basic SD3 controlnet implementation.
Still missing the node to properly use it.
8 months ago
comfyanonymous
1281f933c1
Small optimization.
8 months ago
comfyanonymous
605e64f6d3
Fix lowvram issue.
9 months ago
Dango233
73ce178021
Remove redundancy in mmdit.py (#3685)
9 months ago
comfyanonymous
8c4a9befa7
SD3 Support.
9 months ago
comfyanonymous
0bdc2b15c7
Cleanup.
9 months ago
comfyanonymous
98f828fad9
Remove unnecessary code.
9 months ago
comfyanonymous
bb4940d837
Only enable attention upcasting on models that actually need it.
10 months ago
comfyanonymous
2a813c3b09
Switch some more prints to logging.
12 months ago
comfyanonymous
cb7c3a2921
Allow image_only_indicator to be None.
1 year ago
comfyanonymous
b3e97fc714
Koala 700M and 1B support.
Use the UNET Loader node to load the unet file to use them.
1 year ago
comfyanonymous
c661a8b118
Don't use numpy for calculating sigmas.
1 year ago
comfyanonymous
89507f8adf
Remove some unused imports.
1 year ago
comfyanonymous
8c6493578b
Implement noise augmentation for SD 4X upscale model.
1 year ago
comfyanonymous
79f73a4b33
Remove useless code.
1 year ago
comfyanonymous
61b3f15f8f
Fix lowvram mode not working with unCLIP and Revision code.
1 year ago
comfyanonymous
d0165d819a
Fix SVD lowvram mode.
1 year ago
comfyanonymous
261bcbb0d9
A few missing comfy ops in the VAE.
1 year ago
comfyanonymous
77755ab8db
Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init
This should make it clearer what they actually do.
Some unused code has also been removed.
1 year ago
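The pattern behind a name like disable_weight_init can be sketched as follows; this is an illustration of the idea, not the actual comfy.ops source, and everything except torch.nn.Linear is an assumption:

```python
import torch

class disable_weight_init:
    """Container for op classes whose weights skip random initialization.

    Checkpoint weights overwrite them anyway, so running the default
    init (e.g. Kaiming) at construction time is wasted work.
    """

    class Linear(torch.nn.Linear):
        def reset_parameters(self):
            # Skip the default init; real weights are loaded afterwards.
            return None

# Hypothetical usage: pick an ops container, then build layers from it.
ops = disable_weight_init
layer = ops.Linear(4, 4)
```

Swapping the container swaps the behavior of every layer built from it, which is what makes the explicit module name useful.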
comfyanonymous
31b0f6f3d8
UNET weights can now be stored in fp8.
--fp8_e4m3fn-unet and --fp8_e5m2-unet select the two fp8 formats
supported by PyTorch.
1 year ago
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
1 year ago
comfyanonymous
50dc39d6ec
Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
1 year ago
comfyanonymous
871cc20e13
Support SVD img2vid model.
1 year ago
comfyanonymous
72741105a6
Remove useless code.
1 year ago
comfyanonymous
7e3fe3ad28
Make deep shrink behave like it should.
1 year ago
comfyanonymous
7ea6bb038c
Print warning when controlnet can't be applied instead of crashing.
1 year ago
comfyanonymous
94cc718e9c
Add a way to add patches to the input block.
1 year ago
comfyanonymous
794dd2064d
Fix typo.
1 year ago
comfyanonymous
a527d0c795
Code refactor.
1 year ago
comfyanonymous
2a23ba0b8c
Fix unet ops not entirely on GPU.
1 year ago
comfyanonymous
6ec3f12c6e
Support SSD1B model and make it easier to support asymmetric unets.
1 year ago
comfyanonymous
d44a2de49f
Make VAE code closer to sgm.
1 year ago
comfyanonymous
23680a9155
Refactor the attention stuff in the VAE.
1 year ago
comfyanonymous
9a55dadb4c
Refactor code so model can be a dtype other than fp32 or fp16.
1 year ago
comfyanonymous
88733c997f
pytorch_attention_enabled can now return True when xformers is enabled.
1 year ago
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
1 year ago
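The refactor described above can be sketched like this; names and signatures are illustrative assumptions, not the actual code, but the point stands: one CrossAttention module, with only the middle operation pluggable:

```python
import torch

def attention_basic(q, k, v):
    # Plain softmax attention. Other backends (xformers, PyTorch SDP,
    # sliced attention, ...) can be swapped in with the same
    # (q, k, v) -> out signature.
    scale = q.shape[-1] ** -0.5
    return (q @ k.transpose(-2, -1) * scale).softmax(dim=-1) @ v

class CrossAttention(torch.nn.Module):
    """One module for all backends; only attn_fn varies."""

    def __init__(self, dim, attn_fn=attention_basic):
        super().__init__()
        self.to_q = torch.nn.Linear(dim, dim)
        self.to_k = torch.nn.Linear(dim, dim)
        self.to_v = torch.nn.Linear(dim, dim)
        self.attn_fn = attn_fn

    def forward(self, x, context=None):
        context = x if context is None else context
        return self.attn_fn(self.to_q(x), self.to_k(context), self.to_v(context))

out = CrossAttention(8)(torch.randn(2, 3, 8))
```

Choosing the backend by passing a function avoids duplicating the projection layers and forward logic per backend.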
comfyanonymous
afa2399f79
Add a way to set output block patches to modify the h and hsp.
1 year ago
comfyanonymous
1938f5c5fe
Add a force argument to soft_empty_cache to force a cache empty.
1 year ago
comfyanonymous
bed116a1f9
Remove optimization that caused border.
2 years ago
comfyanonymous
1c794a2161
Fallback to slice attention if xformers doesn't support the operation.
2 years ago
comfyanonymous
d935ba50c4
Make --bf16-vae work on torch 2.0
2 years ago
comfyanonymous
cf5ae46928
Controlnet/t2iadapter cleanup.
2 years ago
comfyanonymous
b80c3276dc
Fix issue with gligen.
2 years ago
comfyanonymous
d6e4b342e6
Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.
This allows a much smaller memory footprint depending on the rank of
the matrices.
These controlnets are used just like regular ones.
2 years ago
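The weight reconstruction described above amounts to the following; the shapes and rank are hypothetical, chosen only to show the storage saving:

```python
import torch

# Hypothetical: a 320x320 weight whose controlnet delta is stored
# as rank-32 factors.
base = torch.randn(320, 320)   # frozen unet weight
up = torch.randn(320, 32)      # "lora up" factor
down = torch.randn(32, 320)    # "lora down" factor

# controlnet weight = unet weight + up @ down
controlnet_w = base + up @ down

# The delta costs 2 * 320 * 32 stored values instead of 320 * 320,
# so lower rank means a smaller file and memory footprint.
stored = up.numel() + down.numel()
full = base.numel()
```

The reconstruction happens at load/compute time, which is why these controlnets can be applied exactly like regular ones.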
comfyanonymous
2b13939044
Remove some useless code.
2 years ago