comfyanonymous
77755ab8db
Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init
This should make it clearer what these ops actually do.
Some unused code has also been removed.
1 year ago
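
For context, a minimal sketch of what a disable_weight_init-style namespace can look like (illustrative, not the repo's exact code): layer subclasses whose reset_parameters is a no-op, grouped under a name that says exactly what they do.

import torch

class disable_weight_init:
    class Linear(torch.nn.Linear):
        def reset_parameters(self):
            # no-op: skip PyTorch's default init; the real values
            # are loaded from the checkpoint afterwards
            return None

    class Conv2d(torch.nn.Conv2d):
        def reset_parameters(self):
            return None
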
comfyanonymous
ba07cb748e
Use faster manual cast for fp8 in unet.
1 year ago
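
A hedged sketch of the manual-cast idea for fp8 (the class name is made up; torch.float8_e4m3fn requires a recent PyTorch): weights stay in fp8 to save memory, and each forward casts them to the activation dtype, since general fp8 matmul kernels are not available.

import torch
import torch.nn.functional as F

class ManualCastLinear(torch.nn.Linear):
    def reset_parameters(self):
        return None  # weights come from the checkpoint

    def forward(self, x):
        # stored in fp8, computed in the activation's dtype
        w = self.weight.to(x.dtype)
        b = self.bias.to(x.dtype) if self.bias is not None else None
        return F.linear(x, w, b)

layer = ManualCastLinear(320, 1280, dtype=torch.float8_e4m3fn)
y = layer(torch.randn(1, 320, dtype=torch.float16))  # math runs in fp16
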
comfyanonymous
57926635e8
Switch text encoder to manual cast.
Use fp16 text encoder weights for CPU inference to lower memory usage.
1 year ago
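
The combination described above, in miniature (sizes illustrative): hold the weights in fp16 to halve memory, but run the math in fp32, which CPU kernels handle well.

import torch
import torch.nn.functional as F

weight = torch.randn(768, 768).half()  # stand-in for fp16 checkpoint weights
x = torch.randn(1, 77, 768)            # fp32 activations on CPU

y = F.linear(x, weight.to(x.dtype))    # cast per call; y.dtype == torch.float32
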
comfyanonymous
af365e4dd1
All the unet ops with weights are now handled by comfy.ops
1 year ago
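
The pattern implied by "handled by comfy.ops" is dependency injection of the op classes; a sketch under assumed names (ResBlock here is hypothetical):

import torch

class ResBlock(torch.nn.Module):
    # model code never names torch.nn.Linear directly; the injected
    # namespace decides whether ops skip init, manually cast, etc.
    def __init__(self, dim, operations=torch.nn):
        super().__init__()
        self.proj = operations.Linear(dim, dim)

    def forward(self, x):
        return x + self.proj(x)

# e.g. ResBlock(320, operations=comfy.ops.disable_weight_init)
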
comfyanonymous
412d3ff57d
Refactor.
1 year ago
comfyanonymous
00c0b2c507
Initialize text encoder to target dtype.
2 years ago
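
One stock-PyTorch way to do this (illustrative; the commit may use a different mechanism): pass the factory dtype kwarg so parameters are allocated directly in the target dtype, never passing through a transient fp32 copy.

import torch
from torch import nn

dtype = torch.float16  # target dtype for the text encoder

layer = nn.Linear(768, 768, dtype=dtype)  # allocated as fp16 from the start
norm = nn.LayerNorm(768, dtype=dtype)
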
comfyanonymous
d6e4b342e6
Support for Control Loras.
Control loras are controlnets where some of the weights are stored in
"lora" format: an up and a down low-rank matrix that, when multiplied
together and added to the unet weight, give the controlnet weight.
This allows a much smaller memory footprint, depending on the rank of
the matrices.
These controlnets are used just like regular ones.
2 years ago
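
The reconstruction described above, numerically (shapes illustrative): a rank-r pair stores r*(out+in) values instead of out*in, which is where the footprint saving comes from.

import torch

out_dim, in_dim, r = 1280, 1280, 64
w_unet = torch.randn(out_dim, in_dim)  # already in memory for the base model
up = torch.randn(out_dim, r)           # what the control lora file ships
down = torch.randn(r, in_dim)

w_controlnet = w_unet + up @ down      # up/down multiplied, added to unet weight
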
comfyanonymous
bb1f45d6e8
Properly disable weight initialization in clip models.
2 years ago
comfyanonymous
21f04fe632
Disable default weight values in unet conv2d for faster loading.
2 years ago
comfyanonymous
6971646b8b
Speed up model loading a bit.
The default pytorch Linear initializes its weights, which is useless (the values are immediately overwritten when the checkpoint is loaded) and slow.
2 years ago
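
For reference, stock PyTorch has a helper that addresses exactly this waste (an alternative illustration, not necessarily the commit's mechanism): torch.nn.utils.skip_init builds the module on the meta device and materializes uninitialized storage, so reset_parameters never runs.

import torch
from torch.nn.utils import skip_init

layer = skip_init(torch.nn.Linear, 4096, 4096)  # no init pass over ~16M weights

# values are garbage until the checkpoint is loaded
layer.load_state_dict({"weight": torch.zeros(4096, 4096),
                       "bias": torch.zeros(4096)})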