16 Commits (9e411073e901f766118a7b82f613872fd745ecc2)

Author SHA1 Message Date
comfyanonymous 7e3fe3ad28 Make deep shrink behave like it should. 1 year ago
comfyanonymous 9f00a18095 Fix potential issues. 1 year ago
comfyanonymous 94cc718e9c Add a way to add patches to the input block. 1 year ago
comfyanonymous dd4ba68b6e Allow different models to estimate memory usage differently. 1 year ago
comfyanonymous 248aa3e563 Fix bug. 1 year ago
comfyanonymous 4a8a839b40 Add option to use in place weight updating in ModelPatcher. 1 year ago
comfyanonymous 3e0033ef30 Fix model merge bug. Unload models before getting weights for model patching. 1 year ago
comfyanonymous fe40109b57 Fix issue with object patches not being copied with patcher. 1 year ago
comfyanonymous 844dbf97a7 Add advanced->model->ModelSamplingDiscrete node. This allows changing the sampling parameters of the model (eps or vpred) or setting the model to use zsnr. 1 year ago
gameltb 7e455adc07 Fix unet_wrapper_function name in ModelPatcher. 1 year ago
comfyanonymous 8cc75c64ff Let unet wrapper functions have .to attributes. 1 year ago
comfyanonymous afa2399f79 Add a way to set output block patches to modify the h and hsp. 1 year ago
comfyanonymous 1cdfb3dba4 Only do the cast on the device if the device supports it. 1 year ago
comfyanonymous b92bf8196e Do lora cast on GPU instead of CPU for higher performance. 1 year ago
Simon Lui 18617967e5 Fix error message in model_patcher.py. Found while tinkering. 2 years ago
comfyanonymous f92074b84f Move ModelPatcher to model_patcher.py 2 years ago