96 Commits (20f579d91dccc44bca4e28beba18c7a5211f5aa0)

Author SHA1 Message Date
comfyanonymous 20f579d91d Add DualClipLoader to load clip models for SDXL. Update LoadClip to load clip models for SDXL refiner. 2 years ago
comfyanonymous b7933960bb Fix CLIPLoader node. 2 years ago
comfyanonymous 05676942b7 Add some more transformer hooks and move tomesd to comfy_extras. Tomesd now uses q instead of x to decide which tokens to merge because it seems to give better results. 2 years ago
comfyanonymous 8607c2d42d Move latent scale factor from VAE to model. 2 years ago
comfyanonymous 30a3861946 Fix bug when yaml config has no clip params. 2 years ago
comfyanonymous 9e37f4c7d5 Fix error with ClipVision loader node. 2 years ago
comfyanonymous 9f83b098c9 Don't merge weights when shapes don't match and print a warning. 2 years ago
comfyanonymous f87ec10a97 Support base SDXL and SDXL refiner models. Large refactor of the model detection and loading code. 2 years ago
comfyanonymous 51581dbfa9 Fix last commits causing an issue with the text encoder LoRA. 2 years ago
comfyanonymous 8125b51a62 Keep a set of model_keys for faster add_patches. 2 years ago
comfyanonymous 45beebd33c Add a type of model patch useful for model merging. 2 years ago
comfyanonymous 8883cb0f67 Add a way to set patches that modify the attn2 output. Change the transformer patches function format to be more future-proof. 2 years ago
comfyanonymous fb4bf7f591 This is not needed anymore and causes issues with alphas_cumprod. 2 years ago
comfyanonymous f7edcfd927 Add a --gpu-only argument to keep and run everything on the GPU. Make the CLIP model work on the GPU. 2 years ago
comfyanonymous 6b774589a5 Set model to fp16 before loading the state dict to lower the RAM usage spike. 2 years ago
comfyanonymous 388567f20b sampler_cfg_function now uses a dict for the argument. This means arguments can be added without issues. 2 years ago
comfyanonymous ff9b22d79e Turn on safe load for a few models. 2 years ago
comfyanonymous f0a2b81cd0 Cleanup: Remove a bunch of useless files. 2 years ago
comfyanonymous f8c5931053 Split the batch in VAEEncode if there's not enough memory. 2 years ago
comfyanonymous c069fc0730 Auto switch to tiled VAE encode if regular one runs out of memory. 2 years ago
comfyanonymous de142eaad5 Simpler base model code. 2 years ago
comfyanonymous 0e425603fb Small refactor. 2 years ago
comfyanonymous 700491d81a Implement global average pooling for controlnet. 2 years ago
comfyanonymous 03da8a3426 This is useless for inference. 2 years ago
comfyanonymous eb448dd8e1 Auto load model in lowvram if not enough memory. 2 years ago
comfyanonymous a532888846 Support VAEs in diffusers format. 2 years ago
BlenderNeko 19c014f429 comment out annoying print statement 2 years ago
BlenderNeko d9e088ddfd minor changes for tiled sampler 2 years ago
comfyanonymous bae4fb4a9d Fix imports. 2 years ago
comfyanonymous fcf513e0b6 Refactor. 2 years ago
pythongosssss 5eeecf3fd5 remove unused import 2 years ago
pythongosssss 8912623ea9 use comfy progress bar 2 years ago
pythongosssss fdf57325f4 Merge remote-tracking branch 'origin/master' into tiled-progress 2 years ago
pythongosssss 27df74101e reduce duplication 2 years ago
pythongosssss 06ad35b493 added progress to encode + upscale 2 years ago
comfyanonymous 9c335a553f LoKR support. 2 years ago
pythongosssss c8c9926eeb Add progress to vae decode tiled 2 years ago
comfyanonymous 5282f56434 Implement Linear hypernetworks. Add a HypernetworkLoader node to use hypernetworks. 2 years ago
comfyanonymous 3696d1699a Add support for GLIGEN textbox model. 2 years ago
comfyanonymous 884ea653c8 Add a way for nodes to set a custom CFG function. 2 years ago
comfyanonymous 73c3e11e83 Fix model_management import so it doesn't get executed twice. 2 years ago
comfyanonymous 81d1f00df3 Some refactoring: from_tokens -> encode_from_tokens 2 years ago
BlenderNeko da115bd78d ensure backwards compat with optional args 2 years ago
BlenderNeko 73175cf58c split tokenizer from encoder 2 years ago
comfyanonymous 809bcc8ceb Add support for unCLIP SD2.x models. See _for_testing/unclip in the UI for the new nodes. unCLIPCheckpointLoader is used to load them. unCLIPConditioning is used to add the image cond and takes as input a CLIPVisionEncode output which has been moved to the conditioning section. 2 years ago
comfyanonymous 18a6c1db33 Add a TomePatchModel node to the _for_testing section. Tome increases sampling speed at the expense of quality. 2 years ago
comfyanonymous b2554bc4dd Split VAE decode batches depending on free memory. 2 years ago
comfyanonymous dd095efc2c Support loha that use cp decomposition. 2 years ago
comfyanonymous 94a7c895f4 Add loha support. 2 years ago
comfyanonymous 3ed4a4e4e6 Try again with vae tiled decoding if regular fails because of OOM. 2 years ago