1014 Commits (cd5017c1c9b3a7b0fec892e80290a32616bbff38)

Author SHA1 Message Date
comfyanonymous 4f9b6f39d1 Fix potential issue with Save Checkpoint. 2 years ago
comfyanonymous 5f75d784a1 Start is now 0.0 and end is now 1.0 for the timestep ranges. 2 years ago
comfyanonymous 7ff14b62f8 ControlNetApplyAdvanced can now define when controlnet gets applied. 2 years ago
comfyanonymous d191c4f9ed Add a ControlNetApplyAdvanced node.
The controlnet can be applied to the positive or negative prompt only by
connecting it correctly.
2 years ago
comfyanonymous 0240946ecf Add a way to set which range of timesteps the cond gets applied to. 2 years ago
comfyanonymous 22f29d66ca Try to fix memory issue with lora. 2 years ago
comfyanonymous 67be7eb81d Nodes can now patch the unet function. 2 years ago
comfyanonymous 12a6e93171 Del the right object when applying lora. 2 years ago
comfyanonymous 78e7958d17 Support controlnet in diffusers format. 2 years ago
comfyanonymous 09386a3697 Fix issue with lora in some cases when combined with model merging. 2 years ago
comfyanonymous 58b2364f58 Properly support SDXL diffusers unet with UNETLoader node. 2 years ago
comfyanonymous 0115018695 Print errors and continue when lora weights are not compatible. 2 years ago
comfyanonymous 4760c29380 Merge branch 'fix-AttributeError-module-'torch'-has-no-attribute-'mps'' of https://github.com/KarryCharon/ComfyUI 2 years ago
comfyanonymous 0b284f650b Fix typo. 2 years ago
comfyanonymous e032ca6138 Fix ddim issue with older torch versions. 2 years ago
comfyanonymous 18885f803a Add MX450 and MX550 to list of cards with broken fp16. 2 years ago
comfyanonymous 9ba440995a It's actually possible to torch.compile the unet now. 2 years ago
comfyanonymous 51d5477579 Add key to indicate checkpoint is v_prediction when saving. 2 years ago
comfyanonymous ff6b047a74 Fix device print on old torch version. 2 years ago
comfyanonymous 9871a15cf9 Enable --cuda-malloc by default on torch 2.0 and up.
Add --disable-cuda-malloc to disable it.
2 years ago
comfyanonymous 55d0fca9fa --windows-standalone-build now enables --cuda-malloc 2 years ago
comfyanonymous 1679abd86d Add a command line argument to enable backend:cudaMallocAsync 2 years ago
comfyanonymous 3a150bad15 Only calculate randn in some samplers when it's actually being used. 2 years ago
comfyanonymous ee8f8ee07f Fix regression with ddim and uni_pc when batch size > 1. 2 years ago
comfyanonymous 3ded1a3a04 Refactor of sampler code to deal more easily with different model types. 2 years ago
comfyanonymous 5f57362613 Lower lora ram usage when in normal vram mode. 2 years ago
comfyanonymous 490771b7f4 Speed up lora loading a bit. 2 years ago
comfyanonymous 50b1180dde Fix CLIPSetLastLayer not reverting when removed. 2 years ago
comfyanonymous 6fb084f39d Reduce floating point rounding errors in loras. 2 years ago
comfyanonymous 91ed2815d5 Add a node to merge CLIP models. 2 years ago
comfyanonymous b2f03164c7 Prevent the clip_g position_ids key from being saved in the checkpoint.
This is to make it match the official checkpoint.
2 years ago
comfyanonymous 46dc050c9f Fix potential tensors being on different devices issues. 2 years ago
KarryCharon 3e2309f149 fix missing mps import 2 years ago
comfyanonymous 606a537090 Support SDXL embedding format with 2 CLIP. 2 years ago
comfyanonymous 6ad0a6d7e2 Don't patch weights when multiplier is zero. 2 years ago
comfyanonymous d5323d16e0 latent2rgb matrix for SDXL. 2 years ago
comfyanonymous 0ae81c03bb Empty cache after model unloading for normal vram and lower. 2 years ago
comfyanonymous d3f5998218 Support loading clip_g from diffusers in CLIP Loader nodes. 2 years ago
comfyanonymous a9a4ba7574 Fix merging not working when model2 of model merge node was a merge. 2 years ago
comfyanonymous bb5fbd29e9 Merge branch 'condmask-fix' of https://github.com/vmedea/ComfyUI 2 years ago
comfyanonymous e7bee85df8 Add arguments to run the VAE in fp16 or bf16 for testing. 2 years ago
comfyanonymous 608fcc2591 Fix bug with weights when prompt is long. 2 years ago
comfyanonymous ddc6f12ad5 Disable autocast in unet for increased speed. 2 years ago
comfyanonymous 603f02d613 Fix loras not working when loading checkpoint with config. 2 years ago
comfyanonymous af7a49916b Support loading unet files in diffusers format. 2 years ago
comfyanonymous e57cba4c61 Add gpu variations of the sde samplers that are less deterministic
but faster.
2 years ago
comfyanonymous f81b192944 Add logit scale parameter so it's present when saving the checkpoint. 2 years ago
comfyanonymous acf95191ff Properly support SDXL diffusers loras for unet. 2 years ago
mara c61a95f9f7 Fix size check for conditioning mask
The wrong dimensions were being checked: [1] and [2] are the image size,
not [2] and [3]. This results in an out-of-bounds error if one of them
actually matches.
2 years ago
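The dimension mix-up described in this commit can be sketched with a small check. The shapes here are illustrative assumptions (a [batch, height, width] mask against a [batch, channels, height, width] latent), not the repository's exact code:

```python
import numpy as np

def mask_matches_latent(mask, latent):
    """Check whether a conditioning mask matches the latent's spatial size.

    Assumed shapes (illustrative):
    mask   -> [batch, height, width]
    latent -> [batch, channels, height, width]
    The bug was comparing mask dims [2] and [3] (the second of which does
    not even exist on a 3-D mask) instead of [1] and [2].
    """
    return mask.shape[1] == latent.shape[2] and mask.shape[2] == latent.shape[3]

mask = np.zeros((1, 64, 64))
latent = np.zeros((1, 4, 64, 64))
```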
comfyanonymous 8d694cc450 Fix issue with OSX. 2 years ago
comfyanonymous c3e96e637d Pass device to CLIP model. 2 years ago
comfyanonymous 5e6bc824aa Allow passing custom path to clip-g and clip-h. 2 years ago
comfyanonymous dc9d1f31c8 Improvements for OSX. 2 years ago
comfyanonymous 103c487a89 Cleanup. 2 years ago
comfyanonymous 2c4e0b49b7 Switch to fp16 on some cards when the model is too big. 2 years ago
comfyanonymous 6f3d9f52db Add a --force-fp16 argument to force fp16 for testing. 2 years ago
comfyanonymous 1c1b0e7299 --gpu-only now keeps the VAE on the device. 2 years ago
comfyanonymous ce35d8c659 Lower latency by batching some text encoder inputs. 2 years ago
comfyanonymous 3b6fe51c1d Leave text_encoder on the CPU when it can handle it. 2 years ago
comfyanonymous b6a60fa696 Try to keep text encoders loaded and patched to increase speed.
load_model_gpu() is now used with the text encoder models instead of just
the unet.
2 years ago
comfyanonymous 97ee230682 Make highvram and normalvram shift the text encoders to vram and back.
This is faster on big text encoder models than running it on the CPU.
2 years ago
comfyanonymous 5a9ddf94eb LoraLoader node now caches the lora file between executions. 2 years ago
comfyanonymous 9920367d3c Fix embeddings not working with --gpu-only 2 years ago
comfyanonymous 62db11683b Move unet to device right after loading on highvram mode. 2 years ago
comfyanonymous 4376b125eb Remove useless code. 2 years ago
comfyanonymous 89120f1fbe This is unused but it should be 1280. 2 years ago
comfyanonymous 2c7c14de56 Support for SDXL text encoder lora. 2 years ago
comfyanonymous fcef47f06e Fix bug. 2 years ago
comfyanonymous 8248babd44 Use pytorch attention by default on nvidia when xformers isn't present.
Add a new argument --use-quad-cross-attention
2 years ago
comfyanonymous 9b93b920be Add CheckpointSave node to save checkpoints.
The created checkpoints contain workflow metadata that can be loaded by
dragging them on top of the UI or loading them with the "Load" button.

Checkpoints will be saved in fp16 or fp32 depending on the format ComfyUI
is using for inference on your hardware. To force fp32 use: --force-fp32

Anything that patches the model weights like merging or loras will be
saved.

The output directory is currently set to: output/checkpoints but that might
change in the future.
2 years ago
comfyanonymous b72a7a835a Support loras based on the stability unet implementation. 2 years ago
comfyanonymous c71a7e6b20 Fix ddim + inpainting not working. 2 years ago
comfyanonymous 4eab00e14b Set the seed in the SDE samplers to make them more reproducible. 2 years ago
comfyanonymous cef6aa62b2 Add support for TAESD decoder for SDXL. 2 years ago
comfyanonymous 20f579d91d Add DualClipLoader to load clip models for SDXL.
Update LoadClip to load clip models for SDXL refiner.
2 years ago
comfyanonymous b7933960bb Fix CLIPLoader node. 2 years ago
comfyanonymous 78d8035f73 Fix bug with controlnet. 2 years ago
comfyanonymous 05676942b7 Add some more transformer hooks and move tomesd to comfy_extras.
Tomesd now uses q instead of x to decide which tokens to merge because
it seems to give better results.
2 years ago
comfyanonymous fa28d7334b Remove useless code. 2 years ago
comfyanonymous 8607c2d42d Move latent scale factor from VAE to model. 2 years ago
comfyanonymous 30a3861946 Fix bug when yaml config has no clip params. 2 years ago
comfyanonymous 9e37f4c7d5 Fix error with ClipVision loader node. 2 years ago
comfyanonymous 9f83b098c9 Don't merge weights when shapes don't match and print a warning. 2 years ago
comfyanonymous f87ec10a97 Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2 years ago
comfyanonymous 9fccf4aa03 Add original_shape parameter to transformer patch extra_options. 2 years ago
comfyanonymous 51581dbfa9 Fix last commits causing an issue with the text encoder lora. 2 years ago
comfyanonymous 8125b51a62 Keep a set of model_keys for faster add_patches. 2 years ago
comfyanonymous 45beebd33c Add a type of model patch useful for model merging. 2 years ago
comfyanonymous 036a22077c Fix k_diffusion math being off by a tiny bit during txt2img. 2 years ago
comfyanonymous 8883cb0f67 Add a way to set patches that modify the attn2 output.
Change the transformer patches function format to be more future proof.
2 years ago
comfyanonymous cd930d4e7f pop clip vision keys after loading them. 2 years ago
comfyanonymous c9e4a8c9e5 Not needed anymore. 2 years ago
comfyanonymous fb4bf7f591 This is not needed anymore and causes issues with alphas_cumprod. 2 years ago
comfyanonymous 45be2e92c1 Fix DDIM v-prediction. 2 years ago
comfyanonymous e6e50ab2dd Fix an issue when alphas_cumprod are half floats. 2 years ago
comfyanonymous ae43f09ef7 All the unet weights should now be initialized with the right dtype. 2 years ago
comfyanonymous f7edcfd927 Add a --gpu-only argument to keep and run everything on the GPU.
Make the CLIP model work on the GPU.
2 years ago
comfyanonymous 7bf89ba923 Initialize more unet weights as the right dtype. 2 years ago
comfyanonymous e21d9ad445 Initialize transformer unet block weights in right dtype at the start. 2 years ago
comfyanonymous bb1f45d6e8 Properly disable weight initialization in clip models. 2 years ago
comfyanonymous 21f04fe632 Disable default weight values in unet conv2d for faster loading. 2 years ago
comfyanonymous 9d54066ebc This isn't needed for inference. 2 years ago
comfyanonymous fa2cca056c Don't initialize CLIPVision weights to default values. 2 years ago
comfyanonymous 6b774589a5 Set model to fp16 before loading the state dict to lower ram bump. 2 years ago
comfyanonymous 0c7cad404c Don't initialize clip weights to default values. 2 years ago
comfyanonymous 6971646b8b Speed up model loading a bit.
Default pytorch Linear initializes the weights which is useless and slow.
2 years ago
comfyanonymous 388567f20b sampler_cfg_function now uses a dict for the argument.
This means arguments can be added without issues.
2 years ago
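The dict-style calling convention for sampler_cfg_function might look like the following minimal sketch. The key names ("cond", "uncond", "cond_scale") are assumptions for illustration, not a verified API; the point is that a dict lets new arguments be added without breaking existing functions:

```python
def default_cfg_function(args):
    """Classifier-free guidance with the dict-style argument described
    above: uncond + scale * (cond - uncond). Key names are assumed."""
    cond = args["cond"]
    uncond = args["uncond"]
    scale = args["cond_scale"]
    return uncond + scale * (cond - uncond)
```

Because the function receives one dict, a caller can later pass extra keys (e.g. the current timestep) and older custom functions keep working unchanged.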
comfyanonymous ff9b22d79e Turn on safe load for a few models. 2 years ago
comfyanonymous 735ac4cf81 Remove pytorch_lightning dependency. 2 years ago
comfyanonymous 2b14041d4b Remove useless code. 2 years ago
comfyanonymous 274dff3257 Remove more useless files. 2 years ago
comfyanonymous f0a2b81cd0 Cleanup: Remove a bunch of useless files. 2 years ago
comfyanonymous f8c5931053 Split the batch in VAEEncode if there's not enough memory. 2 years ago
comfyanonymous c069fc0730 Auto switch to tiled VAE encode if regular one runs out of memory. 2 years ago
comfyanonymous c64ca8c0b2 Refactor unCLIP noise augment out of samplers.py 2 years ago
comfyanonymous de142eaad5 Simpler base model code. 2 years ago
comfyanonymous 23cf8ca7c5 Fix bug when embedding gets ignored because of mismatched size. 2 years ago
comfyanonymous 0e425603fb Small refactor. 2 years ago
comfyanonymous a3a713b6c5 Refactor previews into one command line argument.
Clean up a few things.
2 years ago
space-nuko 3e17971acb preview method autodetection 2 years ago
space-nuko d5a28fadaa Add latent2rgb preview 2 years ago
space-nuko 48f7ec750c Make previews into cli option 2 years ago
space-nuko b4f434ee66 Preview sampled images with TAESD 2 years ago
comfyanonymous fed0a4dd29 Some comments to say what the vram state options mean. 2 years ago
comfyanonymous 0a5fefd621 Cleanups and fixes for model_management.py
Hopefully fix regression on MPS and CPU.
2 years ago
comfyanonymous 700491d81a Implement global average pooling for controlnet. 2 years ago
comfyanonymous 67892b5ac5 Refactor and improve model_management code related to free memory. 2 years ago
space-nuko 499641ebf1 More accurate total 2 years ago
space-nuko b5dd15c67a System stats endpoint 2 years ago
comfyanonymous 5c38958e49 Tweak lowvram model memory so it's closer to what it was before. 2 years ago
comfyanonymous 94680732d3 Empty cache on mps. 2 years ago
comfyanonymous 03da8a3426 This is useless for inference. 2 years ago
comfyanonymous eb448dd8e1 Auto load model in lowvram if not enough memory. 2 years ago
comfyanonymous b9818eb910 Add route to get safetensors metadata:
/view_metadata/loras?filename=lora.safetensors
2 years ago
comfyanonymous a532888846 Support VAEs in diffusers format. 2 years ago
comfyanonymous 0fc483dcfd Refactor diffusers model convert code to be able to reuse it. 2 years ago
comfyanonymous eb4bd7711a Remove einops. 2 years ago
comfyanonymous 87ab25fac7 Do operations in same order as the one it replaces. 2 years ago
comfyanonymous 2b1fac9708 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2 years ago
comfyanonymous e1278fa925 Support old pytorch versions that don't have weights_only. 2 years ago
BlenderNeko 8b4b0c3188 vectorized bislerp 2 years ago
comfyanonymous b8ccbec6d8 Various improvements to bislerp. 2 years ago
comfyanonymous 34887b8885 Add experimental bislerp algorithm for latent upscaling.
It's like bilinear but with slerp.
2 years ago
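The slerp building block behind bislerp can be sketched as follows. This is an illustrative numpy version, not the repository's implementation; bislerp applies it along both image axes, the way bilinear interpolation applies lerp:

```python
import numpy as np

def slerp(v0, v1, t, eps=1e-7):
    """Spherical linear interpolation between two vectors.

    Falls back to plain lerp for zero-length or nearly parallel inputs,
    where the spherical formula is numerically unstable.
    """
    v0 = np.asarray(v0, dtype=np.float64)
    v1 = np.asarray(v1, dtype=np.float64)
    n0 = np.linalg.norm(v0)
    n1 = np.linalg.norm(v1)
    if n0 < eps or n1 < eps:
        return (1.0 - t) * v0 + t * v1  # degenerate: fall back to lerp
    dot = np.clip(np.dot(v0 / n0, v1 / n1), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < eps:  # nearly parallel: lerp is numerically safer
        return (1.0 - t) * v0 + t * v1
    s0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return s0 * v0 + s1 * v1
```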
comfyanonymous 6cc450579b Auto transpose images from exif data. 2 years ago
comfyanonymous dc198650c0 sample_dpmpp_2m_sde no longer crashes when step == 1. 2 years ago
comfyanonymous 069657fbf3 Add DPM-Solver++(2M) SDE and exponential scheduler.
exponential scheduler is the one recommended with this sampler.
2 years ago
comfyanonymous b8636a44aa Make scaled_dot_product switch to sliced attention on OOM. 2 years ago
comfyanonymous 797c4e8d3b Simplify and improve some vae attention code. 2 years ago
comfyanonymous ef815ba1e2 Switch default scheduler to normal. 2 years ago
comfyanonymous 68d12b530e Merge branch 'tiled_sampler' of https://github.com/BlenderNeko/ComfyUI 2 years ago
comfyanonymous 3a1f47764d Print the torch device that is used on startup. 2 years ago
BlenderNeko 1201d2eae5 Make nodes map over input lists (#579)
* allow nodes to map over lists

* make work with IS_CHANGED and VALIDATE_INPUTS

* give list outputs distinct socket shape

* add rebatch node

* add batch index logic

* add repeat latent batch

* deal with noise mask edge cases in latentfrombatch
2 years ago
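The list-mapping behaviour from the PR above can be sketched as index-wise execution with scalar broadcasting. The exact semantics here (cycling shorter lists over the longest one) are an assumption for illustration, not the PR's verified behaviour:

```python
def map_node_over_lists(node_fn, **inputs):
    """Run node_fn once per list index, broadcasting plain values.

    Any input given as a list drives repeated execution; non-list inputs
    are reused on every call. Shorter lists are cycled (an assumption).
    """
    list_lens = [len(v) for v in inputs.values() if isinstance(v, list)]
    if not list_lens:
        return [node_fn(**inputs)]  # no lists: single execution
    n = max(list_lens)
    results = []
    for i in range(n):
        call = {k: (v[i % len(v)] if isinstance(v, list) else v)
                for k, v in inputs.items()}
        results.append(node_fn(**call))
    return results
```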
BlenderNeko 19c014f429 comment out annoying print statement 2 years ago
BlenderNeko d9e088ddfd minor changes for tiled sampler 2 years ago
comfyanonymous f7c0f75d1f Auto batching improvements.
Try batching when cond sizes don't match with smart padding.
2 years ago
comfyanonymous 314e526c5c Not needed anymore because sampling works with any latent size. 2 years ago
comfyanonymous c6e34963e4 Make t2i adapter work with any latent resolution. 2 years ago
comfyanonymous a1f12e370d Merge branch 'autostart' of https://github.com/EllangoK/ComfyUI 2 years ago
comfyanonymous 6fc4917634 Make maximum_batch_area take into account pytorch 2.0 attention function.
More conservative xformers maximum_batch_area.
2 years ago
comfyanonymous 678f933d38 maximum_batch_area for xformers.
Remove useless code.
2 years ago
EllangoK 8e03c789a2 auto-launch cli arg 2 years ago
comfyanonymous cb1551b819 Lowvram mode for gligen and fix some lowvram issues. 2 years ago
comfyanonymous af9cc1fb6a Search recursively in subfolders for embeddings. 2 years ago
comfyanonymous 6ee11d7bc0 Fix import. 2 years ago
comfyanonymous bae4fb4a9d Fix imports. 2 years ago
comfyanonymous fcf513e0b6 Refactor. 2 years ago
comfyanonymous a74e176a24 Merge branch 'tiled-progress' of https://github.com/pythongosssss/ComfyUI 2 years ago
pythongosssss 5eeecf3fd5 remove unused import 2 years ago
pythongosssss 8912623ea9 use comfy progress bar 2 years ago
comfyanonymous 908dc1d5a8 Add a total_steps value to sampler callback. 2 years ago
pythongosssss fdf57325f4 Merge remote-tracking branch 'origin/master' into tiled-progress 2 years ago
pythongosssss 27df74101e reduce duplication 2 years ago
comfyanonymous 93c64afaa9 Use sampler callback instead of tqdm hook for progress bar. 2 years ago
pythongosssss 06ad35b493 added progress to encode + upscale 2 years ago
comfyanonymous ba8a4c3667 Change latent resolution step to 8. 2 years ago
comfyanonymous 66c8aa5c3e Make unet work with any input shape. 2 years ago
comfyanonymous 9c335a553f LoKR support. 2 years ago
comfyanonymous d3293c8339 Properly disable all progress bars when disable_pbar=True 2 years ago
BlenderNeko a2e18b1504 allow disabling of progress bar when sampling 2 years ago
comfyanonymous 071011aebe Mask strength should be separate from area strength. 2 years ago
comfyanonymous 870fae62e7 Merge branch 'condition_by_mask_node' of https://github.com/guill/ComfyUI 2 years ago
Jacob Segal af02393c2a Default to sampling entire image
By default, when applying a mask to a condition, the entire image will
still be used for sampling. The new "set_area_to_bounds" option on the
node will allow the user to automatically limit conditioning to the
bounds of the mask.

I've also removed the dependency on torchvision for calculating bounding
boxes. I've taken the opportunity to fix some frustrating details in the
other version:
1. An all-0 mask will no longer cause an error
2. Indices are returned as integers instead of floats so they can be
   used to index into tensors.
2 years ago
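The torchvision-free bounding-box computation described above might look like this sketch: it returns integers (so they can index tensors directly) and an all-zero mask falls back to the full image instead of raising. Illustrative, not the node's exact code:

```python
import numpy as np

def mask_to_bounds(mask):
    """Bounding box of a binary mask, without torchvision.

    Returns integer (x_min, y_min, x_max, y_max), with the max edges
    exclusive so they can be used directly as slice bounds.
    """
    mask = np.asarray(mask)
    ys, xs = np.nonzero(mask)
    if ys.size == 0:  # all-zero mask: use the whole image
        return 0, 0, mask.shape[1], mask.shape[0]
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1
```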
comfyanonymous 056e5545ff Don't try to get vram from xpu or cuda when directml is enabled. 2 years ago
comfyanonymous 2ca934f7d4 You can now select the device index with: --directml id
Like this for example: --directml 1
2 years ago
comfyanonymous 3baded9892 Basic torch_directml support. Use --directml to use it. 2 years ago
Jacob Segal e214c917ae Add Condition by Mask node
This PR adds support for a Condition by Mask node. This node allows
conditioning to be limited to a non-rectangle area.
2 years ago
comfyanonymous 5a971cecdb Add callback to sampler function.
Callback format is: callback(step, x0, x)
2 years ago
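The callback(step, x0, x) hook can be illustrated with a dummy loop; everything except the callback signature from the commit message is a stand-in:

```python
def run_sampler(steps, callback=None):
    """Minimal sketch of a denoising loop reporting progress via
    callback(step, x0, x). x0 stands in for the current denoised
    estimate and x for the noisy latent; both are dummy floats here,
    since this only illustrates the hook."""
    x = 1.0
    for step in range(steps):
        x = x * 0.5          # stand-in for one denoising step
        x0 = x               # stand-in for the predicted clean latent
        if callback is not None:
            callback(step, x0, x)
    return x
```

A progress bar, for example, only needs to count calls; a preview decoder would instead look at x0.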
comfyanonymous aa57136dae Some fixes to the batch masks PR. 2 years ago
comfyanonymous c50208a703 Refactor more code to sample.py 2 years ago
comfyanonymous 7983b3a975 This is cleaner this way. 2 years ago
BlenderNeko 0b07b2cc0f gligen tuple 2 years ago
pythongosssss c8c9926eeb Add progress to vae decode tiled 2 years ago
BlenderNeko d9b1595f85 made sample functions more explicit 2 years ago
BlenderNeko 5818539743 add docstrings 2 years ago
BlenderNeko 8d2de420d3 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2 years ago
BlenderNeko 2a09e2aa27 refactor/split various bits of code for sampling 2 years ago
comfyanonymous 5282f56434 Implement Linear hypernetworks.
Add a HypernetworkLoader node to use hypernetworks.
2 years ago
comfyanonymous 6908f9c949 This makes pytorch2.0 attention perform a bit faster. 2 years ago
comfyanonymous 907010e082 Remove some useless code. 2 years ago
comfyanonymous 96b57a9ad6 Don't pass adm to model when it doesn't support it. 2 years ago
comfyanonymous 3696d1699a Add support for GLIGEN textbox model. 2 years ago
comfyanonymous 884ea653c8 Add a way for nodes to set a custom CFG function. 2 years ago
comfyanonymous 73c3e11e83 Fix model_management import so it doesn't get executed twice. 2 years ago
comfyanonymous 81d1f00df3 Some refactoring: from_tokens -> encode_from_tokens 2 years ago
comfyanonymous 719c26c3c9 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2 years ago
BlenderNeko d0b1b6c6bf fixed improper padding 2 years ago
comfyanonymous deb2b93e79 Move code to empty gpu cache to model_management.py 2 years ago
comfyanonymous 04d9bc13af Safely load pickled embeds that don't load with weights_only=True. 2 years ago
BlenderNeko da115bd78d ensure backwards compat with optional args 2 years ago
BlenderNeko 752f7a162b align behavior with old tokenize function 2 years ago
comfyanonymous 334aab05e5 Don't stop workflow if loading embedding fails. 2 years ago
BlenderNeko 73175cf58c split tokenizer from encoder 2 years ago
BlenderNeko 8489cba140 add unique ID per word/embedding for tokenizer 2 years ago
comfyanonymous 92eca60ec9 Fix for new transformers version. 2 years ago
comfyanonymous 1e1875f674 Print xformers version and warning about 0.0.18 2 years ago
comfyanonymous 7e254d2f69 Clarify what --windows-standalone-build does. 2 years ago
comfyanonymous 44fea05064 Cleanup. 2 years ago
comfyanonymous 58ed0f2da4 Fix loading SD1.5 diffusers checkpoint. 2 years ago
comfyanonymous 8b9ac8fedb Merge branch 'master' of https://github.com/sALTaccount/ComfyUI 2 years ago
comfyanonymous 64557d6781 Add a --force-fp32 argument to force fp32 for debugging. 2 years ago
comfyanonymous bceccca0e5 Small refactor. 2 years ago
comfyanonymous 28a7205739 Merge branch 'ipex' of https://github.com/kwaa/ComfyUI-IPEX 2 years ago
藍+85CD 05eeaa2de5 Merge branch 'master' into ipex 2 years ago
EllangoK 28fff5d1db fixes lack of support for multi configs
also adds some metavars to argparse
2 years ago
comfyanonymous f84f2508cc Rename the cors parameter to something more verbose. 2 years ago
EllangoK 48efae1608 makes cors a cli parameter 2 years ago
EllangoK 01c1fc669f set listen flag to listen on all if specified 2 years ago
藍+85CD 3e2608e12b Fix auto lowvram detection on CUDA 2 years ago
sALTaccount 60127a8304 diffusers loader 2 years ago
藍+85CD 7cb924f684 Use separate variables instead of `vram_state` 2 years ago
藍+85CD 84b9c0ac2f Import intel_extension_for_pytorch as ipex 2 years ago
EllangoK e5e587b1c0 separates out arg parser and imports args 2 years ago
藍+85CD 37713e3b0a Add basic XPU device support
closed #387
2 years ago
comfyanonymous e46b1c3034 Disable xformers in VAE when xformers == 0.0.18 2 years ago
comfyanonymous 1718730e80 Ignore embeddings when sizes don't match and print a WARNING. 2 years ago
comfyanonymous 23524ad8c5 Remove print. 2 years ago
comfyanonymous 539ff487a8 Pull latest tomesd code from upstream. 2 years ago
comfyanonymous f50b1fec69 Add noise augmentation setting to unCLIPConditioning. 2 years ago
comfyanonymous 809bcc8ceb Add support for unCLIP SD2.x models.
See _for_testing/unclip in the UI for the new nodes.

unCLIPCheckpointLoader is used to load them.

unCLIPConditioning is used to add the image cond and takes as input a
CLIPVisionEncode output which has been moved to the conditioning section.
2 years ago
comfyanonymous 0d972b85e6 This seems to give better quality in tome. 2 years ago
comfyanonymous 18a6c1db33 Add a TomePatchModel node to the _for_testing section.
Tome increases sampling speed at the expense of quality.
2 years ago
comfyanonymous 61ec3c9d5d Add a way to pass options to the transformers blocks. 2 years ago
comfyanonymous afd65d3819 Fix noise mask not working with > 1 batch size on ksamplers. 2 years ago
comfyanonymous b2554bc4dd Split VAE decode batches depending on free memory. 2 years ago
comfyanonymous 0d65cb17b7 Fix ddim_uniform crashing with 37 steps. 2 years ago
Francesco Yoshi Gobbo f55755f0d2 code cleanup 2 years ago
Francesco Yoshi Gobbo cf0098d539 no lowvram state if cpu only 2 years ago
comfyanonymous f5365c9c81 Fix ddim for Mac: #264 2 years ago
comfyanonymous 4adcea7228 I don't think controlnets were being handled correctly by MPS. 2 years ago
comfyanonymous 3c6ff8821c Merge branch 'master' of https://github.com/GaidamakUA/ComfyUI 2 years ago
Yurii Mazurevich fc71e7ea08 Fixed typo 2 years ago
comfyanonymous 7f0fd99b5d Make ddim work with --cpu 2 years ago
Yurii Mazurevich 4b943d2b60 Removed unnecessary comment 2 years ago
Yurii Mazurevich 89fd5ed574 Added MPS device support 2 years ago
comfyanonymous dd095efc2c Support loha that use cp decomposition. 2 years ago
comfyanonymous 94a7c895f4 Add loha support. 2 years ago
comfyanonymous 3ed4a4e4e6 Try again with vae tiled decoding if regular fails because of OOM. 2 years ago
comfyanonymous 4039616ca6 Less seams in tiled outputs at the cost of more processing. 2 years ago
comfyanonymous c692509c2b Try to improve VAEEncode memory usage a bit. 2 years ago
comfyanonymous 9d0665c8d0 Add laptop quadro cards to fp32 list. 2 years ago
comfyanonymous cc309568e1 Add support for locon mid weights. 2 years ago
comfyanonymous edfc4ca663 Try to fix a vram issue with controlnets. 2 years ago
comfyanonymous b4b21be707 Fix area composition feathering not working properly. 2 years ago
comfyanonymous 50099bcd96 Support multiple paths for embeddings. 2 years ago
comfyanonymous 2e73367f45 Merge T2IAdapterLoader and ControlNetLoader.
Workflows will be auto updated.
2 years ago
comfyanonymous ee46bef03a Make --cpu have priority over everything else. 2 years ago
comfyanonymous 0e836d525e use half() on fp16 models loaded with config. 2 years ago
comfyanonymous 986dd820dc Use half() function on model when loading in fp16. 2 years ago
comfyanonymous 54dbfaf2ec Remove omegaconf dependency and some ci changes. 2 years ago
comfyanonymous 83f23f82b8 Add pytorch attention support to VAE. 2 years ago
comfyanonymous a256a2abde --disable-xformers should not even try to import xformers. 2 years ago
comfyanonymous 0f3ba7482f Xformers is now properly disabled when --cpu used.
Added --windows-standalone-build option; currently it only makes
ComfyUI open up in the browser.
2 years ago
comfyanonymous e33dc2b33b Add a VAEEncodeTiled node. 2 years ago
comfyanonymous 1de86851b1 Try to fix memory issue. 2 years ago
comfyanonymous 2b1fce2943 Make tiled_scale work for downscaling. 2 years ago
comfyanonymous 9db2e97b47 Tiled upscaling with the upscale models. 2 years ago
comfyanonymous cd64111c83 Add locon support. 2 years ago
comfyanonymous c70f0ac64b SD2.x controlnets now work. 2 years ago
comfyanonymous 19415c3ace Relative imports to test something. 2 years ago
edikius 165be5828a Fixed import (#44)
* fixed import error

I had an
ImportError: cannot import name 'Protocol' from 'typing'
while trying to update, so I fixed it so the app starts

* Update main.py

* deleted example files
2 years ago
comfyanonymous 501f19eec6 Fix clip_skip no longer being loaded from yaml file. 2 years ago
comfyanonymous afff30fc0a Add --cpu to use the cpu for inference. 2 years ago
comfyanonymous 47acb3d73e Implement support for t2i style model.
It needs the CLIPVision model so I added CLIPVisionLoader and CLIPVisionEncode.

Put the clip vision model in models/clip_vision
Put the t2i style model in models/style_models

StyleModelLoader to load it, StyleModelApply to apply it
ConditioningAppend to append the conditioning it outputs to a positive one.
2 years ago
comfyanonymous cc8baf1080 Make VAE use common function to get free memory. 2 years ago
comfyanonymous 798c90e1c0 Fix pytorch 2.0 cross attention not working. 2 years ago
comfyanonymous 16130c7546 Add support for new colour T2I adapter model. 2 years ago
comfyanonymous 9d00235b41 Update T2I adapter code to latest. 2 years ago
comfyanonymous ebfcf0a9c9 Fix issue. 2 years ago
comfyanonymous 4215206281 Add a node to set CLIP skip.
Use a simpler way to detect if the model is v-prediction.
2 years ago
comfyanonymous fed315a76a To be really simple CheckpointLoaderSimple should pick the right type. 2 years ago
comfyanonymous 94bb0375b0 New CheckpointLoaderSimple to load checkpoints without a config. 2 years ago
comfyanonymous c1f5855ac1 Make some cross attention functions work on the CPU. 2 years ago
comfyanonymous 1a612e1c74 Add some pytorch scaled_dot_product_attention code for testing.
--use-pytorch-cross-attention to use it.
2 years ago
comfyanonymous 69cc75fbf8 Add a way to interrupt current processing in the backend. 2 years ago
comfyanonymous 9502ee45c3 Hopefully fix a strange issue with xformers + lowvram. 2 years ago
comfyanonymous b31daadc03 Try to improve memory issues with del. 2 years ago
comfyanonymous 2c5f0ec681 Small adjustment. 2 years ago
comfyanonymous 86721d5158 Enable highvram automatically when vram >> ram 2 years ago
comfyanonymous 75fa162531 Remove sample_ from some sampler names.
Old workflows will still work.
2 years ago
comfyanonymous 9f4214e534 Preparing to add another function to load checkpoints. 2 years ago
comfyanonymous 3cd7d84b53 Fix uni_pc sampler not working with 1 or 2 steps. 2 years ago
comfyanonymous dfb397e034 Fix multiple controlnets not working. 2 years ago
comfyanonymous af3cc1b5fb Fixed issue when batched image was used as a controlnet input. 2 years ago
comfyanonymous d2da346b0b Fix missing variable. 2 years ago
comfyanonymous 4e6b83a80a Add a T2IAdapterLoader node to load T2I-Adapter models.
They are loaded as CONTROL_NET objects because they are similar.
2 years ago
comfyanonymous fcb25d37db Prepare for t2i adapter. 2 years ago
comfyanonymous cf5a211efc Remove some useless imports 2 years ago
comfyanonymous 87b00b37f6 Added an experimental VAEDecodeTiled.
This decodes the image with the VAE in tiles which should be faster and
use less vram.

It's in the _for_testing section so I might change/remove it or even
add the functionality to the regular VAEDecode node depending on how
well it performs which means don't depend too much on it.
2 years ago
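Tile-by-tile decoding as described can be sketched like this. The real node also overlaps and blends tiles to hide seams, and the actual VAE upscales 8x; both are omitted here for brevity, so decode_fn maps a latent tile to pixels 1:1:

```python
import numpy as np

def decode_tiled(latent, decode_fn, tile=8):
    """Decode a 2-D latent in square tiles to bound peak memory.

    Each tile is decoded independently and written into the output;
    only one tile is ever in flight, which is the whole point of the
    lower-vram trade-off described above.
    """
    h, w = latent.shape
    out = np.zeros_like(latent)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y + tile, x:x + tile] = decode_fn(latent[y:y + tile, x:x + tile])
    return out
```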
comfyanonymous 62df8dd62a Add a node to load diff controlnets. 2 years ago
comfyanonymous f04dc2c2f4 Implement DDIM sampler. 2 years ago
comfyanonymous 2976c1ad28 Uni_PC: make max denoise behave more like other samplers.
On the KSamplers, a denoise of 1.0 is the same as txt2img, but there was a
small difference on UniPC.
2 years ago
comfyanonymous c9daec4c89 Remove prints that are useless when xformers is enabled. 2 years ago
comfyanonymous a7328e4945 Add uni_pc bh2 variant. 2 years ago
comfyanonymous d80af7ca30 ControlNetApply now stacks.
It can be used to apply multiple control nets at the same time.
2 years ago
comfyanonymous 00a9189e30 Support old pytorch. 2 years ago
comfyanonymous 137ae2606c Support people putting commas after the embedding name in the prompt. 2 years ago
comfyanonymous 2326ff1263 Add: --highvram for when you want models to stay on the vram. 2 years ago
comfyanonymous 09f1d76ed8 Fix an OOM issue. 2 years ago
comfyanonymous d66415c021 Low vram mode for controlnets. 2 years ago
comfyanonymous 220a72d36b Use fp16 for fp16 control nets. 2 years ago
comfyanonymous 6135a21ee8 Add a way to control controlnet strength. 2 years ago
comfyanonymous 4efa67fa12 Add ControlNet support. 2 years ago
comfyanonymous bc69fb5245 Use inpaint models the proper way by using VAEEncodeForInpaint. 2 years ago
comfyanonymous cef2cc3cb0 Support for inpaint models. 2 years ago
comfyanonymous 07db00355f Add masks to samplers code for inpainting. 2 years ago
comfyanonymous e3451cea4f uni_pc now works with KSamplerAdvanced return_with_leftover_noise. 2 years ago
comfyanonymous f542f248f1 Show the right amount of steps in the progress bar for uni_pc.
The extra step doesn't actually call the unet so it doesn't belong in
the progress bar.
2 years ago
comfyanonymous f10b8948c3 768-v support for uni_pc sampler. 2 years ago
comfyanonymous ce0aeb109e Remove print. 2 years ago
comfyanonymous 5489d5af04 Add uni_pc sampler to KSampler* nodes. 2 years ago
comfyanonymous 1a4edd19cd Fix overflow issue with inplace softmax. 2 years ago
comfyanonymous 509c7dfc6d Use real softmax in split op to fix issue with some images. 2 years ago
comfyanonymous 7e1e193f39 Automatically enable lowvram mode if vram is less than 4GB.
Use: --normalvram to disable it.
2 years ago
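The automatic lowvram selection can be sketched as a simple threshold check. Only the 4GB cutoff and the flag names come from the commit message; the function shape and mode strings are illustrative assumptions:

```python
def pick_vram_mode(total_vram_mb, forced=None):
    """Choose a VRAM strategy like the automatic selection above:
    below 4 GB fall back to lowvram, otherwise normalvram.
    `forced` models an explicit override such as --normalvram."""
    if forced is not None:
        return forced
    return "lowvram" if total_vram_mb < 4096 else "normalvram"
```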
comfyanonymous 324273fff2 Fix embedding not working when on new line. 2 years ago
comfyanonymous 1f6a467e92 Update ldm dir with latest upstream stable diffusion changes. 2 years ago
comfyanonymous 773cdabfce Same thing but for the other places where it's used. 2 years ago
comfyanonymous df40d4f3bf torch.cuda.OutOfMemoryError is not present on older pytorch versions. 2 years ago
comfyanonymous e8c499ddd4 Split optimization for VAE attention block. 2 years ago
comfyanonymous 5b4e312749 Use inplace operations for less OOM issues. 2 years ago
comfyanonymous 3fd87cbd21 Slightly smarter batching behaviour.
Try to keep batch sizes more consistent which seems to improve things on
AMD GPUs.
2 years ago
comfyanonymous bbdcf0b737 Use relative imports for k_diffusion. 2 years ago
comfyanonymous 708138c77d Remove print. 2 years ago
comfyanonymous 047775615b Lower the chances of an OOM. 2 years ago
comfyanonymous 853e96ada3 Increase it/s by batching together some stuff sent to unet. 2 years ago
comfyanonymous c92633eaa2 Auto calculate amount of memory to use for --lowvram 2 years ago
comfyanonymous 534736b924 Add some low vram modes: --lowvram and --novram 2 years ago
comfyanonymous a84cd0d1ad Don't unload/reload model from CPU uselessly. 2 years ago
comfyanonymous b1a7c9ebf6 Embeddings/textual inversion support for SD2.x 2 years ago
comfyanonymous 1de5aa6a59 Add a CLIPLoader node to load standalone clip weights.
Put them in models/clip
2 years ago
comfyanonymous 56d802e1f3 Use transformers CLIP instead of open_clip for SD2.x
This should make things a bit cleaner.
2 years ago
comfyanonymous bf9ccffb17 Small fix for SD2.x loras. 2 years ago
comfyanonymous 678105fade SD2.x CLIP support for Loras. 2 years ago
comfyanonymous ef90e9c376 Add a LoraLoader node to apply loras to models and clip.
The models are modified in place before being used and unpatched after.
I think this is better than monkeypatching since it might make it easier
to use faster non pytorch unet inference in the future.
2 years ago
comfyanonymous 69df7eba94 Add KSamplerAdvanced node.
This node exposes more sampling options and makes it possible, for example,
to sample the first few steps on the latent image, do some operations on it,
and then do the rest of the sampling steps. This can be achieved using the
start_at_step and end_at_step options.
2 years ago
comfyanonymous 1daccf3678 Run softmax in place if it OOMs. 2 years ago
comfyanonymous f73e57d881 Add support for textual inversion embedding for SD1.x CLIP. 2 years ago
comfyanonymous 50db297cf6 Try to fix OOM issues with cards that have less vram than mine. 2 years ago
comfyanonymous 73f60740c8 Slightly cleaner code. 2 years ago
comfyanonymous 0108616b77 Fix issue with some models. 2 years ago
comfyanonymous 2973ff24c5 Round CLIP position ids to fix float issues in some checkpoints. 2 years ago
comfyanonymous c4b02059d0 Add ConditioningSetArea node,
to apply conditioning/prompts only to a specific area of the image.

Add ConditioningCombine node,
so that multiple conditioning/prompts can be applied to the image at the
same time.
2 years ago
comfyanonymous acdc6f42e0 Fix loading some malformed checkpoints? 2 years ago
comfyanonymous 051f472e8f Fix sub quadratic attention for SD2 and make it the default optimization. 2 years ago
comfyanonymous 220afe3310 Initial commit. 2 years ago