264 Commits (21f04fe632a5192d8f29f1bc0c852b24eb9dce2f)

Author SHA1 Message Date
comfyanonymous 3696d1699a Add support for GLIGEN textbox model. 2 years ago
comfyanonymous 884ea653c8 Add a way for nodes to set a custom CFG function. 2 years ago
comfyanonymous 73c3e11e83 Fix model_management import so it doesn't get executed twice. 2 years ago
comfyanonymous 81d1f00df3 Some refactoring: from_tokens -> encode_from_tokens 2 years ago
comfyanonymous 719c26c3c9 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2 years ago
BlenderNeko d0b1b6c6bf fixed improper padding 2 years ago
comfyanonymous deb2b93e79 Move code to empty gpu cache to model_management.py 2 years ago
comfyanonymous 04d9bc13af Safely load pickled embeds that don't load with weights_only=True. 2 years ago
BlenderNeko da115bd78d ensure backwards compat with optional args 2 years ago
BlenderNeko 752f7a162b align behavior with old tokenize function 2 years ago
comfyanonymous 334aab05e5 Don't stop workflow if loading embedding fails. 2 years ago
BlenderNeko 73175cf58c split tokenizer from encoder 2 years ago
BlenderNeko 8489cba140 add unique ID per word/embedding for tokenizer 2 years ago
comfyanonymous 92eca60ec9 Fix for new transformers version. 2 years ago
comfyanonymous 1e1875f674 Print xformers version and warning about 0.0.18 2 years ago
comfyanonymous 7e254d2f69 Clarify what --windows-standalone-build does. 2 years ago
comfyanonymous 44fea05064 Cleanup. 2 years ago
comfyanonymous 58ed0f2da4 Fix loading SD1.5 diffusers checkpoint. 2 years ago
comfyanonymous 8b9ac8fedb Merge branch 'master' of https://github.com/sALTaccount/ComfyUI 2 years ago
comfyanonymous 64557d6781 Add a --force-fp32 argument to force fp32 for debugging. 2 years ago
comfyanonymous bceccca0e5 Small refactor. 2 years ago
comfyanonymous 28a7205739 Merge branch 'ipex' of https://github.com/kwaa/ComfyUI-IPEX 2 years ago
藍+85CD 05eeaa2de5 Merge branch 'master' into ipex 2 years ago
EllangoK 28fff5d1db fixes lack of support for multi configs
also adds some metavars to argparse
2 years ago
comfyanonymous f84f2508cc Rename the cors parameter to something more verbose. 2 years ago
EllangoK 48efae1608 makes cors a cli parameter 2 years ago
EllangoK 01c1fc669f set listen flag to listen on all if specified 2 years ago
藍+85CD 3e2608e12b Fix auto lowvram detection on CUDA 2 years ago
sALTaccount 60127a8304 diffusers loader 2 years ago
藍+85CD 7cb924f684 Use separate variables instead of `vram_state` 2 years ago
藍+85CD 84b9c0ac2f Import intel_extension_for_pytorch as ipex 2 years ago
EllangoK e5e587b1c0 separates out arg parser and imports args 2 years ago
藍+85CD 37713e3b0a Add basic XPU device support
closed #387
2 years ago
comfyanonymous e46b1c3034 Disable xformers in VAE when xformers == 0.0.18 2 years ago
comfyanonymous 1718730e80 Ignore embeddings when sizes don't match and print a WARNING. 2 years ago
comfyanonymous 23524ad8c5 Remove print. 2 years ago
comfyanonymous 539ff487a8 Pull latest tomesd code from upstream. 2 years ago
comfyanonymous f50b1fec69 Add noise augmentation setting to unCLIPConditioning. 2 years ago
comfyanonymous 809bcc8ceb Add support for unCLIP SD2.x models.
See _for_testing/unclip in the UI for the new nodes.

unCLIPCheckpointLoader is used to load them.

unCLIPConditioning is used to add the image cond and takes as input a
CLIPVisionEncode output which has been moved to the conditioning section.
2 years ago
comfyanonymous 0d972b85e6 This seems to give better quality in tome. 2 years ago
comfyanonymous 18a6c1db33 Add a TomePatchModel node to the _for_testing section.
Tome increases sampling speed at the expense of quality.
2 years ago
comfyanonymous 61ec3c9d5d Add a way to pass options to the transformers blocks. 2 years ago
comfyanonymous afd65d3819 Fix noise mask not working with > 1 batch size on ksamplers. 2 years ago
comfyanonymous b2554bc4dd Split VAE decode batches depending on free memory. 2 years ago
comfyanonymous 0d65cb17b7 Fix ddim_uniform crashing with 37 steps. 2 years ago
Francesco Yoshi Gobbo f55755f0d2 code cleanup 2 years ago
Francesco Yoshi Gobbo cf0098d539 no lowvram state if cpu only 2 years ago
comfyanonymous f5365c9c81 Fix ddim for Mac: #264 2 years ago
comfyanonymous 4adcea7228 I don't think controlnets were being handled correctly by MPS. 2 years ago
comfyanonymous 3c6ff8821c Merge branch 'master' of https://github.com/GaidamakUA/ComfyUI 2 years ago
Yurii Mazurevich fc71e7ea08 Fixed typo 2 years ago
comfyanonymous 7f0fd99b5d Make ddim work with --cpu 2 years ago
Yurii Mazurevich 4b943d2b60 Removed unnecessary comment 2 years ago
Yurii Mazurevich 89fd5ed574 Added MPS device support 2 years ago
comfyanonymous dd095efc2c Support loha that use cp decomposition. 2 years ago
comfyanonymous 94a7c895f4 Add loha support. 2 years ago
comfyanonymous 3ed4a4e4e6 Try again with vae tiled decoding if regular fails because of OOM. 2 years ago
comfyanonymous 4039616ca6 Less seams in tiled outputs at the cost of more processing. 2 years ago
comfyanonymous c692509c2b Try to improve VAEEncode memory usage a bit. 2 years ago
comfyanonymous 9d0665c8d0 Add laptop quadro cards to fp32 list. 2 years ago
comfyanonymous cc309568e1 Add support for locon mid weights. 2 years ago
comfyanonymous edfc4ca663 Try to fix a vram issue with controlnets. 2 years ago
comfyanonymous b4b21be707 Fix area composition feathering not working properly. 2 years ago
comfyanonymous 50099bcd96 Support multiple paths for embeddings. 2 years ago
comfyanonymous 2e73367f45 Merge T2IAdapterLoader and ControlNetLoader.
Workflows will be auto updated.
2 years ago
comfyanonymous ee46bef03a Make --cpu have priority over everything else. 2 years ago
comfyanonymous 0e836d525e use half() on fp16 models loaded with config. 2 years ago
comfyanonymous 986dd820dc Use half() function on model when loading in fp16. 2 years ago
comfyanonymous 54dbfaf2ec Remove omegaconf dependency and some ci changes. 2 years ago
comfyanonymous 83f23f82b8 Add pytorch attention support to VAE. 2 years ago
comfyanonymous a256a2abde --disable-xformers should not even try to import xformers. 2 years ago
comfyanonymous 0f3ba7482f Xformers is now properly disabled when --cpu used.
Added --windows-standalone-build option; currently it only makes the
code open up comfyui in the browser.
2 years ago
comfyanonymous e33dc2b33b Add a VAEEncodeTiled node. 2 years ago
comfyanonymous 1de86851b1 Try to fix memory issue. 2 years ago
comfyanonymous 2b1fce2943 Make tiled_scale work for downscaling. 2 years ago
comfyanonymous 9db2e97b47 Tiled upscaling with the upscale models. 2 years ago
comfyanonymous cd64111c83 Add locon support. 2 years ago
comfyanonymous c70f0ac64b SD2.x controlnets now work. 2 years ago
comfyanonymous 19415c3ace Relative imports to test something. 2 years ago
edikius 165be5828a Fixed import (#44)
* fixed import error

I had an
ImportError: cannot import name 'Protocol' from 'typing'
while trying to update, so I fixed it so the app starts

* Update main.py

* deleted example files
2 years ago
comfyanonymous 501f19eec6 Fix clip_skip no longer being loaded from yaml file. 2 years ago
comfyanonymous afff30fc0a Add --cpu to use the cpu for inference. 2 years ago
comfyanonymous 47acb3d73e Implement support for t2i style model.
It needs the CLIPVision model so I added CLIPVisionLoader and CLIPVisionEncode.

Put the clip vision model in models/clip_vision
Put the t2i style model in models/style_models

StyleModelLoader to load it, StyleModelApply to apply it
ConditioningAppend to append the conditioning it outputs to a positive one.
2 years ago
comfyanonymous cc8baf1080 Make VAE use common function to get free memory. 2 years ago
comfyanonymous 798c90e1c0 Fix pytorch 2.0 cross attention not working. 2 years ago
comfyanonymous 16130c7546 Add support for new colour T2I adapter model. 2 years ago
comfyanonymous 9d00235b41 Update T2I adapter code to latest. 2 years ago
comfyanonymous ebfcf0a9c9 Fix issue. 2 years ago
comfyanonymous 4215206281 Add a node to set CLIP skip.
Use a simpler way to detect if the model is v-prediction.
2 years ago
comfyanonymous fed315a76a To be really simple CheckpointLoaderSimple should pick the right type. 2 years ago
comfyanonymous 94bb0375b0 New CheckpointLoaderSimple to load checkpoints without a config. 2 years ago
comfyanonymous c1f5855ac1 Make some cross attention functions work on the CPU. 2 years ago
comfyanonymous 1a612e1c74 Add some pytorch scaled_dot_product_attention code for testing.
--use-pytorch-cross-attention to use it.
2 years ago
comfyanonymous 69cc75fbf8 Add a way to interrupt current processing in the backend. 2 years ago
comfyanonymous 9502ee45c3 Hopefully fix a strange issue with xformers + lowvram. 2 years ago
comfyanonymous b31daadc03 Try to improve memory issues with del. 2 years ago
comfyanonymous 2c5f0ec681 Small adjustment. 2 years ago
comfyanonymous 86721d5158 Enable highvram automatically when vram >> ram 2 years ago
comfyanonymous 75fa162531 Remove sample_ from some sampler names.
Old workflows will still work.
2 years ago
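Keeping old workflows working across the rename above implies some kind of alias handling; a minimal sketch of that idea (the helper name and sampler names here are purely illustrative, not ComfyUI's actual code):

```python
# Map old sampler names (with the "sample_" prefix) onto the new
# ones so workflows saved before the rename still resolve.

def canonical_sampler_name(name):
    """Strip the legacy "sample_" prefix if present."""
    return name[len("sample_"):] if name.startswith("sample_") else name

print(canonical_sampler_name("sample_euler"))  # euler
print(canonical_sampler_name("euler"))         # euler
```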
comfyanonymous 9f4214e534 Preparing to add another function to load checkpoints. 2 years ago
comfyanonymous 3cd7d84b53 Fix uni_pc sampler not working with 1 or 2 steps. 2 years ago
comfyanonymous dfb397e034 Fix multiple controlnets not working. 2 years ago
comfyanonymous af3cc1b5fb Fixed issue when batched image was used as a controlnet input. 2 years ago
comfyanonymous d2da346b0b Fix missing variable. 2 years ago
comfyanonymous 4e6b83a80a Add a T2IAdapterLoader node to load T2I-Adapter models.
They are loaded as CONTROL_NET objects because they are similar.
2 years ago
comfyanonymous fcb25d37db Prepare for t2i adapter. 2 years ago
comfyanonymous cf5a211efc Remove some useless imports 2 years ago
comfyanonymous 87b00b37f6 Added an experimental VAEDecodeTiled.
This decodes the image with the VAE in tiles which should be faster and
use less vram.

It's in the _for_testing section, so I might change/remove it or even
add the functionality to the regular VAEDecode node depending on how
well it performs, which means don't depend too much on it.
2 years ago
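The general technique behind VAEDecodeTiled is to split the latent into overlapping tiles so each VAE decode call fits in VRAM, then blend the overlaps to hide seams. A minimal sketch of the tile-span computation (tile size and overlap values are illustrative, not the node's actual defaults):

```python
def tile_coords(size, tile, overlap):
    """Return (start, end) spans covering `size`, each `tile` wide,
    with adjacent spans sharing `overlap` elements for blending."""
    if tile >= size:
        return [(0, size)]  # whole image fits in one tile
    spans = []
    step = tile - overlap
    start = 0
    while start + tile < size:
        spans.append((start, start + tile))
        start += step
    spans.append((size - tile, size))  # final tile flush with the edge
    return spans

# e.g. a 128-wide latent decoded in 64-wide tiles with 16 overlap
print(tile_coords(128, 64, 16))  # [(0, 64), (48, 112), (64, 128)]
```

A larger overlap reduces visible seams at the cost of decoding more pixels twice, which matches the trade-off noted in the "Less seams in tiled outputs" commit above.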
comfyanonymous 62df8dd62a Add a node to load diff controlnets. 2 years ago
comfyanonymous f04dc2c2f4 Implement DDIM sampler. 2 years ago
comfyanonymous 2976c1ad28 Uni_PC: make max denoise behave more like other samplers.
On the KSamplers a denoise of 1.0 is the same as txt2img, but there was
a small difference on UniPC.
2 years ago
comfyanonymous c9daec4c89 Remove prints that are useless when xformers is enabled. 2 years ago
comfyanonymous a7328e4945 Add uni_pc bh2 variant. 2 years ago
comfyanonymous d80af7ca30 ControlNetApply now stacks.
It can be used to apply multiple control nets at the same time.
2 years ago
comfyanonymous 00a9189e30 Support old pytorch. 2 years ago
comfyanonymous 137ae2606c Support people putting commas after the embedding name in the prompt. 2 years ago
comfyanonymous 2326ff1263 Add: --highvram for when you want models to stay on the vram. 2 years ago
comfyanonymous 09f1d76ed8 Fix an OOM issue. 2 years ago
comfyanonymous d66415c021 Low vram mode for controlnets. 2 years ago
comfyanonymous 220a72d36b Use fp16 for fp16 control nets. 2 years ago
comfyanonymous 6135a21ee8 Add a way to control controlnet strength. 2 years ago
comfyanonymous 4efa67fa12 Add ControlNet support. 2 years ago
comfyanonymous bc69fb5245 Use inpaint models the proper way by using VAEEncodeForInpaint. 2 years ago
comfyanonymous cef2cc3cb0 Support for inpaint models. 2 years ago
comfyanonymous 07db00355f Add masks to samplers code for inpainting. 2 years ago
comfyanonymous e3451cea4f uni_pc now works with KSamplerAdvanced return_with_leftover_noise. 2 years ago
comfyanonymous f542f248f1 Show the right amount of steps in the progress bar for uni_pc.
The extra step doesn't actually call the unet so it doesn't belong in
the progress bar.
2 years ago
comfyanonymous f10b8948c3 768-v support for uni_pc sampler. 2 years ago
comfyanonymous ce0aeb109e Remove print. 2 years ago
comfyanonymous 5489d5af04 Add uni_pc sampler to KSampler* nodes. 2 years ago
comfyanonymous 1a4edd19cd Fix overflow issue with inplace softmax. 2 years ago
comfyanonymous 509c7dfc6d Use real softmax in split op to fix issue with some images. 2 years ago
comfyanonymous 7e1e193f39 Automatically enable lowvram mode if vram is less than 4GB.
Use: --normalvram to disable it.
2 years ago
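The heuristic described above can be sketched as a simple threshold on total VRAM with a flag override; the names and structure here are illustrative, not ComfyUI's actual model_management code:

```python
NORMAL_VRAM = "NORMAL_VRAM"
LOW_VRAM = "LOW_VRAM"

def pick_vram_state(total_vram_mb, force_normal=False):
    """Choose a VRAM state from total device memory in MB."""
    if force_normal:          # --normalvram disables the heuristic
        return NORMAL_VRAM
    if total_vram_mb < 4096:  # less than 4GB -> lowvram mode
        return LOW_VRAM
    return NORMAL_VRAM

print(pick_vram_state(2048))        # LOW_VRAM
print(pick_vram_state(8192))        # NORMAL_VRAM
print(pick_vram_state(2048, True))  # NORMAL_VRAM via --normalvram
```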
comfyanonymous 324273fff2 Fix embedding not working when on new line. 2 years ago
comfyanonymous 1f6a467e92 Update ldm dir with latest upstream stable diffusion changes. 2 years ago
comfyanonymous 773cdabfce Same thing but for the other places where it's used. 2 years ago
comfyanonymous df40d4f3bf torch.cuda.OutOfMemoryError is not present on older pytorch versions. 2 years ago
comfyanonymous e8c499ddd4 Split optimization for VAE attention block. 2 years ago
comfyanonymous 5b4e312749 Use inplace operations for less OOM issues. 2 years ago
comfyanonymous 3fd87cbd21 Slightly smarter batching behaviour.
Try to keep batch sizes more consistent which seems to improve things on
AMD GPUs.
2 years ago
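"Keep batch sizes more consistent" suggests dividing the work into near-equal batches rather than greedily filling to the maximum and leaving a small remainder. A minimal sketch of that idea, under the assumption that evenness is the goal (this is not ComfyUI's actual batching code):

```python
import math

def split_batches(n, max_batch):
    """Split n items into the fewest batches of size <= max_batch,
    keeping batch sizes as even as possible."""
    num = math.ceil(n / max_batch)
    base, extra = divmod(n, num)
    # `extra` batches get one extra item so the sizes differ by at most 1
    return [base + 1] * extra + [base] * (num - extra)

print(split_batches(9, 4))  # [3, 3, 3] rather than a greedy [4, 4, 1]
```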
comfyanonymous bbdcf0b737 Use relative imports for k_diffusion. 2 years ago
comfyanonymous 708138c77d Remove print. 2 years ago
comfyanonymous 047775615b Lower the chances of an OOM. 2 years ago
comfyanonymous 853e96ada3 Increase it/s by batching together some stuff sent to unet. 2 years ago
comfyanonymous c92633eaa2 Auto calculate amount of memory to use for --lowvram 2 years ago
comfyanonymous 534736b924 Add some low vram modes: --lowvram and --novram 2 years ago
comfyanonymous a84cd0d1ad Don't unload/reload model from CPU uselessly. 2 years ago
comfyanonymous b1a7c9ebf6 Embeddings/textual inversion support for SD2.x 2 years ago
comfyanonymous 1de5aa6a59 Add a CLIPLoader node to load standalone clip weights.
Put them in models/clip
2 years ago
comfyanonymous 56d802e1f3 Use transformers CLIP instead of open_clip for SD2.x
This should make things a bit cleaner.
2 years ago