264 Commits (21f04fe632a5192d8f29f1bc0c852b24eb9dce2f)

Author SHA1 Message Date
comfyanonymous 3a1f47764d Print the torch device that is used on startup. 2 years ago
BlenderNeko 1201d2eae5 Make nodes map over input lists (#579)
* allow nodes to map over lists

* make work with IS_CHANGED and VALIDATE_INPUTS

* give list outputs distinct socket shape

* add rebatch node

* add batch index logic

* add repeat latent batch

* deal with noise mask edge cases in latentfrombatch
2 years ago
BlenderNeko 19c014f429 comment out annoying print statement 2 years ago
BlenderNeko d9e088ddfd minor changes for tiled sampler 2 years ago
comfyanonymous f7c0f75d1f Auto batching improvements.
Try batching when cond sizes don't match with smart padding.
2 years ago
comfyanonymous 314e526c5c Not needed anymore because sampling works with any latent size. 2 years ago
comfyanonymous c6e34963e4 Make t2i adapter work with any latent resolution. 2 years ago
comfyanonymous a1f12e370d Merge branch 'autostart' of https://github.com/EllangoK/ComfyUI 2 years ago
comfyanonymous 6fc4917634 Make maximum_batch_area take into account pytorch 2.0 attention function.
More conservative xformers maximum_batch_area.
2 years ago
comfyanonymous 678f933d38 maximum_batch_area for xformers.
Remove useless code.
2 years ago
EllangoK 8e03c789a2 auto-launch cli arg 2 years ago
comfyanonymous cb1551b819 Lowvram mode for gligen and fix some lowvram issues. 2 years ago
comfyanonymous af9cc1fb6a Search recursively in subfolders for embeddings. 2 years ago
comfyanonymous 6ee11d7bc0 Fix import. 2 years ago
comfyanonymous bae4fb4a9d Fix imports. 2 years ago
comfyanonymous fcf513e0b6 Refactor. 2 years ago
comfyanonymous a74e176a24 Merge branch 'tiled-progress' of https://github.com/pythongosssss/ComfyUI 2 years ago
pythongosssss 5eeecf3fd5 remove unused import 2 years ago
pythongosssss 8912623ea9 use comfy progress bar 2 years ago
comfyanonymous 908dc1d5a8 Add a total_steps value to sampler callback. 2 years ago
pythongosssss fdf57325f4 Merge remote-tracking branch 'origin/master' into tiled-progress 2 years ago
pythongosssss 27df74101e reduce duplication 2 years ago
comfyanonymous 93c64afaa9 Use sampler callback instead of tqdm hook for progress bar. 2 years ago
pythongosssss 06ad35b493 added progress to encode + upscale 2 years ago
comfyanonymous ba8a4c3667 Change latent resolution step to 8. 2 years ago
comfyanonymous 66c8aa5c3e Make unet work with any input shape. 2 years ago
comfyanonymous 9c335a553f LoKR support. 2 years ago
comfyanonymous d3293c8339 Properly disable all progress bars when disable_pbar=True 2 years ago
BlenderNeko a2e18b1504 allow disabling of progress bar when sampling 2 years ago
comfyanonymous 071011aebe Mask strength should be separate from area strength. 2 years ago
comfyanonymous 870fae62e7 Merge branch 'condition_by_mask_node' of https://github.com/guill/ComfyUI 2 years ago
Jacob Segal af02393c2a Default to sampling entire image
By default, when applying a mask to a condition, the entire image will
still be used for sampling. The new "set_area_to_bounds" option on the
node will allow the user to automatically limit conditioning to the
bounds of the mask.

I've also removed the dependency on torchvision for calculating bounding
boxes. I've taken the opportunity to fix some frustrating details in the
other version:
1. An all-0 mask will no longer cause an error
2. Indices are returned as integers instead of floats so they can be
   used to index into tensors.
2 years ago
comfyanonymous 056e5545ff Don't try to get vram from xpu or cuda when directml is enabled. 2 years ago
comfyanonymous 2ca934f7d4 You can now select the device index with: --directml id
Like this for example: --directml 1
2 years ago
comfyanonymous 3baded9892 Basic torch_directml support. Use --directml to use it. 2 years ago
Jacob Segal e214c917ae Add Condition by Mask node
This PR adds support for a Condition by Mask node. This node allows
conditioning to be limited to a non-rectangle area.
2 years ago
comfyanonymous 5a971cecdb Add callback to sampler function.
Callback format is: callback(step, x0, x)
2 years ago
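The commit above defines the callback contract `callback(step, x0, x)`: the sampler invokes it once per denoising step with the step index, the current denoised estimate, and the noisy latent. A minimal sketch of that contract, with hypothetical stand-in names (`run_sampler` and the list-based "latents" are illustrative, not ComfyUI's actual implementation):

```python
# Hypothetical sketch of the callback contract described above:
# a sampler calls callback(step, x0, x) once per denoising step.
def run_sampler(steps, callback=None):
    x = [1.0] * 4  # stand-in for the noisy latent tensor
    for step in range(steps):
        x0 = [v * 0.5 for v in x]  # stand-in for the denoised estimate
        if callback is not None:
            callback(step, x0, x)
    return x

# A progress-bar style consumer only needs the step index:
progress = []
run_sampler(3, callback=lambda step, x0, x: progress.append(step))
print(progress)  # [0, 1, 2]
```

Passing the callback as an optional keyword argument lets existing callers keep working unchanged, which is why later commits (e.g. adding `total_steps`) could extend the hook without breaking the sampling API.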
comfyanonymous aa57136dae Some fixes to the batch masks PR. 2 years ago
comfyanonymous c50208a703 Refactor more code to sample.py 2 years ago
comfyanonymous 7983b3a975 This is cleaner this way. 2 years ago
BlenderNeko 0b07b2cc0f gligen tuple 2 years ago
pythongosssss c8c9926eeb Add progress to vae decode tiled 2 years ago
BlenderNeko d9b1595f85 made sample functions more explicit 2 years ago
BlenderNeko 5818539743 add docstrings 2 years ago
BlenderNeko 8d2de420d3 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2 years ago
BlenderNeko 2a09e2aa27 refactor/split various bits of code for sampling 2 years ago
comfyanonymous 5282f56434 Implement Linear hypernetworks.
Add a HypernetworkLoader node to use hypernetworks.
2 years ago
comfyanonymous 6908f9c949 This makes pytorch2.0 attention perform a bit faster. 2 years ago
comfyanonymous 907010e082 Remove some useless code. 2 years ago
comfyanonymous 96b57a9ad6 Don't pass adm to model when it doesn't support it. 2 years ago
comfyanonymous 3696d1699a Add support for GLIGEN textbox model. 2 years ago
comfyanonymous 884ea653c8 Add a way for nodes to set a custom CFG function. 2 years ago
comfyanonymous 73c3e11e83 Fix model_management import so it doesn't get executed twice. 2 years ago
comfyanonymous 81d1f00df3 Some refactoring: from_tokens -> encode_from_tokens 2 years ago
comfyanonymous 719c26c3c9 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2 years ago
BlenderNeko d0b1b6c6bf fixed improper padding 2 years ago
comfyanonymous deb2b93e79 Move code to empty gpu cache to model_management.py 2 years ago
comfyanonymous 04d9bc13af Safely load pickled embeds that don't load with weights_only=True. 2 years ago
BlenderNeko da115bd78d ensure backwards compat with optional args 2 years ago
BlenderNeko 752f7a162b align behavior with old tokenize function 2 years ago
comfyanonymous 334aab05e5 Don't stop workflow if loading embedding fails. 2 years ago
BlenderNeko 73175cf58c split tokenizer from encoder 2 years ago
BlenderNeko 8489cba140 add unique ID per word/embedding for tokenizer 2 years ago
comfyanonymous 92eca60ec9 Fix for new transformers version. 2 years ago
comfyanonymous 1e1875f674 Print xformers version and warning about 0.0.18 2 years ago
comfyanonymous 7e254d2f69 Clarify what --windows-standalone-build does. 2 years ago
comfyanonymous 44fea05064 Cleanup. 2 years ago
comfyanonymous 58ed0f2da4 Fix loading SD1.5 diffusers checkpoint. 2 years ago
comfyanonymous 8b9ac8fedb Merge branch 'master' of https://github.com/sALTaccount/ComfyUI 2 years ago
comfyanonymous 64557d6781 Add a --force-fp32 argument to force fp32 for debugging. 2 years ago
comfyanonymous bceccca0e5 Small refactor. 2 years ago
comfyanonymous 28a7205739 Merge branch 'ipex' of https://github.com/kwaa/ComfyUI-IPEX 2 years ago
藍+85CD 05eeaa2de5 Merge branch 'master' into ipex 2 years ago
EllangoK 28fff5d1db fixes lack of support for multi configs
also adds some metavars to argparse
2 years ago
comfyanonymous f84f2508cc Rename the cors parameter to something more verbose. 2 years ago
EllangoK 48efae1608 makes cors a cli parameter 2 years ago
EllangoK 01c1fc669f set listen flag to listen on all if specified 2 years ago
藍+85CD 3e2608e12b Fix auto lowvram detection on CUDA 2 years ago
sALTaccount 60127a8304 diffusers loader 2 years ago
藍+85CD 7cb924f684 Use separate variables instead of `vram_state` 2 years ago
藍+85CD 84b9c0ac2f Import intel_extension_for_pytorch as ipex 2 years ago
EllangoK e5e587b1c0 separates out arg parser and imports args 2 years ago
藍+85CD 37713e3b0a Add basic XPU device support
closed #387
2 years ago
comfyanonymous e46b1c3034 Disable xformers in VAE when xformers == 0.0.18 2 years ago
comfyanonymous 1718730e80 Ignore embeddings when sizes don't match and print a WARNING. 2 years ago
comfyanonymous 23524ad8c5 Remove print. 2 years ago
comfyanonymous 539ff487a8 Pull latest tomesd code from upstream. 2 years ago
comfyanonymous f50b1fec69 Add noise augmentation setting to unCLIPConditioning. 2 years ago
comfyanonymous 809bcc8ceb Add support for unCLIP SD2.x models.
See _for_testing/unclip in the UI for the new nodes.

unCLIPCheckpointLoader is used to load them.

unCLIPConditioning is used to add the image cond and takes as input a
CLIPVisionEncode output which has been moved to the conditioning section.
2 years ago
comfyanonymous 0d972b85e6 This seems to give better quality in tome. 2 years ago
comfyanonymous 18a6c1db33 Add a TomePatchModel node to the _for_testing section.
Tome increases sampling speed at the expense of quality.
2 years ago
comfyanonymous 61ec3c9d5d Add a way to pass options to the transformers blocks. 2 years ago
comfyanonymous afd65d3819 Fix noise mask not working with > 1 batch size on ksamplers. 2 years ago
comfyanonymous b2554bc4dd Split VAE decode batches depending on free memory. 2 years ago
comfyanonymous 0d65cb17b7 Fix ddim_uniform crashing with 37 steps. 2 years ago
Francesco Yoshi Gobbo f55755f0d2 code cleanup 2 years ago
Francesco Yoshi Gobbo cf0098d539 no lowvram state if cpu only 2 years ago
comfyanonymous f5365c9c81 Fix ddim for Mac: #264 2 years ago
comfyanonymous 4adcea7228 I don't think controlnets were being handled correctly by MPS. 2 years ago
comfyanonymous 3c6ff8821c Merge branch 'master' of https://github.com/GaidamakUA/ComfyUI 2 years ago