comfyanonymous
3a1f47764d
Print the torch device that is used on startup.
2 years ago
BlenderNeko
1201d2eae5
Make nodes map over input lists (#579)
...
* allow nodes to map over lists
* make work with IS_CHANGED and VALIDATE_INPUTS
* give list outputs distinct socket shape
* add rebatch node
* add batch index logic
* add repeat latent batch
* deal with noise mask edge cases in latentfrombatch
2 years ago
BlenderNeko
19c014f429
comment out annoying print statement
2 years ago
BlenderNeko
d9e088ddfd
minor changes for tiled sampler
2 years ago
comfyanonymous
f7c0f75d1f
Auto batching improvements.
...
Try batching when cond sizes don't match with smart padding.
2 years ago
comfyanonymous
314e526c5c
Not needed anymore because sampling works with any latent size.
2 years ago
comfyanonymous
c6e34963e4
Make t2i adapter work with any latent resolution.
2 years ago
comfyanonymous
a1f12e370d
Merge branch 'autostart' of https://github.com/EllangoK/ComfyUI
2 years ago
comfyanonymous
6fc4917634
Make maximum_batch_area take into account the pytorch 2.0 attention function.
...
More conservative xformers maximum_batch_area.
2 years ago
comfyanonymous
678f933d38
maximum_batch_area for xformers.
...
Remove useless code.
2 years ago
EllangoK
8e03c789a2
auto-launch cli arg
2 years ago
comfyanonymous
cb1551b819
Lowvram mode for gligen and fix some lowvram issues.
2 years ago
comfyanonymous
af9cc1fb6a
Search recursively in subfolders for embeddings.
2 years ago
comfyanonymous
6ee11d7bc0
Fix import.
2 years ago
comfyanonymous
bae4fb4a9d
Fix imports.
2 years ago
comfyanonymous
fcf513e0b6
Refactor.
2 years ago
comfyanonymous
a74e176a24
Merge branch 'tiled-progress' of https://github.com/pythongosssss/ComfyUI
2 years ago
pythongosssss
5eeecf3fd5
remove unused import
2 years ago
pythongosssss
8912623ea9
use comfy progress bar
2 years ago
comfyanonymous
908dc1d5a8
Add a total_steps value to sampler callback.
2 years ago
pythongosssss
fdf57325f4
Merge remote-tracking branch 'origin/master' into tiled-progress
2 years ago
pythongosssss
27df74101e
reduce duplication
2 years ago
comfyanonymous
93c64afaa9
Use sampler callback instead of tqdm hook for progress bar.
2 years ago
pythongosssss
06ad35b493
added progress to encode + upscale
2 years ago
comfyanonymous
ba8a4c3667
Change latent resolution step to 8.
2 years ago
comfyanonymous
66c8aa5c3e
Make unet work with any input shape.
2 years ago
comfyanonymous
9c335a553f
LoKR support.
2 years ago
comfyanonymous
d3293c8339
Properly disable all progress bars when disable_pbar=True
2 years ago
BlenderNeko
a2e18b1504
allow disabling of progress bar when sampling
2 years ago
comfyanonymous
071011aebe
Mask strength should be separate from area strength.
2 years ago
comfyanonymous
870fae62e7
Merge branch 'condition_by_mask_node' of https://github.com/guill/ComfyUI
2 years ago
Jacob Segal
af02393c2a
Default to sampling entire image
...
By default, when applying a mask to a condition, the entire image will
still be used for sampling. The new "set_area_to_bounds" option on the
node will allow the user to automatically limit conditioning to the
bounds of the mask.
I've also removed the dependency on torchvision for calculating bounding
boxes. I've taken the opportunity to fix some frustrating details in the
other version:
1. An all-0 mask will no longer cause an error
2. Indices are returned as integers instead of floats so they can be
used to index into tensors.
2 years ago
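The bounding-box behaviour described above (no torchvision dependency, an all-0 mask no longer erroring, integer indices) can be sketched in plain Python. This is illustrative only; the actual node operates on torch tensors and its helper names differ:

```python
def mask_bounds(mask):
    """Return (top, left, bottom, right) integer bounds of the nonzero
    region of a 2-D binary mask. An all-zero mask falls back to the full
    mask extent instead of raising, mirroring the fix described above."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    cols = [j for j in range(len(mask[0])) if any(row[j] for row in mask)]
    if not rows:  # all-zero mask: use the whole area rather than erroring
        return (0, 0, len(mask), len(mask[0]))
    return (rows[0], cols[0], rows[-1] + 1, cols[-1] + 1)
```

For example, a 3x3 mask with nonzero entries only in the middle row yields bounds that can index directly into a tensor, no float conversion needed.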
comfyanonymous
056e5545ff
Don't try to get vram from xpu or cuda when directml is enabled.
2 years ago
comfyanonymous
2ca934f7d4
You can now select the device index with: --directml id
...
Like this for example: --directml 1
2 years ago
comfyanonymous
3baded9892
Basic torch_directml support. Use --directml to use it.
2 years ago
Jacob Segal
e214c917ae
Add Condition by Mask node
...
This PR adds support for a Condition by Mask node. This node allows
conditioning to be limited to a non-rectangle area.
2 years ago
comfyanonymous
5a971cecdb
Add callback to sampler function.
...
Callback format is: callback(step, x0, x)
2 years ago
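A minimal sketch of a callback conforming to the callback(step, x0, x) format above. The latent arguments are stand-ins here; a real callback would receive tensors (x0 being the current denoised estimate and x the current noisy latent):

```python
events = []

def progress_callback(step, x0, x):
    # step: index of the current sampling step
    # x0: current denoised estimate; x: current noisy latent
    # Here we only record the step number, e.g. to drive a progress bar.
    events.append(step)

# A sampler would invoke the callback once per step, e.g.:
for step in range(3):
    progress_callback(step, None, None)
```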
comfyanonymous
aa57136dae
Some fixes to the batch masks PR.
2 years ago
comfyanonymous
c50208a703
Refactor more code to sample.py
2 years ago
comfyanonymous
7983b3a975
This is cleaner this way.
2 years ago
BlenderNeko
0b07b2cc0f
gligen tuple
2 years ago
pythongosssss
c8c9926eeb
Add progress to vae decode tiled
2 years ago
BlenderNeko
d9b1595f85
made sample functions more explicit
2 years ago
BlenderNeko
5818539743
add docstrings
2 years ago
BlenderNeko
8d2de420d3
Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI
2 years ago
BlenderNeko
2a09e2aa27
refactor/split various bits of code for sampling
2 years ago
comfyanonymous
5282f56434
Implement Linear hypernetworks.
...
Add a HypernetworkLoader node to use hypernetworks.
2 years ago
comfyanonymous
6908f9c949
This makes pytorch2.0 attention perform a bit faster.
2 years ago
comfyanonymous
907010e082
Remove some useless code.
2 years ago
comfyanonymous
96b57a9ad6
Don't pass adm to model when it doesn't support it.
2 years ago
comfyanonymous
3696d1699a
Add support for GLIGEN textbox model.
2 years ago
comfyanonymous
884ea653c8
Add a way for nodes to set a custom CFG function.
2 years ago
comfyanonymous
73c3e11e83
Fix model_management import so it doesn't get executed twice.
2 years ago
comfyanonymous
81d1f00df3
Some refactoring: from_tokens -> encode_from_tokens
2 years ago
comfyanonymous
719c26c3c9
Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI
2 years ago
BlenderNeko
d0b1b6c6bf
fixed improper padding
2 years ago
comfyanonymous
deb2b93e79
Move code to empty gpu cache to model_management.py
2 years ago
comfyanonymous
04d9bc13af
Safely load pickled embeds that don't load with weights_only=True.
2 years ago
BlenderNeko
da115bd78d
ensure backwards compat with optional args
2 years ago
BlenderNeko
752f7a162b
align behavior with old tokenize function
2 years ago
comfyanonymous
334aab05e5
Don't stop workflow if loading embedding fails.
2 years ago
BlenderNeko
73175cf58c
split tokenizer from encoder
2 years ago
BlenderNeko
8489cba140
add unique ID per word/embedding for tokenizer
2 years ago
comfyanonymous
92eca60ec9
Fix for new transformers version.
2 years ago
comfyanonymous
1e1875f674
Print xformers version and warning about 0.0.18
2 years ago
comfyanonymous
7e254d2f69
Clarify what --windows-standalone-build does.
2 years ago
comfyanonymous
44fea05064
Cleanup.
2 years ago
comfyanonymous
58ed0f2da4
Fix loading SD1.5 diffusers checkpoint.
2 years ago
comfyanonymous
8b9ac8fedb
Merge branch 'master' of https://github.com/sALTaccount/ComfyUI
2 years ago
comfyanonymous
64557d6781
Add a --force-fp32 argument to force fp32 for debugging.
2 years ago
comfyanonymous
bceccca0e5
Small refactor.
2 years ago
comfyanonymous
28a7205739
Merge branch 'ipex' of https://github.com/kwaa/ComfyUI-IPEX
2 years ago
藍+85CD
05eeaa2de5
Merge branch 'master' into ipex
2 years ago
EllangoK
28fff5d1db
fixes lack of support for multiple configs
...
also adds some metavars to argparse
2 years ago
comfyanonymous
f84f2508cc
Rename the cors parameter to something more verbose.
2 years ago
EllangoK
48efae1608
makes cors a cli parameter
2 years ago
EllangoK
01c1fc669f
set listen flag to listen on all interfaces if specified
2 years ago
藍+85CD
3e2608e12b
Fix auto lowvram detection on CUDA
2 years ago
sALTaccount
60127a8304
diffusers loader
2 years ago
藍+85CD
7cb924f684
Use separate variables instead of `vram_state`
2 years ago
藍+85CD
84b9c0ac2f
Import intel_extension_for_pytorch as ipex
2 years ago
EllangoK
e5e587b1c0
separates out arg parser and imports args
2 years ago
藍+85CD
37713e3b0a
Add basic XPU device support
...
closed #387
2 years ago
comfyanonymous
e46b1c3034
Disable xformers in VAE when xformers == 0.0.18
2 years ago
comfyanonymous
1718730e80
Ignore embeddings when sizes don't match and print a WARNING.
2 years ago
comfyanonymous
23524ad8c5
Remove print.
2 years ago
comfyanonymous
539ff487a8
Pull latest tomesd code from upstream.
2 years ago
comfyanonymous
f50b1fec69
Add noise augmentation setting to unCLIPConditioning.
2 years ago
comfyanonymous
809bcc8ceb
Add support for unCLIP SD2.x models.
...
See _for_testing/unclip in the UI for the new nodes.
unCLIPCheckpointLoader is used to load them.
unCLIPConditioning is used to add the image cond and takes as input a
CLIPVisionEncode output which has been moved to the conditioning section.
2 years ago
comfyanonymous
0d972b85e6
This seems to give better quality in tome.
2 years ago
comfyanonymous
18a6c1db33
Add a TomePatchModel node to the _for_testing section.
...
Tome increases sampling speed at the expense of quality.
2 years ago
comfyanonymous
61ec3c9d5d
Add a way to pass options to the transformers blocks.
2 years ago
comfyanonymous
afd65d3819
Fix noise mask not working with > 1 batch size on ksamplers.
2 years ago
comfyanonymous
b2554bc4dd
Split VAE decode batches depending on free memory.
2 years ago
comfyanonymous
0d65cb17b7
Fix ddim_uniform crashing with 37 steps.
2 years ago
Francesco Yoshi Gobbo
f55755f0d2
code cleanup
2 years ago
Francesco Yoshi Gobbo
cf0098d539
no lowvram state if cpu only
2 years ago
comfyanonymous
f5365c9c81
Fix ddim for Mac: #264
2 years ago
comfyanonymous
4adcea7228
I don't think controlnets were being handled correctly by MPS.
2 years ago
comfyanonymous
3c6ff8821c
Merge branch 'master' of https://github.com/GaidamakUA/ComfyUI
2 years ago
Yurii Mazurevich
fc71e7ea08
Fixed typo
2 years ago
comfyanonymous
7f0fd99b5d
Make ddim work with --cpu
2 years ago
Yurii Mazurevich
4b943d2b60
Removed unnecessary comment
2 years ago
Yurii Mazurevich
89fd5ed574
Added MPS device support
2 years ago
comfyanonymous
dd095efc2c
Support loha that use cp decomposition.
2 years ago
comfyanonymous
94a7c895f4
Add loha support.
2 years ago
comfyanonymous
3ed4a4e4e6
Try again with vae tiled decoding if regular fails because of OOM.
2 years ago
comfyanonymous
4039616ca6
Less seams in tiled outputs at the cost of more processing.
2 years ago
comfyanonymous
c692509c2b
Try to improve VAEEncode memory usage a bit.
2 years ago
comfyanonymous
9d0665c8d0
Add laptop quadro cards to fp32 list.
2 years ago
comfyanonymous
cc309568e1
Add support for locon mid weights.
2 years ago
comfyanonymous
edfc4ca663
Try to fix a vram issue with controlnets.
2 years ago
comfyanonymous
b4b21be707
Fix area composition feathering not working properly.
2 years ago
comfyanonymous
50099bcd96
Support multiple paths for embeddings.
2 years ago
comfyanonymous
2e73367f45
Merge T2IAdapterLoader and ControlNetLoader.
...
Workflows will be auto updated.
2 years ago
comfyanonymous
ee46bef03a
Make --cpu have priority over everything else.
2 years ago
comfyanonymous
0e836d525e
use half() on fp16 models loaded with config.
2 years ago
comfyanonymous
986dd820dc
Use half() function on model when loading in fp16.
2 years ago
comfyanonymous
54dbfaf2ec
Remove omegaconf dependency and some ci changes.
2 years ago
comfyanonymous
83f23f82b8
Add pytorch attention support to VAE.
2 years ago
comfyanonymous
a256a2abde
--disable-xformers should not even try to import xformers.
2 years ago
comfyanonymous
0f3ba7482f
Xformers is now properly disabled when --cpu used.
...
Added --windows-standalone-build option; currently it only makes
ComfyUI open up in the browser.
2 years ago
comfyanonymous
e33dc2b33b
Add a VAEEncodeTiled node.
2 years ago
comfyanonymous
1de86851b1
Try to fix memory issue.
2 years ago
comfyanonymous
2b1fce2943
Make tiled_scale work for downscaling.
2 years ago
comfyanonymous
9db2e97b47
Tiled upscaling with the upscale models.
2 years ago
comfyanonymous
cd64111c83
Add locon support.
2 years ago
comfyanonymous
c70f0ac64b
SD2.x controlnets now work.
2 years ago
comfyanonymous
19415c3ace
Relative imports to test something.
2 years ago
edikius
165be5828a
Fixed import (#44)
...
* fixed import error
I had an
ImportError: cannot import name 'Protocol' from 'typing'
while trying to update, so I fixed it so the app could start
* Update main.py
* deleted example files
2 years ago
comfyanonymous
501f19eec6
Fix clip_skip no longer being loaded from yaml file.
2 years ago
comfyanonymous
afff30fc0a
Add --cpu to use the cpu for inference.
2 years ago
comfyanonymous
47acb3d73e
Implement support for t2i style model.
...
It needs the CLIPVision model so I added CLIPVisionLoader and CLIPVisionEncode.
Put the clip vision model in models/clip_vision
Put the t2i style model in models/style_models
StyleModelLoader to load it, StyleModelApply to apply it
ConditioningAppend to append the conditioning it outputs to a positive one.
2 years ago
comfyanonymous
cc8baf1080
Make VAE use common function to get free memory.
2 years ago
comfyanonymous
798c90e1c0
Fix pytorch 2.0 cross attention not working.
2 years ago
comfyanonymous
16130c7546
Add support for new colour T2I adapter model.
2 years ago
comfyanonymous
9d00235b41
Update T2I adapter code to latest.
2 years ago
comfyanonymous
ebfcf0a9c9
Fix issue.
2 years ago
comfyanonymous
4215206281
Add a node to set CLIP skip.
...
Use a simpler way to detect if the model is v-prediction.
2 years ago
comfyanonymous
fed315a76a
To be really simple CheckpointLoaderSimple should pick the right type.
2 years ago
comfyanonymous
94bb0375b0
New CheckpointLoaderSimple to load checkpoints without a config.
2 years ago
comfyanonymous
c1f5855ac1
Make some cross attention functions work on the CPU.
2 years ago
comfyanonymous
1a612e1c74
Add some pytorch scaled_dot_product_attention code for testing.
...
--use-pytorch-cross-attention to use it.
2 years ago
comfyanonymous
69cc75fbf8
Add a way to interrupt current processing in the backend.
2 years ago
comfyanonymous
9502ee45c3
Hopefully fix a strange issue with xformers + lowvram.
2 years ago
comfyanonymous
b31daadc03
Try to improve memory issues with del.
2 years ago
comfyanonymous
2c5f0ec681
Small adjustment.
2 years ago
comfyanonymous
86721d5158
Enable highvram automatically when vram >> ram
2 years ago
comfyanonymous
75fa162531
Remove sample_ from some sampler names.
...
Old workflows will still work.
2 years ago
comfyanonymous
9f4214e534
Preparing to add another function to load checkpoints.
2 years ago
comfyanonymous
3cd7d84b53
Fix uni_pc sampler not working with 1 or 2 steps.
2 years ago
comfyanonymous
dfb397e034
Fix multiple controlnets not working.
2 years ago
comfyanonymous
af3cc1b5fb
Fixed issue when batched image was used as a controlnet input.
2 years ago
comfyanonymous
d2da346b0b
Fix missing variable.
2 years ago
comfyanonymous
4e6b83a80a
Add a T2IAdapterLoader node to load T2I-Adapter models.
...
They are loaded as CONTROL_NET objects because they are similar.
2 years ago
comfyanonymous
fcb25d37db
Prepare for t2i adapter.
2 years ago
comfyanonymous
cf5a211efc
Remove some useless imports
2 years ago
comfyanonymous
87b00b37f6
Added an experimental VAEDecodeTiled.
...
This decodes the image with the VAE in tiles, which should be faster and
use less vram.
It's in the _for_testing section, so I might change/remove it or even
add the functionality to the regular VAEDecode node depending on how
well it performs; don't depend too much on it.
2 years ago
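One axis of such tiled decoding can be sketched as laying out overlapping tile origins and clamping the last tile to the edge. This is a hypothetical helper, not ComfyUI code; the real node also blends the overlapping regions to hide seams:

```python
def tile_origins(size, tile, overlap):
    """Top-left offsets of tiles covering a 1-D extent of `size`,
    stepping by tile - overlap and clamping the last tile to the edge
    so the whole extent is decoded."""
    step = tile - overlap
    origins = list(range(0, max(size - tile, 0) + 1, step))
    if origins[-1] + tile < size:  # ensure the final edge is covered
        origins.append(size - tile)
    return origins
```

Decoding then loops over the Cartesian product of the row and column origins, running the VAE on one tile at a time so peak vram stays bounded by the tile size rather than the full latent.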
comfyanonymous
62df8dd62a
Add a node to load diff controlnets.
2 years ago
comfyanonymous
f04dc2c2f4
Implement DDIM sampler.
2 years ago
comfyanonymous
2976c1ad28
Uni_PC: make max denoise behave more like other samplers.
...
On the KSamplers, a denoise of 1.0 is the same as txt2img, but there was a
small difference with UniPC.
2 years ago
comfyanonymous
c9daec4c89
Remove prints that are useless when xformers is enabled.
2 years ago
comfyanonymous
a7328e4945
Add uni_pc bh2 variant.
2 years ago
comfyanonymous
d80af7ca30
ControlNetApply now stacks.
...
It can be used to apply multiple control nets at the same time.
2 years ago
comfyanonymous
00a9189e30
Support old pytorch.
2 years ago
comfyanonymous
137ae2606c
Support people putting commas after the embedding name in the prompt.
2 years ago
comfyanonymous
2326ff1263
Add: --highvram for when you want models to stay on the vram.
2 years ago
comfyanonymous
09f1d76ed8
Fix an OOM issue.
2 years ago
comfyanonymous
d66415c021
Low vram mode for controlnets.
2 years ago
comfyanonymous
220a72d36b
Use fp16 for fp16 control nets.
2 years ago
comfyanonymous
6135a21ee8
Add a way to control controlnet strength.
2 years ago
comfyanonymous
4efa67fa12
Add ControlNet support.
2 years ago
comfyanonymous
bc69fb5245
Use inpaint models the proper way by using VAEEncodeForInpaint.
2 years ago
comfyanonymous
cef2cc3cb0
Support for inpaint models.
2 years ago
comfyanonymous
07db00355f
Add masks to samplers code for inpainting.
2 years ago
comfyanonymous
e3451cea4f
uni_pc now works with KSamplerAdvanced return_with_leftover_noise.
2 years ago
comfyanonymous
f542f248f1
Show the right amount of steps in the progress bar for uni_pc.
...
The extra step doesn't actually call the unet so it doesn't belong in
the progress bar.
2 years ago
comfyanonymous
f10b8948c3
768-v support for uni_pc sampler.
2 years ago
comfyanonymous
ce0aeb109e
Remove print.
2 years ago
comfyanonymous
5489d5af04
Add uni_pc sampler to KSampler* nodes.
2 years ago
comfyanonymous
1a4edd19cd
Fix overflow issue with inplace softmax.
2 years ago
comfyanonymous
509c7dfc6d
Use real softmax in split op to fix issue with some images.
2 years ago
comfyanonymous
7e1e193f39
Automatically enable lowvram mode if vram is less than 4GB.
...
Use: --normalvram to disable it.
2 years ago
comfyanonymous
324273fff2
Fix embedding not working when on new line.
2 years ago
comfyanonymous
1f6a467e92
Update ldm dir with latest upstream stable diffusion changes.
2 years ago
comfyanonymous
773cdabfce
Same thing but for the other places where it's used.
2 years ago
comfyanonymous
df40d4f3bf
torch.cuda.OutOfMemoryError is not present on older pytorch versions.
2 years ago
comfyanonymous
e8c499ddd4
Split optimization for VAE attention block.
2 years ago
comfyanonymous
5b4e312749
Use inplace operations for less OOM issues.
2 years ago
comfyanonymous
3fd87cbd21
Slightly smarter batching behaviour.
...
Try to keep batch sizes more consistent which seems to improve things on
AMD GPUs.
2 years ago
comfyanonymous
bbdcf0b737
Use relative imports for k_diffusion.
2 years ago
comfyanonymous
708138c77d
Remove print.
2 years ago
comfyanonymous
047775615b
Lower the chances of an OOM.
2 years ago
comfyanonymous
853e96ada3
Increase it/s by batching together some stuff sent to unet.
2 years ago
comfyanonymous
c92633eaa2
Auto calculate amount of memory to use for --lowvram
2 years ago
comfyanonymous
534736b924
Add some low vram modes: --lowvram and --novram
2 years ago
comfyanonymous
a84cd0d1ad
Don't unload/reload model from CPU uselessly.
2 years ago
comfyanonymous
b1a7c9ebf6
Embeddings/textual inversion support for SD2.x
2 years ago
comfyanonymous
1de5aa6a59
Add a CLIPLoader node to load standalone clip weights.
...
Put them in models/clip
2 years ago
comfyanonymous
56d802e1f3
Use transformers CLIP instead of open_clip for SD2.x
...
This should make things a bit cleaner.
2 years ago
comfyanonymous
bf9ccffb17
Small fix for SD2.x loras.
2 years ago
comfyanonymous
678105fade
SD2.x CLIP support for Loras.
2 years ago
comfyanonymous
ef90e9c376
Add a LoraLoader node to apply loras to models and clip.
...
The models are modified in place before being used and unpatched after.
I think this is better than monkeypatching since it might make it easier
to use faster non pytorch unet inference in the future.
2 years ago
comfyanonymous
69df7eba94
Add KSamplerAdvanced node.
...
This node exposes more sampling options and makes it possible for example
to sample the first few steps on the latent image, do some operations on it
and then do the rest of the sampling steps. This can be achieved using the
start_at_step and end_at_step options.
2 years ago
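The start_at_step / end_at_step mechanism described above amounts to partitioning one step schedule across two sampler invocations. A small sketch, with names and layout that are illustrative rather than the node's actual implementation:

```python
def split_schedule(total_steps, boundary):
    """Partition a sampling schedule so a first pass runs steps
    [0, boundary) and a second pass resumes at `boundary`, the way
    KSamplerAdvanced's start_at_step / end_at_step options allow
    operating on the latent between the two passes."""
    first = list(range(0, boundary))
    second = list(range(boundary, total_steps))
    return first, second
```

In a workflow, the first node would run with end_at_step at the boundary and return_with_leftover_noise enabled, the latent would be edited, and a second node would continue with start_at_step at the same boundary.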
comfyanonymous
1daccf3678
Run softmax in place if it OOMs.
2 years ago
comfyanonymous
f73e57d881
Add support for textual inversion embedding for SD1.x CLIP.
2 years ago
comfyanonymous
50db297cf6
Try to fix OOM issues with cards that have less vram than mine.
2 years ago
comfyanonymous
73f60740c8
Slightly cleaner code.
2 years ago
comfyanonymous
0108616b77
Fix issue with some models.
2 years ago
comfyanonymous
2973ff24c5
Round CLIP position ids to fix float issues in some checkpoints.
2 years ago
comfyanonymous
c4b02059d0
Add ConditioningSetArea node.
...
Apply conditioning/prompts only to a specific area of the image.
Also add a ConditioningCombine node so that multiple conditioning/prompts
can be applied to the image at the same time.
2 years ago
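Area conditioning can be pictured as restricting a prompt's strength to a rectangle of the image. A rough sketch with an illustrative data layout, not ComfyUI's internal representation:

```python
def apply_area(strength_map, area, strength):
    """Add `strength` on a 2-D grid only inside `area` = (y, x, h, w),
    the way ConditioningSetArea limits a prompt to part of the image.
    Combining prompts (as ConditioningCombine allows) corresponds to
    calling this repeatedly with different areas."""
    y, x, h, w = area
    for i in range(y, y + h):
        for j in range(x, x + w):
            strength_map[i][j] += strength
    return strength_map
```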
comfyanonymous
acdc6f42e0
Fix loading some malformed checkpoints?
2 years ago
comfyanonymous
051f472e8f
Fix sub quadratic attention for SD2 and make it the default optimization.
2 years ago
comfyanonymous
220afe3310
Initial commit.
2 years ago