comfyanonymous
8607c2d42d
Move latent scale factor from VAE to model.
2 years ago
comfyanonymous
30a3861946
Fix bug when yaml config has no clip params.
2 years ago
comfyanonymous
9e37f4c7d5
Fix error with ClipVision loader node.
2 years ago
comfyanonymous
9f83b098c9
Don't merge weights when shapes don't match and print a warning.
2 years ago
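A minimal sketch of the guard this commit describes, assuming two plain state dicts (the function name and ratio argument are illustrative, not the actual merge code):

    import torch

    def merge_weights(dst, src, ratio=0.5):
        # Blend matching tensors; skip and warn when shapes don't line up.
        for key, src_w in src.items():
            dst_w = dst.get(key)
            if dst_w is None:
                continue
            if dst_w.shape != src_w.shape:
                print(f"WARNING: shape mismatch for {key}: {tuple(dst_w.shape)} vs {tuple(src_w.shape)}, not merging")
                continue
            dst[key] = dst_w * (1.0 - ratio) + src_w * ratio
        return dst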
comfyanonymous
f87ec10a97
Support base SDXL and SDXL refiner models.
...
Large refactor of the model detection and loading code.
2 years ago
comfyanonymous
9fccf4aa03
Add original_shape parameter to transformer patch extra_options.
2 years ago
comfyanonymous
51581dbfa9
Fix the last commits causing an issue with the text encoder LoRA.
2 years ago
comfyanonymous
8125b51a62
Keep a set of model_keys for faster add_patches.
2 years ago
comfyanonymous
45beebd33c
Add a type of model patch useful for model merging.
2 years ago
comfyanonymous
036a22077c
Fix k_diffusion math being off by a tiny bit during txt2img.
2 years ago
comfyanonymous
8883cb0f67
Add a way to set patches that modify the attn2 output.
...
Change the transformer patches function format to be more future proof.
2 years ago
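A hypothetical illustration of the kind of hook this commit enables; the function name and signature here are assumptions for illustration, not the actual patch API:

    def attn2_output_patch(out, extra_options):
        # Example patch: scale the cross-attention output.
        # Passing extra_options as a dict keeps the format future proof:
        # new fields can be added later without breaking existing patches.
        return out * 0.9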
comfyanonymous
cd930d4e7f
pop clip vision keys after loading them.
2 years ago
comfyanonymous
c9e4a8c9e5
Not needed anymore.
2 years ago
comfyanonymous
fb4bf7f591
This is not needed anymore and causes issues with alphas_cumprod.
2 years ago
comfyanonymous
45be2e92c1
Fix DDIM v-prediction.
2 years ago
comfyanonymous
e6e50ab2dd
Fix an issue when alphas_cumprod are half floats.
2 years ago
comfyanonymous
ae43f09ef7
All the unet weights should now be initialized with the right dtype.
2 years ago
comfyanonymous
f7edcfd927
Add a --gpu-only argument to keep and run everything on the GPU.
...
Make the CLIP model work on the GPU.
2 years ago
comfyanonymous
7bf89ba923
Initialize more unet weights as the right dtype.
2 years ago
comfyanonymous
e21d9ad445
Initialize transformer unet block weights in right dtype at the start.
2 years ago
comfyanonymous
bb1f45d6e8
Properly disable weight initialization in clip models.
2 years ago
comfyanonymous
21f04fe632
Disable default weight values in unet conv2d for faster loading.
2 years ago
comfyanonymous
9d54066ebc
This isn't needed for inference.
2 years ago
comfyanonymous
fa2cca056c
Don't initialize CLIPVision weights to default values.
2 years ago
comfyanonymous
6b774589a5
Set the model to fp16 before loading the state dict to lower the RAM spike.
2 years ago
comfyanonymous
0c7cad404c
Don't initialize clip weights to default values.
2 years ago
comfyanonymous
6971646b8b
Speed up model loading a bit.
...
Default pytorch Linear initializes its weights, which is useless and slow when loading a checkpoint.
2 years ago
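For context, plain PyTorch offers a way to skip that initialization; a sketch of the idea, not the exact change in this commit:

    import torch
    import torch.nn as nn

    # nn.Linear normally runs a kaiming-uniform init on construction; when the
    # weights are about to be overwritten by a checkpoint that work is wasted.
    layer = torch.nn.utils.skip_init(nn.Linear, 4096, 4096)
    layer.load_state_dict({"weight": torch.zeros(4096, 4096), "bias": torch.zeros(4096)})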
comfyanonymous
388567f20b
sampler_cfg_function now uses a dict for the argument.
...
This means arguments can be added without issues.
2 years ago
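A sketch of why a dict argument helps; the key names here are assumptions for illustration:

    def my_cfg_function(args):
        # A single dict instead of positional arguments means new keys can be
        # added later without breaking existing callbacks.
        cond, uncond, scale = args["cond"], args["uncond"], args["cond_scale"]
        return uncond + (cond - uncond) * scale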
comfyanonymous
ff9b22d79e
Turn on safe load for a few models.
2 years ago
comfyanonymous
735ac4cf81
Remove pytorch_lightning dependency.
2 years ago
comfyanonymous
2b14041d4b
Remove useless code.
2 years ago
comfyanonymous
274dff3257
Remove more useless files.
2 years ago
comfyanonymous
f0a2b81cd0
Cleanup: Remove a bunch of useless files.
2 years ago
comfyanonymous
f8c5931053
Split the batch in VAEEncode if there's not enough memory.
2 years ago
comfyanonymous
c069fc0730
Auto switch to tiled VAE encode if the regular one runs out of memory.
2 years ago
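The pattern behind these two commits, as a hedged sketch (the encode/encode_tiled names are placeholders; torch.cuda.OutOfMemoryError needs a recent PyTorch):

    import torch

    def encode_with_fallback(vae, pixels):
        try:
            return vae.encode(pixels)
        except torch.cuda.OutOfMemoryError:
            # The regular encode ran out of VRAM: free the cache and fall back
            # to a tiled encode that processes the image in smaller pieces.
            torch.cuda.empty_cache()
            return vae.encode_tiled(pixels)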
comfyanonymous
c64ca8c0b2
Refactor unCLIP noise augment out of samplers.py
2 years ago
comfyanonymous
de142eaad5
Simpler base model code.
2 years ago
comfyanonymous
23cf8ca7c5
Fix bug when embedding gets ignored because of mismatched size.
2 years ago
comfyanonymous
0e425603fb
Small refactor.
2 years ago
comfyanonymous
a3a713b6c5
Refactor previews into one command line argument.
...
Clean up a few things.
2 years ago
space-nuko
3e17971acb
preview method autodetection
2 years ago
space-nuko
d5a28fadaa
Add latent2rgb preview
2 years ago
space-nuko
48f7ec750c
Make previews into cli option
2 years ago
space-nuko
b4f434ee66
Preview sampled images with TAESD
2 years ago
comfyanonymous
fed0a4dd29
Add comments explaining what the vram state options mean.
2 years ago
comfyanonymous
0a5fefd621
Cleanups and fixes for model_management.py
...
Hopefully fix regression on MPS and CPU.
2 years ago
comfyanonymous
700491d81a
Implement global average pooling for controlnet.
2 years ago
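Global average pooling itself is a one-liner over the spatial dimensions; a quick sketch for reference:

    import torch

    x = torch.randn(1, 320, 64, 64)            # (batch, channels, height, width)
    pooled = x.mean(dim=(2, 3), keepdim=True)  # -> (1, 320, 1, 1), one value per channel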
comfyanonymous
67892b5ac5
Refactor and improve model_management code related to free memory.
2 years ago
space-nuko
499641ebf1
More accurate total
2 years ago
space-nuko
b5dd15c67a
System stats endpoint
2 years ago
comfyanonymous
5c38958e49
Tweak lowvram model memory so it's closer to what it was before.
2 years ago
comfyanonymous
94680732d3
Empty cache on mps.
2 years ago
comfyanonymous
03da8a3426
This is useless for inference.
2 years ago
comfyanonymous
eb448dd8e1
Auto load model in lowvram if not enough memory.
2 years ago
comfyanonymous
b9818eb910
Add route to get safetensors metadata:
...
/view_metadata/loras?filename=lora.safetensors
2 years ago
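The metadata served by that route can also be read directly with the safetensors library; a minimal sketch (the path is illustrative):

    from safetensors import safe_open

    with safe_open("models/loras/lora.safetensors", framework="pt") as f:
        print(f.metadata())  # the free-form __metadata__ dict from the file header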
comfyanonymous
a532888846
Support VAEs in diffusers format.
2 years ago
comfyanonymous
0fc483dcfd
Refactor diffusers model convert code to be able to reuse it.
2 years ago
comfyanonymous
eb4bd7711a
Remove einops.
2 years ago
comfyanonymous
87ab25fac7
Do operations in the same order as the code it replaces.
2 years ago
comfyanonymous
2b1fac9708
Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI
2 years ago
comfyanonymous
e1278fa925
Support old pytorch versions that don't have weights_only.
2 years ago
BlenderNeko
8b4b0c3188
vectorized bislerp
2 years ago
comfyanonymous
b8ccbec6d8
Various improvements to bislerp.
2 years ago
comfyanonymous
34887b8885
Add experimental bislerp algorithm for latent upscaling.
...
It's like bilinear but with slerp.
2 years ago
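For reference, the "slerp" half of that idea as a plain, non-vectorized sketch of spherical linear interpolation between two latent vectors (not the repository's implementation):

    import torch

    def slerp(a, b, t, eps=1e-8):
        # Interpolate along the arc between the two vectors instead of along
        # the straight line used by plain linear interpolation.
        a_n = a / (a.norm() + eps)
        b_n = b / (b.norm() + eps)
        omega = torch.acos((a_n * b_n).sum().clamp(-1.0, 1.0))
        so = torch.sin(omega)
        if so.abs() < eps:                      # nearly parallel: plain lerp
            return (1.0 - t) * a + t * b
        return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b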
comfyanonymous
6cc450579b
Auto transpose images from exif data.
2 years ago
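In Pillow this is a single call; a sketch assuming a local input.png:

    from PIL import Image, ImageOps

    img = Image.open("input.png")
    img = ImageOps.exif_transpose(img)  # rotate/flip according to the EXIF Orientation tag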
comfyanonymous
dc198650c0
sample_dpmpp_2m_sde no longer crashes when step == 1.
2 years ago
comfyanonymous
069657fbf3
Add DPM-Solver++(2M) SDE and exponential scheduler.
...
The exponential scheduler is the one recommended with this sampler.
2 years ago
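An exponential noise schedule is simply linear spacing in log-sigma; a sketch with illustrative sigma bounds:

    import math
    import torch

    def exponential_sigmas(n, sigma_min=0.01, sigma_max=80.0):
        # n sigmas spaced evenly in log space from sigma_max down to sigma_min,
        # plus the trailing 0.0 that ends the sampling trajectory.
        sigmas = torch.exp(torch.linspace(math.log(sigma_max), math.log(sigma_min), n))
        return torch.cat([sigmas, sigmas.new_zeros(1)])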
comfyanonymous
b8636a44aa
Make scaled_dot_product switch to sliced attention on OOM.
2 years ago
comfyanonymous
797c4e8d3b
Simplify and improve some vae attention code.
2 years ago
comfyanonymous
ef815ba1e2
Switch default scheduler to normal.
2 years ago
comfyanonymous
68d12b530e
Merge branch 'tiled_sampler' of https://github.com/BlenderNeko/ComfyUI
2 years ago
comfyanonymous
3a1f47764d
Print the torch device that is used on startup.
2 years ago
BlenderNeko
1201d2eae5
Make nodes map over input lists ( #579 )
...
* allow nodes to map over lists
* make work with IS_CHANGED and VALIDATE_INPUTS
* give list outputs distinct socket shape
* add rebatch node
* add batch index logic
* add repeat latent batch
* deal with noise mask edge cases in latentfrombatch
2 years ago
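A rough sketch of what mapping a node over input lists can look like in execution terms; the helper below is hypothetical, not the actual executor code:

    def map_node_over_lists(node_fn, **inputs):
        # Run the node once per index when any input is a list, reusing the
        # last element of shorter lists, and collect the outputs into a list.
        lists = {k: v if isinstance(v, list) else [v] for k, v in inputs.items()}
        length = max(len(v) for v in lists.values())
        return [node_fn(**{k: v[min(i, len(v) - 1)] for k, v in lists.items()})
                for i in range(length)]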
BlenderNeko
19c014f429
comment out annoying print statement
2 years ago
BlenderNeko
d9e088ddfd
minor changes for tiled sampler
2 years ago
comfyanonymous
f7c0f75d1f
Auto batching improvements.
...
Try batching with smart padding when cond sizes don't match.
2 years ago
comfyanonymous
314e526c5c
Not needed anymore because sampling works with any latent size.
2 years ago
comfyanonymous
c6e34963e4
Make t2i adapter work with any latent resolution.
2 years ago
comfyanonymous
a1f12e370d
Merge branch 'autostart' of https://github.com/EllangoK/ComfyUI
2 years ago
comfyanonymous
6fc4917634
Make maximum_batch_area take into account the pytorch 2.0 attention function.
...
More conservative xformers maximum_batch_area.
2 years ago
comfyanonymous
678f933d38
maximum_batch_area for xformers.
...
Remove useless code.
2 years ago
EllangoK
8e03c789a2
auto-launch cli arg
2 years ago
comfyanonymous
cb1551b819
Lowvram mode for gligen and fix some lowvram issues.
2 years ago
comfyanonymous
af9cc1fb6a
Search recursively in subfolders for embeddings.
2 years ago
comfyanonymous
6ee11d7bc0
Fix import.
2 years ago
comfyanonymous
bae4fb4a9d
Fix imports.
2 years ago
comfyanonymous
fcf513e0b6
Refactor.
2 years ago
comfyanonymous
a74e176a24
Merge branch 'tiled-progress' of https://github.com/pythongosssss/ComfyUI
2 years ago
pythongosssss
5eeecf3fd5
remove unused import
2 years ago
pythongosssss
8912623ea9
use comfy progress bar
2 years ago
comfyanonymous
908dc1d5a8
Add a total_steps value to sampler callback.
2 years ago
pythongosssss
fdf57325f4
Merge remote-tracking branch 'origin/master' into tiled-progress
2 years ago
pythongosssss
27df74101e
reduce duplication
2 years ago
comfyanonymous
93c64afaa9
Use sampler callback instead of tqdm hook for progress bar.
2 years ago
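A hedged sketch of the callback pattern these progress-bar commits describe; the exact callback signature is an assumption:

    from tqdm import tqdm

    def make_progress_callback(total_steps):
        # The sampler reports each finished step itself instead of the UI
        # hooking tqdm directly, so previews and web progress can subscribe too.
        pbar = tqdm(total=total_steps)
        def callback(step, x0, x, total_steps):
            pbar.update(1)  # x0: current denoised estimate, x: noisy latent
        return callback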
pythongosssss
06ad35b493
added progress to encode + upscale
2 years ago
comfyanonymous
ba8a4c3667
Change latent resolution step to 8.
2 years ago
comfyanonymous
66c8aa5c3e
Make unet work with any input shape.
2 years ago
comfyanonymous
9c335a553f
LoKR support.
2 years ago
comfyanonymous
d3293c8339
Properly disable all progress bars when disable_pbar=True
2 years ago
BlenderNeko
a2e18b1504
allow disabling of progress bar when sampling
2 years ago