comfyanonymous
2c9dba8dc0
sampling_function now has the model object as the argument.
1 year ago
comfyanonymous
8d80584f6a
Remove useless argument from uni_pc sampler.
1 year ago
comfyanonymous
248aa3e563
Fix bug.
1 year ago
comfyanonymous
4a8a839b40
Add option to use in place weight updating in ModelPatcher.
1 year ago
comfyanonymous
412d3ff57d
Refactor.
1 year ago
comfyanonymous
58d5d71a93
Working RescaleCFG node.
...
This was broken because of recent changes, so I fixed it and moved it from
the experiments repo.
1 year ago
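For context, the RescaleCFG technique (from "Common Diffusion Noise Schedules and Sample Steps are Flawed") rescales the classifier-free-guidance result toward the standard deviation of the positive prediction, then blends. This is only an illustrative sketch: the function name, the list-based arithmetic, and the default multiplier are assumptions, not ComfyUI's actual implementation (which operates on tensors).

```python
import statistics

def rescale_cfg(cond, uncond, scale, multiplier=0.7):
    """Hypothetical sketch of rescaled classifier-free guidance.

    `cond`/`uncond` stand in for the positive/negative denoised
    predictions; real code would operate on tensors, not lists.
    """
    # ordinary classifier-free guidance
    cfg = [u + scale * (c - u) for c, u in zip(cond, uncond)]
    # rescale the CFG result so its std matches the positive prediction
    ratio = statistics.pstdev(cond) / statistics.pstdev(cfg)
    rescaled = [x * ratio for x in cfg]
    # blend between the rescaled and the plain CFG result
    return [multiplier * r + (1.0 - multiplier) * x
            for r, x in zip(rescaled, cfg)]
```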
comfyanonymous
3e0033ef30
Fix model merge bug.
...
Unload models before getting weights for model patching.
1 year ago
comfyanonymous
002aefa382
Support lcm models.
...
Use the "lcm" sampler to sample them, you also have to use the
ModelSamplingDiscrete node to set them as lcm models to use them properly.
1 year ago
comfyanonymous
ec12000136
Add support for full diff lora keys.
1 year ago
comfyanonymous
064d7583eb
Add a CONDConstant for passing non-tensor conds to unet.
1 year ago
comfyanonymous
794dd2064d
Fix typo.
1 year ago
comfyanonymous
0a6fd49a3e
Print leftover keys when using the UNETLoader.
1 year ago
comfyanonymous
fe40109b57
Fix issue with object patches not being copied with patcher.
1 year ago
comfyanonymous
a527d0c795
Code refactor.
1 year ago
comfyanonymous
2a23ba0b8c
Fix unet ops not entirely on GPU.
1 year ago
comfyanonymous
844dbf97a7
Add: advanced->model->ModelSamplingDiscrete node.
...
This allows changing the sampling parameters of the model (eps or vpred)
or setting the model to use zsnr.
1 year ago
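For readers unfamiliar with the eps/vpred distinction above: under the discrete parameterization x_t = a·x0 + s·eps (with a = sqrt(alphabar_t) and s = sqrt(1 − alphabar_t)), both prediction types can be converted back to the denoised image x0. A minimal sketch using plain floats; the function names are hypothetical and not ComfyUI's API:

```python
import math

def x0_from_eps(x_t, eps, alphabar_t):
    # x_t = a*x0 + s*eps  =>  x0 = (x_t - s*eps) / a
    a = math.sqrt(alphabar_t)
    s = math.sqrt(1.0 - alphabar_t)
    return (x_t - s * eps) / a

def x0_from_v(x_t, v, alphabar_t):
    # v = a*eps - s*x0  =>  x0 = a*x_t - s*v
    a = math.sqrt(alphabar_t)
    s = math.sqrt(1.0 - alphabar_t)
    return a * x_t - s * v
```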
comfyanonymous
656c0b5d90
CLIP code refactor and improvements.
...
More generic clip model class that can be used on more types of text
encoders.
Don't apply weighting algorithm when weight is 1.0
Don't compute an empty token output when it's not needed.
1 year ago
comfyanonymous
b3fcd64c6c
Make SDTokenizer class work with more types of tokenizers.
1 year ago
gameltb
7e455adc07
fix unet_wrapper_function name in ModelPatcher
1 year ago
comfyanonymous
1ffa8858e7
Move model sampling code to comfy/model_sampling.py
1 year ago
comfyanonymous
ae2acfc21b
Don't convert NaN to zero.
...
Converting NaN to zero is a bad idea because it makes it hard to tell when
something went wrong.
1 year ago
comfyanonymous
d2e27b48f1
sampler_cfg_function now gets the noisy output as argument again.
...
This should make things that use sampler_cfg_function behave like before.
Added an input argument for those that want the denoised output.
This means you can calculate the x0 prediction of the model by doing:
(input - cond) for example.
1 year ago
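The behavior described above can be sketched as follows. The argument names ("cond", "uncond", "input", "cond_scale") and the scalar arithmetic are illustrative assumptions for a callback of this shape; real code would receive tensors.

```python
def my_cfg_function(args):
    # hypothetical args layout: denoised positive/negative predictions
    # plus the guidance scale; args["input"] is the noisy model input,
    # so (input - cond) yields the x0-style term mentioned above
    cond = args["cond"]
    uncond = args["uncond"]
    scale = args["cond_scale"]
    # standard classifier-free guidance on the denoised outputs
    return uncond + scale * (cond - uncond)
```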
comfyanonymous
2455aaed8a
Allow model or clip to be None in load_lora_for_models.
1 year ago
comfyanonymous
ecb80abb58
Allow ModelSamplingDiscrete to be instantiated without a model config.
1 year ago
comfyanonymous
e73ec8c4da
Not used anymore.
1 year ago
comfyanonymous
111f1b5255
Fix some issues with sampling precision.
1 year ago
comfyanonymous
7c0f255de1
Clean up percent start/end and make controlnets work with sigmas.
1 year ago
comfyanonymous
a268a574fa
Remove a bunch of useless code.
...
DDIM is the same as euler with a small difference in the inpaint code.
DDIM uses randn_like but I set a fixed seed instead.
I'm keeping it in because I'm sure if I remove it people are going to
complain.
1 year ago
comfyanonymous
1777b54d02
Sampling code changes.
...
apply_model in model_base now returns the denoised output.
This means that sampling_function now computes things on the denoised
output instead of the model output. This should make things more consistent
across current and future models.
1 year ago
comfyanonymous
c837a173fa
Fix some memory issues in sub quad attention.
1 year ago
comfyanonymous
125b03eead
Fix some OOM issues with split attention.
1 year ago
comfyanonymous
a12cc05323
Add --max-upload-size argument; the default is 100MB.
1 year ago
comfyanonymous
2a134bfab9
Fix checkpoint loader with config.
1 year ago
comfyanonymous
e60ca6929a
SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one.
1 year ago
comfyanonymous
6ec3f12c6e
Support SSD1B model and make it easier to support asymmetric unets.
1 year ago
comfyanonymous
434ce25ec0
Restrict loading embeddings from embedding folders.
1 year ago
comfyanonymous
723847f6b3
Faster clip image processing.
1 year ago
comfyanonymous
a373367b0c
Fix some OOM issues with split and sub quad attention.
1 year ago
comfyanonymous
7fbb217d3a
Fix uni_pc returning noisy image when steps <= 3
1 year ago
Jedrzej Kosinski
3783cb8bfd
change 'c_adm' to 'y' in ControlNet.get_control
1 year ago
comfyanonymous
d1d2fea806
Pass extra conds directly to unet.
1 year ago
comfyanonymous
036f88c621
Refactor to make it easier to add custom conds to models.
1 year ago
comfyanonymous
3fce8881ca
Sampling code refactor to make it easier to add more conds.
1 year ago
comfyanonymous
8594c8be4d
Empty the cache when torch cache is more than 25% free mem.
1 year ago
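The threshold described above can be sketched as a simple check on the allocator's statistics. This is a hedged illustration only: the function name and the exact interpretation of "25% free mem" (cached-but-unallocated memory exceeding a quarter of total device memory) are assumptions, not ComfyUI's actual logic.

```python
def should_empty_cache(reserved_bytes, allocated_bytes, total_bytes):
    """Hypothetical sketch: decide whether to call torch.cuda.empty_cache().

    `reserved_bytes` is memory held by the caching allocator,
    `allocated_bytes` is memory actually in use by tensors.
    """
    # memory that was freed by tensors but is still held in the cache
    cached_free = reserved_bytes - allocated_bytes
    # empty the cache once the held-but-unused portion exceeds 25% of total
    return cached_free > 0.25 * total_bytes
```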
comfyanonymous
8b65f5de54
attention_basic now works with hypertile.
1 year ago
comfyanonymous
e6bc42df46
Make sub_quad and split work with hypertile.
1 year ago
comfyanonymous
a0690f9df9
Fix t2i adapter issue.
1 year ago
comfyanonymous
9906e3efe3
Make xformers work with hypertile.
1 year ago
comfyanonymous
4185324a1d
Fix uni_pc sampler math. This changes the images this sampler produces.
1 year ago
comfyanonymous
e6962120c6
Make sure cond_concat is on the right device.
1 year ago
comfyanonymous
45c972aba8
Refactor cond_concat into conditioning.
1 year ago
comfyanonymous
430a8334c5
Fix some potential issues.
1 year ago
comfyanonymous
782a24fce6
Refactor cond_concat into model object.
1 year ago
comfyanonymous
0d45a565da
Fix memory issue related to control loras.
...
The cleanup function was not getting called.
1 year ago
comfyanonymous
d44a2de49f
Make VAE code closer to sgm.
1 year ago
comfyanonymous
23680a9155
Refactor the attention stuff in the VAE.
1 year ago
comfyanonymous
c8013f73e5
Add some Quadro cards to the list of cards with broken fp16.
1 year ago
comfyanonymous
bb064c9796
Add a separate optimized_attention_masked function.
1 year ago
comfyanonymous
fd4c5f07e7
Add a --bf16-unet argument to test running the unet in bf16.
1 year ago
comfyanonymous
9a55dadb4c
Refactor code so model can be a dtype other than fp32 or fp16.
1 year ago
comfyanonymous
88733c997f
pytorch_attention_enabled can now return True when xformers is enabled.
1 year ago
comfyanonymous
20d3852aa1
Pull some small changes from the other repo.
1 year ago
comfyanonymous
ac7d8cfa87
Allow attn_mask in attention_pytorch.
1 year ago
comfyanonymous
1a4bd9e9a6
Refactor the attention functions.
...
There's no reason for the whole CrossAttention object to be repeated when
only the operation in the middle changes.
1 year ago
comfyanonymous
8cc75c64ff
Let unet wrapper functions have .to attributes.
1 year ago
comfyanonymous
5e885bd9c8
Cleanup.
1 year ago
comfyanonymous
851bb87ca9
Merge branch 'taesd_safetensors' of https://github.com/mochiya98/ComfyUI
1 year ago
Yukimasa Funaoka
9eb621c95a
Supports TAESD models in safetensors format
1 year ago
comfyanonymous
d1a0abd40b
Merge branch 'input-directory' of https://github.com/jn-jairo/ComfyUI
1 year ago
comfyanonymous
72188dffc3
load_checkpoint_guess_config can now optionally output the model.
1 year ago
Jairo Correa
63e5fd1790
Option to input directory
1 year ago
City
9bfec2bdbf
Fix quality loss due to low precision
1 year ago
badayvedat
0f17993d05
fix: typo in extra sampler
1 year ago
comfyanonymous
66756de100
Add SamplerDPMPP_2M_SDE node.
1 year ago
comfyanonymous
71713888c4
Print missing VAE keys.
1 year ago
comfyanonymous
d234ca558a
Add missing samplers to KSamplerSelect.
1 year ago
comfyanonymous
1adcc4c3a2
Add a SamplerCustom Node.
...
This node takes a list of sigmas and a sampler object as input.
This lets people easily implement custom schedulers and samplers as nodes.
More nodes will be added to it in the future.
1 year ago
comfyanonymous
bf3fc2f1b7
Refactor sampling related code.
1 year ago
comfyanonymous
fff491b032
Model patches can now know which batch is positive and negative.
1 year ago
comfyanonymous
1d6dd83184
Scheduler code refactor.
1 year ago
comfyanonymous
446caf711c
Sampling code refactor.
1 year ago
comfyanonymous
76cdc809bf
Support more controlnet models.
1 year ago
comfyanonymous
ae87543653
Merge branch 'cast_intel' of https://github.com/simonlui/ComfyUI
1 year ago
Simon Lui
eec449ca8e
Allow Intel GPUs to LoRA cast on GPU since it supports BF16 natively.
1 year ago
comfyanonymous
afa2399f79
Add a way to set output block patches to modify the h and hsp.
1 year ago
comfyanonymous
492db2de8d
Allow having a different pooled output for each image in a batch.
1 year ago
comfyanonymous
1cdfb3dba4
Only do the cast on the device if the device supports it.
1 year ago
comfyanonymous
7c9a92f552
Don't depend on torchvision.
1 year ago
MoonRide303
2b6b178173
Added support for lanczos scaling
1 year ago
comfyanonymous
b92bf8196e
Do lora cast on GPU instead of CPU for higher performance.
1 year ago
comfyanonymous
321c5fa295
Enable pytorch attention by default on xpu.
1 year ago
comfyanonymous
61b1f67734
Support models without previews.
1 year ago
comfyanonymous
43d4935a1d
Add cond_or_uncond array to transformer_options so hooks can check what is
cond and what is uncond.
1 year ago
comfyanonymous
415abb275f
Add DDPM sampler.
1 year ago
comfyanonymous
94e4fe39d8
This isn't used anywhere.
1 year ago
comfyanonymous
44361f6344
Support for text encoder models that need attention_mask.
1 year ago
comfyanonymous
0d8f376446
Setting the last layer on SD2.x models now uses the proper indexes.
...
Before I had made the last layer the penultimate layer because some
checkpoints don't have it, but that's not consistent with the other models.
TLDR: for SD2.x models only: CLIPSetLastLayer -1 is now -2.
1 year ago
comfyanonymous
0966d3ce82
Don't run text encoders on xpu because there are issues.
1 year ago
comfyanonymous
3039b08eb1
Only parse command line args when main.py is called.
1 year ago
comfyanonymous
ed58730658
Don't leave very large hidden states in the clip vision output.
1 year ago