1014 Commits (cd5017c1c9b3a7b0fec892e80290a32616bbff38)

Author SHA1 Message Date
comfyanonymous 104fcea0c8 Add function to get the list of currently loaded models. 9 months ago
comfyanonymous b1fd26fe9e pytorch xpu should use flash or mem efficient attention? 9 months ago
comfyanonymous 809cc85a8e Remove useless code. 9 months ago
comfyanonymous b249862080 Add an annoying print to a function I want to remove. 9 months ago
comfyanonymous bf3e334d46 Disable non_blocking when --deterministic or directml. 9 months ago
JettHu b26da2245f Fix UnetParams annotation typo (#3589) 9 months ago
comfyanonymous 0920e0e5fe Remove some unused imports. 9 months ago
comfyanonymous ffc4b7c30e Fix DORA strength.
This is a different version of #3298 with more correct behavior.
9 months ago
comfyanonymous efa5a711b2 Reduce memory usage when applying DORA: #3557 9 months ago
comfyanonymous 6c23854f54 Fix OSX latent2rgb previews. 9 months ago
Chenlei Hu 7718ada4ed Add type annotation UnetWrapperFunction (#3531)
* Add type annotation UnetWrapperFunction

* nit

* Add types.py
9 months ago
comfyanonymous 8508df2569 Work around black image bug on Mac 14.5 by forcing attention upcasting. 9 months ago
comfyanonymous 83d969e397 Disable xformers when tracing model. 9 months ago
comfyanonymous 1900e5119f Fix potential issue. 9 months ago
comfyanonymous 09e069ae6c Log the pytorch version. 9 months ago
comfyanonymous 11a2ad5110 Fix controlnet not upcasting on models that have it enabled. 9 months ago
comfyanonymous 0bdc2b15c7 Cleanup. 9 months ago
comfyanonymous 98f828fad9 Remove unnecessary code. 9 months ago
comfyanonymous 19300655dd Don't automatically switch to lowvram mode on GPUs with low memory. 9 months ago
comfyanonymous 46daf0a9a7 Add debug options to force on and off attention upcasting. 9 months ago
comfyanonymous 2d41642716 Fix lowvram dora issue. 9 months ago
comfyanonymous ec6f16adb6 Fix SAG. 9 months ago
comfyanonymous bb4940d837 Only enable attention upcasting on models that actually need it. 9 months ago
comfyanonymous b0ab31d06c Refactor attention upcasting code part 1. 10 months ago
Simon Lui f509c6fe21 Fix Intel GPU memory allocation accuracy and documentation update. (#3459)
* Change calculation of memory total to be more accurate; allocated is actually smaller than reserved.

* Update README.md install documentation for Intel GPUs.
10 months ago
comfyanonymous fa6dd7e5bb Fix lowvram issue with saving checkpoints.
The previous fix didn't cover the case where the model was loaded in
lowvram mode right before.
10 months ago
comfyanonymous 49c20cdc70 No longer necessary. 10 months ago
comfyanonymous e1489ad257 Fix issue with lowvram mode breaking model saving. 10 months ago
comfyanonymous 93e876a3be Remove warnings that confuse people. 10 months ago
comfyanonymous cd07340d96 Typo fix. 10 months ago
comfyanonymous c61eadf69a Make the load checkpoint with config function call the regular one.
I was going to completely remove this function because it is unmaintainable
but I think this is the best compromise.

The clip skip and v_prediction parts of the configs should still work but
not the fp16 vs fp32.
10 months ago
Simon Lui a56d02efc7 Change torch.xpu to ipex.optimize, xpu device initialization and remove workaround for text node issue from older IPEX. (#3388) 10 months ago
comfyanonymous f81a6fade8 Fix some edge cases with samplers and arrays with a single sigma. 10 months ago
comfyanonymous 2aed53c4ac Workaround xformers bug. 10 months ago
Garrett Sutula bacce529fb Add TLS Support (#3312)
* Add TLS Support

* Add to readme

* Add guidance for windows users on generating certificates

* Add guidance for windows users on generating certificates

* Fix typo
10 months ago
Jedrzej Kosinski 7990ae18c1 Fix error when more cond masks are passed in than the batch size (#3353) 10 months ago
comfyanonymous 8dc19e40d1 Don't init a VAE model when there are no VAE weights. 10 months ago
comfyanonymous c59fe9f254 Support VAE without quant_conv. 10 months ago
comfyanonymous 719fb2c81d Add basic PAG node. 10 months ago
comfyanonymous 258dbc06c3 Fix some memory related issues. 11 months ago
comfyanonymous 58812ab8ca Support SDXS 512 model. 11 months ago
comfyanonymous 831511a1ee Fix issue with sampling_settings persisting across models. 11 months ago
comfyanonymous 30abc324c2 Support properly saving CosXL checkpoints. 11 months ago
comfyanonymous 0a03009808 Fix issue with controlnet models getting loaded multiple times. 11 months ago
kk-89 38ed2da2dd Fix typo in lowvram patcher (#3209) 11 months ago
comfyanonymous 1088d1850f Support for CosXL models. 11 months ago
comfyanonymous 41ed7e85ea Fix object_patches_backup not being the same object across clones. 11 months ago
comfyanonymous 0f5768e038 Fix missing arguments in cfg_function. 11 months ago
comfyanonymous 1f4fc9ea0c Fix issue with get_model_object on patched model. 11 months ago
comfyanonymous 1a0486bb96 Fix model needing to be loaded on GPU to generate the sigmas. 11 months ago
comfyanonymous c6bd456c45 Make zero denoise a NOP. 11 months ago
comfyanonymous fcfd2bdf8a Small cleanup. 11 months ago
comfyanonymous 0542088ef8 Refactor sampler code for more advanced sampler nodes part 2. 11 months ago
comfyanonymous 57753c964a Refactor sampling code for more advanced sampler nodes. 11 months ago
comfyanonymous 6c6a39251f Fix saving text encoder in fp8. 11 months ago
comfyanonymous e6482fbbfc Refactor calc_cond_uncond_batch into calc_cond_batch.
calc_cond_batch can take an arbitrary amount of cond inputs.

Added a calc_cond_uncond_batch wrapper with a warning so custom nodes
won't break.
11 months ago
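The backwards-compatibility wrapper described in this commit follows a common pattern; below is a minimal sketch of it. The names calc_cond_batch and calc_cond_uncond_batch come from the commit message, but the parameter list and return shape are assumptions for illustration, not the actual comfy/samplers.py signatures.

```python
# Sketch of the deprecation-wrapper pattern described above (illustrative only).
import logging

def calc_cond_batch(model, conds, x_in, timestep, model_options):
    # New API per the commit: takes an arbitrary list of conds and evaluates
    # them together (the real batching logic is omitted in this sketch).
    return [model(x_in, timestep, c, model_options) for c in conds]

def calc_cond_uncond_batch(model, cond, uncond, x_in, timestep, model_options):
    # Old entry point kept as a thin wrapper so existing custom nodes keep working.
    logging.warning("calc_cond_uncond_batch is deprecated, use calc_cond_batch instead.")
    out = calc_cond_batch(model, [cond, uncond], x_in, timestep, model_options)
    return out[0], out[1]
```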
comfyanonymous 575acb69e4 IP2P model loading support.
This is the code to load the model and run inference on it with only a text
prompt. This commit does not contain the nodes to properly use it with an
image input.

This supports both the original SD1 instructpix2pix model and the
diffusers SDXL one.
11 months ago
comfyanonymous 94a5a67c32 Cleanup to support different types of inpaint models. 11 months ago
comfyanonymous 5d8898c056 Fix some performance issues with weight loading and unloading.
Lower peak memory usage when changing model.

Fix case where model weights would be unloaded and reloaded.
11 months ago
comfyanonymous 327ca1313d Support SDXS 0.9 11 months ago
comfyanonymous ae77590b4e dora_scale support for lora file. 11 months ago
comfyanonymous c6de09b02e Optimize memory unload strategy for better performance. 11 months ago
comfyanonymous 0624838237 Add inverse noise scaling function. 11 months ago
comfyanonymous 5d875d77fe Fix regression with lcm not working with batches. 11 months ago
comfyanonymous 4b9005e949 Fix regression with model merging. 11 months ago
comfyanonymous c18a203a8a Don't unload model weights for non weight patches. 11 months ago
comfyanonymous 150a3e946f Make LCM sampler use the model noise scaling function. 11 months ago
comfyanonymous 40e124c6be SV3D support. 11 months ago
comfyanonymous cacb022c4a Make saved SD1 checkpoints match more closely the official one. 11 months ago
comfyanonymous d7897fff2c Move cascade scale factor from stage_a to latent_formats.py 11 months ago
comfyanonymous f2fe635c9f SamplerDPMAdaptative node to test the different options. 11 months ago
comfyanonymous 448d9263a2 Fix control loras breaking. 12 months ago
comfyanonymous db8b59ecff Lower memory usage for loras in lowvram mode at the cost of perf. 12 months ago
comfyanonymous 2a813c3b09 Switch some more prints to logging. 12 months ago
comfyanonymous 0ed72befe1 Change log levels.
Logging level now defaults to info. --verbose sets it to debug.
12 months ago
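The behavior described here (default INFO, --verbose switches to DEBUG) can be pictured with a minimal sketch; the real argument handling lives elsewhere in ComfyUI, so this is an assumed illustration rather than the actual code.

```python
# Minimal sketch of "default to INFO, --verbose switches to DEBUG" (illustrative).
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true", help="enable debug logging")
args = parser.parse_args()

logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)
logging.debug("only visible with --verbose")
logging.info("visible by default")
```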
comfyanonymous 65397ce601 Replace prints with logging and add --verbose argument. 12 months ago
comfyanonymous 5f60ee246e Support loading the sr cascade controlnet. 12 months ago
comfyanonymous 03e6e81629 Set upscale algorithm to bilinear for stable cascade controlnet. 12 months ago
comfyanonymous 03e83bb5d0 Support stable cascade canny controlnet. 12 months ago
comfyanonymous 10860bcd28 Add compression_ratio to controlnet code. 12 months ago
comfyanonymous 478f71a249 Remove useless check. 12 months ago
comfyanonymous 12c1080ebc Simplify differential diffusion code. 12 months ago
Shiimizu 727021bdea Implement Differential Diffusion (#2876)
* Implement Differential Diffusion

* Cleanup.

* Fix.

* Masks should be applied at full strength.

* Fix colors.

* Register the node.

* Cleaner code.

* Fix issue with getting unipc sampler.

* Adjust thresholds.

* Switch to linear thresholds.

* Only calculate nearest_idx on valid thresholds.
12 months ago
comfyanonymous 1abf8374ec utils.set_attr can now be used to set any attribute.
The old set_attr has been renamed to set_attr_param.
12 months ago
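A dotted-path setter in the spirit of this change could look like the sketch below; the names set_attr and set_attr_param come from the commit message, but the bodies are assumptions for illustration, not the actual ComfyUI code.

```python
# Sketch of a generic dotted-path attribute setter (illustrative only).
import functools
import torch

def set_attr(obj, attr, value):
    # Walk "a.b.c" down to the parent object, then set the leaf attribute.
    *parents, leaf = attr.split(".")
    target = functools.reduce(getattr, parents, obj)
    setattr(target, leaf, value)

def set_attr_param(obj, attr, value):
    # Presumably wraps the value as a Parameter before assigning it to a module.
    set_attr(obj, attr, torch.nn.Parameter(value, requires_grad=False))
```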
comfyanonymous dce3555339 Add some tesla pascal GPUs to the fp16 working but slower list. 12 months ago
comfyanonymous 51df846598 Let conditioning specify custom concat conds. 12 months ago
comfyanonymous 9f71e4b62d Let model patches patch sub objects. 12 months ago
comfyanonymous 00425563c0 Cleanup: Use sampling noise scaling function for inpainting. 12 months ago
comfyanonymous c62e836167 Move noise scaling to object with sampling math. 12 months ago
comfyanonymous cb7c3a2921 Allow image_only_indicator to be None. 1 year ago
comfyanonymous b3e97fc714 Koala 700M and 1B support.
Use the UNET Loader node to load the unet file to use them.
1 year ago
comfyanonymous 37a86e4618 Remove duplicate text_projection key from some saved models. 1 year ago
comfyanonymous 8daedc5bf2 Auto detect playground v2.5 model. 1 year ago
comfyanonymous d46583ecec Playground V2.5 support with ModelSamplingContinuousEDM node.
Use ModelSamplingContinuousEDM with edm_playground_v2.5 selected.
1 year ago
comfyanonymous 1e0fcc9a65 Make XL checkpoints save in a more standard format. 1 year ago
comfyanonymous b416be7d78 Make the text projection saved in the checkpoint the right format. 1 year ago
comfyanonymous 03c47fc0f2 Add a min_length property to tokenizer class. 1 year ago
comfyanonymous 8ac69f62e5 Make return_projected_pooled settable from the __init__ 1 year ago
comfyanonymous ca7c310a0e Support loading old CLIP models saved with CLIPSave. 1 year ago
comfyanonymous c2cb8e889b Always return unprojected pooled output for gligen. 1 year ago
comfyanonymous 1cb3f6a83b Move text projection into the CLIP model code.
Fix issue with not loading the SSD1B clip correctly.
1 year ago
comfyanonymous 6533b172c1 Support text encoder text_projection in lora. 1 year ago
comfyanonymous 1e5f0f66be Support lora keys with lora_prior_unet_ and lora_prior_te_ 1 year ago
logtd e1cb93c383 Fix model and cond transformer options merge 1 year ago
comfyanonymous 10847dfafe Cleanup uni_pc inpainting.
This causes some small changes to the uni pc inpainting behavior but it
seems to improve results slightly.
1 year ago
comfyanonymous 18c151b3e3 Add some latent2rgb matrices for previews. 1 year ago
comfyanonymous 0d0fbabd1d Pass pooled CLIP to stage b. 1 year ago
comfyanonymous c6b7a157ed Align simple scheduling closer to official stable cascade scheduler. 1 year ago
comfyanonymous 88f300401c Enable fp16 by default on mps. 1 year ago
comfyanonymous e93cdd0ad0 Remove print. 1 year ago
comfyanonymous 3711b31dff Support Stable Cascade in checkpoint format. 1 year ago
comfyanonymous d91f45ef28 Some cleanups to how the text encoders are loaded. 1 year ago
comfyanonymous a7b5eaa7e3 Forgot to commit this. 1 year ago
comfyanonymous 3b2e579926 Support loading the Stable Cascade effnet and previewer as a VAE.
The effnet can be used to encode images for img2img with Stage C.
1 year ago
comfyanonymous dccca1daa5 Fix gligen lowvram mode. 1 year ago
comfyanonymous 8b60d33bb7 Add ModelSamplingStableCascade to control the shift sampling parameter.
shift is 2.0 by default on Stage C and 1.0 by default on Stage B.
1 year ago
comfyanonymous 6bcf57ff10 Fix attention masks properly for multiple batches. 1 year ago
comfyanonymous 11e3221f1f fp8 weight support for Stable Cascade. 1 year ago
comfyanonymous f8706546f3 Fix attention mask batch size in some attention functions. 1 year ago
comfyanonymous 3b9969c1c5 Properly fix attention masks in CLIP with batches. 1 year ago
comfyanonymous 5b40e7a5ed Implement shift schedule for cascade stage C. 1 year ago
comfyanonymous 929e266f3e Manual cast for bf16 on older GPUs. 1 year ago
comfyanonymous 6c875d846b Fix clip attention mask issues on some hardware. 1 year ago
comfyanonymous 805c36ac9c Make Stable Cascade work on old pytorch 2.0 1 year ago
comfyanonymous f2d1d16f4f Support Stable Cascade Stage B lite. 1 year ago
comfyanonymous 0b3c50480c Make --force-fp32 disable loading models in bf16. 1 year ago
comfyanonymous 97d03ae04a StableCascade CLIP model support. 1 year ago
comfyanonymous 667c92814e Stable Cascade Stage B. 1 year ago
comfyanonymous f83109f09b Stable Cascade Stage C. 1 year ago
comfyanonymous 5e06baf112 Stable Cascade Stage A. 1 year ago
comfyanonymous aeaeca10bd Small refactor of is_device_* functions. 1 year ago
comfyanonymous 38b7ac6e26 Don't init the CLIP model when the checkpoint has no CLIP weights. 1 year ago
comfyanonymous 7dd352cbd7 Merge branch 'feature_expose_discard_penultimate_sigma' of https://github.com/blepping/ComfyUI 1 year ago
Jedrzej Kosinski f44225fd5f Fix infinite while loop being possible in ddim_scheduler 1 year ago
comfyanonymous 25a4805e51 Add a way to set different conditioning for the controlnet. 1 year ago
blepping a352c021ec Allow custom samplers to request discard penultimate sigma 1 year ago
comfyanonymous c661a8b118 Don't use numpy for calculating sigmas. 1 year ago
comfyanonymous 236bda2683 Make minimum tile size the size of the overlap. 1 year ago
comfyanonymous 66e28ef45c Don't use is_bf16_supported to check for fp16 support. 1 year ago
comfyanonymous 24129d78e6 Speed up SDXL on 16xx series with fp16 weights and manual cast. 1 year ago
comfyanonymous 4b0239066d Always use fp16 for the text encoders. 1 year ago
comfyanonymous da7a8df0d2 Put VAE key name in model config. 1 year ago
comfyanonymous 89507f8adf Remove some unused imports. 1 year ago
Dr.Lt.Data 05cd00695a typo fix - calculate_sigmas_scheduler (#2619)
self.scheduler -> scheduler_name

Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>
1 year ago
comfyanonymous 4871a36458 Cleanup some unused imports. 1 year ago
comfyanonymous 78a70fda87 Remove useless import. 1 year ago
comfyanonymous d76a04b6ea Add unfinished ImageOnlyCheckpointSave node to save an SVD checkpoint.
This node is unfinished; SVD checkpoints saved with this node will
work with ComfyUI but not with anything else.
1 year ago
comfyanonymous f9e55d8463 Only auto enable bf16 VAE on nvidia GPUs that actually support it. 1 year ago
comfyanonymous 2395ae740a Make unclip more deterministic.
Pass a seed argument; note that this might make old unclip images different.
1 year ago
comfyanonymous 53c8a99e6c Make server storage the default.
Remove --server-storage argument.
1 year ago
comfyanonymous 977eda19a6 Don't round noise mask. 1 year ago
comfyanonymous 10f2609fdd Add InpaintModelConditioning node.
This is an alternative to VAE Encode for inpaint that should work with
lower denoise.

This is a different take on #2501
1 year ago
comfyanonymous 1a57423d30 Fix issue when using multiple t2i adapters with batched images. 1 year ago
comfyanonymous 6a7bc35db8 Use basic attention implementation for small inputs on old pytorch. 1 year ago
pythongosssss 235727fed7 Store user settings/data on the server and multi user support (#2160)
* wip per user data

* Rename, hide menu

* better error
rework default user

* store pretty

* Add userdata endpoints
Change nodetemplates to userdata

* add multi user message

* make normal arg

* Fix tests

* Ignore user dir

* user tests

* Changed to default to browser storage and add server-storage arg

* fix crash on empty templates

* fix settings added before load

* ignore parse errors
1 year ago
comfyanonymous c6951548cf Update optimized_attention_for_device function for new functions that support masked attention. 1 year ago
1 year ago
comfyanonymous aaa9017302 Add attention mask support to sub quad attention. 1 year ago
comfyanonymous 0c2c9fbdfa Support attention mask in split attention. 1 year ago
comfyanonymous 3ad0191bfb Implement attention mask on xformers. 1 year ago
comfyanonymous 8c6493578b Implement noise augmentation for SD 4X upscale model. 1 year ago
comfyanonymous ef4f6037cb Fix model patches not working in custom sampling scheduler nodes. 1 year ago
comfyanonymous a7874d1a8b Add support for the stable diffusion x4 upscaling model.
This is an old model.

Load the checkpoint like a regular one and use the new
SD_4XUpscale_Conditioning node.
1 year ago
comfyanonymous 2c4e92a98b Fix regression. 1 year ago
comfyanonymous 5eddfdd80c Refactor VAE code.
Replace constants with downscale_ratio and latent_channels.
1 year ago
comfyanonymous a47f609f90 Auto detect out_channels from model. 1 year ago
comfyanonymous 79f73a4b33 Remove useless code. 1 year ago
comfyanonymous 1b103e0cb2 Add argument to run the VAE on the CPU. 1 year ago
comfyanonymous 12e822c6c8 Use function to calculate model size in model patcher. 1 year ago
comfyanonymous e1e322cf69 Load weights that can't be lowvramed to target device. 1 year ago
comfyanonymous c782144433 Fix clip vision lowvram mode not working. 1 year ago
comfyanonymous f21bb41787 Fix taesd VAE in lowvram mode. 1 year ago
comfyanonymous 61b3f15f8f Fix lowvram mode not working with unCLIP and Revision code. 1 year ago
comfyanonymous d0165d819a Fix SVD lowvram mode. 1 year ago
comfyanonymous a252963f95 --disable-smart-memory now unloads everything like it did originally. 1 year ago
comfyanonymous 36a7953142 Greatly improve lowvram sampling speed by getting rid of accelerate.
Let me know if this breaks anything.
1 year ago
comfyanonymous 261bcbb0d9 A few missing comfy ops in the VAE. 1 year ago
comfyanonymous 9a7619b72d Fix regression with inpaint model. 1 year ago
comfyanonymous 571ea8cdcc Fix SAG not working with cfg 1.0 1 year ago
comfyanonymous 8cf1daa108 Fix SDXL area composition sometimes not using the right pooled output. 1 year ago
comfyanonymous 2258f85159 Support stable zero 123 model.
To use it use the ImageOnlyCheckpointLoader to load the checkpoint and
the new Stable_Zero123 node.
1 year ago
comfyanonymous 2f9d6a97ec Add --deterministic option to make pytorch use deterministic algorithms. 1 year ago
comfyanonymous e45d920ae3 Don't resize clip vision image when the size is already good. 1 year ago
comfyanonymous 13e6d5366e Switch clip vision to manual cast.
Make it use the same dtype as the text encoder.
1 year ago
comfyanonymous 719fa0866f Set clip vision model in eval mode so it works without inference mode. 1 year ago
Hari 574363a8a6 Implement Perp-Neg 1 year ago
comfyanonymous a5056cfb1f Remove useless code. 1 year ago
comfyanonymous 329c571993 Improve code legibility. 1 year ago
comfyanonymous 6c5990f7db Fix cfg being calculated more than once if sampler_cfg_function. 1 year ago
comfyanonymous ba04a87d10 Refactor and improve the sag node.
Moved all the sag related code to comfy_extras/nodes_sag.py
1 year ago
Rafie Walker 6761233e9d Implement Self-Attention Guidance (#2201)
* First SAG test

* need to put extra options on the model instead of patcher

* no errors and results seem not-broken

* Use @ashen-uncensored formula, which works better!!!

* Fix a crash when using weird resolutions. Remove an unnecessary UNet call

* Improve comments, optimize memory in blur routine

* SAG works with sampler_cfg_function
1 year ago
comfyanonymous b454a67bb9 Support segmind vega model. 1 year ago
comfyanonymous 824e4935f5 Add dtype parameter to VAE object. 1 year ago
comfyanonymous 32b7e7e769 Add manual cast to controlnet. 1 year ago
comfyanonymous 3152023fbc Use inference dtype for unet memory usage estimation. 1 year ago
comfyanonymous 77755ab8db Refactor comfy.ops
comfy.ops -> comfy.ops.disable_weight_init

This should make it more clear what they actually do.

Some unused code has also been removed.
1 year ago
comfyanonymous b0aab1e4ea Add an option --fp16-unet to force using fp16 for the unet. 1 year ago
comfyanonymous ba07cb748e Use faster manual cast for fp8 in unet. 1 year ago
comfyanonymous 57926635e8 Switch text encoder to manual cast.
Use fp16 text encoder weights for CPU inference to lower memory usage.
1 year ago
comfyanonymous 340177e6e8 Disable non blocking on mps. 1 year ago
comfyanonymous 614b7e731f Implement GLora. 1 year ago
comfyanonymous cb63e230b4 Make lora code a bit cleaner. 1 year ago
comfyanonymous 174eba8e95 Use own clip vision model implementation. 1 year ago
comfyanonymous 97015b6b38 Cleanup. 1 year ago
comfyanonymous a4ec54a40d Add linear_start and linear_end to model_config.sampling_settings 1 year ago
comfyanonymous 9ac0b487ac Make --gpu-only put intermediate values in GPU memory instead of cpu. 1 year ago
comfyanonymous efb704c758 Support attention masking in CLIP implementation. 1 year ago
comfyanonymous fbdb14d4c4 Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.

This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
1 year ago
comfyanonymous 2db86b4676 Slightly faster lora applying. 1 year ago
comfyanonymous 1bbd65ab30 Missed this one. 1 year ago
comfyanonymous 9b655d4fd7 Fix memory issue with control loras. 1 year ago
comfyanonymous 26b1c0a771 Fix control lora on fp8. 1 year ago
comfyanonymous be3468ddd5 Less useless downcasting. 1 year ago
comfyanonymous ca82ade765 Use .itemsize to get dtype size for fp8. 1 year ago
comfyanonymous 31b0f6f3d8 UNET weights can now be stored in fp8.
--fp8_e4m3fn-unet and --fp8_e5m2-unet are the two different formats
supported by pytorch.
1 year ago
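As a rough illustration of what storing weights in those two fp8 formats means (assumes a PyTorch build with float8 dtypes; this is not ComfyUI code):

```python
# Illustrative sketch: the two fp8 storage formats named above, plus the
# "manual cast" idea of upcasting to a compute dtype before use.
import torch

w = torch.randn(4, 4)
w_e4m3 = w.to(torch.float8_e4m3fn)   # storage format behind --fp8_e4m3fn-unet
w_e5m2 = w.to(torch.float8_e5m2)     # storage format behind --fp8_e5m2-unet

x = torch.randn(4, 4)
y = x @ w_e4m3.to(torch.float32)     # upcast the stored weight before the matmul
print(w_e4m3.element_size(), y.dtype)  # 1 byte per element; result in float32
```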
comfyanonymous af365e4dd1 All the unet ops with weights are now handled by comfy.ops 1 year ago
comfyanonymous 61a123a1e0 A different way of handling multiple images passed to SVD.
Previously, when a list of 3 images [0, 1, 2] was used for a 6 frame video
they were concatenated like this:
[0, 1, 2, 0, 1, 2]

Now they are concatenated like this:
[0, 0, 1, 1, 2, 2]
1 year ago
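The ordering change can be pictured with a small PyTorch sketch (purely illustrative; the tensor here just stands in for a batch of images):

```python
# Purely illustrative: old tiled repeat vs. new interleaved repeat.
import torch

images = torch.tensor([0, 1, 2])        # stands in for 3 input images
repeats = 6 // images.numel()           # 6 video frames from 3 images

old_order = images.repeat(repeats)              # tensor([0, 1, 2, 0, 1, 2])
new_order = images.repeat_interleave(repeats)   # tensor([0, 0, 1, 1, 2, 2])
print(old_order.tolist(), new_order.tolist())
```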
comfyanonymous c97be4db91 Support SD2.1 turbo checkpoint. 1 year ago
comfyanonymous 983ebc5792 Use smart model management for VAE to decrease latency. 1 year ago
comfyanonymous c45d1b9b67 Add a function to load a unet from a state dict. 1 year ago
comfyanonymous f30b992b18 .sigma and .timestep now return tensors on the same device as the input. 1 year ago
comfyanonymous 13fdee6abf Try to free memory for both cond+uncond before inference. 1 year ago
comfyanonymous be71bb5e13 Tweak memory inference calculations a bit. 1 year ago
comfyanonymous 39e75862b2 Fix regression from last commit. 1 year ago
comfyanonymous 50dc39d6ec Clean up the extra_options dict for the transformer patches.
Now everything in transformer_options gets put in extra_options.
1 year ago
comfyanonymous 5d6dfce548 Fix importing diffusers unets. 1 year ago
comfyanonymous 3e5ea74ad3 Make buggy xformers fall back on pytorch attention. 1 year ago
comfyanonymous 871cc20e13 Support SVD img2vid model. 1 year ago
comfyanonymous 410bf07771 Make VAE memory estimation take dtype into account. 1 year ago
comfyanonymous 32447f0c39 Add sampling_settings so models can specify specific sampling settings. 1 year ago
comfyanonymous c3ae99a749 Allow controlling downscale and upscale methods in PatchModelAddDownscale. 1 year ago
comfyanonymous 72741105a6 Remove useless code. 1 year ago
comfyanonymous 6a491ebe27 Allow model config to preprocess the vae state dict on load. 1 year ago
comfyanonymous cd4fc77d5f Add taesd and taesdxl to VAELoader node.
They will show up if both the taesd_encoder and taesd_decoder or taesdxl
model files are present in the models/vae_approx directory.
1 year ago
comfyanonymous ce67dcbcda Make it easy for models to process the unet state dict on load. 1 year ago
comfyanonymous d9d8702d8d percent_to_sigma now returns a float instead of a tensor. 1 year ago
comfyanonymous 0cf4e86939 Add some command line arguments to store text encoder weights in fp8.
Pytorch supports two variants of fp8:
--fp8_e4m3fn-text-enc (the one that seems to give better results)
--fp8_e5m2-text-enc
1 year ago
comfyanonymous 107e78b1cb Add support for loading SSD1B diffusers unet version.
Improve diffusers model detection.
1 year ago
comfyanonymous 7e3fe3ad28 Make deep shrink behave like it should. 1 year ago
comfyanonymous 9f00a18095 Fix potential issues. 1 year ago
comfyanonymous 7ea6bb038c Print warning when controlnet can't be applied instead of crashing. 1 year ago
comfyanonymous dcec1047e6 Invert the start and end percentages in the code.
This doesn't affect how percentages behave in the frontend but breaks
things if you relied on them in the backend.

percent_to_sigma goes from 0 to 1.0 instead of 1.0 to 0 for less confusion.

Make percent 0 return an extremely large sigma and percent 1.0 return a
sigma of zero to fix imprecision.
1 year ago
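A toy version of the convention described above might look like the following; this is assumed behavior for illustration, not the actual implementation.

```python
# Toy illustration of the percent convention above: percent 0 maps to a very
# large sigma (start of sampling), percent 1.0 maps to sigma 0 (end).
def percent_to_sigma(percent, sigmas):
    # `sigmas` is assumed to be the model's schedule, sorted high to low.
    if percent <= 0.0:
        return 1e9            # effectively "before any denoising"
    if percent >= 1.0:
        return 0.0            # effectively "fully denoised"
    idx = round(percent * (len(sigmas) - 1))
    return sigmas[idx]

schedule = [14.6, 7.0, 3.0, 1.0, 0.3, 0.0]
print(percent_to_sigma(0.0, schedule), percent_to_sigma(0.5, schedule), percent_to_sigma(1.0, schedule))
```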
comfyanonymous 57eea0efbb heunpp2 sampler. 1 year ago
comfyanonymous 728613bb3e Fix last pr. 1 year ago
comfyanonymous ec3d0ab432 Merge branch 'master' of https://github.com/Jannchie/ComfyUI 1 year ago
comfyanonymous c962884a5c Make bislerp work on GPU. 1 year ago
comfyanonymous 420beeeb05 Clean up and refactor sampler code.
This should make it much easier to write custom nodes with kdiffusion type
samplers.
1 year ago
Jianqi Pan f2e49b1d57 fix: adaptation to older versions of pytorch 1 year ago
comfyanonymous 94cc718e9c Add a way to add patches to the input block. 1 year ago
comfyanonymous 7339479b10 Disable xformers when it can't load properly. 1 year ago
comfyanonymous 4781819a85 Make memory estimation aware of model dtype. 1 year ago