381 Commits (5e2b4893da62dec1c9ee4d0167f6b62e3b11fbef)

Author SHA1 Message Date
comfyanonymous 1f0f4cc0bd Add argument to disable auto launching the browser. 2 years ago
comfyanonymous d8e58f0a7e Detect hint_channels from controlnet. 2 years ago
comfyanonymous c5d7593ccf Support loras in diffusers format. 2 years ago
comfyanonymous 1ce0d8ad68 Add CMP 30HX card to the nvidia_16_series list. 2 years ago
comfyanonymous c99d8002f8 Make sure the pooled output stays at the EOS token with added embeddings. 2 years ago
comfyanonymous 4a77fcd6ab Only shift text encoder to vram when CPU cores are under 8. 2 years ago
comfyanonymous 3cd31d0e24 Lower CPU thread check for running the text encoder on the CPU vs GPU. 2 years ago
comfyanonymous 2b13939044 Remove some useless code. 2 years ago
comfyanonymous 95d796fc85 Faster VAE loading. 2 years ago
comfyanonymous 4b957a0010 Initialize the unet directly on the target device. 2 years ago
comfyanonymous c910b4a01c Remove unused code and torchdiffeq dependency. 2 years ago
comfyanonymous 1141029a4a Add --disable-metadata argument to disable saving metadata in files. 2 years ago
comfyanonymous fbf5c51c1c Merge branch 'fix_batch_timesteps' of https://github.com/asagi4/ComfyUI 2 years ago
comfyanonymous 68be24eead Remove some prints. 2 years ago
asagi4 1ea4d84691 Fix timestep ranges when batch_size > 1 2 years ago
comfyanonymous 5379051d16 Fix diffusers VAE loading. 2 years ago
comfyanonymous 727588d076 Fix some new loras. 2 years ago
comfyanonymous 4f9b6f39d1 Fix potential issue with Save Checkpoint. 2 years ago
comfyanonymous 5f75d784a1 Start is now 0.0 and end is now 1.0 for the timestep ranges. 2 years ago
comfyanonymous 7ff14b62f8 ControlNetApplyAdvanced can now define when controlnet gets applied. 2 years ago
comfyanonymous d191c4f9ed Add a ControlNetApplyAdvanced node. The controlnet can be applied to the positive or negative prompt only by connecting it correctly. 2 years ago
comfyanonymous 0240946ecf Add a way to set which range of timesteps the cond gets applied to. 2 years ago
comfyanonymous 22f29d66ca Try to fix memory issue with lora. 2 years ago
comfyanonymous 67be7eb81d Nodes can now patch the unet function. 2 years ago
comfyanonymous 12a6e93171 Del the right object when applying lora. 2 years ago
comfyanonymous 78e7958d17 Support controlnet in diffusers format. 2 years ago
comfyanonymous 09386a3697 Fix issue with lora in some cases when combined with model merging. 2 years ago
comfyanonymous 58b2364f58 Properly support SDXL diffusers unet with UNETLoader node. 2 years ago
comfyanonymous 0115018695 Print errors and continue when lora weights are not compatible. 2 years ago
comfyanonymous 4760c29380 Merge branch 'fix-AttributeError-module-'torch'-has-no-attribute-'mps'' of https://github.com/KarryCharon/ComfyUI 2 years ago
comfyanonymous 0b284f650b Fix typo. 2 years ago
comfyanonymous e032ca6138 Fix ddim issue with older torch versions. 2 years ago
comfyanonymous 18885f803a Add MX450 and MX550 to list of cards with broken fp16. 2 years ago
comfyanonymous 9ba440995a It's actually possible to torch.compile the unet now. 2 years ago
comfyanonymous 51d5477579 Add key to indicate checkpoint is v_prediction when saving. 2 years ago
comfyanonymous ff6b047a74 Fix device print on old torch version. 2 years ago
comfyanonymous 9871a15cf9 Enable --cuda-malloc by default on torch 2.0 and up. Add --disable-cuda-malloc to disable it. 2 years ago
comfyanonymous 55d0fca9fa --windows-standalone-build now enables --cuda-malloc 2 years ago
comfyanonymous 1679abd86d Add a command line argument to enable backend:cudaMallocAsync 2 years ago
comfyanonymous 3a150bad15 Only calculate randn in some samplers when it's actually being used. 2 years ago
comfyanonymous ee8f8ee07f Fix regression with ddim and uni_pc when batch size > 1. 2 years ago
comfyanonymous 3ded1a3a04 Refactor of sampler code to deal more easily with different model types. 2 years ago
comfyanonymous 5f57362613 Lower lora ram usage when in normal vram mode. 2 years ago
comfyanonymous 490771b7f4 Speed up lora loading a bit. 2 years ago
comfyanonymous 50b1180dde Fix CLIPSetLastLayer not reverting when removed. 2 years ago
comfyanonymous 6fb084f39d Reduce floating point rounding errors in loras. 2 years ago
comfyanonymous 91ed2815d5 Add a node to merge CLIP models. 2 years ago
comfyanonymous b2f03164c7 Prevent the clip_g position_ids key from being saved in the checkpoint. This is to make it match the official checkpoint. 2 years ago
comfyanonymous 46dc050c9f Fix potential tensors being on different devices issues. 2 years ago
KarryCharon 3e2309f149 fix mps miss import 2 years ago