1386 Commits (43f2505389a8cc37c3a76b676f2b31de65056d34)
Author SHA1 Message Date
comfyanonymous 43f2505389 Merge branch 'fix/widget-wonkyness' of https://github.com/M1kep/ComfyUI 2 years ago
comfyanonymous 0e3b641172 Remove xformers related print. 2 years ago
comfyanonymous 5c363a9d86 Fix controlnet bug. 2 years ago
Michael Poutre 69c5e6de85 fix(widgets): Add options object if not present when forceInput: true 2 years ago
Michael Poutre 9a7a52f8b5 refactor/fix: Treat forceInput widgets as standard widgets 2 years ago
comfyanonymous cfe1c54de8 Fix controlnet issue. 2 years ago
comfyanonymous 57beace324 Fix VAEDecodeTiled minimum. 2 years ago
comfyanonymous 1c012d69af It doesn't make sense for c_crossattn and c_concat to be lists. 2 years ago
comfyanonymous 5f101f4da1 Update litegraph with upstream: middle mouse dragging. 2 years ago
Ridan Vandenbergh 2cd3980199 Remove forced lowercase on embeddings endpoint 2 years ago
comfyanonymous 7e941f9f24 Clean up DiffusersLoader node. 2 years ago
Simon Lui 18617967e5 Fix error message in model_patcher.py (Found while tinkering.) 2 years ago
comfyanonymous fe4c07400c Fix "Load Checkpoint with config" node. 2 years ago
comfyanonymous d70b0bc43c Use the GPU for the canny preprocessor when available. 2 years ago
comfyanonymous 81d9200e18 Add node to convert a specific colour in an image to a mask. 2 years ago
comfyanonymous f2f5e5dcbb Support SDXL t2i adapters with 3 channel input. 2 years ago
comfyanonymous 15adc3699f Move beta_schedule to model_config and allow disabling unet creation. 2 years ago
comfyanonymous 968078b149 Merge branch 'feat/mute_bypass_nodes_in_group' of https://github.com/M1kep/ComfyUI 2 years ago
comfyanonymous 66c690e698 Merge branch 'preserve-pnginfo' of https://github.com/chrisgoringe/ComfyUI 2 years ago
comfyanonymous bed116a1f9 Remove optimization that caused border. 2 years ago
Chris 18379dea36 check for text attr and save 2 years ago
Chris edcff9ab8a copy metadata into modified image 2 years ago
Michael Poutre 6944288aff refactor(ui): Switch statement, and handle other modes in group actions 2 years ago
Michael Poutre e30d546e38 feat(ui): Add node mode toggles to group context menu 2 years ago
comfyanonymous 8ddd081b09 Use the same units for tile size in VAEDecodeTiled and VAEEncodeTiled. 2 years ago
comfyanonymous fbf375f161 Merge branch 'master' of https://github.com/bvhari/ComfyUI 2 years ago
comfyanonymous 65cae62c71 No need to check filename extensions to detect shuffle controlnet. 2 years ago
comfyanonymous 4e89b2c25a Put clip vision outputs on the CPU. 2 years ago
comfyanonymous a094b45c93 Load clipvision model to GPU for faster performance. 2 years ago
comfyanonymous 1300a1bb4c Text encoder should initially load on the offload_device not the regular. 2 years ago
comfyanonymous f92074b84f Move ModelPatcher to model_patcher.py 2 years ago
BVH d86b222fe9 Reduce min tile size for encode 2 years ago
comfyanonymous 4798cf5a62 Implement loras with norm keys. 2 years ago
BVH 9196588088 Make tile size in Tiled VAE encode/decode user configurable 2 years ago
Dr.Lt.Data 0faee1186f support on prompt event handler (#765) (Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>) 2 years ago
comfyanonymous b8c7c770d3 Enable bf16-vae by default on ampere and up. 2 years ago
comfyanonymous 1c794a2161 Fallback to slice attention if xformers doesn't support the operation. 2 years ago
comfyanonymous d935ba50c4 Make --bf16-vae work on torch 2.0 2 years ago
comfyanonymous 412596d325 Merge branch 'increase_client_max_size' of https://github.com/ramyma/ComfyUI 2 years ago
Dr.Lt.Data d9f4922993 fix: cannot disable dynamicPrompts (#1327) (Co-authored-by: Lt.Dr.Data <lt.dr.data@gmail.com>) 2 years ago
ramyma 0b6cf7a558 Increase client_max_size to allow bigger request bodies 2 years ago
comfyanonymous a57b0c797b Fix lowvram model merging. 2 years ago
comfyanonymous f72780a7e3 The new smart memory management makes this unnecessary. 2 years ago
comfyanonymous c77f02e1c6 Move controlnet code to comfy/controlnet.py 2 years ago
comfyanonymous 15a7716fa6 Move lora code to comfy/lora.py 2 years ago
comfyanonymous ec96f6d03a Move text_projection to base clip model. 2 years ago
comfyanonymous 30eb92c3cb Code cleanups. 2 years ago
comfyanonymous 51dde87e97 Try to free enough vram for control lora inference. 2 years ago
comfyanonymous e3d0a9a490 Fix potential issue with text projection matrix multiplication. 2 years ago
comfyanonymous cc44ade79e Always shift text encoder to GPU when the device supports fp16. 2 years ago