Commit Graph

  • 34608de2e9 Not sure if this actually changes anything but it can't hurt. comfyanonymous 2024-08-13 12:35:25 -0400
  • 39fb74c5bd Fix bug when model cannot be partially unloaded. comfyanonymous 2024-08-13 03:57:55 -0400
  • 74e124f4d7 Fix some issues with TE being in lowvram mode. comfyanonymous 2024-08-12 23:42:21 -0400
  • a562c17e8a load_unet -> load_diffusion_model with a model_options argument. comfyanonymous 2024-08-12 23:18:54 -0400
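    Note: a minimal sketch of the renamed loader, assuming the new model_options dict absorbs per-call knobs such as dtype (the file path and the "dtype" key below are illustrative, not confirmed by the commit):

        import torch
        import comfy.sd

        # Before: comfy.sd.load_unet(unet_path, dtype=torch.float8_e4m3fn)
        # After: the same options travel in a single model_options dict.
        model = comfy.sd.load_diffusion_model(
            "models/unet/flux1-dev.safetensors",           # hypothetical path
            model_options={"dtype": torch.float8_e4m3fn},  # assumed key
        )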
  • 5942c17d55 Order of operations matters. comfyanonymous 2024-08-12 21:56:18 -0400
  • c032b11e07 xlabs Flux controlnet implementation. (#4260) comfyanonymous 2024-08-12 21:22:22 -0400
  • b8ffb2937f Memory tweaks. comfyanonymous 2024-08-12 15:03:33 -0400
  • ce37c11164 add DS_Store to gitignore (#4324) Vladimir Semyonov 2024-08-12 19:32:34 +0300
  • b5c3906b38 Automatically link the Comfy CI page on PRs (#4326) Alex "mcmonkey" Goodwin 2024-08-12 09:32:16 -0700
  • 5d43e75e5b Fix some issues with the model sometimes not getting patched. comfyanonymous 2024-08-12 12:27:54 -0400
  • 517f4a94e4 Fix some lora loading slowdowns. comfyanonymous 2024-08-12 11:38:06 -0400
  • 52a471c5c7 Change name of log. comfyanonymous 2024-08-12 10:35:06 -0400
  • ad76574cb8 Fix some potential issues with the previous commits. comfyanonymous 2024-08-12 00:23:29 -0400
  • 9acfe4df41 Support loading directly to vram with CLIPLoader node. comfyanonymous 2024-08-12 00:06:01 -0400
  • 9829b013ea Fix mistake in last commit. comfyanonymous 2024-08-12 00:00:17 -0400
  • 5c69cde037 Load TE model straight to vram if certain conditions are met. comfyanonymous 2024-08-11 23:50:01 -0400
  • e9589d6d92 Add a way to set model dtype and ops from load_checkpoint_guess_config. comfyanonymous 2024-08-11 08:50:34 -0400
  • 0d82a798a5 Remove the ckpt_path from load_state_dict_guess_config. comfyanonymous 2024-08-11 08:37:35 -0400
  • 925fff26fd alternative to `load_checkpoint_guess_config` that accepts a loaded state dict (#4249) ljleb 2024-08-11 08:36:52 -0400
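    Note: a minimal sketch of how the state-dict variant might be called after the ckpt_path removal two commits up; the checkpoint filename is illustrative, and the 4-tuple return is an assumption that it mirrors load_checkpoint_guess_config:

        import comfy.sd
        import comfy.utils

        # Load the state dict yourself, then let ComfyUI guess the config.
        sd = comfy.utils.load_torch_file("models/checkpoints/model.safetensors")
        model, clip, vae, clipvision = comfy.sd.load_state_dict_guess_config(sd)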
  • 75b9b55b22 Fix issues with #4302 and support loading diffusers format flux. comfyanonymous 2024-08-10 21:28:24 -0400
  • 1765f1c60c FLUX: Added full diffusers mapping for FLUX.1 schnell and dev. Adds full LoRA support from diffusers LoRAs. (#4302) Jaret Burkett 2024-08-10 19:26:41 -0600
  • 1de69fe4d5 Fix some issues with inference slowing down. comfyanonymous 2024-08-10 15:29:36 -0400
  • ae197f651b Speed up hunyuan dit inference a bit. comfyanonymous 2024-08-10 07:36:27 -0400
  • 1b5b8ca81a Fix regression. comfyanonymous 2024-08-09 21:45:21 -0400
  • 6678d5cf65 Fix regression. comfyanonymous 2024-08-09 14:02:38 -0400
  • e172564eea Update controlnet.py to fix the default controlnet weight as constant (#4285) TTPlanetPig 2024-08-10 01:40:05 +0800
  • a3cc326748 Better fix for lowvram issue. comfyanonymous 2024-08-09 12:16:25 -0400
  • 86a97e91fc Fix controlnet regression. comfyanonymous 2024-08-09 12:08:58 -0400
  • 5acdadc9f3 Fix issue with some lowvram weights. comfyanonymous 2024-08-09 03:58:28 -0400
  • 55ad9d5f8c Fix regression. comfyanonymous 2024-08-09 03:36:40 -0400
  • a9f04edc58 Implement text encoder part of HunyuanDiT loras. comfyanonymous 2024-08-09 03:21:10 -0400
  • a475ec2300 Cleanup HunyuanDit controlnets. comfyanonymous 2024-08-09 02:35:19 -0400
  • 06eb9fb426 feat: add support for HunYuanDit ControlNet (#4245) 来新璐 2024-08-09 14:59:24 +0800
  • 413322645e Raw torch is faster than einops? comfyanonymous 2024-08-08 22:09:29 -0400
  • 11200de970 Cleaner code. comfyanonymous 2024-08-08 20:07:09 -0400
  • 037c38eb0f Try to improve inference speed on some machines. comfyanonymous 2024-08-08 17:28:35 -0400
  • 1e11d2d1f5 Better prints. comfyanonymous 2024-08-08 17:05:16 -0400
  • 65ea6be38f PullRequest CI Run: use pull_request_target to allow the CI Dashboard to work (#4277) Alex "mcmonkey" Goodwin 2024-08-08 14:20:48 -0700
  • 5df6f57b5d minor fix on copypasta action name (#4276) Alex "mcmonkey" Goodwin 2024-08-08 13:30:59 -0700
  • 6588bfdef9 add GitHub workflow for CI tests of PRs (#4275) Alex "mcmonkey" Goodwin 2024-08-08 13:24:49 -0700
  • 50ed2879ef Add full CI test matrix GitHub Workflow (#4274) Alex "mcmonkey" Goodwin 2024-08-08 12:40:07 -0700
  • 66d4233210 Fix. comfyanonymous 2024-08-08 15:16:51 -0400
  • 591010b7ef Support diffusers text attention flux loras. comfyanonymous 2024-08-08 14:45:52 -0400
  • 08f92d55e9 Partial model shift support. comfyanonymous 2024-08-08 03:27:37 -0400
  • 8115d8cce9 Add Flux fp16 support hack. comfyanonymous 2024-08-07 15:08:39 -0400
  • 6969fc9ba4 Make supported_dtypes a priority list. comfyanonymous 2024-08-07 15:00:06 -0400
  • cb7c4b4be3 Workaround for lora OOM on lowvram mode. comfyanonymous 2024-08-07 14:30:54 -0400
  • 1208863eca Fix "Comfy" lora keys. comfyanonymous 2024-08-07 13:49:31 -0400
  • e1c528196e Fix bundled embed. comfyanonymous 2024-08-07 13:30:45 -0400
  • 17030fd4c0 Support for "Comfy" lora format. comfyanonymous 2024-08-07 13:18:32 -0400
  • c19dcd362f Controlnet code refactor. comfyanonymous 2024-08-07 12:59:28 -0400
  • 1c08bf35b4 Support format for embeddings bundled in loras. comfyanonymous 2024-08-07 03:45:25 -0400
  • 2a02546e20 Add type hints to folder_paths.py (#4191) PhilWun 2024-08-07 03:59:34 +0200
  • b334605a66 Fix OOMs happening in some cases. comfyanonymous 2024-08-06 13:27:48 -0400
  • de17a9755e Unload all models if there's an OOM error. comfyanonymous 2024-08-06 03:30:28 -0400
  • c14ac98fed Unload models and load them back in lowvram mode when there's no free vram. comfyanonymous 2024-08-06 03:22:39 -0400
  • 2894511893 Clone taesd with depth of 1 to reduce download size. (#4232) Robin Huang 2024-08-05 22:46:09 -0700
  • f3bc40223a Add format metadata to CLIP save to make compatible with diffusers safetensors loading (#4233) Silver 2024-08-06 07:45:24 +0200
  • 841e74ac40 Change browser test CI python to 3.8 (#4234) Chenlei Hu 2024-08-06 01:27:28 -0400
  • 2d75df45e6 Flux tweak memory usage. comfyanonymous 2024-08-05 21:58:28 -0400
  • 1abc9c8703 Stable release uses cached dependencies (#4231) Robin Huang 2024-08-05 17:07:16 -0700
  • 8edbcf5209 Improve performance on some lowend GPUs. comfyanonymous 2024-08-05 16:24:04 -0400
  • e545a636ba This probably doesn't work anymore. comfyanonymous 2024-08-05 12:31:12 -0400
  • 33e5203a2a Don't cache index.html (#4211) bymyself 2024-08-05 09:25:28 -0700
  • a178e25912 Fix Flux FP64 math on XPU (#4210) a-One-Fan 2024-08-05 08:26:20 +0300
  • 78e133d041 Support simple diffusers Flux loras. comfyanonymous 2024-08-04 21:59:42 -0400
  • 7afa985fba Correct spelling 'token_weight_pars_t5' to 'token_weight_pairs_t5' (#4200) Silver 2024-08-04 23:10:02 +0200
  • ddb6a9f47c Set the step in EmptySD3LatentImage to 16. comfyanonymous 2024-08-04 15:59:02 -0400
  • 3b71f84b50 ONNX tracing fixes. comfyanonymous 2024-08-04 15:45:43 -0400
  • 0a6b008117 Fix issue with some custom nodes. comfyanonymous 2024-08-04 10:03:33 -0400
  • 56f3c660bf ModelSamplingFlux now takes a resolution and adjusts the shift with it. comfyanonymous 2024-08-04 04:06:00 -0400
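    Note: a hypothetical reconstruction of the resolution-dependent shift: interpolate linearly between a base shift at a small latent and a max shift at a large one, keyed on the latent token count (the 256/4096 endpoints and 0.5/1.15 defaults are assumptions, not taken from the commit):

        def flux_shift(width, height, base_shift=0.5, max_shift=1.15):
            # One Flux latent token covers a 16x16 pixel area (8x VAE downscale, 2x2 patch).
            tokens = (width * height) / (8 * 8 * 2 * 2)
            m = (max_shift - base_shift) / (4096 - 256)
            return tokens * m + (base_shift - m * 256)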
  • f7a5107784 Fix crash. comfyanonymous 2024-08-03 16:55:38 -0400
  • 91be9c2867 Tweak lowvram memory formula. comfyanonymous 2024-08-03 16:34:27 -0400
  • 03c5018c98 Lower lowvram memory to 1/3 of free memory. comfyanonymous 2024-08-03 15:14:07 -0400
  • 2ba5cc8b86 Fix some issues. comfyanonymous 2024-08-03 15:06:40 -0400
  • 1e68002b87 Cap lowvram to half of free memory. comfyanonymous 2024-08-03 14:50:20 -0400
  • ba9095e5bd Automatically use fp8 for diffusion model weights if: comfyanonymous 2024-08-03 13:45:19 -0400
  • f123328b82 Load T5 in fp8 if it's in fp8 in the Flux checkpoint. comfyanonymous 2024-08-03 12:39:33 -0400
  • 63a7e8edba More aggressive batch splitting. comfyanonymous 2024-08-03 11:53:30 -0400
  • 0eea47d580 Add ModelSamplingFlux to experiment with the shift value. comfyanonymous 2024-08-03 03:54:38 -0400
  • 7cd0cdfce6 Add advanced model merge node for Flux model. comfyanonymous 2024-08-02 23:20:30 -0400
  • ea03c9dcd2 Better per model memory usage estimations. comfyanonymous 2024-08-02 18:08:21 -0400
  • 3a9ee995cf Tweak regular SD memory formula. comfyanonymous 2024-08-02 17:34:30 -0400
  • 47da42d928 Better Flux vram estimation. comfyanonymous 2024-08-02 17:02:35 -0400
  • 17bbd83176 Fix bug loading flac workflow when it contains = character. comfyanonymous 2024-08-02 13:14:28 -0400
  • bfb52de866 Lower SAG scale step for finer control (#4158) fgdfgfthgr-fox 2024-08-03 02:29:03 +1200
  • eca962c6da Add FluxGuidance node. comfyanonymous 2024-08-02 10:24:53 -0400
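    Note: a rough sketch of what a node like FluxGuidance plausibly does, stamping a guidance value onto each conditioning entry so the model can read it at sampling time (the dict key and default value are assumptions):

        def flux_guidance(conditioning, guidance=3.5):
            out = []
            for cond_tensor, opts in conditioning:
                opts = dict(opts)            # copy so the original is untouched
                opts["guidance"] = guidance  # assumed key consumed by Flux
                out.append([cond_tensor, opts])
            return out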
  • c1696cd1b5 Add missing import (#4174) Jairo Correa 2024-08-02 10:34:12 -0300
  • 369f459b20 Fix no longer working on old pytorch. comfyanonymous 2024-08-01 22:19:53 -0400
  • ce9ac2fe05 Fix clip_g/clip_l mixup (#4168) Alexander Brown 2024-08-01 18:40:56 -0700
  • e638f2858a Hack to make all resolutions work on Flux models. comfyanonymous 2024-08-01 21:03:26 -0400
  • a531001cc7 Add CLIPTextEncodeFlux. comfyanonymous 2024-08-01 18:53:25 -0400
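    Note: an assumed API-format fragment for the new node, with separate prompts for the two Flux text encoders plus the distilled guidance value (the node id, link target, and prompt strings are illustrative):

        prompt = {}  # API-format workflow being built
        prompt["7"] = {
            "class_type": "CLIPTextEncodeFlux",
            "inputs": {
                "clip": ["4", 0],  # link to a CLIP loader output
                "clip_l": "a photo of a cat",
                "t5xxl": "a photo of a cat sitting on a windowsill",
                "guidance": 3.5,
            },
        }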
  • d420bc792a Tweak the memory usage formulas for Flux and SD. comfyanonymous 2024-08-01 17:49:46 -0400
  • d965474aaa Make ComfyUI split batches a higher priority than weight offload. comfyanonymous 2024-08-01 16:39:59 -0400
  • 1c61361fd2 Fast preview support for Flux. comfyanonymous 2024-08-01 16:28:11 -0400
  • a6decf1e62 Fix bfloat16 potentially not being enabled on mps. comfyanonymous 2024-08-01 16:18:14 -0400
  • 48eb1399c0 Try to fix mac issue. comfyanonymous 2024-08-01 13:41:27 -0400
  • b4f6ebb2e8 Rename UNETLoader node to "Load Diffusion Model". comfyanonymous 2024-08-01 13:33:30 -0400
  • d7430a1651 Add a way to load the diffusion model in fp8 with UNETLoader node. comfyanonymous 2024-08-01 13:28:41 -0400
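    Note: an assumed API-format fragment showing the fp8 option on the loader (the node id, filename, and exact weight_dtype strings are illustrative):

        prompt = {}  # API-format workflow being built
        prompt["12"] = {
            "class_type": "UNETLoader",
            "inputs": {
                "unet_name": "flux1-dev.safetensors",
                "weight_dtype": "fp8_e4m3fn",  # or "fp8_e5m2" / "default"
            },
        }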
  • f2b80f95d2 Better Mac support on flux model. comfyanonymous 2024-08-01 12:55:28 -0400