953 Commits (78e133d0415784924cd2674e2ee48f3eeca8a2aa)

Author SHA1 Message Date
comfyanonymous 78e133d041 Support simple diffusers Flux loras. 7 months ago
Silver 7afa985fba Correct spelling 'token_weight_pars_t5' to 'token_weight_pairs_t5' (#4200) 7 months ago
comfyanonymous 3b71f84b50 ONNX tracing fixes. 7 months ago
comfyanonymous 0a6b008117 Fix issue with some custom nodes. 7 months ago
comfyanonymous f7a5107784 Fix crash. 7 months ago
comfyanonymous 91be9c2867 Tweak lowvram memory formula. 7 months ago
comfyanonymous 03c5018c98 Lower lowvram memory to 1/3 of free memory. 7 months ago
comfyanonymous 2ba5cc8b86 Fix some issues. 7 months ago
comfyanonymous 1e68002b87 Cap lowvram to half of free memory. 7 months ago
comfyanonymous ba9095e5bd Automatically use fp8 for diffusion model weights if:
Checkpoint contains weights in fp8.
There isn't enough memory to load the diffusion model in GPU vram.
7 months ago
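The commit above names two conditions for falling back to fp8 diffusion weights. A minimal, hypothetical sketch of that decision rule follows; the helper names, parameters, and the default fp8 format are illustrative assumptions, not ComfyUI's actual API.

```python
# Hypothetical sketch of the fp8 fallback rule described in the commit above.
# Helper names and the chosen fp8 format are assumptions, not ComfyUI's API.
import torch

FP8_DTYPES = {torch.float8_e4m3fn, torch.float8_e5m2}

def pick_diffusion_dtype(checkpoint_dtype: torch.dtype,
                         model_size_bytes: int,
                         free_vram_bytes: int) -> torch.dtype:
    """Use fp8 when the checkpoint already stores fp8, or when the model
    would not fit in free VRAM at fp16; otherwise keep fp16."""
    if checkpoint_dtype in FP8_DTYPES:
        return checkpoint_dtype
    if model_size_bytes > free_vram_bytes:
        return torch.float8_e4m3fn  # assumed default fp8 format
    return torch.float16
```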
comfyanonymous f123328b82 Load T5 in fp8 if it's in fp8 in the Flux checkpoint. 7 months ago
comfyanonymous 63a7e8edba More aggressive batch splitting. 7 months ago
comfyanonymous ea03c9dcd2 Better per model memory usage estimations. 7 months ago
comfyanonymous 3a9ee995cf Tweak regular SD memory formula. 7 months ago
comfyanonymous 47da42d928 Better Flux vram estimation. 7 months ago
Alexander Brown ce9ac2fe05 Fix clip_g/clip_l mixup (#4168) 7 months ago
comfyanonymous e638f2858a Hack to make all resolutions work on Flux models. 7 months ago
comfyanonymous d420bc792a Tweak the memory usage formulas for Flux and SD. 7 months ago
comfyanonymous d965474aaa Make ComfyUI split batches a higher priority than weight offload. 7 months ago
comfyanonymous 1c61361fd2 Fast preview support for Flux. 7 months ago
comfyanonymous a6decf1e62 Fix bfloat16 potentially not being enabled on mps. 7 months ago
comfyanonymous 48eb1399c0 Try to fix mac issue. 7 months ago
comfyanonymous d7430a1651 Add a way to load the diffusion model in fp8 with UNETLoader node. 7 months ago
comfyanonymous f2b80f95d2 Better Mac support on flux model. 7 months ago
comfyanonymous 1aa9cf3292 Make lowvram more aggressive on low memory machines. 7 months ago
comfyanonymous eb96c3bd82 Fix .sft file loading (they are safetensors files). 7 months ago
comfyanonymous 5f98de7697 Load flux t5 in fp8 if weights are in fp8. 7 months ago
comfyanonymous 8d34211a7a Fix old python versions no longer working. 7 months ago
comfyanonymous 1589b58d3e Basic Flux Schnell and Flux Dev model implementation. 7 months ago
comfyanonymous 7ad574bffd Mac supports bf16 just make sure you are using the latest pytorch. 7 months ago
comfyanonymous e2382b6adb Make lowvram less aggressive when there are large amounts of free memory. 7 months ago
comfyanonymous c24f897352 Fix to get fp8 working on T5 base. 7 months ago
comfyanonymous a5991a7aa6 Fix hunyuan dit text encoder weights always being in fp32. 7 months ago
comfyanonymous 2c038ccef0 Lower CLIP memory usage by a bit. 7 months ago
comfyanonymous b85216a3c0 Lower T5 memory usage by a few hundred MB. 7 months ago
comfyanonymous 82cae45d44 Fix potential issue with non clip text embeddings. 7 months ago
comfyanonymous 25853d0be8 Use common function for casting weights to input. 7 months ago
comfyanonymous 79040635da Remove unnecessary code. 7 months ago
comfyanonymous 66d35c07ce Improve artifacts on hydit, auraflow and SD3 on specific resolutions.
This breaks seeds for resolutions that are not a multiple of 16 in pixel
resolution by using circular padding instead of reflection padding but
should lower the amount of artifacts when doing img2img at those
resolutions.
7 months ago
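The commit above trades reflection padding for circular padding when the pixel resolution is not a multiple of 16. A rough, hedged illustration of that kind of padding step follows; the function name, tensor layout, and the multiple are assumptions for illustration, not the actual ComfyUI code.

```python
# Illustrative sketch only: pad a latent so its spatial dims reach a multiple,
# using circular rather than reflection padding as the commit above describes.
import torch
import torch.nn.functional as F

def pad_to_multiple(latent: torch.Tensor, multiple: int = 16) -> torch.Tensor:
    """latent: (batch, channels, height, width)."""
    h, w = latent.shape[-2:]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    # F.pad takes spatial padding as (left, right, top, bottom)
    return F.pad(latent, (0, pad_w, 0, pad_h), mode="circular")
```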
comfyanonymous 4ba7fa0244 Refactor: Move sd2_clip.py to text_encoders folder. 7 months ago
comfyanonymous cf4418b806 Don't treat Bert model like CLIP.
Bert can accept up to 512 tokens so any prompt with more than 77 should
just be passed to it as is instead of splitting it up like CLIP.
7 months ago
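The commit above notes that Bert accepts up to 512 tokens, so long prompts should be passed through whole rather than chunked into 77-token windows as with CLIP. A small sketch of that branching, with assumed names rather than the ComfyUI tokenizer API:

```python
# Hedged sketch of the distinction described above; names are illustrative,
# not the ComfyUI tokenizer API.
from typing import List

def batch_tokens(token_ids: List[int], encoder_kind: str) -> List[List[int]]:
    if encoder_kind == "bert":
        # Bert accepts up to 512 tokens, so pass the prompt through as one
        # sequence (truncating only at the model's hard limit).
        return [token_ids[:512]]
    # CLIP-style encoders: split into 77-token windows and encode each one.
    chunk = 77
    return [token_ids[i:i + chunk] for i in range(0, len(token_ids), chunk)]
```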
comfyanonymous 8328a2d8cd Let hunyuan dit work with all prompt lengths. 7 months ago
comfyanonymous afe732bef9 Hunyuan dit can now accept longer prompts. 7 months ago
comfyanonymous a9ac56fc0d Own BertModel implementation that works with lowvram. 7 months ago
comfyanonymous 25b51b1a8b Hunyuan DiT lora support. 7 months ago
comfyanonymous a5f4292f9f Basic hunyuan dit implementation. (#4102)
* Let tokenizers return weights to be stored in the saved checkpoint.

* Basic hunyuan dit implementation.

* Fix some resolutions not working.

* Support hydit checkpoint save.

* Init with right dtype.

* Switch to optimized attention in pooler.

* Fix black images on hunyuan dit.
7 months ago
comfyanonymous f87810cd3e Let tokenizers return weights to be stored in the saved checkpoint. 7 months ago
comfyanonymous 10c919f4c7 Make it possible to load tokenizer data from checkpoints. 7 months ago
comfyanonymous 10b43ceea5 Remove duplicate code. 7 months ago
comfyanonymous 0a4c49c57c Support MT5. 7 months ago