58 Commits (0eaa34ec5b22fd305159a78ea92b2ef00105ab18)

Author SHA1 Message Date
comfyanonymous 0e49211a11 Load the SD3 T5xxl model in the same dtype stored in the checkpoint. 9 months ago
comfyanonymous 742d5720d1 Support zeroing out text embeddings with the attention mask. 9 months ago
comfyanonymous 56333d4850 Use the end token for the text encoder attention mask. 9 months ago
comfyanonymous 65397ce601 Replace prints with logging and add --verbose argument. 12 months ago
comfyanonymous 03c47fc0f2 Add a min_length property to tokenizer class. 1 year ago
comfyanonymous 8ac69f62e5 Make return_projected_pooled settable from the __init__ 1 year ago
comfyanonymous c2cb8e889b Always return unprojected pooled output for gligen. 1 year ago
comfyanonymous 1cb3f6a83b Move text projection into the CLIP model code.
Fix issue with not loading the SSD1B clip correctly.
1 year ago
comfyanonymous 97d03ae04a StableCascade CLIP model support. 1 year ago
comfyanonymous 4871a36458 Cleanup some unused imports. 1 year ago
comfyanonymous 57926635e8 Switch text encoder to manual cast.
Use fp16 text encoder weights for CPU inference to lower memory usage.
1 year ago
comfyanonymous 9ac0b487ac Make --gpu-only put intermediate values in GPU memory instead of CPU. 1 year ago
comfyanonymous fbdb14d4c4 Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.

This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
1 year ago
comfyanonymous be3468ddd5 Less useless downcasting. 1 year ago
comfyanonymous 728613bb3e Fix last pr. 1 year ago
Jianqi Pan f2e49b1d57 fix: adaptation to older versions of pytorch 1 year ago
comfyanonymous 656c0b5d90 CLIP code refactor and improvements.
More generic clip model class that can be used with more types of text
encoders.

Don't apply weighting algorithm when weight is 1.0

Don't compute an empty token output when it's not needed.
1 year ago
comfyanonymous b3fcd64c6c Make SDTokenizer class work with more types of tokenizers. 1 year ago
comfyanonymous 2a134bfab9 Fix checkpoint loader with config. 1 year ago
comfyanonymous e60ca6929a SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one. 1 year ago
comfyanonymous 434ce25ec0 Restrict loading embeddings from embedding folders. 1 year ago
comfyanonymous 44361f6344 Support for text encoder models that need attention_mask. 1 year ago
comfyanonymous fb3b728203 Fix issue where autocast fp32 CLIP gave different results from regular. 1 year ago
comfyanonymous ec96f6d03a Move text_projection to base clip model. 2 years ago
comfyanonymous e3d0a9a490 Fix potential issue with text projection matrix multiplication. 2 years ago
comfyanonymous 00c0b2c507 Initialize text encoder to target dtype. 2 years ago
comfyanonymous f081017c1a Save memory by storing text encoder weights in fp16 in most situations.
Do inference in fp32 to make sure quality stays the exact same.
2 years ago
comfyanonymous c99d8002f8 Make sure the pooled output stays at the EOS token with added embeddings. 2 years ago
comfyanonymous 50b1180dde Fix CLIPSetLastLayer not reverting when removed. 2 years ago
comfyanonymous 46dc050c9f Fix potential tensors being on different devices issues. 2 years ago
comfyanonymous 606a537090 Support SDXL embedding format with 2 CLIP. 2 years ago
comfyanonymous 608fcc2591 Fix bug with weights when prompt is long. 2 years ago
comfyanonymous ce35d8c659 Lower latency by batching some text encoder inputs. 2 years ago
comfyanonymous b6a60fa696 Try to keep text encoders loaded and patched to increase speed.
load_model_gpu() is now used with the text encoder models instead of just
the unet.
2 years ago
comfyanonymous 97ee230682 Make highvram and normalvram shift the text encoders to vram and back.
This is faster on big text encoder models than running it on the CPU.
2 years ago
comfyanonymous 9920367d3c Fix embeddings not working with --gpu-only 2 years ago
comfyanonymous 20f579d91d Add DualClipLoader to load clip models for SDXL.
Update LoadClip to load clip models for SDXL refiner.
2 years ago
comfyanonymous f87ec10a97 Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2 years ago
comfyanonymous f7edcfd927 Add a --gpu-only argument to keep and run everything on the GPU.
Make the CLIP model work on the GPU.
2 years ago
comfyanonymous bb1f45d6e8 Properly disable weight initialization in clip models. 2 years ago
comfyanonymous 0c7cad404c Don't initialize clip weights to default values. 2 years ago
comfyanonymous 23cf8ca7c5 Fix bug when embedding gets ignored because of mismatched size. 2 years ago
comfyanonymous af9cc1fb6a Search recursively in subfolders for embeddings. 2 years ago
comfyanonymous 81d1f00df3 Some refactoring: from_tokens -> encode_from_tokens 2 years ago
comfyanonymous 719c26c3c9 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2 years ago
BlenderNeko d0b1b6c6bf fixed improper padding 2 years ago
comfyanonymous 04d9bc13af Safely load pickled embeds that don't load with weights_only=True. 2 years ago
BlenderNeko da115bd78d ensure backwards compat with optional args 2 years ago
BlenderNeko 752f7a162b align behavior with old tokenize function 2 years ago
comfyanonymous 334aab05e5 Don't stop workflow if loading embedding fails. 2 years ago