48 Commits (b4e915e74560bd2c090f9b4ed6b73b0781b7050e)

Author SHA1 Message Date
comfyanonymous 57926635e8 Switch text encoder to manual cast.
Use fp16 text encoder weights for CPU inference to lower memory usage.
1 year ago
comfyanonymous 9ac0b487ac Make --gpu-only put intermediate values in GPU memory instead of cpu. 1 year ago
comfyanonymous fbdb14d4c4 Cleaner CLIP text encoder implementation.
Use a simple CLIP model implementation instead of the one from
transformers.

This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
1 year ago
comfyanonymous be3468ddd5 Less useless downcasting. 1 year ago
comfyanonymous 728613bb3e Fix last pr. 1 year ago
Jianqi Pan f2e49b1d57 fix: adaptation to older versions of pytorch 1 year ago
comfyanonymous 656c0b5d90 CLIP code refactor and improvements.
More generic clip model class that can be used on more types of text
encoders.

Don't apply weighting algorithm when weight is 1.0

Don't compute an empty token output when it's not needed.
1 year ago
comfyanonymous b3fcd64c6c Make SDTokenizer class work with more types of tokenizers. 1 year ago
comfyanonymous 2a134bfab9 Fix checkpoint loader with config. 1 year ago
comfyanonymous e60ca6929a SD1 and SD2 clip and tokenizer code is now more similar to the SDXL one. 1 year ago
comfyanonymous 434ce25ec0 Restrict loading embeddings from embedding folders. 1 year ago
comfyanonymous 44361f6344 Support for text encoder models that need attention_mask. 1 year ago
comfyanonymous fb3b728203 Fix issue where autocast fp32 CLIP gave different results from regular. 1 year ago
comfyanonymous ec96f6d03a Move text_projection to base clip model. 2 years ago
comfyanonymous e3d0a9a490 Fix potential issue with text projection matrix multiplication. 2 years ago
comfyanonymous 00c0b2c507 Initialize text encoder to target dtype. 2 years ago
comfyanonymous f081017c1a Save memory by storing text encoder weights in fp16 in most situations.
Do inference in fp32 to make sure quality stays the exact same.
2 years ago
comfyanonymous c99d8002f8 Make sure the pooled output stays at the EOS token with added embeddings. 2 years ago
comfyanonymous 50b1180dde Fix CLIPSetLastLayer not reverting when removed. 2 years ago
comfyanonymous 46dc050c9f Fix potential tensors being on different devices issues. 2 years ago
comfyanonymous 606a537090 Support SDXL embedding format with 2 CLIP. 2 years ago
comfyanonymous 608fcc2591 Fix bug with weights when prompt is long. 2 years ago
comfyanonymous ce35d8c659 Lower latency by batching some text encoder inputs. 2 years ago
comfyanonymous b6a60fa696 Try to keep text encoders loaded and patched to increase speed.
load_model_gpu() is now used with the text encoder models instead of just
the unet.
2 years ago
comfyanonymous 97ee230682 Make highvram and normalvram shift the text encoders to vram and back.
This is faster on big text encoder models than running it on the CPU.
2 years ago
comfyanonymous 9920367d3c Fix embeddings not working with --gpu-only. 2 years ago
comfyanonymous 20f579d91d Add DualClipLoader to load clip models for SDXL.
Update LoadClip to load clip models for SDXL refiner.
2 years ago
comfyanonymous f87ec10a97 Support base SDXL and SDXL refiner models.
Large refactor of the model detection and loading code.
2 years ago
comfyanonymous f7edcfd927 Add a --gpu-only argument to keep and run everything on the GPU.
Make the CLIP model work on the GPU.
2 years ago
comfyanonymous bb1f45d6e8 Properly disable weight initialization in clip models. 2 years ago
comfyanonymous 0c7cad404c Don't initialize clip weights to default values. 2 years ago
comfyanonymous 23cf8ca7c5 Fix bug when embedding gets ignored because of mismatched size. 2 years ago
comfyanonymous af9cc1fb6a Search recursively in subfolders for embeddings. 2 years ago
comfyanonymous 81d1f00df3 Some refactoring: from_tokens -> encode_from_tokens 2 years ago
comfyanonymous 719c26c3c9 Merge branch 'master' of https://github.com/BlenderNeko/ComfyUI 2 years ago
BlenderNeko d0b1b6c6bf fixed improper padding 2 years ago
comfyanonymous 04d9bc13af Safely load pickled embeds that don't load with weights_only=True. 2 years ago
BlenderNeko da115bd78d ensure backwards compat with optional args 2 years ago
BlenderNeko 752f7a162b align behavior with old tokenize function 2 years ago
comfyanonymous 334aab05e5 Don't stop workflow if loading embedding fails. 2 years ago
BlenderNeko 8489cba140 add unique ID per word/embedding for tokenizer 2 years ago
comfyanonymous 1718730e80 Ignore embeddings when sizes don't match and print a WARNING. 2 years ago
comfyanonymous 50099bcd96 Support multiple paths for embeddings. 2 years ago
comfyanonymous 00a9189e30 Support old pytorch. 2 years ago
comfyanonymous 137ae2606c Support people putting commas after the embedding name in the prompt. 2 years ago
comfyanonymous 324273fff2 Fix embedding not working when on new line. 2 years ago
comfyanonymous f73e57d881 Add support for textual inversion embedding for SD1.x CLIP. 2 years ago
comfyanonymous 220afe3310 Initial commit. 2 years ago