11 Commits (e60e19b175f93f2c8fac037063de9f13259be9d4)

Author SHA1 Message Date
comfyanonymous 2c038ccef0 Lower CLIP memory usage by a bit. 7 months ago
comfyanonymous 82cae45d44 Fix potential issue with non clip text embeddings. 7 months ago
comfyanonymous c2cb8e889b Always return unprojected pooled output for gligen. 1 year ago
comfyanonymous 1cb3f6a83b Move text projection into the CLIP model code. Fix issue with not loading the SSD1B clip correctly. 1 year ago
comfyanonymous 3b9969c1c5 Properly fix attention masks in CLIP with batches. 1 year ago
comfyanonymous 6c875d846b Fix clip attention mask issues on some hardware. 1 year ago
comfyanonymous c6951548cf Update optimized_attention_for_device function for new functions that support masked attention. 1 year ago
comfyanonymous c782144433 Fix clip vision lowvram mode not working. 1 year ago
comfyanonymous 174eba8e95 Use own clip vision model implementation. 1 year ago
comfyanonymous efb704c758 Support attention masking in CLIP implementation. 1 year ago
comfyanonymous fbdb14d4c4 Cleaner CLIP text encoder implementation. Use a simple CLIP model implementation instead of the one from transformers. This will allow some interesting things that would be too hackish to implement using the transformers implementation. 1 year ago