162 Commits (8ddc151a4ce7f59de75b24ea8349f0e406ba0da5)

Author SHA1 Message Date
comfyanonymous 97ee230682 Make highvram and normalvram shift the text encoders to vram and back. This is faster on big text encoder models than running it on the CPU. 2 years ago
comfyanonymous 62db11683b Move unet to device right after loading on highvram mode. 2 years ago
comfyanonymous 8248babd44 Use pytorch attention by default on nvidia when xformers isn't present. Add a new argument --use-quad-cross-attention 2 years ago
comfyanonymous f7edcfd927 Add a --gpu-only argument to keep and run everything on the GPU. Make the CLIP model work on the GPU. 2 years ago
comfyanonymous fed0a4dd29 Some comments to say what the vram state options mean. 2 years ago
comfyanonymous 0a5fefd621 Cleanups and fixes for model_management.py. Hopefully fix regression on MPS and CPU. 2 years ago
comfyanonymous 67892b5ac5 Refactor and improve model_management code related to free memory. 2 years ago
space-nuko 499641ebf1 More accurate total 2 years ago
space-nuko b5dd15c67a System stats endpoint 2 years ago
comfyanonymous 5c38958e49 Tweak lowvram model memory so it's closer to what it was before. 2 years ago
comfyanonymous 94680732d3 Empty cache on mps. 2 years ago
comfyanonymous eb448dd8e1 Auto load model in lowvram if not enough memory. 2 years ago
comfyanonymous 3a1f47764d Print the torch device that is used on startup. 2 years ago
comfyanonymous 6fc4917634 Make maximum_batch_area take into account pytorch 2.0 attention function. More conservative xformers maximum_batch_area. 2 years ago
comfyanonymous 678f933d38 maximum_batch_area for xformers. Remove useless code. 2 years ago
comfyanonymous cb1551b819 Lowvram mode for gligen and fix some lowvram issues. 2 years ago
comfyanonymous 6ee11d7bc0 Fix import. 2 years ago
comfyanonymous bae4fb4a9d Fix imports. 2 years ago
comfyanonymous 056e5545ff Don't try to get vram from xpu or cuda when directml is enabled. 2 years ago
comfyanonymous 2ca934f7d4 You can now select the device index with: --directml id (for example: --directml 1) 2 years ago
comfyanonymous 3baded9892 Basic torch_directml support. Use --directml to use it. 2 years ago
comfyanonymous 5282f56434 Implement Linear hypernetworks. Add a HypernetworkLoader node to use hypernetworks. 2 years ago
comfyanonymous 3696d1699a Add support for GLIGEN textbox model. 2 years ago
comfyanonymous deb2b93e79 Move code to empty gpu cache to model_management.py 2 years ago
comfyanonymous 1e1875f674 Print xformers version and warning about 0.0.18 2 years ago
comfyanonymous 64557d6781 Add a --force-fp32 argument to force fp32 for debugging. 2 years ago
comfyanonymous bceccca0e5 Small refactor. 2 years ago
藍+85CD 05eeaa2de5 Merge branch 'master' into ipex 2 years ago
藍+85CD 3e2608e12b Fix auto lowvram detection on CUDA 2 years ago
藍+85CD 7cb924f684 Use separate variables instead of `vram_state` 2 years ago
藍+85CD 84b9c0ac2f Import intel_extension_for_pytorch as ipex 2 years ago
EllangoK e5e587b1c0 Separates out arg parser and imports args 2 years ago
藍+85CD 37713e3b0a Add basic XPU device support (closed #387) 2 years ago
comfyanonymous e46b1c3034 Disable xformers in VAE when xformers == 0.0.18 2 years ago
Francesco Yoshi Gobbo f55755f0d2 Code cleanup 2 years ago
Francesco Yoshi Gobbo cf0098d539 No lowvram state if cpu only 2 years ago
comfyanonymous 4adcea7228 I don't think controlnets were being handled correctly by MPS. 2 years ago
Yurii Mazurevich fc71e7ea08 Fixed typo 2 years ago
Yurii Mazurevich 4b943d2b60 Removed unnecessary comment 2 years ago
Yurii Mazurevich 89fd5ed574 Added MPS device support 2 years ago
comfyanonymous 3ed4a4e4e6 Try again with vae tiled decoding if regular fails because of OOM. 2 years ago
comfyanonymous 9d0665c8d0 Add laptop quadro cards to fp32 list. 2 years ago
comfyanonymous ee46bef03a Make --cpu have priority over everything else. 2 years ago
comfyanonymous 83f23f82b8 Add pytorch attention support to VAE. 2 years ago
comfyanonymous a256a2abde --disable-xformers should not even try to import xformers. 2 years ago
comfyanonymous 0f3ba7482f Xformers is now properly disabled when --cpu used. Added --windows-standalone-build option; currently it only makes the code open up comfyui in the browser. 2 years ago
comfyanonymous afff30fc0a Add --cpu to use the cpu for inference. 2 years ago
comfyanonymous ebfcf0a9c9 Fix issue. 2 years ago
comfyanonymous fed315a76a To be really simple CheckpointLoaderSimple should pick the right type. 2 years ago
comfyanonymous c1f5855ac1 Make some cross attention functions work on the CPU. 2 years ago
comfyanonymous 69cc75fbf8 Add a way to interrupt current processing in the backend. 2 years ago
comfyanonymous 2c5f0ec681 Small adjustment. 2 years ago
comfyanonymous 86721d5158 Enable highvram automatically when vram >> ram 2 years ago
comfyanonymous 2326ff1263 Add: --highvram for when you want models to stay on the vram. 2 years ago
comfyanonymous d66415c021 Low vram mode for controlnets. 2 years ago
comfyanonymous 4efa67fa12 Add ControlNet support. 2 years ago
comfyanonymous 7e1e193f39 Automatically enable lowvram mode if vram is less than 4GB. Use: --normalvram to disable it. 2 years ago
comfyanonymous 708138c77d Remove print. 2 years ago
comfyanonymous 853e96ada3 Increase it/s by batching together some stuff sent to unet. 2 years ago
comfyanonymous c92633eaa2 Auto calculate amount of memory to use for --lowvram 2 years ago
comfyanonymous 534736b924 Add some low vram modes: --lowvram and --novram 2 years ago
comfyanonymous a84cd0d1ad Don't unload/reload model from CPU uselessly. 2 years ago