Optimizations that might break things or lower output quality are put behind
this flag first, and may be enabled by default in the future.
Currently the only such optimization is float8_e4m3fn matrix multiplication,
available on Nvidia 4000-series (Ada) cards or newer. If you have one of these
cards you will see a speed boost when running fp8_e4m3fn Flux, for example.
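The hardware gate described above can be sketched as a simple compute-capability check. This is a hypothetical helper, not ComfyUI's actual API: fp8 e4m3fn matmul requires CUDA compute capability 8.9 (Ada / RTX 40xx) or higher, and PyTorch reports capability as a `(major, minor)` tuple via `torch.cuda.get_device_capability()`.

```python
# Hypothetical helper: decide whether float8_e4m3fn matmul can be used,
# given a (major, minor) CUDA compute capability tuple.
# Ada (RTX 40xx) is compute capability 8.9; anything at or above qualifies.
def fp8_matmul_supported(capability: tuple) -> bool:
    # Python tuple comparison is lexicographic, so (9, 0) >= (8, 9) is True.
    return capability >= (8, 9)

print(fp8_matmul_supported((8, 9)))   # Ada -> True
print(fp8_matmul_supported((8, 6)))   # Ampere -> False
print(fp8_matmul_supported((9, 0)))   # Hopper -> True
```

In real code the tuple would come from `torch.cuda.get_device_capability()` after checking `torch.cuda.is_available()`.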
parser.add_argument("--disable-smart-memory",action="store_true",help="Force ComfyUI to aggressively offload to regular RAM instead of keeping models in VRAM when it can.")
parser.add_argument("--deterministic",action="store_true",help="Make PyTorch use slower deterministic algorithms when it can. Note that this might not make images deterministic in all cases.")
parser.add_argument("--fast",action="store_true",help="Enable some untested and potentially quality deteriorating optimizations.")
parser.add_argument("--dont-print-server",action="store_true",help="Don't print server output.")
parser.add_argument("--quick-test-for-ci",action="store_true",help="Quick test for CI.")
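All of these options use `action="store_true"`: a flag present on the command line becomes `True`, and an absent flag defaults to `False`. A minimal standalone sketch of that behavior (mirroring only a few of the options above, not ComfyUI's full parser):

```python
import argparse

# Standalone parser with a subset of the store_true flags shown above.
parser = argparse.ArgumentParser()
parser.add_argument("--disable-smart-memory", action="store_true")
parser.add_argument("--deterministic", action="store_true")
parser.add_argument("--fast", action="store_true")

# Simulate a command line like: python main.py --fast --deterministic
args = parser.parse_args(["--fast", "--deterministic"])
print(args.fast, args.deterministic, args.disable_smart_memory)
# Flags that appear on the command line are True; the rest default to False.
```

Note that argparse converts the dashes in `--disable-smart-memory` to underscores, so the value is read as `args.disable_smart_memory`.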