comfyanonymous
7ad574bffd
Mac supports bf16; just make sure you are using the latest pytorch.
7 months ago
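A minimal sketch of how such a check might look; the helper name is hypothetical and the probe-by-allocation approach is an assumption, not the repository's actual code:

```python
import torch

# Hypothetical helper: probe whether the installed pytorch build
# supports bf16 on the mps backend before enabling it.
def mac_supports_bf16() -> bool:
    if not torch.backends.mps.is_available():
        return False
    try:
        # Older pytorch builds raise when allocating bf16 tensors on mps.
        torch.zeros(1, dtype=torch.bfloat16, device="mps")
        return True
    except Exception:
        return False
```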
comfyanonymous
e2382b6adb
Make lowvram less aggressive when there are large amounts of free memory.
7 months ago
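A hedged sketch of the described heuristic; the function name and the one-gigabyte margin are illustrative assumptions:

```python
# Hypothetical sketch: stay out of lowvram mode while free VRAM
# comfortably exceeds the model size plus a safety margin.
def should_use_lowvram(model_size_bytes: int, free_vram_bytes: int,
                       margin_bytes: int = 1 << 30) -> bool:
    return free_vram_bytes - model_size_bytes < margin_bytes
```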
comfyanonymous
6425252c4f
Use fp16 as the default vae dtype for the audio VAE.
8 months ago
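Illustrative only; the predicate for detecting the VAE kind is an assumption about how such a default might be wired in:

```python
import torch

# Hypothetical sketch: audio VAEs default to fp16, other VAEs keep
# whatever dtype the usual selection logic picked.
def default_vae_dtype(is_audio_vae: bool, fallback: torch.dtype) -> torch.dtype:
    return torch.float16 if is_audio_vae else fallback
```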
comfyanonymous
0ec513d877
Add a --force-channels-last argument to run inference models in channels-last mode.
8 months ago
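The flag name comes from the commit; the surrounding wiring below is a sketch, with a stand-in model:

```python
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--force-channels-last", action="store_true",
                    help="Run inference models in channels-last memory format.")
args = parser.parse_args()

model = torch.nn.Conv2d(3, 8, kernel_size=3)  # stand-in for a real model
if args.force_channels_last:
    # channels-last (NHWC) layout can speed up convolutions on some hardware.
    model = model.to(memory_format=torch.channels_last)
```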
Simon Lui
5eb98f0092
Exempt IPEX from non_blocking previews, fixing segmentation faults. ( #3708 )
8 months ago
comfyanonymous
0e49211a11
Load the SD3 T5xxl model in the same dtype stored in the checkpoint.
9 months ago
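A sketch under one assumption: the stored dtype can be read off the first weight tensor in the checkpoint's state dict instead of forcing a cast on load:

```python
import torch

# Hypothetical helper: reuse the dtype the weights were saved in.
def checkpoint_dtype(state_dict: dict) -> torch.dtype:
    return next(iter(state_dict.values())).dtype
```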
comfyanonymous
104fcea0c8
Add function to get the list of currently loaded models.
9 months ago
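A minimal sketch of such an accessor; `current_loaded_models` stands in for whatever module-level registry the codebase actually keeps:

```python
current_loaded_models: list = []  # assumed module-level registry

def loaded_models() -> list:
    # Return a copy so callers can't mutate the registry.
    return list(current_loaded_models)
```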
comfyanonymous
b1fd26fe9e
pytorch xpu should use flash or mem efficient attention.
9 months ago
comfyanonymous
b249862080
Add an annoying print to a function I want to remove.
9 months ago
comfyanonymous
bf3e334d46
Disable non_blocking when --deterministic or directml.
9 months ago
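A hedged sketch of the described gating; the boolean flags are assumed to mirror module state rather than real variables from the repository:

```python
import torch

# Hypothetical predicate: non_blocking transfers are skipped for
# directml and when deterministic mode is requested.
def use_non_blocking(device: torch.device, deterministic: bool,
                     is_directml: bool) -> bool:
    if deterministic or is_directml:
        return False
    return device.type == "cuda"
```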
comfyanonymous
0920e0e5fe
Remove some unused imports.
9 months ago
comfyanonymous
6c23854f54
Fix OSX latent2rgb previews.
9 months ago
comfyanonymous
8508df2569
Work around black image bug on macOS 14.5 by forcing attention upcasting.
9 months ago
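A hedged sketch of what attention upcasting means here: compute the scores and softmax in fp32 so fp16 overflow can't black out the image, then cast back. The function is illustrative, not the repository's implementation:

```python
import torch

def upcast_attention(q, k, v, scale):
    # Do the matmul and softmax in fp32 for numerical headroom.
    sim = (q.float() @ k.float().transpose(-2, -1)) * scale
    attn = sim.softmax(dim=-1)
    # Cast the result back to the input dtype.
    return (attn @ v.float()).to(q.dtype)
```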
comfyanonymous
09e069ae6c
Log the pytorch version.
9 months ago
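In spirit, the change is a one-liner using pytorch's version string:

```python
import logging
import torch

logging.info("pytorch version: %s", torch.__version__)
```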
comfyanonymous
19300655dd
Don't automatically switch to lowvram mode on GPUs with low memory.
9 months ago
Simon Lui
f509c6fe21
Fix Intel GPU memory allocation accuracy and update documentation. ( #3459 )
* Change the total memory calculation to be more accurate; allocated memory is actually smaller than reserved memory.
* Update README.md install documentation for Intel GPUs.
10 months ago
comfyanonymous
fa6dd7e5bb
Fix lowvram issue with saving checkpoints.
The previous fix didn't cover the case where the model was loaded in
lowvram mode right before.
10 months ago
comfyanonymous
49c20cdc70
No longer necessary.
10 months ago
comfyanonymous
e1489ad257
Fix issue with lowvram mode breaking model saving.
10 months ago
Simon Lui
a56d02efc7
Change torch.xpu to ipex.optimize, fix xpu device initialization, and remove the workaround for the text node issue from older IPEX. ( #3388 )
10 months ago
comfyanonymous
258dbc06c3
Fix some memory related issues.
10 months ago
comfyanonymous
0a03009808
Fix issue with controlnet models getting loaded multiple times.
11 months ago
comfyanonymous
5d8898c056
Fix some performance issues with weight loading and unloading.
Lower peak memory usage when changing models.
Fix a case where model weights would be unloaded and reloaded.
11 months ago
comfyanonymous
c6de09b02e
Optimize the memory unload strategy for better performance.
11 months ago
comfyanonymous
4b9005e949
Fix regression with model merging.
11 months ago
comfyanonymous
c18a203a8a
Don't unload model weights for non-weight patches.
11 months ago
comfyanonymous
db8b59ecff
Lower memory usage for loras in lowvram mode at the cost of performance.
12 months ago
comfyanonymous
0ed72befe1
Change log levels.
Logging level now defaults to info. --verbose sets it to debug.
12 months ago
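A sketch matching the described behavior, using the standard library's logging setup; the wiring around the flag is an assumption:

```python
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("--verbose", action="store_true")
args = parser.parse_args()

# INFO by default; --verbose switches to DEBUG.
logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)
```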
comfyanonymous
65397ce601
Replace prints with logging and add --verbose argument.
12 months ago
comfyanonymous
dce3555339
Add some Tesla Pascal GPUs to the list of GPUs where fp16 works but is slower.
12 months ago
comfyanonymous
88f300401c
Enable fp16 by default on mps.
1 year ago
comfyanonymous
929e266f3e
Manual cast for bf16 on older GPUs.
1 year ago
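A hedged sketch of what "manual cast" means here: keep the weights stored in bf16 and cast them to a compute dtype the GPU handles natively right before each forward. The class and the fp16 compute dtype are illustrative assumptions:

```python
import torch

class ManualCastLinear(torch.nn.Linear):
    compute_dtype = torch.float16  # assumed compute dtype for older GPUs

    def forward(self, x):
        # Weights stay in bf16 in memory; cast only for the computation.
        w = self.weight.to(self.compute_dtype)
        b = self.bias.to(self.compute_dtype) if self.bias is not None else None
        return torch.nn.functional.linear(x.to(self.compute_dtype), w, b)
```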
comfyanonymous
0b3c50480c
Make --force-fp32 disable loading models in bf16.
1 year ago
comfyanonymous
f83109f09b
Add support for Stable Cascade Stage C.
1 year ago
comfyanonymous
aeaeca10bd
Small refactor of is_device_* functions.
1 year ago
comfyanonymous
66e28ef45c
Don't use is_bf16_supported to check for fp16 support.
1 year ago
comfyanonymous
24129d78e6
Speed up SDXL on 16xx series with fp16 weights and manual cast.
1 year ago
comfyanonymous
4b0239066d
Always use fp16 for the text encoders.
1 year ago
comfyanonymous
f9e55d8463
Only auto enable bf16 VAE on nvidia GPUs that actually support it.
1 year ago
comfyanonymous
1b103e0cb2
Add argument to run the VAE on the CPU.
1 year ago
comfyanonymous
e1e322cf69
Load weights that can't be lowvramed to the target device.
1 year ago
comfyanonymous
a252963f95
--disable-smart-memory now unloads everything like it did originally.
1 year ago
comfyanonymous
36a7953142
Greatly improve lowvram sampling speed by getting rid of accelerate.
Let me know if this breaks anything.
1 year ago
comfyanonymous
2f9d6a97ec
Add --deterministic option to make pytorch use deterministic algorithms.
1 year ago
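The option maps onto pytorch's built-in switch:

```python
import torch

# Force deterministic algorithm implementations; some CUDA ops also
# need CUBLAS_WORKSPACE_CONFIG=:4096:8 set in the environment.
torch.use_deterministic_algorithms(True)
```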
comfyanonymous
b0aab1e4ea
Add an option --fp16-unet to force using fp16 for the unet.
1 year ago
comfyanonymous
ba07cb748e
Use faster manual cast for fp8 in unet.
1 year ago
comfyanonymous
57926635e8
Switch text encoder to manual cast.
Use fp16 text encoder weights for CPU inference to lower memory usage.
1 year ago
comfyanonymous
340177e6e8
Disable non_blocking on mps.
1 year ago
comfyanonymous
9ac0b487ac
Make --gpu-only put intermediate values in GPU memory instead of CPU memory.
1 year ago
comfyanonymous
2db86b4676
Slightly faster lora applying.
1 year ago