1014 Commits (cd5017c1c9b3a7b0fec892e80290a32616bbff38)

Author SHA1 Message Date
comfyanonymous cd5017c1c9 Allow the calculate_weight function to use a different dtype. 6 months ago
comfyanonymous 83f343146a Fix potential lowvram issue. 6 months ago
Matthew Turnshek 1770fc77ed
Implement support for taef1 latent previews (#4409)
* add taef1 handling to several places

* remove guess_latent_channels and add latent_channels info directly to flux model

* remove TODO

* fix numbers
6 months ago
comfyanonymous 5960f946a9 Move a few files from comfy -> comfy_execution.
Python code in the comfy folder should not import things from outside it.
6 months ago
guill 5cfe38f41c
Execution Model Inversion (#2666)
* Execution Model Inversion

This PR inverts the execution model -- from recursively calling nodes to
using a topological sort of the nodes. This change allows for
modification of the node graph during execution. This allows for two
major advantages:

    1. The implementation of lazy evaluation in nodes. For example, if a
    "Mix Images" node has a mix factor of exactly 0.0, the second image
    input doesn't even need to be evaluated (and vice versa if the mix
    factor is 1.0).

    2. Dynamic expansion of nodes. This allows for the creation of dynamic
    "node groups". Specifically, custom nodes can return subgraphs that
    replace the original node in the graph. This is an incredibly
    powerful concept. Using this functionality, it was easy to
    implement:
        a. Components (a.k.a. node groups)
        b. Flow control (i.e. while loops) via tail recursion
        c. All-in-one nodes that replicate the WebUI functionality
        d. and more
    All of these could be implemented entirely via custom nodes,
    so those features are *not* a part of this PR. (There are some
    front-end changes that should occur before that functionality is
    made widely available, particularly around variant sockets.)
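As a rough illustration of the lazy-evaluation case above, a "Mix Images"-style node might look like the sketch below. The `lazy` input flag and the `check_lazy_status` hook follow the conventions this PR introduces (unevaluated lazy inputs arrive as `None`, and the hook returns the names of inputs still needed), but the class itself is hypothetical, not code from the PR:

```python
# Hypothetical lazy "Mix Images" node sketch; illustrative, not PR code.
class MixImages:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image1": ("IMAGE", {"lazy": True}),
                "image2": ("IMAGE", {"lazy": True}),
                "mix_factor": ("FLOAT", {"default": 0.5, "min": 0.0, "max": 1.0}),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "mix"

    def check_lazy_status(self, mix_factor, image1, image2):
        # Unevaluated lazy inputs are passed as None; return the names of
        # the inputs that must still be evaluated before execution.
        needed = []
        if mix_factor > 0.0 and image2 is None:
            needed.append("image2")
        if mix_factor < 1.0 and image1 is None:
            needed.append("image1")
        return needed

    def mix(self, image1, image2, mix_factor):
        # At the extremes, the unused input was never evaluated and is None.
        if mix_factor == 0.0:
            return (image1,)
        if mix_factor == 1.0:
            return (image2,)
        return (image1 * (1.0 - mix_factor) + image2 * mix_factor,)
```

With a mix factor of exactly 0.0, `check_lazy_status` never requests `image2`, so its upstream subgraph is skipped entirely.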

The custom nodes associated with this PR can be found at:
https://github.com/BadCafeCode/execution-inversion-demo-comfyui

Note that some of them require that variant socket types ("*") be
enabled.
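The inversion itself, replacing recursive node calls with a topological sort, can be sketched with Kahn's algorithm. This is a toy model of the idea, not ComfyUI's actual scheduler:

```python
from collections import deque

def execute_topologically(graph, run_node):
    """Execute a DAG of nodes without recursion (toy sketch).

    graph maps node_id -> list of node_ids it depends on.
    """
    # Count unresolved dependencies and index the reverse edges.
    pending = {n: len(deps) for n, deps in graph.items()}
    dependents = {n: [] for n in graph}
    for n, deps in graph.items():
        for d in deps:
            dependents[d].append(n)

    ready = deque(n for n, count in pending.items() if count == 0)
    order = []
    while ready:
        n = ready.popleft()
        run_node(n)
        order.append(n)
        for dep in dependents[n]:
            pending[dep] -= 1
            if pending[dep] == 0:
                ready.append(dep)

    if len(order) != len(graph):
        # Nodes remain with unresolved dependencies: a dependency cycle.
        # The PR surfaces this as a proper error to the UI.
        raise RuntimeError("dependency cycle detected")
    return order
```

Because the ready queue is recomputed as nodes finish, new nodes can be spliced into the graph mid-run, which is what makes lazy evaluation and subgraph expansion possible.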

* Allow `input_info` to be of type `None`

* Handle errors (like OOM) more gracefully

* Add a command-line argument to enable variants

This allows the use of nodes that have sockets of type '*' without
applying a patch to the code.

* Fix an overly aggressive assertion.

This could happen when attempting to evaluate `IS_CHANGED` for a node
during the creation of the cache (in order to create the cache key).

* Fix Pyright warnings

* Add execution model unit tests

* Fix issue with unused literals

Behavior should now match the master branch with regard to undeclared
inputs. Undeclared inputs that are socket connections will be used while
undeclared inputs that are literals will be ignored.

* Make custom VALIDATE_INPUTS skip normal validation

Additionally, if `VALIDATE_INPUTS` takes an argument named `input_types`,
that variable will be a dictionary of the socket type of all incoming
connections. If that argument exists, normal socket type validation will
not occur. This removes the last hurdle for enabling variant types
entirely from custom nodes, so I've removed that command-line option.

I've added appropriate unit tests for these changes.
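From a custom node's side, that might look like the following sketch. The `input_types` argument and the return convention (`True` on success, an error string otherwise) follow this PR's description; the node and its type whitelist are illustrative:

```python
# Hypothetical variant-socket node sketch; illustrative, not PR code.
class AnyInputNode:
    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {"value": ("*",)}}  # variant socket type

    @classmethod
    def VALIDATE_INPUTS(cls, input_types):
        # input_types maps each connected input name to the socket type of
        # its incoming connection. Declaring this argument disables normal
        # socket type validation, so the node checks types itself.
        if input_types.get("value") not in ("IMAGE", "LATENT", "*"):
            return f"Unsupported input type: {input_types.get('value')}"
        return True
```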

* Fix example in unit test

This wouldn't have caused any issues in the unit test, but it would have
bugged the UI if someone copy+pasted it into their own node pack.

* Use fstrings instead of '%' formatting syntax

* Use custom exception types.

* Display an error for dependency cycles

Previously, dependency cycles that were created during node expansion
would cause the application to quit (due to an uncaught exception). Now,
we'll throw a proper error to the UI. We also make an attempt to 'blame'
the most relevant node in the UI.

* Add docs on when ExecutionBlocker should be used

* Remove unused functionality

* Rename ExecutionResult.SLEEPING to PENDING

* Remove superfluous function parameter

* Pass None for unevaluated inputs instead of the default

This applies to `VALIDATE_INPUTS`, `check_lazy_status`, and lazy values
in evaluation functions.

* Add a test for mixed node expansion

This test ensures that a node that returns a combination of expanded
subgraphs and literal values functions correctly.

* Raise exception for bad get_node calls.

* Minor refactor of IsChangedCache.get

* Refactor `map_node_over_list` function

* Fix ui output for duplicated nodes

* Add documentation on `check_lazy_status`

* Add file for execution model unit tests

* Clean up Javascript code as per review

* Improve documentation

Converted some comments to docstrings as per review

* Add a new unit test for mixed lazy results

This test validates that when an output list is fed to a lazy node, the
node will properly evaluate previous nodes that are needed by any inputs
to the lazy node.

No code in the execution model has been changed. The test already
passes.

* Allow kwargs in VALIDATE_INPUTS functions

When kwargs are used, validation is skipped for all inputs as if they
had been mentioned explicitly.
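A minimal sketch of the kwargs form, assuming the same `VALIDATE_INPUTS` return convention as above; the node is hypothetical:

```python
# Hypothetical node accepting arbitrary inputs; illustrative, not PR code.
class FlexibleNode:
    @classmethod
    def VALIDATE_INPUTS(cls, **kwargs):
        # With **kwargs, every input is treated as explicitly mentioned,
        # so normal socket-type validation is skipped for all of them.
        return True
```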

* List cached nodes in `execution_cached` message

This was previously broken only within this PR.
6 months ago
comfyanonymous 0f9c2a7822 Try to fix SDXL OOM issue on some configurations. 6 months ago
comfyanonymous f1d6cef71c Revert "Disable cuda malloc by default."
This reverts commit 50bf66e5c4.
6 months ago
comfyanonymous 33fb282d5c Fix issue. 6 months ago
comfyanonymous 50bf66e5c4 Disable cuda malloc by default. 6 months ago
comfyanonymous a5af64d3ce Revert "Not sure if this actually changes anything but it can't hurt."
This reverts commit 34608de2e9.
6 months ago
comfyanonymous 34608de2e9 Not sure if this actually changes anything but it can't hurt. 6 months ago
comfyanonymous 39fb74c5bd Fix bug when model cannot be partially unloaded. 6 months ago
comfyanonymous 74e124f4d7 Fix some issues with TE being in lowvram mode. 6 months ago
comfyanonymous a562c17e8a load_unet -> load_diffusion_model with a model_options argument. 6 months ago
comfyanonymous 5942c17d55 Order of operations matters. 6 months ago
comfyanonymous c032b11e07
xlabs Flux controlnet implementation. (#4260)
* xlabs Flux controlnet.

* Fix not working on old python.

* Remove comment.
6 months ago
comfyanonymous b8ffb2937f Memory tweaks. 7 months ago
comfyanonymous 5d43e75e5b Fix some issues with the model sometimes not getting patched. 7 months ago
comfyanonymous 517f4a94e4 Fix some lora loading slowdowns. 7 months ago
comfyanonymous 52a471c5c7 Change name of log. 7 months ago
comfyanonymous ad76574cb8 Fix some potential issues with the previous commits. 7 months ago
comfyanonymous 9acfe4df41 Support loading directly to vram with CLIPLoader node. 7 months ago
comfyanonymous 9829b013ea Fix mistake in last commit. 7 months ago
comfyanonymous 5c69cde037 Load TE model straight to vram if certain conditions are met. 7 months ago
comfyanonymous e9589d6d92 Add a way to set model dtype and ops from load_checkpoint_guess_config. 7 months ago
comfyanonymous 0d82a798a5 Remove the ckpt_path from load_state_dict_guess_config. 7 months ago
ljleb 925fff26fd
alternative to `load_checkpoint_guess_config` that accepts a loaded state dict (#4249)
* make alternative fn

* add back ckpt path as 2nd argument?
7 months ago
comfyanonymous 75b9b55b22 Fix issues with #4302 and support loading diffusers format flux. 7 months ago
Jaret Burkett 1765f1c60c
FLUX: Add full diffusers mapping for FLUX.1 schnell and dev; adds full support for diffusers LoRAs. (#4302) 7 months ago
comfyanonymous 1de69fe4d5 Fix some issues with inference slowing down. 7 months ago
comfyanonymous ae197f651b Speed up hunyuan dit inference a bit. 7 months ago
comfyanonymous 1b5b8ca81a Fix regression. 7 months ago
comfyanonymous 6678d5cf65 Fix regression. 7 months ago
TTPlanetPig e172564eea
Update controlnet.py to fix the default controlnet weight as constant (#4285) 7 months ago
comfyanonymous a3cc326748 Better fix for lowvram issue. 7 months ago
comfyanonymous 86a97e91fc Fix controlnet regression. 7 months ago
comfyanonymous 5acdadc9f3 Fix issue with some lowvram weights. 7 months ago
comfyanonymous 55ad9d5f8c Fix regression. 7 months ago
comfyanonymous a9f04edc58 Implement text encoder part of HunyuanDiT loras. 7 months ago
comfyanonymous a475ec2300 Cleanup HunyuanDit controlnets.
Use the "ControlNetApply SD3 and HunyuanDiT" node instead.
7 months ago
来新璐 06eb9fb426
feat: add support for HunYuanDit ControlNet (#4245)
* add support for HunYuanDit ControlNet

* fix hunyuandit controlnet

* fix typo in hunyuandit controlnet

* fix typo in hunyuandit controlnet

* fix code format style

* add control_weight support for HunyuanDit Controlnet

* use control_weights in HunyuanDit Controlnet

* fix typo
7 months ago
comfyanonymous 413322645e Raw torch is faster than einops? 7 months ago
comfyanonymous 11200de970 Cleaner code. 7 months ago
comfyanonymous 037c38eb0f Try to improve inference speed on some machines. 7 months ago
comfyanonymous 1e11d2d1f5 Better prints. 7 months ago
comfyanonymous 66d4233210 Fix. 7 months ago
comfyanonymous 591010b7ef Support diffusers text attention flux loras. 7 months ago
comfyanonymous 08f92d55e9 Partial model shift support. 7 months ago
comfyanonymous 8115d8cce9 Add Flux fp16 support hack. 7 months ago
comfyanonymous 6969fc9ba4 Make supported_dtypes a priority list. 7 months ago