Hi all,
I’ve been dabbling with diffusion and diffusers since the early days and never made the jump to tools like ComfyUI. Instead, I kept iterating on my own approach, and it eventually grew into something that might be useful to others.
diffusers-workflow is a declarative, JSON-based workflow engine for diffusers. The idea is simple: define a generation pipeline in JSON and run it from the command line, with no custom Python required. It covers a good chunk of what diffusers can do, with a focus on letting me experiment and tweak without much overhead and without exposing the full complexity of the tech stack.
A few highlights:
- Text-to-image/video and image-to-image/video workflows, with lots of examples
- Variable substitution so workflows become reusable templates
- Composable multi-step workflows (e.g., generate image → generate video)
- Interactive REPL that keeps models loaded for faster iteration
- Built-in tasks for prompt augmentation, upscaling, background removal, etc.
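To give a flavor of the idea, here's a hypothetical sketch of what a workflow definition might look like. The field names here (`variables`, `workflow`, `pipeline`, `prompt`, `steps`) are illustrative, not the project's actual schema; see the examples in the repo for the real format:

```json
{
  "variables": { "subject": "a lighthouse at dusk" },
  "workflow": [
    {
      "pipeline": "stabilityai/stable-diffusion-xl-base-1.0",
      "prompt": "photo of {subject}, golden hour",
      "steps": 30
    }
  ]
}
```

The variable substitution is what makes a file like this reusable: swap `subject` on the command line and rerun the same template.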
If you prefer staying close to diffusers but want a more structured way to define and run pipelines, this might be worth a look.
GitHub: https://github.com/dkackman/diffusers-workflow
Feedback welcome!
Don
PS: not affiliated with Hugging Face or diffusers, just a fan.