ComfyUI

Introduction

The most powerful and modular stable diffusion GUI and backend.


This UI lets you design and execute advanced Stable Diffusion pipelines using a graph/nodes/flowchart-based interface.

Features

  • Nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything.
  • Fully supports SD1.x, SD2.x, SDXL, Stable Video Diffusion and Stable Cascade
  • Can load ckpt, safetensors and diffusers models/checkpoints. Standalone VAEs and CLIP models.
  • Embeddings/Textual inversion
  • Loras (regular, locon and loha)
  • Area Composition
  • Inpainting with both regular and inpainting models.
  • ControlNet and T2I-Adapter
  • Upscale Models (ESRGAN, ESRGAN variants, SwinIR, Swin2SR, etc...)
  • unCLIP Models
  • GLIGEN
  • Model Merging
  • LCM models and Loras
  • SDXL Turbo

For more details, see the ComfyUI repo.

Why ComfyUI?

TODO

Install

Windows

A portable standalone build for Windows, which works on Nvidia GPUs or CPU-only, is available on the releases page.

Direct link to download

Simply download, extract with 7-Zip and run. Make sure you put your Stable Diffusion checkpoints/models (the huge ckpt/safetensors files) in: ComfyUI\models\checkpoints

Linux

  1. Git clone this repo
git clone https://github.com/comfyanonymous/ComfyUI
# Nvidia users should install stable pytorch using this command:
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121
# Install the dependencies
cd ComfyUI
pip install -r requirements.txt
 
  2. Put your SD checkpoints (the huge ckpt/safetensors files) in: models/checkpoints

  3. Put your VAE in: models/vae

  4. Run ComfyUI

python main.py
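The model layout the steps above expect can be sketched as follows, run from the directory containing the ComfyUI checkout. The `.safetensors` filenames below are hypothetical placeholders; in practice you would copy or move your real downloads into these folders:

```shell
# Ensure the folders ComfyUI loads models from exist (no-op if they already do):
mkdir -p ComfyUI/models/checkpoints ComfyUI/models/vae

# Stand-ins for downloaded model files -- substitute your own checkpoints/VAEs:
touch ComfyUI/models/checkpoints/sd_xl_base_1.0.safetensors
touch ComfyUI/models/vae/sdxl_vae.safetensors
```

On startup, ComfyUI scans these folders, so files placed here appear in the checkpoint and VAE loader nodes.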

Example

For some workflow examples, and to see what ComfyUI can do, you can check out: