
Accelerate your Image Generation: Pruna meets ComfyUI

Feb 5, 2025

Angelos Nikitaras

ML Working Student

John Rachwan

Cofounder & CTO

Bertrand Charpentier

Cofounder, President & Chief Scientist

ComfyUI & Pruna

In today's rapidly evolving landscape of machine learning, accessibility and efficiency are paramount. ComfyUI has revolutionized image generation by providing an intuitive, node-based interface that empowers users—from beginners to experts—to create powerful workflows to serve image generation models like Stable Diffusion or Flux. However, as models grow larger and more complex, generation times can slow down, increasing computational demands and costs.

Pruna tackles these challenges head-on by optimizing models to be faster, smaller, cheaper, and greener. By integrating Pruna's advanced compilation techniques directly into ComfyUI through the Pruna Compile node, you can accelerate both Stable Diffusion and Flux inference without sacrificing output quality. This means smoother, more efficient image generation—even as your models scale up.

In this blog post, you'll learn how to integrate the Pruna node into ComfyUI to supercharge your Stable Diffusion and Flux workflows. We'll walk you through each step and show you how these optimizations can lead to faster, more efficient image generation. For additional resources and updates, visit our repository—and if you find it helpful, consider giving it a star!

Getting Started

Setting up Pruna within ComfyUI is straightforward. With just a few steps, you'll be ready to compile your Stable Diffusion or Flux models for faster inference right inside the ComfyUI interface. Here's a quick guide to get started.

Step 1 - Prerequisites

Before proceeding, ensure you have created a conda environment with Python 3.10 and installed both ComfyUI and Pruna. In particular:

  1. Create a conda environment with Python 3.10, e.g. with

    conda create -n comfyui python=3.10
    conda activate comfyui
  2. Install ComfyUI.

  3. Install Pruna (a minimal install sketch for steps 2 and 3 follows this list).

  4. Generate a Pruna token, if you haven't yet obtained one.
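
For reference, here is a minimal sketch of steps 2 and 3, assuming you install ComfyUI from its official GitHub repository and Pruna from PyPI under the package name pruna; follow the official installation guides if your setup differs.

    # Install ComfyUI from source (official repository).
    git clone https://github.com/comfyanonymous/ComfyUI.git
    cd ComfyUI
    pip install -r requirements.txt

    # Install Pruna (assumes the PyPI package name is "pruna").
    pip install pruna

For step 4 (the Pruna token), follow the instructions in Pruna's documentation.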

Step 2 - Pruna node integration

With your environment prepared, you're ready to integrate the Pruna node into your ComfyUI setup. Follow these steps to clone the repository and launch ComfyUI with the Pruna compilation node enabled:

  1. Navigate to your ComfyUI installation’s custom_nodes folder:

    cd <path_to_comfyui>/custom_nodes
  2. Clone the ComfyUI_pruna repository:

    git clone https://github.com/PrunaAI/ComfyUI_pruna.git
  3. Launch ComfyUI:

    cd <path_to_comfyui> && python main.py --disable-cuda-malloc --gpu-only

After completing these steps, you should now see the Pruna Compile node in the nodes menu under the Pruna category.

Pruna in Action: Accelerate Stable Diffusion and Flux

We offer two ComfyUI workflows to help you get started with the Pruna node—one designed for Stable Diffusion and another for Flux.

You can load a workflow either by dragging and dropping the provided JSON file into the ComfyUI window or by clicking Open in the Workflow tab.

Our node currently supports two compilation modes: x-fast and torch_compile.

Example 1 - Stable Diffusion

In this example, we accelerate the inference of the Stable Diffusion v1.4 model. To get started, download the model and place it in the appropriate folder:

  1. Download the model (a command-line sketch follows this list).

  2. Place it in <path_to_comfyui>/models/checkpoints.
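
If you prefer the command line, the sketch below uses the Hugging Face CLI; the repository id and filename are assumptions based on the commonly used Stable Diffusion v1.4 release, so substitute the actual link from step 1 if it differs.

    # Download the Stable Diffusion v1.4 checkpoint into ComfyUI's checkpoints folder.
    # The repo id and filename are assumptions -- adjust them to match the link in step 1.
    huggingface-cli download CompVis/stable-diffusion-v-1-4-original sd-v1-4.ckpt \
        --local-dir <path_to_comfyui>/models/checkpoints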

Then, use the Stable Diffusion workflow as described above to generate images.

Example 2 - Flux

For Flux, the setup is a bit more involved due to its multi-component pipeline. To use the Flux workflow, you'll need to download each component of the model individually (a command-line sketch follows this list). Specifically:

  1. For CLIP, download the clip_l.safetensors and t5xxl_fp16.safetensors files, and place them in <path_to_comfyui>/models/clip/.

  2. For VAE, download the VAE model and place it in <path_to_comfyui>/models/vae/.

  3. For the Flux model, download the weights and place them in <path_to_comfyui>/models/diffusion_models/. If you are unable to access the link, you can request model access on Hugging Face.
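
For convenience, the sketch below uses the Hugging Face CLI for all three downloads; the repository ids and filenames are assumptions based on the commonly used FLUX.1-dev distribution on Hugging Face, so prefer the links above if they differ. The FLUX.1-dev weights are gated, so you may need to log in and accept the license first.

    # CLIP text encoders (repo id assumed: comfyanonymous/flux_text_encoders).
    huggingface-cli download comfyanonymous/flux_text_encoders clip_l.safetensors t5xxl_fp16.safetensors \
        --local-dir <path_to_comfyui>/models/clip

    # VAE and diffusion weights (repo id assumed: the gated black-forest-labs/FLUX.1-dev repo;
    # run `huggingface-cli login` and request access on Hugging Face first if needed).
    huggingface-cli download black-forest-labs/FLUX.1-dev ae.safetensors \
        --local-dir <path_to_comfyui>/models/vae
    huggingface-cli download black-forest-labs/FLUX.1-dev flux1-dev.safetensors \
        --local-dir <path_to_comfyui>/models/diffusion_models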

Now, load the Flux workflow, and you are ready to go!

Enjoy the speed-up! 🎉

Benchmarks

We've put the Pruna Compile node to the test to showcase its efficiency gains by comparing the base model with versions compiled using Pruna's x-fast and torch_compile compilers. Using an NVIDIA L40S GPU, we measured two key performance metrics: iterations per second (as reported by ComfyUI) and the end-to-end time required to generate an image.

For Stable Diffusion, we observed an impressive 3.5x speedup in iterations per second using the x-fast compiler. While the gains for Flux are more modest, a 20% boost with torch_compile remains significant, especially since it comes with no degradation in output quality.

Closing Remarks

We’re excited to see how Pruna’s advanced compilation techniques empower ComfyUI to elevate your image generation workflows. Our benchmarks demonstrate significant performance gains without compromising quality, enabling you to push the boundaries of your creative projects 🚀

For any questions, feedback, or community discussions, feel free to join our Discord, where you can also get help from our dedicated help-desk channel.

For bug reports or technical issues, please open an issue in our repository.

Subscribe to Pruna's Newsletter

Speed Up Your Models With Pruna AI.

Inefficient models drive up costs, slow down your productivity and increase carbon emissions. Make your AI more accessible and sustainable with Pruna.

© 2025 Pruna AI - Built with Pretzels & Croissants 🥨 🥐