Run Flux model (gguf) with LoRA in ComfyUI

Yiling Liu
3 min read · Nov 25, 2024


Recently I’ve been generating images with Flux’s hosted version, but it’s been quite expensive. Replicate also increased their pricing, so the bills have added up.

I decided to generate images locally, using the gguf format, because it makes models more compact and faster to load (thanks to advanced compression techniques).

The whole process consists of four parts.

Part 1: Download and run ComfyUI, add gguf support

Part 2: Download the workflow and load it into ComfyUI

Part 3: Download models (TE, VAE, transformer, LoRA) into ComfyUI

Part 4: Write prompts and run the workflow inside ComfyUI

Part 1: Download and run ComfyUI, add gguf support

Download ComfyUI from its GitHub repo: https://github.com/comfyanonymous/ComfyUI?tab=readme-ov-file#installing (I’m using a MacBook Pro without an Nvidia GPU, and inference still takes only about 3 minutes.)

At the end of this section you should be able to start ComfyUI and open it in your browser:

(creator) macbookpro4:ComfyUI yil$ python main.py 
Total VRAM 32768 MB, total RAM 32768 MB
pytorch version: 2.5.1
Set vram state to: SHARED
Device: mps
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
[Prompt Server] web root: /Users/yil/comfyUI/ComfyUI/web

Starting server

To see the GUI go to: http://127.0.0.1:8188

Then open http://127.0.0.1:8188 in your browser and you’ll see a webpage with a small control panel.

Then we just need to install ComfyUI-GGUF to add gguf support to ComfyUI: go into your comfyUI/custom_nodes folder and clone https://github.com/city96/ComfyUI-GGUF inside it using git clone.

Part 2: Download the workflow and load it into ComfyUI

You can download the workflow from here: https://drive.google.com/file/d/1lWsaVtESydGudEakK0CW9C28qHSVOlcu/view?usp=sharing

Then just click “Load” on the control panel to load the JSON file into ComfyUI.

Part 3: Download models (TE, VAE, transformer, LoRA) into ComfyUI

To do this, we’ll need to first download a bunch of models:

Two CLIP models (text encoders):

VAE:

Unet (the Flux model, in gguf format):

LoRA:

Then we need to put each model in its place: the two CLIP models go under the comfyUI/models/clip folder, the VAE into comfyUI/models/vae, the Unet into comfyUI/models/unet, and the LoRA into comfyUI/models/loras.
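As commands, run from inside your ComfyUI folder (the mkdir is harmless if the folders already exist; the mv lines are commented placeholders, so substitute the filenames you actually downloaded):

```shell
# Make sure the model folders exist, then move each download into its place.
mkdir -p models/clip models/vae models/unet models/loras
# mv ~/Downloads/<clip-encoder-files>  models/clip/
# mv ~/Downloads/<vae-file>            models/vae/
# mv ~/Downloads/<flux-gguf-file>      models/unet/
# mv ~/Downloads/<lora-file>           models/loras/
```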

Part 4: Write prompts and run the workflow inside ComfyUI

Written by Yiling Liu

Creative Technologist, ex-googler