fp8 model is broken

#1
by ruleez - opened

your model only produces complete noise

Same. Doesn't work, at least in ComfyUI.

With the standard bf16 model I get an OOM after switching models, even with 64 GB of RAM; in fp8_e4m3fn mode it produces artifacts :(

Same, produces only noise — tried all schedulers and different text encoders and VAE models.

This is not for ComfyUI, guys! You need the lightx2v interface to run it.

2509 works in ComfyUI, and it also required special weights to avoid artifacts: https://huggingface.co/lightx2v/Qwen-Image-Lightning/tree/main/Qwen-Image-Edit-2509 I remember that for Wan they published separate fp8 weights for ComfyUI and for their own interface; maybe it's a similar issue here.

But the issue is about the fp8 2511, not 2509.

Yes, but this should also work in ComfyUI.

Same, produces only noise

Yes, but this should also work in ComfyUI.

you're wrong

Same, complete noise, and it reports a bunch of key-loading errors in the console. But the Lightning LoRA works fine with the GGUF version.

qwen_image_edit_2511_fp8_e4m3fn_scaled_lightning_comfyui.safetensors

qwen_image_edit_2511_fp8_e4m3fn_scaled_lightning_comfyui.safetensors + the 4-step LoRA actually breaks the image; without the LoRA there's no problem...

The 4-step LoRA is already merged into qwen_image_edit_2511_fp8_e4m3fn_scaled_lightning_comfyui.safetensors; you can use it on its own for 4-step inference.

qwen_image_edit_2511_fp8_e4m3fn_scaled_lightning_comfyui.safetensors

Working now. Thank you

Working with ComfyUI? Couldn't make it work in ComfyUI; it gets stuck in the sampler.

Working with ComfyUI?

Yes

Mine is working fine with the new model.
The first run took a long time; after that the speed is fine.

For anyone having the noise issue within ComfyUI: you need to update ComfyUI to the latest stable version (0.6.0) and use either:
qwen_image_edit_2511_fp8mixed.safetensors + the BF16 2511 Lightning 4-step LoRA
or
qwen_image_edit_2511_fp8_e4m3fn_scaled_lightning_comfyui.safetensors
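
For intuition on why the "scaled" fp8 variant matters: fp8 e4m3fn can only represent magnitudes up to about 448, so naively casting a checkpoint with larger outlier weights saturates them, which can wreck activations and produce noise. A scaled checkpoint divides each tensor into range before the cast and stores a per-tensor scale to multiply back at load time. A minimal sketch in pure Python (mantissa rounding omitted; the weight value and scale below are illustrative, not taken from the actual checkpoint):

```python
E4M3_MAX = 448.0  # largest finite magnitude representable in fp8 e4m3fn

def naive_fp8_cast(x):
    """Naive cast: values beyond the e4m3 range saturate (rounding omitted)."""
    return max(-E4M3_MAX, min(E4M3_MAX, x))

def scaled_fp8_roundtrip(x, scale):
    """Scaled variant: divide into range, cast, then multiply the scale back."""
    return naive_fp8_cast(x / scale) * scale

# A hypothetical outlier weight of 1200.0:
print(naive_fp8_cast(1200.0))             # saturates to 448.0
print(scaled_fp8_roundtrip(1200.0, 4.0))  # survives the round trip as 1200.0
```

This is also why a scaled checkpoint only works when the loader actually applies the stored scales; loaded as plain fp8, the weights come out wrongly scaled.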

I switched out the 2509 diffusion models and Lightning LoRAs for the 2511.
It doesn't seem to produce the results I normally get when I use ControlNets.
FP8 is complete noise, but the BF16 works fine. I tested the FP8 with the ComfyUI scaled version.

While I can load and run the fp8 scaled model, I'm not sure it's actually running in fp8 mode, as the ComfyUI log shows the following:

Unet unexpected: scaled FP8

How do I ensure it's actually running in fp8 mode?
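
One way to check what the file actually contains, without loading any weights, is to read just the .safetensors JSON header and look at the declared dtypes: a scaled fp8 checkpoint should list F8_E4M3 tensors alongside small float scale tensors. This is a sketch using only the published safetensors header layout, not ComfyUI's own loader, and the assumption that scale tensors have "scale" in their key names may not match this checkpoint exactly:

```python
import json
import struct

def safetensors_dtypes(path):
    """Parse only the safetensors header: an 8-byte little-endian length,
    then a JSON dict mapping tensor names to {dtype, shape, data_offsets}."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return {name: meta["dtype"]
            for name, meta in header.items()
            if name != "__metadata__"}

def fp8_summary(path):
    """Count fp8 tensors and (assumed) per-tensor scale tensors."""
    dtypes = safetensors_dtypes(path)
    fp8 = sorted(n for n, d in dtypes.items() if d.startswith("F8"))
    scales = sorted(n for n in dtypes if "scale" in n.lower())
    print(f"{len(fp8)} fp8 tensors, {len(scales)} scale tensors")
    return fp8, scales
```

If the summary shows zero fp8 tensors, the file isn't an fp8 checkpoint at all; if it shows fp8 plus scale tensors but the log still calls them "unexpected", that may mean the loader is discarding the scales rather than running true scaled fp8.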
