Z-Image-Distilled V3 2026/2/11

Thanks to Bubbliiiing, VideoX-Fun, and Alibaba-PAI for providing us with a more efficient distillation solution:

https://huggingface.co/alibaba-pai/Z-Image-Fun-Lora-Distill

Speed of Light, Power of Flow: The new ZID v3 "Lucis" is powered by the latest ZIB acceleration. Building on the ZID v2 training sets, we've distilled a more efficient Z-Image-based RedZDX3. Now you get solid results in just 5 steps.

Rapid Prototyping: Test LoRA training hypotheses instantly with 'near-zero' latency.

Stochastic Pre-sampling: Serves as a high-speed, high-entropy source for ZiTurbo pipelines.

Hybrid Workflows: Pair seamlessly with Klein 9B for cascaded refinement or ensemble generation.

  • inference cfg: 1.0–1.5 (recommended: 1.0)
  • inference steps: 5 (usable range: 5–15)
  • sampler / scheduler: Euler / simple (see the sketch after this list)
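
A minimal sketch of these settings, assuming a diffusers-style pipeline can load this checkpoint (the pipeline class and prompt are illustrative assumptions, not confirmed for this repo; in ComfyUI the same numbers go into the KSampler node's cfg / steps / sampler / scheduler fields):

```python
# Hypothetical sketch: ZID v3 recommended settings in a diffusers-style call.
# Whether this repo ships a diffusers pipeline config is an assumption.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "GuangyuanSD/Z-Image-Distilled",  # repo id from this card
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    prompt="a moody cinematic portrait, soft rim light",  # illustrative prompt
    num_inference_steps=5,   # 5 steps; raise toward 15 if results look unstable
    guidance_scale=1.0,      # cfg 1.0 recommended (1.0-1.5 usable)
).images[0]
image.save("zid_v3_preview.png")
```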

Preview images were generated with the Z-Image Distilled V3 + Moody MIX V7 (ZIT finetune) hybrid workflow, just to show the style difference between ZID (RedZDX3) and ZIT (fine-tuning) (L = 'ZID v3', R = 'ZIT ft').

No ranking intended =)

For more ZID v3 generated examples, please refer to:

RedCraft | 红潮 | RedZDX⚡️Distilled [Civitai]

Welcome to the era of instant creativity. Welcome to 'Lucis'.

Z-Image-Distilled V2 2026/2/05

The Z-Image color-deviation problem has been reduced to some extent, but we still recommend adjusting colors to suit the intended art style.

  • inference cfg: 1.0 (recommended: 1.0)
  • inference steps: 10 (usable range: 10–15)
  • sampler / scheduler: Euler / simple

Thanks 🙏 to this author for completing the FP8-mixed quantization scheme for Z-Image:

https://huggingface.co/pachiiahri

The FP8 mixed-precision version has been uploaded; please give this author a like 👍

Also available in NVFP4 quantized format, optimized for acceleration on Blackwell-architecture GPUs (e.g., RTX 50xx, PRO 6000, B200): double the speed, half the resources.

Non-50-series GPUs are also supported (automatic fallback to 16-bit operation); a sketch of the selection logic is shown below.
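
A minimal sketch of that fallback idea, assuming the loader dequantizes NVFP4 weights to 16-bit on GPUs without native FP4 support (the file names here are illustrative, not this repo's actual artifact names):

```python
# Pick the NVFP4 checkpoint only on Blackwell-class GPUs; otherwise use the
# FP8-mixed weights and compute in 16-bit. File names are illustrative.
import torch

def supports_nvfp4() -> bool:
    """Blackwell GPUs (compute capability >= 10.0) have native FP4 units."""
    if not torch.cuda.is_available():
        return False
    major, _ = torch.cuda.get_device_capability()
    return major >= 10  # RTX 50xx, PRO 6000, B200, ...

checkpoint = (
    "z_image_distilled_nvfp4.safetensors" if supports_nvfp4()
    else "z_image_distilled_fp8_mixed.safetensors"
)
compute_dtype = torch.bfloat16  # 16-bit path used on pre-Blackwell GPUs
print(f"loading {checkpoint}, compute dtype {compute_dtype}")
```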

The above is the FP8 scaled & mixed direct-output workflow (all example-image workflows are open on Civitai).

The mixed-precision scheme comes from https://civitai.com/models/2172944/z-image-fp8

The art style leans toward realism. It retains ZIB's creative ability and reduces human-anatomy collapse.

Thanks to @anyMODE (Civitai) for exporting the ZID LoRAs

Z-Image-Distilled V1 2026/1/30

This model is a direct distillation-accelerated version based on the original Z-Image (non-Turbo) source. Its purpose is to test LoRA training effects on the Z-Image (non-turbo) version while significantly improving inference/test speed. The model does not incorporate any weights or style from Z-Image-Turbo at all — it is a pure-blood version based purely on Z-Image, effectively retaining the original Z-Image's adaptability, random diversity in outputs, and overall image style.

Compared to the official Z-Image, inference is much faster (good results achievable in just 10–20 steps); compared to the official Z-Image-Turbo, this model preserves stronger diversity, better LoRA compatibility, and greater fine-tuning potential, though it is slightly slower than Turbo (still far faster than the original Z-Image's 28–50 steps).

The model is mainly suitable for:

  • Users who want to train/test LoRAs on the Z-Image non-Turbo base
  • Scenarios needing faster generation than the original without sacrificing too much diversity and stylistic freedom
  • Artistic, illustration, concept design, and other generation tasks that require a certain level of randomness and style variety
  • Compatible with ComfyUI inference (layer prefix == model.diffusion_model; see the key-prefix check after this list)
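
A quick way to verify that key-prefix convention on a downloaded checkpoint (the file name is an illustrative placeholder):

```python
# Sanity-check the ComfyUI layer-prefix convention noted above: tensor keys
# should start with "model.diffusion_model.". File name is illustrative.
from safetensors import safe_open

with safe_open("z_image_distilled.safetensors", framework="pt") as f:
    keys = list(f.keys())

prefixed = [k for k in keys if k.startswith("model.diffusion_model.")]
print(f"{len(prefixed)}/{len(keys)} tensors carry the expected prefix")
```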

Usage Instructions:

Basic workflow: please refer to the official Z-Image-Turbo workflow (this model is fully compatible with it)

Recommended inference parameters:

  • inference cfg: 1.0–2.5 (recommended range: 1.0–1.8; higher values enhance prompt adherence)
  • inference steps: 10–20 (10 steps for quick previews, 15–20 steps for more stable quality)
  • sampler / scheduler: Euler / simple, or res_m, or any other compatible sampler

LoRA compatibility is good; recommended weight: 0.6–1.0, adjusted as needed (see the sketch below).
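
A hedged sketch of these LoRA settings, again assuming a diffusers-style pipeline for this checkpoint (pipeline availability, the adapter name, and the LoRA file name are illustrative assumptions; in ComfyUI the same weight goes into the LoraLoader node's strength fields):

```python
# Illustrative only: apply a ZID-trained LoRA at the recommended weight.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "GuangyuanSD/Z-Image-Distilled", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("my_zimage_lora.safetensors", adapter_name="style")
pipe.set_adapters(["style"], adapter_weights=[0.8])  # recommended 0.6-1.0

image = pipe(
    prompt="watercolor street scene at dusk",  # illustrative prompt
    num_inference_steps=15,  # 10 for quick previews, 15-20 for stable quality
    guidance_scale=1.5,      # 1.0-1.8 recommended; higher = stronger adherence
).images[0]
```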

Also on: Civitai | Modelscope AIGC

RedCraft | 红潮造相 ⚡️ REDZimage | Updated-JAN30 | Latest - RedZiB ⚡️ DX1 Distilled Acceleration

Current Limitations & Future Directions

Current main limitations:

  • The distillation process causes some damage to text (especially very small-sized text), with rendering clarity and completeness inferior to the original Z-Image
  • Overall color tone remains consistent with the original Z-Image, but certain samplers can produce color-cast issues (a noticeably excessive blue tint in particular)

Next optimization directions:

  • Further stabilize generation quality under CFG=1 within 10 steps or fewer, striving to achieve more usable results that are closer to the original style even at very low step counts
  • Optimize negative prompt adherence when CFG > 1, improving control over negative descriptions and reducing interference from unwanted elements
  • Continue improving clarity and readability in small text areas while maintaining the speed advantages brought by distillation

We welcome feedback and generated examples from all users — let's collaborate to advance this pure-blood acceleration direction!

Model License:

Please follow the Apache-2.0 license of the Z-Image model.
