ODTSR: One-Step Diffusion Transformer for Controllable Real-World Image Super-Resolution

This repository contains the official implementation of ODTSR, a model presented in the paper: One-Step Diffusion Transformer for Controllable Real-World Image Super-Resolution

Authors: Yushun Fang, Yuxiang Chen, Shibo Yin, Qiang Hu, Jiangchao Yao, Ya Zhang, Xiaoyun Zhang, Yanfeng Wang
Affiliations: Shanghai Jiao Tong University, Xiaohongshu Inc.

Code: https://github.com/RedMediaTech/ODTSR

ODTSR Overview Framework

Overview

Recent advances in diffusion-based real-world image super-resolution (Real-ISR) have demonstrated remarkable perceptual quality, yet balancing fidelity and controllability remains a challenge: multi-step diffusion-based methods suffer from generative diversity and randomness, which lowers fidelity, while one-step methods lose control flexibility due to fidelity-specific fine-tuning.

ODTSR addresses this with a one-step diffusion transformer based on Qwen-Image that performs Real-ISR while accounting for fidelity and controllability simultaneously. It introduces a newly designed Noise-hybrid Visual Stream (NVS) that receives low-quality images with adjustable noise (Control Noise) and consistent noise (Prior Noise). Furthermore, Fidelity-aware Adversarial Training (FAA) is employed to enhance controllability and enable one-step inference. ODTSR not only achieves state-of-the-art (SOTA) performance on generic Real-ISR, but also offers prompt controllability in challenging scenarios such as real-world scene text image super-resolution (STISR) of Chinese characters, without training on scenario-specific datasets.

Key Features

  • One-Step Super-Resolution: Built on Qwen-Image, ODTSR trains a single-step SR model using LoRA, with total model parameters reaching 20B.
  • Controllability: With the proposed Noise-hybrid Visual Stream and Fidelity-aware Adversarial Training, the SR process can be jointly controlled by prompts and a Fidelity Weight $f$ (see the conceptual sketch after this list).
  • Multilingual Support: English and Chinese prompts are supported.
  • Versatile Performance: The model demonstrates strong performance in text images, fine-grained textures, and face images.
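
The exact formulation of the Noise-hybrid Visual Stream and the Fidelity Weight is defined in the paper; the snippet below is only a rough conceptual sketch (our assumption, not the released code) of how a fidelity weight $f$ could trade off between a consistent prior-noise branch and an adjustable control-noise branch.

```python
import torch

def hybrid_noise_input(lq_latent: torch.Tensor, fidelity_weight: float,
                       prior_noise_level: float = 0.2) -> torch.Tensor:
    """Conceptual sketch only: mix a low-quality latent with two noise sources.

    - Prior Noise: a fixed, consistent noise level applied regardless of f.
    - Control Noise: an adjustable noise level that grows as the fidelity
      weight f decreases (f = 1 -> stay faithful to the LQ input,
      f = 0 -> allow more generation and prompt adherence).
    The real NVS in ODTSR may combine these differently; treat this as a
    mental model, not the official implementation.
    """
    control_noise_level = 1.0 - fidelity_weight           # f = 1 -> almost no control noise
    prior = lq_latent + prior_noise_level * torch.randn_like(lq_latent)
    control = lq_latent + control_noise_level * torch.randn_like(lq_latent)
    return torch.cat([prior, control], dim=1)              # hybrid visual-stream input

# Example: a high-fidelity setting (f close to 1) keeps the control branch nearly clean.
lq_latent = torch.randn(1, 16, 64, 64)
hybrid = hybrid_noise_input(lq_latent, fidelity_weight=0.9)
```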

Visual Results

Results with fixed prompts & high fidelity

Under the high-fidelity setting with a fixed prompt, our model produces restorations that adhere more closely to the LQ input while remaining natural, noticeably reducing the over-processed, AI-generated look.

Text Real-ISR Results

In text scenarios, when the prompt specifies the text to be restored, the model automatically matches the LQ text and performs the restoration.

Controllable Real-ISR Results

Qualitative results of controllable SR with prompts and an adjustable Fidelity Weight (denoted $f$) on the DIV2K validation set. As $f$ decreases from 1 to 0, detail generation and prompt adherence gradually strengthen.

Dependencies and Installation

  1. Prepare conda env:
    conda create -n yourenv python=3.11
    
  2. Install PyTorch (we recommend torch==2.6.0):
    pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0  -f https://mirrors.aliyun.com/pytorch-wheels/cu124/
    
  3. Install this repo (based on DiffSynth-Studio). The required packages will be automatically installed:
    cd xxxx/ODTSR # Replace xxxx with your path
    pip3 install -e . -v  -i https://mirrors.cloud.tencent.com/pypi/simple
    
  4. (For training) Install basicsr:
    pip install basicsr
    
    Note: You can apply the following command to fix an import bug in basicsr. Make sure to replace /opt/conda with the path to your own conda environment:
    sed -i '8s/from torchvision.transforms.functional_tensor import rgb_to_grayscale/from torchvision.transforms._functional_tensor import rgb_to_grayscale/' /opt/conda/lib/python3.11/site-packages/basicsr/data/degradations.py
    
  5. Download the base model to your disk: Qwen-Image
  6. (For training) Download the base model to your disk: Wan2.1-T2V-1.3B
  7. (For inference) Download the trained ODTSR model weights: huggingface (a scripted download sketch follows this list)
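
If you prefer a scripted download, the sketch below uses huggingface_hub. The repository IDs and local paths are our assumptions or placeholders; the links in the steps above are the source of truth.

```python
# Hedged sketch: download the models with huggingface_hub.
# The repo IDs below are assumptions -- adjust them to the repos linked above,
# and point local_dir at your own disk.
from huggingface_hub import snapshot_download

snapshot_download(repo_id="Qwen/Qwen-Image", local_dir="/path/to/Qwen-Image")
snapshot_download(repo_id="Wan-AI/Wan2.1-T2V-1.3B", local_dir="/path/to/Wan2.1-T2V-1.3B")  # training only
# Replace with the actual ODTSR weights repo linked above:
snapshot_download(repo_id="<ODTSR-weights-repo>", local_dir="/path/to/ODTSR-weights")
```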

Inference with Script

Note: inference requires at least 40 GB of GPU memory. CPU offload to reduce GPU memory usage will be supported soon. Tile-based processing (tile size: 512×512) is now supported, enabling inputs of arbitrary resolution and SR at any scale factor.
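
The repository's exact tiling strategy (overlap, blending) is not described here; the sketch below only illustrates the general idea of tile-based SR under assumed parameters (512×512 tiles with a small overlap). Treat it as an illustration, not the code that test_gan.sh runs.

```python
# Conceptual sketch of tile-based SR (assumed 512x512 tiles with overlap);
# the actual implementation in this repo may split and blend tiles differently.
import torch

def sr_by_tiles(lq: torch.Tensor, sr_model, tile: int = 512, overlap: int = 32,
                scale: int = 4) -> torch.Tensor:
    """lq: (1, 3, H, W) image tensor; sr_model is assumed to upscale a tile by `scale`."""
    _, _, h, w = lq.shape
    out = torch.zeros(1, 3, h * scale, w * scale)
    weight = torch.zeros_like(out)
    step = tile - overlap
    for top in range(0, h, step):
        for left in range(0, w, step):
            bottom, right = min(top + tile, h), min(left + tile, w)
            sr_tile = sr_model(lq[:, :, top:bottom, left:right])   # one-step SR on this tile
            t, l = top * scale, left * scale
            out[:, :, t:t + sr_tile.shape[2], l:l + sr_tile.shape[3]] += sr_tile
            weight[:, :, t:t + sr_tile.shape[2], l:l + sr_tile.shape[3]] += 1
    return out / weight.clamp(min=1)   # average the overlapping regions
```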

Please replace experiments/qwen_one_step_gan/${EXP_DATE}/checkpoints/net_gen_iter_10001.pth with the path to the downloaded ODTSR model weights.

sh examples/qwen_image/test_gan.sh
Inference Workflow

Inference with Gradio

sh examples/qwen_image/test_gradio.sh
Gradio Demo

License

This project is released under the Apache 2.0 license.

Acknowledgement

This project is based on DiffSynth-Studio. We also leveraged parts of PiSA-SR's code for the dataloader. Thanks for the awesome work!

Citation

If ODTSR is helpful to you, please consider citing our paper:

@article{fang2025onestep,
      title={One-Step Diffusion Transformer for Controllable Real-World Image Super-Resolution},
      author={Fang, Yushun and Chen, Yuxiang and Yin, Shibo and Hu, Qiang and Yao, Jiangchao and Zhang, Ya and Zhang, Xiaoyun and Wang, Yanfeng},
      journal={arXiv preprint arXiv:2511.17138},
      year={2025}
}