ODTSR: One-Step Diffusion Transformer for Controllable Real-World Image Super-Resolution
This repository contains the official implementation of ODTSR, the model presented in the paper "One-Step Diffusion Transformer for Controllable Real-World Image Super-Resolution".
Authors: Yushun Fang, Yuxiang Chen, Shibo Yin, Qiang Hu, Jiangchao Yao, Ya Zhang, Xiaoyun Zhang, Yanfeng Wang
Affiliations: Shanghai Jiao Tong University, Xiaohongshu Inc.
Code: https://github.com/RedMediaTech/ODTSR
Overview
Recent advances in diffusion-based real-world image super-resolution (Real-ISR) have demonstrated remarkable perceptual quality, yet balancing fidelity and controllability remains a problem: multi-step diffusion-based methods suffer from generative diversity and randomness, resulting in low fidelity, while one-step methods lose control flexibility due to fidelity-specific finetuning.
ODTSR addresses this by presenting a one-step diffusion transformer based on Qwen-Image that performs Real-ISR considering fidelity and controllability simultaneously. It introduces a newly designed Noise-hybrid Visual Stream (NVS) that receives low-quality images with adjustable noise (Control Noise) and consistent noise (Prior Noise). Furthermore, Fidelity-aware Adversarial Training (FAA) is employed to enhance controllability and achieve one-step inference. ODTSR not only achieves state-of-the-art (SOTA) performance on generic Real-ISR, but also enables prompt controllability on challenging scenarios such as real-world scene text image super-resolution (STISR) of Chinese characters without training on specific datasets.
Key Features
- One-Step Super-Resolution: Based on Qwen-Image, ODTSR trains a single-step SR model using LoRA; the full model has 20B parameters.
- Controllability: With our proposed Noise-hybrid Visual Stream and Fidelity-aware Adversarial Training, the SR process can be jointly controlled by prompts as well as a Fidelity Weight $f$.
- Multilingual Support: English and Chinese prompts are supported.
- Versatile Performance: The model demonstrates strong performance in text images, fine-grained textures, and face images.
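As a rough illustration of how the fidelity weight $f$ might steer the Noise-hybrid Visual Stream, the sketch below mixes an adjustable Control Noise (scaled down as $f$ grows) with a consistent Prior Noise that is independent of $f$. The function name and the exact mixing formula are assumptions for illustration only, not the official implementation:

```python
import numpy as np

def noise_hybrid_input(lq_latent, f, rng):
    """Illustrative sketch of a noise-hybrid input (NOT the official code).

    The Control Noise stream is adjustable: a fidelity weight f in [0, 1]
    interpolates between the clean LQ latent (f = 1, maximum fidelity) and
    pure noise (f = 0, maximum generative freedom). The Prior Noise stream
    stays consistent, i.e. independent of f.
    """
    control_noise = rng.standard_normal(lq_latent.shape)
    prior_noise = rng.standard_normal(lq_latent.shape)
    # Adjustable stream: blend the LQ latent with Control Noise according to f.
    control_stream = f * lq_latent + (1.0 - f) * control_noise
    return control_stream, prior_noise

rng = np.random.default_rng(0)
lq = rng.standard_normal((4, 8, 8))  # toy LQ latent
ctrl, prior = noise_hybrid_input(lq, f=1.0, rng=rng)
```

At $f = 1$ the control stream reduces to the LQ latent itself, matching the intuition that a high fidelity weight suppresses generative freedom.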
Visual Results
Results with fixed prompts & high fidelity
Text Real-ISR Results
Controllable Real-ISR Results
Dependencies and Installation
- Prepare conda env:
  ```shell
  conda create -n yourenv python=3.11
  ```
- Install pytorch (we recommend `torch==2.6.0`):
  ```shell
  pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 -f https://mirrors.aliyun.com/pytorch-wheels/cu124/
  ```
- Install this repo (based on DiffSynth-Studio); the required packages will be installed automatically:
  ```shell
  cd xxxx/ODTSR  # Replace xxxx with your path
  pip3 install -e . -v -i https://mirrors.cloud.tencent.com/pypi/simple
  ```
- (For training) Install `basicsr`:
  ```shell
  pip install basicsr
  ```
  Note: you can apply the following command to fix a bug in `basicsr`. Make sure to replace `/opt/conda` with the path to your own conda environment:
  ```shell
  sed -i '8s/from torchvision.transforms.functional_tensor import rgb_to_grayscale/from torchvision.transforms._functional_tensor import rgb_to_grayscale/' /opt/conda/lib/python3.11/site-packages/basicsr/data/degradations.py
  ```
- Download the base model to your disk: Qwen-Image
- (For training) Download the base model to your disk: Wan2.1-T2V-1.3B
- (For inference) Download the trained ODTSR model weight: huggingface
Inference with Script
Note: you need at least 40GB of GPU memory for inference. We will support CPU offload to reduce GPU memory usage soon. We now support tile-based processing (tile size: 512×512), enabling inputs of arbitrary resolution and SR at any scale factor.
Please replace `experiments/qwen_one_step_gan/${EXP_DATE}/checkpoints/net_gen_iter_10001.pth` with the trained ODTSR model weight.
```shell
sh examples/qwen_image/test_gan.sh
```
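The tile-based processing mentioned above can be sketched as follows: split the input into 512×512 tiles with a small overlap, super-resolve each tile, and average the overlapping regions of the outputs. The overlap value, helper names, and blending-by-averaging strategy are assumptions for illustration; the repository's actual tiling logic may differ.

```python
import numpy as np

TILE = 512     # tile size stated in the README
OVERLAP = 32   # hypothetical overlap, chosen for illustration

def iter_tiles(h, w, tile=TILE, overlap=OVERLAP):
    """Yield (y0, y1, x0, x1) windows covering an h x w image."""
    stride = tile - overlap
    ys = list(range(0, max(h - tile, 0) + 1, stride))
    xs = list(range(0, max(w - tile, 0) + 1, stride))
    if ys[-1] + tile < h:
        ys.append(h - tile)  # extra window flush with the bottom edge
    if xs[-1] + tile < w:
        xs.append(w - tile)  # extra window flush with the right edge
    for y0 in ys:
        for x0 in xs:
            yield y0, min(y0 + tile, h), x0, min(x0 + tile, w)

def tiled_sr(img, sr_fn, scale):
    """Run sr_fn tile-by-tile and blend overlapping outputs by averaging."""
    h, w, c = img.shape
    out = np.zeros((h * scale, w * scale, c), dtype=np.float64)
    weight = np.zeros((h * scale, w * scale, 1), dtype=np.float64)
    for y0, y1, x0, x1 in iter_tiles(h, w):
        sr_tile = sr_fn(img[y0:y1, x0:x1])  # the real model would run here
        out[y0 * scale:y1 * scale, x0 * scale:x1 * scale] += sr_tile
        weight[y0 * scale:y1 * scale, x0 * scale:x1 * scale] += 1.0
    return out / weight

# Stand-in SR function: 2x nearest-neighbor upsampling instead of the model.
sr_fn = lambda t: t.repeat(2, axis=0).repeat(2, axis=1)
img = np.arange(600 * 700 * 3, dtype=np.float64).reshape(600, 700, 3)
out = tiled_sr(img, sr_fn, scale=2)
```

Because each output pixel is divided by the number of tiles that wrote to it, the overlapping seams average out; the per-tile design is what lets inputs of arbitrary resolution fit within the GPU memory budget.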
Inference with Gradio
```shell
sh examples/qwen_image/test_gradio.sh
```
License
This project is released under the Apache 2.0 license.
Acknowledgement
This project is based on DiffSynth-Studio. We also leveraged some of PiSA-SR's code in the dataloader. Thanks for the awesome work!
Citation
If ODTSR is helpful to you, please consider citing our paper:
@article{fang2025onestep,
title={One-Step Diffusion Transformer for Controllable Real-World Image Super-Resolution},
author={Fang, Yushun and Chen, Yuxiang and Yin, Shibo and Hu, Qiang and Yao, Jiangchao and Zhang, Ya and Zhang, Xiaoyun and Wang, Yanfeng},
journal={arXiv preprint arXiv:2511.17138},
year={2025}
}