# PIPer: On-Device Environment Setup via Online Reinforcement Learning
Paper | Code

Democratizing environment setup with on-device sized models that match the performance of much larger proprietary systems
## Overview

Environment setup, the process of configuring a system to build and run a specific software project, remains a persistent challenge in software engineering. PIPer addresses it by training specialized on-device models that automatically generate correct Bash scripts for environment configuration.
Our approach combines:

- Supervised Fine-Tuning (SFT) on executable scripts generated by larger models
- Reinforcement Learning with Verifiable Rewards (RLVR) using a lightweight proxy LLM reward
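The key property of an RLVR reward is that it can be checked automatically, without human labels. As a purely illustrative sketch (not the actual reward used to train PIPer), the cheapest verifiable signal for a generated setup script is whether it even parses, which `bash -n` checks without executing anything:

```python
import subprocess
import tempfile

def syntax_reward(script: str) -> float:
    """Toy verifiable reward: 1.0 if the Bash script parses, else 0.0.

    Illustrative stand-in for a verifiable reward signal; PIPer's
    training uses a lightweight proxy LLM reward instead.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".sh") as f:
        f.write(script)
        f.flush()
        # `bash -n` parses the script without running it.
        result = subprocess.run(["bash", "-n", f.name], capture_output=True)
    return 1.0 if result.returncode == 0 else 0.0

print(syntax_reward("pip install -e . && pytest"))  # well-formed script
print(syntax_reward("if [ -f setup.py ]; then"))    # unterminated `if`
```

In practice a syntax check alone is far too weak a signal, which is why PIPer pairs executable SFT data with an LLM-based proxy reward rather than relying on static checks.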
## Key Results

| Model     | Size | EnvBench avg@5 | Cost per 1M tokens |
|-----------|------|----------------|--------------------|
| PIPer     | 8B   | 19.4           | $0.60              |
| GPT-4o    | -    | 19.4           | $15.00             |
| Qwen3-32B | 32B  | 16.2           | $2.00              |
| Qwen3-8B  | 8B   | 2.6            | $0.60              |
PIPer achieves a 9× improvement over its base model while matching GPT-4o performance at 25× lower cost.

## Available Artifacts

### Model Checkpoints

### Datasets
## Reproduce the results

We use uv for dependency management and Ray for distributed training.

```bash
git clone https://github.com/JetBrains-Research/PIPer.git
cd PIPer
git submodule update --init --recursive
uv sync
```
To run the experiments, you need a node with at least 4 H200 GPUs and Ray installed and running.
Then you can run all the experiments with the following command:

```bash
uv run piper/hparams_entrypoint.py --multirun +experiment=llm-reward
```
You can look up the experiment Hydra configurations in the piper/config/ folder, or print the full resolved config with the following command:

```bash
uv run piper/hparams_entrypoint.py +experiment=llm-reward --info config
```
## Evaluation Benchmarks

| Benchmark       | Description             | Metric  | Our Result |
|-----------------|-------------------------|---------|------------|
| EnvBench-Python | 329 Python repositories | pass@5  | 27/329     |
| Repo2Run        | 420 Python repositories | pass@5  | 103/420    |
| Terminal-Bench  | 80 terminal tasks       | pass@10 | 4/80       |
## License

This project is licensed under the MIT License; see the LICENSE file for details.