Co-rewarding
Co-rewarding is a novel self-supervised RL framework that improves training stability by seeking complementary supervision from other views.
This is the Qwen3-8B-Base model trained with Entropy Minimization on the DAPO-14k training set, as described in the paper Co-rewarding: Stable Self-supervised RL for Eliciting Reasoning in Large Language Models.
For more details, installation instructions, and usage examples, please refer to the official GitHub repository: https://github.com/tmlr-group/Co-rewarding.
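
For a quick local test, a minimal sketch using Hugging Face `transformers` is shown below. The repository ID here is a placeholder assumption, not the confirmed checkpoint name; substitute the actual identifier from this model page or the GitHub repository.

```python
# Minimal sketch: loading the checkpoint with Hugging Face transformers.
# NOTE: the repo ID below is a placeholder -- replace it with the actual
# model identifier listed on the model page / GitHub repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tmlr-group/Co-rewarding-Entropy-Qwen3-8B-Base"  # placeholder ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps an 8B model within a single high-memory GPU
    device_map="auto",
)

# Reasoning-style prompt, since the model is trained to elicit reasoning.
prompt = "Solve step by step: If 3x + 5 = 20, what is x?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```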