Update README.md
README.md CHANGED
@@ -11,12 +11,6 @@
 <a href='https://huggingface.co/BitStarWalkin/SuperCorrect-7B'>
 <img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-yellow'></a>
 </p>
-<details>
-<summary>Click for full abstract</summary>
-Large language models (LLMs) like GPT-4, PaLM, and LLaMA have shown significant improvements in various reasoning tasks. However, smaller models such as Llama-3-8B and DeepSeekMath-Base still struggle with complex mathematical reasoning because they fail to effectively identify and correct reasoning errors. Recent reflection-based methods aim to address these issues by enabling self-reflection and self-correction, but they still face challenges in independently detecting errors in their reasoning steps.
-To overcome these limitations, we propose **SuperCorrect**, a novel two-stage framework that uses a large teacher model to *supervise* and *correct* both the reasoning and reflection processes of a smaller student model. In the first stage, we extract hierarchical high-level and detailed thought templates from the teacher model to guide the student model in eliciting more fine-grained reasoning thoughts. In the second stage, we introduce cross-model collaborative direct preference optimization (DPO) to enhance the self-correction abilities of the student model by following the teacher's correction traces during training. This cross-model DPO approach teaches the student model to effectively locate and resolve erroneous thoughts with error-driven insights from the teacher model, breaking the bottleneck of its thoughts and acquiring new skills and knowledge to tackle challenging problems.
-Extensive experiments consistently demonstrate our superiority over previous methods. Notably, our **SuperCorrect-7B** model significantly **surpasses powerful DeepSeekMath-7B by 7.8%/5.3% and Qwen2.5-Math-7B by 15.1%/6.3%** on MATH/GSM8K benchmarks, achieving new SOTA performance among all 7B models.
-</details>

 ## Introduction
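The first stage of the framework, as described in the abstract above, guides the student model with a hierarchical (high-level plus detailed) thought template. A minimal sketch of how such template-conditioned prompting could be wired up is shown below; the template wording and the `build_prompt` helper are hypothetical illustrations, not the paper's actual templates:

```python
# Hypothetical sketch of prompting a student model with a hierarchical
# thought template (high-level outline + detailed step guidance).
# The template text here is illustrative only; the paper's real
# templates are distilled from a larger teacher model.
HIGH_LEVEL_TEMPLATE = (
    "1. Restate what the problem asks.\n"
    "2. Choose a solution strategy.\n"
    "3. Execute the strategy step by step and verify the result."
)
DETAILED_TEMPLATE = (
    "For each step, write down the intermediate result and check it "
    "against the problem's constraints before moving on."
)

def build_prompt(problem: str) -> str:
    """Combine both template levels with the problem into one prompt."""
    return (
        "Solve the problem by following the thought template below.\n\n"
        f"High-level template:\n{HIGH_LEVEL_TEMPLATE}\n\n"
        f"Detailed guidance:\n{DETAILED_TEMPLATE}\n\n"
        f"Problem: {problem}\nSolution:"
    )

print(build_prompt("What is 12 * 7?"))
```

The resulting string would be fed to the student model in place of a plain chain-of-thought prompt.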
@@ -29,9 +23,7 @@ Notably, our **SuperCorrect-7B** model significantly surpasses powerful **DeepSeekMath-7B**
 Detailed performance and introduction are shown in our <a href="https://arxiv.org/"> 📑 Paper</a>.

 <div align="left">
-
-🚨 Unlike other LLMs, we incorporate LLMs with our pre-defined hierarchical thought template to conduct more deliberate reasoning than conventional CoT. It should be noted that our evaluation relies on the pure mathematical reasoning abilities of LLMs, instead of leveraging other programming methods such as PoT and ToRA.
-</b>
+🚨 Unlike other LLMs, we incorporate LLMs with our pre-defined hierarchical thought template ([Buffer of Thought (BoT)](https://github.com/YangLing0818/buffer-of-thought-llm)) to conduct more deliberate reasoning than conventional CoT. It should be noted that our evaluation relies on the pure mathematical reasoning abilities of LLMs, instead of leveraging other programming methods such as PoT and ToRA.
 </div>

 ## Examples
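The second stage described in the removed abstract builds on direct preference optimization (DPO), with the teacher's corrected trace as the preferred response and the student's flawed trace as the rejected one. A toy sketch of the standard DPO objective this variant builds on, using made-up log-probabilities (the cross-model supervision itself is not reproduced here):

```python
import math

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one (chosen, rejected) pair.

    Each argument is the summed log-probability of a full response under
    the trainable policy or the frozen reference model; `_w` is the
    chosen (e.g. teacher-corrected) trace, `_l` the rejected one.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model.
    margin = beta * ((policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l))
    # Negative log-sigmoid of the margin: small when the policy already
    # prefers the chosen trace.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy numbers: the policy rates the corrected trace higher, and the
# flawed trace lower, than the reference model does.
loss = dpo_loss(policy_logp_w=-4.0, policy_logp_l=-9.0,
                ref_logp_w=-5.0, ref_logp_l=-8.0, beta=0.1)
print(round(loss, 4))
```

Minimizing this loss pushes the student's probability mass toward the teacher-corrected traces, which is the mechanism the abstract credits for the improved self-correction.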