# Long Horizon Execution
This project contains the dataset accompanying the paper "[The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs](https://arxiv.org/abs/2509.09677)".
**GitHub:** [https://github.com/long-horizon-execution/measuring-execution/](https://github.com/long-horizon-execution/measuring-execution/)
## Description

This dataset is a synthetic benchmark designed to measure the pure execution capability of LLMs over long horizons. The core task is **key-value dictionary addition**. A fixed, in-context dictionary mapping five-letter English words (keys) to integer values is provided in `dictionary.json`. The model's goal is to maintain a running sum. In each turn, it receives one or more keys (defined by the turn complexity, `K`), retrieves their corresponding values from the dictionary, adds them to the running sum, and outputs the new sum. The primary metric for evaluation is the **task length**: the number of steps a model can execute before its accuracy drops below a certain threshold.
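
To make the task concrete, here is a minimal sketch at `K=1`. The words and values below are invented for illustration; the actual mapping is the one in `dictionary.json`:

```python
# Toy illustration of the task at turn complexity K=1.
# These words and values are invented for the example; the real
# mapping is the one shipped in dictionary.json.
dictionary = {"apple": 4, "brick": 7, "cloud": 2}

keys = ["brick", "apple", "apple", "cloud"]  # one key per turn

running_sum = 0
for turn, key in enumerate(keys, start=1):
    running_sum += dictionary[key]
    # The model must answer with the current running sum:
    print(f"turn {turn}: {key} -> {running_sum}")
# turn 1: brick -> 7
# turn 2: apple -> 11
# turn 3: apple -> 15
# turn 4: cloud -> 17
```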
The dataset is programmatically generated and therefore contamination-free. We provide only 100 samples here for ease of access; more can be generated with the script [here](https://github.com/long-horizon-execution/measuring-execution/blob/main/generate_dataset_json.py).
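
For intuition, generation amounts to sampling keys from the dictionary and accumulating their values. The sketch below only illustrates that idea and is not the linked script; it assumes the per-turn fields are stored as lists matching the `test.jsonl` fields described below, and the output path `extra.jsonl` is arbitrary:

```python
import json
import random

# Illustrative generator sketch; generate_dataset_json.py in the
# repository is the canonical way to create more samples.
with open("dictionary.json") as f:
    dictionary = json.load(f)  # five-letter word -> integer value

def make_sample(num_turns: int, rng: random.Random) -> dict:
    keys = rng.choices(list(dictionary), k=num_turns)  # one key per turn (K=1)
    values = [dictionary[k] for k in keys]
    running, total = [], 0
    for v in values:
        total += v
        running.append(total)
    # Field names follow the test.jsonl schema described below.
    return {"input": keys, "values": values, "output": running}

rng = random.Random(0)
with open("extra.jsonl", "w") as f:  # arbitrary output path
    for _ in range(10):
        f.write(json.dumps(make_sample(100, rng)) + "\n")
```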
## Using the dataset
`test.jsonl` contains the individual samples that can be used to prompt the LLM. Each sample has the following fields (a loading sketch follows the list):
- _"input"_: contains the keys to be processed.
- _"values"_: contains the values mapped to the corresponding keys as described in `dictionary.json`.
- _"output"_: contains the expected running sum answers.
The provided dataset is configured with a turn complexity of `K=1` (one key per turn). To evaluate models at a higher turn complexity, such as `K=N`, you can post-process the data by grouping every `N` consecutive turns (a small sketch follows the list):
- _"input"_: Concatenate every `N` items into a single comma-separated string.
- _"output"_: The new running sum for the grouped turn is simply the **last** running sum from the original group of `N` turns.
## Benchmark

## Citation
Find out more about our work at https://arxiv.org/abs/2509.09677. Consider citing us if you use our dataset.
```
@misc{