arvindh75 committed · verified

Commit 75bfee7 · Parent(s): 49f6ef4

Update README.md

Files changed (1): README.md (+18, −9)

README.md CHANGED
---
# Long Horizon Execution

This project contains the dataset accompanying the paper "[The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs](https://arxiv.org/abs/2509.09677)".

**GitHub:** [https://github.com/long-horizon-execution/measuring-execution/](https://github.com/long-horizon-execution/measuring-execution/)

## Description
![Our task.](task_overview.png)
16
 
17
+ This dataset is a synthetic benchmark designed to measure the pure execution capability of LLMs over long horizons. The core task is **key-value dictionary addition**. A fixed, in-context dictionary mapping five-letter English words (keys) to integer values is provided in `dictionary.json`. The model's goal is to maintain a running sum. In each turn, it receives one or more keys (defined by the turn complexity, `K`), retrieves their corresponding values from the dictionary, adds them to the running sum, and outputs the new sum. The primary metric for evaluation is the **task length**: the number of steps a model can execute before its accuracy drops below a certain threshold.
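
The task above can be sketched in a few lines. This is a minimal simulation with a made-up toy dictionary (the real key-value map ships in `dictionary.json`), not the benchmark harness itself:

```python
# Toy dictionary; the real mapping (five-letter words -> integers) is in
# `dictionary.json`. These entries are hypothetical, for illustration only.
toy_dict = {"apple": 3, "bread": -2, "chair": 7}

def run_task(turns, dictionary):
    """Execute the running-sum task: each turn is a list of K keys."""
    running_sum = 0
    expected = []
    for keys in turns:
        # Look up each key's value and fold it into the running sum.
        running_sum += sum(dictionary[k] for k in keys)
        expected.append(running_sum)
    return expected

# K=1: one key per turn.
print(run_task([["apple"], ["bread"], ["chair"]], toy_dict))  # [3, 1, 8]
```

A model is scored on how many turns it can sustain before its predicted sums diverge from this ground truth.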

The dataset is programmatically generated and therefore contamination-free. We provide only 100 samples here for ease of access; more can be generated with the script [here](https://github.com/long-horizon-execution/measuring-execution/blob/main/generate_dataset_json.py).

## Using the dataset
`test.jsonl` contains the individual samples that can be used to prompt the LLM. Each sample has three fields:
- _"input"_: the keys to be processed.
- _"values"_: the values mapped to the corresponding keys, as defined in `dictionary.json`.
- _"output"_: the expected running-sum answers.

The provided dataset is configured with a turn complexity of `K=1` (one key per turn). To evaluate models at a higher turn complexity, say `K=N`, post-process the data by grouping every `N` consecutive turns:
- _"input"_: concatenate every `N` items into a single comma-separated string.
- _"output"_: the running sum for a grouped turn is simply the **last** running sum from the original group of `N` turns.

## Benchmark
![Benchmark of Frontier models.](benchmark.png)

## Citation
Find out more about our work at https://arxiv.org/abs/2509.09677. Consider citing us if you use our dataset.
```
@misc{