# PLSemanticsBench
[🤗 Dataset on Hugging Face](https://huggingface.co/datasets/EngineeringSoftware/PLSemanticsBench)
## Table of Contents
- [About](#about)
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Benchmark](#benchmark)
- [Citation](#citation)
- [License](#license)
## About
PLSemanticsBench is the first benchmark for evaluating LLMs as programming language interpreters. It comprises three tasks:
| Task | Description |
|------|-------------|
| ✨ **PredState** | Predict the final program state |
| ✨ **PredRule**  | Predict the ordered sequence of semantic rules needed to evaluate a program |
| ✨ **PredTrace** | Predict the step-by-step execution of a program |
## Installation
### System Requirements
- Python 3.11 or higher
- OpenAI API key (for running experiments)
### Step-by-Step Installation
1. Create and activate the conda environment:
```bash
conda env create -f env.yaml
conda activate plsemanticsbench
```
2. Set up your OpenAI API key:
```bash
export OPENAI_API_KEY='your-api-key-here'
```
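3. (Optional) Verify the setup. The following is a minimal sketch: it only assumes the `plsemanticsbench` package imported in the Quick Start below and the `OPENAI_API_KEY` variable set above:
```python
# Minimal sanity check: the package imports and the API key is visible to the process.
import os

import plsemanticsbench  # noqa: F401

assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"
print("Environment looks good.")
```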
## Quick Start
### Basic Example
Here's a minimal example to get started:
```python
from plsemanticsbench import GPTRunner, GPT_MODEL_ENUM
from plsemanticsbench import ExperimentArgs, LLMEvaluator
from plsemanticsbench import (
PROMPT_STRATEGY,
Task,
Formalization,
Semantics_Type,
Language,
PLDataset
)
# Model name
model_name = "o3-mini"
# Experiment args: Run the PredState task on the IMP language with
# standard semantics formalized using SOS and with direct prompting
exp_args = ExperimentArgs(
dataset=PLDataset.Human_Written,
task=Task.PredState,
language=Language.IMP,
formalization=Formalization.SOS,
semantics_type=Semantics_Type.Standard,
model_name=model_name,
prompt_strategy=PROMPT_STRATEGY.DA,
    num_datapoints_to_run=2, # Run just 2 datapoints (omit to run the entire dataset)
)
# Run inference using the OpenAI API
gpt_runner = GPTRunner(
gpt_model=GPT_MODEL_ENUM.O3_MINI,
args=exp_args,
)
# If a prediction file is provided, the predictions are saved to that file
predictions = gpt_runner.do_experiment()
llm_eval = LLMEvaluator(exp_args)
evaluation_result = llm_eval.evaluate_from_list(results=predictions, model_name=model_name)
print(evaluation_result)
```
### Expected Output
The evaluation results will look like:
```python
{
'accuracy': 1,
'malformed-count': 0,
}
```
## Benchmark
You can load the dataset using the `datasets` library. Here is an example:
```python
from datasets import load_dataset
# Load the PredState task with standard semantics (uk), K-semantics formalization (K), and the Human Written (human-written) dataset
predstate_IMP_K_uk_human_written = load_dataset("EngineeringSoftware/PLSemanticsBench", name="predstate-IMP-K-uk-human-written")
# Load the PredRule task with nonstandard semantics (mk), SOS formalization (SOS), and the LLM Translated (llm-translated) dataset
predrule_IMP_SOS_mk_llm_translated = load_dataset("EngineeringSoftware/PLSemanticsBench", name="predrule-IMP-SOS-mk-llm-translated")
# Load the PredState task with no semantics (nk) and the Fuzzer Generated (fuzzer-generated) dataset
predstate_IMP_nk_fuzzer_generated = load_dataset("EngineeringSoftware/PLSemanticsBench", name="predstate-IMP-nk-fuzzer-generated")
```
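`load_dataset` returns a `DatasetDict`. A quick way to peek at the rows is sketched below; it does not assume a particular split name, only the `program` field shown in the data example further down:
```python
# Grab whichever split is present and inspect it.
ds = next(iter(predstate_IMP_K_uk_human_written.values()))

print(ds)                       # column names and number of rows
print(ds[0]["program"][:200])   # first program, truncated for display
```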
### Dataset Split
<table>
<tr>
<th>Task</th>
<th>Split</th>
<th>Description</th>
</tr>
<tr>
<td rowspan="5">✨ <strong>PredState</strong><br>(Final State Prediction)</td>
<td> predstate-IMP-nk-{dataset-name} </td>
<td> No semantics </td>
</tr>
<tr>
<td> predstate-IMP-K-uk-{dataset-name} </td>
<td>Standard semantics with K-semantics formalization</td>
</tr>
<tr>
<td> predstate-IMP-K-mk-{dataset-name} </td>
<td>Nonstandard semantics with K-semantics formalization</td>
</tr>
<tr>
<td> predstate-IMP-SOS-uk-{dataset-name} </td>
<td>Standard semantics with SOS formalization</td>
</tr>
<tr>
<td> predstate-IMP-SOS-mk-{dataset-name} </td>
<td>Nonstandard semantics with SOS formalization</td>
</tr>
<tr>
<td rowspan="4">✨ <strong>PredRule</strong><br>(Semantic Rule Prediction)</td>
<td> predrule-IMP-K-uk-human-written </td>
<td>Standard semantics with K-semantics formalization</td>
</tr>
<tr>
<td> predrule-IMP-K-mk-human-written </td>
<td>Nonstandard semantics with K-semantics formalization</td>
</tr>
<tr>
<td> predrule-IMP-SOS-uk-human-written </td>
<td>Standard semantics with SOS formalization</td>
</tr>
<tr>
<td> predrule-IMP-SOS-mk-human-written </td>
<td>Nonstandard semantics with SOS formalization</td>
</tr>
<tr>
<td rowspan="4">✨ <strong>PredTrace</strong><br>(Execution Trace Prediction)</td>
<td> predtrace-IMP-K-uk-human-written </td>
<td>Standard semantics with K-semantics formalization</td>
</tr>
<tr>
<td> predtrace-IMP-K-mk-human-written </td>
<td>Nonstandard semantics with K-semantics formalization</td>
</tr>
<tr>
<td> predtrace-IMP-SOS-uk-human-written </td>
<td>Standard semantics with SOS formalization</td>
</tr>
<tr>
<td> predtrace-IMP-SOS-mk-human-written </td>
<td>Nonstandard semantics with SOS formalization</td>
</tr>
</table>
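The config names in the table follow the pattern `{task}-IMP[-{formalization}]-{semantics}-{dataset-name}`, where the formalization part is omitted for the no-semantics (`nk`) configs. The helper below builds such names; it is an illustrative sketch, not part of the package, and only the combinations listed in the table actually exist:

```python
from datasets import load_dataset

def config_name(task: str, semantics: str, dataset: str, formalization: str | None = None) -> str:
    """Build a PLSemanticsBench config name from its parts (illustrative helper)."""
    parts = [task, "IMP"]
    if formalization:  # omitted for the no-semantics (nk) configs
        parts.append(formalization)
    parts += [semantics, dataset]
    return "-".join(parts)

# e.g. "predstate-IMP-SOS-mk-human-written"
name = config_name("predstate", "mk", "human-written", formalization="SOS")
predstate_IMP_SOS_mk = load_dataset("EngineeringSoftware/PLSemanticsBench", name=name)
```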
### Data Example
A single datapoint from the dataset looks like this:
```json
{
"program": "int ans; ans = 1; ...",
"syntax": "<program> :: ...",
"semantics": "ℤ := Set of integers ...",
"mutated-program": "int ans; ans = 1; ...",
"mutation-pattern": "KeyWordSwap",
"exec-trace": [
{
"linenumber": 1,
"rule": ["Rule 38", "Rule 39"],
"state": {"ans": 1}
}
],
"ground-truth": "<answer>...</answer>"
}
```
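The fields can be accessed like any other `datasets` row. For instance, to walk through the execution trace of the first example (a sketch assuming a PredTrace config; the available fields may differ across tasks):
```python
from datasets import load_dataset

dataset = load_dataset("EngineeringSoftware/PLSemanticsBench", name="predtrace-IMP-K-uk-human-written")
example = next(iter(dataset.values()))[0]   # first row of whichever split is present

print(example["mutation-pattern"])
for step in example["exec-trace"]:
    print(step["linenumber"], step["rule"], step["state"])
```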
## Citation
```bibtex
@article{ThimmaiahETAL25PLSemanticsBench,
  title={PLSemanticsBench: Large Language Models As Programming Language Interpreters},
  author={Aditya Thimmaiah and Jiyang Zhang and Jayanth Srinivasa and Junyi Jessy Li and Milos Gligoric},
  year={2025},
  eprint={2510.03415},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2510.03415},
}
```
## License
This project is licensed under the CC0-1.0 License.