Unlocking Implicit Experience: Synthesizing Tool-Use Trajectories from Text
Abstract
A text-based data synthesis pipeline (GEM) generates multi-turn tool-use trajectories for large language models from text corpora, improving BFCL V3 Multi-turn performance by 16.5% while a distilled Trajectory Synthesizer reduces generation cost and latency.
Enabling Large Language Models (LLMs) to effectively utilize tools in multi-turn interactions is essential for building capable autonomous agents. However, acquiring diverse and realistic multi-turn tool-use data remains a significant challenge. In this work, we propose a novel text-based paradigm. We observe that textual corpora naturally contain rich, multi-step problem-solving experiences, which can serve as an untapped, scalable, and authentic data source for multi-turn tool-use tasks. Based on this insight, we introduce GEM, a data synthesis pipeline that enables the generation and extraction of multi-turn tool-use trajectories from text corpora through a four-stage process: relevance filtering, workflow & tool extraction, trajectory grounding, and complexity refinement. To reduce the computational cost, we further train a specialized Trajectory Synthesizer via supervised fine-tuning. This model distills the complex generation pipeline into an efficient, end-to-end trajectory generator. Experiments demonstrate that our GEM-32B achieves a 16.5% improvement on the BFCL V3 Multi-turn benchmark. Our models partially surpass the performance of models trained on τ-bench (Airline and Retail) in-domain data, highlighting the superior generalization capability derived from our text-based synthesis paradigm. Notably, our Trajectory Synthesizer matches the quality of the full pipeline while significantly reducing inference latency and costs.
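To make the four-stage process concrete, the sketch below lays the stages out as a sequential program. It is only an illustration under assumptions: every function name, placeholder heuristic, and data shape is invented for readability, whereas the paper's actual stages are driven by prompted LLM calls rather than the toy logic shown here.

```python
# Minimal sketch of GEM's four stages (relevance filtering, workflow & tool
# extraction, trajectory grounding, complexity refinement). All bodies are
# hypothetical placeholders; the real stages are LLM-prompted, not heuristics.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Trajectory:
    tools: List[Dict]                                  # extracted tool (API) specs
    turns: List[Dict] = field(default_factory=list)    # multi-turn messages


def relevance_filter(doc: str) -> bool:
    """Stage 1: keep only texts that describe multi-step problem solving."""
    return any(k in doc.lower() for k in ("first", "then", "next", "finally"))


def extract_workflow_and_tools(doc: str) -> Dict:
    """Stage 2: recover the implicit workflow and a matching tool set."""
    steps = [s.strip() for s in doc.split(".") if s.strip()]
    tools = [{"name": f"tool_{i}", "description": s} for i, s in enumerate(steps)]
    return {"workflow": steps, "tools": tools}


def ground_trajectory(spec: Dict) -> Trajectory:
    """Stage 3: ground the workflow into concrete tool calls and observations."""
    traj = Trajectory(tools=spec["tools"])
    for step, tool in zip(spec["workflow"], spec["tools"]):
        traj.turns.append({"role": "assistant",
                           "tool_call": {"name": tool["name"], "arguments": {}}})
        traj.turns.append({"role": "tool", "content": f"result of: {step}"})
    return traj


def refine_complexity(traj: Trajectory) -> Trajectory:
    """Stage 4: increase difficulty, e.g. by adding user turns (placeholder)."""
    traj.turns.insert(0, {"role": "user", "content": "Please handle this task step by step."})
    return traj


def gem_pipeline(corpus: List[str]) -> List[Trajectory]:
    """Run the four stages over a text corpus and collect trajectories."""
    results = []
    for doc in corpus:
        if relevance_filter(doc):
            results.append(refine_complexity(ground_trajectory(extract_workflow_and_tools(doc))))
    return results


if __name__ == "__main__":
    corpus = ["First check the order status. Then issue the refund. Finally notify the customer."]
    for traj in gem_pipeline(corpus):
        print(traj.turns)
```

In the paper's setting, the distilled Trajectory Synthesizer would replace the multi-stage loop above with a single end-to-end generation pass, which is what makes it cheaper and faster than the full pipeline.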
Community
We propose a novel "Text to Trajectory" paradigm to address the scarcity of the multi-turn tool-use trajectory data needed to train agents. Traditional methods rely on predefined API sets to synthesize data, which limits tool coverage and is costly. We observe that text corpora naturally contain rich multi-step problem-solving experiences that can be extracted and transformed into realistic, scalable, and high-quality multi-turn tool-use data. Based on this insight, we develop a pipeline called GEM that automatically generates and extracts multi-turn tool-use trajectories to validate this paradigm.
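As an illustration of what one synthesized multi-turn tool-use trajectory could look like, here is a hypothetical record in a common messages-plus-tool-calls layout; the schema, tool names, and values are assumptions for this post, not the format defined by GEM.

```python
import json

# Hypothetical example of a single multi-turn tool-use trajectory. The schema,
# tool names, and values below are illustrative assumptions, not GEM's format.
example_trajectory = {
    "tools": [
        {"name": "search_flights", "description": "Search flights by origin, destination, and date"},
        {"name": "book_flight", "description": "Book a flight given a flight id and passenger name"},
    ],
    "messages": [
        {"role": "user", "content": "Find a flight from Boston to Denver on May 3 and book it."},
        {"role": "assistant", "tool_call": {"name": "search_flights",
                                            "arguments": {"origin": "BOS", "destination": "DEN", "date": "2025-05-03"}}},
        {"role": "tool", "name": "search_flights", "content": "[{\"flight_id\": \"UA123\", \"price\": 210}]"},
        {"role": "assistant", "content": "I found UA123 for $210. What name should I book it under?"},
        {"role": "user", "content": "Book it for Alex Kim."},
        {"role": "assistant", "tool_call": {"name": "book_flight",
                                            "arguments": {"flight_id": "UA123", "passenger": "Alex Kim"}}},
        {"role": "tool", "name": "book_flight", "content": "{\"status\": \"confirmed\"}"},
    ],
}

print(json.dumps(example_trajectory, indent=2))
```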
arXiv explained breakdown of this paper: https://arxivexplained.com/papers/unlocking-implicit-experience-synthesizing-tool-use-trajectories-from-text
arXivlens breakdown of this paper: https://arxivlens.com/PaperView/Details/unlocking-implicit-experience-synthesizing-tool-use-trajectories-from-text-2185-1a09527b
- Executive Summary
- Detailed Breakdown
- Practical Applications
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- User-Oriented Multi-Turn Dialogue Generation with Tool Use at scale (2026)
- From Failure to Mastery: Generating Hard Samples for Tool-use Agents (2026)
- Close the Loop: Synthesizing Infinite Tool-Use Data via Multi-Agent Role-Playing (2025)
- GTM: Simulating the World of Tools for AI Agents (2025)
- ToolGym: an Open-world Tool-using Environment for Scalable Agent Testing and Data Curation (2026)
- AMAP Agentic Planning Technical Report (2025)
- Beyond Single-Shot: Multi-step Tool Retrieval via Query Planning (2026)