VibecoderMcSwaggins committed on
Commit
3d25956
·
1 Parent(s): 5cac97d

docs: add SPEC_07 LangGraph Memory Architecture + update bug docs

Browse files

NEW DOCS:
- SPEC_07_LANGGRAPH_MEMORY_ARCH.md: Ironclad spec for structured
cognitive memory using LangGraph (Nov 2025 best practices)
- P3_ARCHITECTURAL_GAP_STRUCTURED_MEMORY.md: Bug report documenting
missing hypothesis/conflict tracking in AdvancedOrchestrator

Based on deep codebase audit and web search (Nov 2025):
- LangGraph chosen over Mem0 for orchestration (Mem0 better for personalization)
- Works with HuggingFace Inference API (Llama 3.1) - no OpenAI required
- Includes SQLite checkpointer for dev, MongoDB for prod

UPDATED:
- ACTIVE_BUGS.md: Added P3 architecture gaps, marked P0 Simple Mode as FIXED

docs/bugs/ACTIVE_BUGS.md CHANGED
@@ -4,29 +4,40 @@
4
 
5
  ## P0 - Blocker
6
 
7
- ### P0 - Simple Mode Never Synthesizes
8
- **File:** `P0_SIMPLE_MODE_NEVER_SYNTHESIZES.md`
9
 
10
- **Symptom:** Simple mode finds 455 sources but outputs only citations (no synthesis).
11
 
12
- **Root Causes:**
13
- 1. Judge never recommends "synthesize" (prompt too conservative)
14
- 2. Confidence drops to 0% in late iterations (context overflow / API failure)
15
- 3. Search derails to tangential topics (bone health instead of libido)
16
- 4. `_generate_partial_synthesis()` outputs garbage (just citations, no analysis)
17
 
18
- **Status:** Documented, fix plan ready.
 
 
19
 
20
- ---
 
 
21
 
22
- ## P3 - Edge Case
 
23
 
24
- *(None)*
 
 
25
 
26
  ---
27
 
28
  ## Resolved Bugs
29
 
30
  ### ~~P3 - Magentic Mode Missing Termination Guarantee~~ FIXED
31
  **Commit**: `d36ce3c` (2025-11-29)
32
 
 
4
 
5
  ## P0 - Blocker
6
 
7
+ *(None - P0 bugs resolved)*
 
8
 
9
+ ---
10
 
11
+ ## P3 - Architecture/Enhancement
12
 
13
+ ### P3 - Missing Structured Cognitive Memory
14
+ **File:** `P3_ARCHITECTURAL_GAP_STRUCTURED_MEMORY.md`
15
+ **Spec:** [SPEC_07_LANGGRAPH_MEMORY_ARCH.md](../specs/SPEC_07_LANGGRAPH_MEMORY_ARCH.md)
16
 
17
+ **Problem:** `AdvancedOrchestrator` keeps research state implicitly in chat history, causing context drift on long runs.
18
+ **Solution:** Implement LangGraph StateGraph with explicit hypothesis/conflict tracking.
19
+ **Status:** Spec complete, implementation pending.
20
 
21
+ ### P3 - Ephemeral Memory (No Persistence)
22
+ **File:** `P3_ARCHITECTURAL_GAP_EPHEMERAL_MEMORY.md`
23
 
24
+ **Problem:** ChromaDB uses in-memory client despite `settings.chroma_db_path` existing.
25
+ **Solution:** Switch to `PersistentClient(path=settings.chroma_db_path)`.
26
+ **Status:** Quick fix identified, not yet implemented.
27
 
28
  ---
29
 
30
  ## Resolved Bugs
31
 
32
+ ### ~~P0 - Simple Mode Never Synthesizes~~ FIXED
33
+ **PR:** [#71](https://github.com/The-Obstacle-Is-The-Way/DeepBoner/pull/71) (SPEC_06)
34
+ **Commit**: `5cac97d` (2025-11-29)
35
+
36
+ - Root cause: LLM-as-Judge recommendations were being IGNORED
37
+ - Fix: Code-enforced termination criteria (`_should_synthesize()`)
38
+ - Added combined score thresholds, late-iteration logic, emergency fallback
39
+ - Simple mode now synthesizes instead of spinning forever
40
+
41
  ### ~~P3 - Magentic Mode Missing Termination Guarantee~~ FIXED
42
  **Commit**: `d36ce3c` (2025-11-29)
43
 
docs/bugs/P3_ARCHITECTURAL_GAP_STRUCTURED_MEMORY.md ADDED
@@ -0,0 +1,148 @@
1
+ # P3: Missing Structured Cognitive Memory (Shared Blackboard)
2
+
3
+ **Status:** OPEN
4
+ **Priority:** P3 (Architecture/Enhancement)
5
+ **Found By:** Deep Codebase Investigation
6
+ **Date:** 2025-11-29
7
+ **Spec:** [SPEC_07_LANGGRAPH_MEMORY_ARCH.md](../specs/SPEC_07_LANGGRAPH_MEMORY_ARCH.md)
8
+
9
+ ## Executive Summary
10
+
11
+ DeepBoner's `AdvancedOrchestrator` has **Data Memory** (vector store for papers) but lacks **Cognitive Memory** (structured state for hypotheses, conflicts, and research plan). This causes "context drift" on long runs and prevents intelligent conflict resolution.
12
+
13
+ ---
14
+
15
+ ## Current Architecture (What We Have)
16
+
17
+ ### 1. MagenticState (`src/agents/state.py:18-91`)
18
+ ```python
19
+ class MagenticState(BaseModel):
+     evidence: list[Evidence] = Field(default_factory=list)
+     embedding_service: Any = None  # ChromaDB connection
+
+     def add_evidence(self, new_evidence: list[Evidence]) -> int: ...
+     async def search_related(self, query: str, n_results: int = 5) -> list[Evidence]: ...
25
+ ```
26
+ - **What it does:** Stores Evidence objects, deduplicates by URL, and provides semantic search via embeddings.
27
+ - **What it DOESN'T do:** Track hypotheses, conflicts, or research plan status.
28
+
29
+ ### 2. EmbeddingService (`src/services/embeddings.py:29-180`)
30
+ ```python
31
+ self._client = chromadb.Client()  # In-memory (Line 44)
+ self._collection = self._client.create_collection(
+     name=f"evidence_{uuid.uuid4().hex}",  # Random name per session (Lines 45-47)
+     ...
+ )
36
+ ```
37
+ - **What it does:** In-session semantic search/deduplication.
38
+ - **Limitation:** New collection per session, no persistence despite `settings.chroma_db_path` existing.
39
+
40
+ ### 3. AdvancedOrchestrator (`src/orchestrators/advanced.py:51-371`)
41
+ - Uses Microsoft's `agent-framework-core` (MagenticBuilder)
42
+ - State is implicit in chat history passed between agents
43
+ - Manager decides next step by reading conversation, not structured state
44
+
45
+ ---
46
+
47
+ ## The Problem
48
+
49
+ | Issue | Impact | Evidence |
50
+ |-------|--------|----------|
51
+ | **No Hypothesis Tracking** | Can't update hypothesis confidence systematically | `MagenticState` has no `hypotheses` field |
52
+ | **No Conflict Detection** | Contradictory sources are ignored | No `conflicts` list to flag Source A vs Source B |
53
+ | **Context Drift** | Manager forgets original query after 50+ messages | State lives only in chat, not structured object |
54
+ | **No Plan State** | Can't pause/resume research | No `research_plan` or `next_step` tracking |
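+
+ For example, a detected contradiction would be captured as a `Conflict` record on the blackboard (illustrative values only; the schema itself is defined under Target Architecture below):
+
+ ```python
+ conflict = {
+     "id": "c1",
+     "description": "Source A reports improved libido; Source B reports no effect",
+     "source_a_id": "pmid:111",
+     "source_b_id": "pmid:222",
+     "status": "open",
+     "resolution": None,
+ }
+ ```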
55
+
56
+ ---
57
+
58
+ ## The Solution: LangGraph State Graph (Nov 2025 Best Practice)
59
+
60
+ ### Why LangGraph?
61
+
62
+ Based on [comprehensive analysis](https://latenode.com/blog/langgraph-multi-agent-orchestration-complete-framework-guide-architecture-analysis-2025):
63
+
64
+ 1. **Explicit State Schema:** TypedDict/Pydantic model that ALL agents read/write
65
+ 2. **State Reducers:** `Annotated[List[X], operator.add]` for appending (not overwriting)
66
+ 3. **HuggingFace Compatible:** Works with `langchain-huggingface` (Llama 3.1)
67
+ 4. **Production-Ready:** MongoDB checkpointer for persistence, SQLite for dev
68
+
69
+ ### Target Architecture
70
+
71
+ ```python
72
+ # src/agents/graph/state.py (PROPOSED)
73
+ from typing import Annotated, TypedDict, Literal
+ import operator
+
+ from langchain_core.messages import BaseMessage
+
+ class Hypothesis(TypedDict):
+     id: str
+     statement: str
+     status: Literal["proposed", "validating", "confirmed", "refuted"]
+     confidence: float
+     supporting_evidence_ids: list[str]
+     contradicting_evidence_ids: list[str]
+
+ class Conflict(TypedDict):
+     id: str
+     description: str
+     source_a_id: str
+     source_b_id: str
+     status: Literal["open", "resolved"]
+     resolution: str | None
+
+ class ResearchState(TypedDict):
+     query: str  # Immutable original question
+     hypotheses: Annotated[list[Hypothesis], operator.add]
+     conflicts: Annotated[list[Conflict], operator.add]
+     evidence_ids: Annotated[list[str], operator.add]  # Links to ChromaDB
+     messages: Annotated[list[BaseMessage], operator.add]
+     next_step: Literal["search", "judge", "resolve", "synthesize", "finish"]
+     iteration_count: int
100
+ ```
101
+
102
+ ---
103
+
104
+ ## Implementation Dependencies
105
+
106
+ | Package | Purpose | Install |
107
+ |---------|---------|---------|
108
+ | `langgraph>=0.2` | State graph framework | `uv add langgraph` |
109
+ | `langchain>=0.3` | Base abstractions | `uv add langchain` |
110
+ | `langchain-huggingface` | Llama 3.1 integration | `uv add langchain-huggingface` |
111
+ | `langgraph-checkpoint-sqlite` | Dev persistence | `uv add langgraph-checkpoint-sqlite` |
112
+
113
+ **Note:** MongoDB checkpointer (`langgraph-checkpoint-mongodb`) recommended for production per [MongoDB blog](https://www.mongodb.com/company/blog/product-release-announcements/powering-long-term-memory-for-agents-langgraph).
114
+
115
+ ---
116
+
117
+ ## Alternative Considered: Mem0
118
+
119
+ [Mem0](https://mem0.ai/) specializes in long-term memory and [outperformed OpenAI by 26%](https://guptadeepak.com/the-ai-memory-wars-why-one-system-crushed-the-competition-and-its-not-openai/) in benchmarks. However:
120
+
121
+ - **Mem0 excels at:** User personalization, cross-session memory
122
+ - **LangGraph excels at:** Workflow orchestration, state machines
123
+ - **Verdict:** Use LangGraph for orchestration + optionally add Mem0 for user-level memory later
124
+
125
+ ---
126
+
127
+ ## Quick Win (Separate from LangGraph)
128
+
129
+ Enable ChromaDB persistence in `src/services/embeddings.py:44`:
130
+ ```python
131
+ # FROM:
132
+ self._client = chromadb.Client() # In-memory
133
+
134
+ # TO:
135
+ self._client = chromadb.PersistentClient(path=settings.chroma_db_path)
136
+ ```
137
+
138
+ This alone gives cross-session evidence persistence (P3_ARCHITECTURAL_GAP_EPHEMERAL_MEMORY fix).
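+
+ A hedged sketch of the full quick win, additionally replacing the random per-session collection name so evidence can be found again on the next run (assumes `settings.chroma_db_path` points at a writable directory and that `settings` is the app config object; the collection name is illustrative):
+
+ ```python
+ import chromadb
+
+ # Persist embeddings on disk instead of holding them in memory
+ self._client = chromadb.PersistentClient(path=settings.chroma_db_path)
+
+ # Reuse one stable collection across sessions instead of a random per-session name
+ self._collection = self._client.get_or_create_collection(name="evidence")
+ ```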
139
+
140
+ ---
141
+
142
+ ## References
143
+
144
+ - [LangGraph Multi-Agent Orchestration Guide 2025](https://latenode.com/blog/langgraph-multi-agent-orchestration-complete-framework-guide-architecture-analysis-2025)
145
+ - [Long-Term Agentic Memory with LangGraph](https://medium.com/@anil.jain.baba/long-term-agentic-memory-with-langgraph-824050b09852)
146
+ - [LangGraph vs LangChain 2025](https://kanerika.com/blogs/langchain-vs-langgraph/)
147
+ - [MongoDB + LangGraph Checkpointers](https://www.mongodb.com/company/blog/product-release-announcements/powering-long-term-memory-for-agents-langgraph)
148
+ - [Mem0 + LangGraph Integration](https://datacouch.io/blog/build-smarter-ai-agents-mem0-langgraph-guide/)
docs/specs/SPEC_07_LANGGRAPH_MEMORY_ARCH.md ADDED
@@ -0,0 +1,492 @@
1
+ # SPEC-07: Structured Cognitive Memory Architecture (LangGraph)
2
+
3
+ **Status:** APPROVED
4
+ **Priority:** HIGH (Strategic)
5
+ **Author:** DeepBoner Architecture Team
6
+ **Date:** 2025-11-29
7
+ **Last Updated:** 2025-11-29
8
+ **Related Bugs:** [P3_ARCHITECTURAL_GAP_STRUCTURED_MEMORY](../bugs/P3_ARCHITECTURAL_GAP_STRUCTURED_MEMORY.md)
9
+
10
+ ---
11
+
12
+ ## 1. Executive Summary
13
+
14
+ Upgrade DeepBoner's "Advanced Mode" from chat-based coordination to a **State-Driven Cognitive Architecture** using LangGraph. This enables:
15
+ - Explicit hypothesis tracking with confidence scores
16
+ - Automatic conflict detection and resolution
17
+ - Persistent research state (pause/resume)
18
+ - Context-aware decision making over long runs
19
+
20
+ ---
21
+
22
+ ## 2. Problem Statement
23
+
24
+ ### Current Architecture Limitations
25
+
26
+ The `AdvancedOrchestrator` (`src/orchestrators/advanced.py`) uses Microsoft's `agent-framework-core` with chat-based coordination:
27
+
28
+ ```python
29
+ # Current: State is IMPLICIT (chat history)
30
+ workflow = (
+     MagenticBuilder()
+     .participants(searcher=..., judge=..., ...)
+     .with_standard_manager(chat_client=..., max_round_count=10)
+     .build()
+ )
34
+ ```
35
+
36
+ | Problem | Root Cause | File Location |
37
+ |---------|------------|---------------|
38
+ | Context Drift | State lives only in chat messages | `advanced.py:126-132` |
39
+ | Conflict Blindness | No structured conflict tracking | `state.py` (no `conflicts` field) |
40
+ | No Hypothesis Management | `MagenticState` only tracks `evidence` | `state.py:21` |
41
+ | Can't Pause/Resume | No checkpointing mechanism | N/A |
42
+
43
+ ### Evidence from Codebase
44
+
45
+ **MagenticState (src/agents/state.py:18-26):**
46
+ ```python
47
+ class MagenticState(BaseModel):
48
+     evidence: list[Evidence] = Field(default_factory=list)
+     embedding_service: Any = None  # Just data, no cognitive state
50
+ ```
51
+
52
+ **EmbeddingService (src/services/embeddings.py:44-47):**
53
+ ```python
54
+ self._client = chromadb.Client() # In-memory only
55
+ self._collection = self._client.create_collection(
56
+ name=f"evidence_{uuid.uuid4().hex}", # Random name = ephemeral
57
+ ...
58
+ )
59
+ ```
60
+
61
+ ---
62
+
63
+ ## 3. Solution: LangGraph State Graph
64
+
65
+ ### Why LangGraph? (November 2025 Analysis)
66
+
67
+ Based on [comprehensive framework comparison](https://kanerika.com/blogs/langchain-vs-langgraph/):
68
+
69
+ | Feature | `agent-framework-core` (Current) | LangGraph (Proposed) |
70
+ |---------|----------------------------------|----------------------|
71
+ | State Management | Implicit (chat) | Explicit (TypedDict) |
72
+ | Loops/Branches | Limited | Native support |
73
+ | Checkpointing | None | SQLite/MongoDB |
74
+ | HuggingFace | Requires OpenAI format | Native `langchain-huggingface` |
75
+
76
+ ### Architecture Overview
77
+
78
+ ```
79
+ ┌─────────────────────────────────────────────────────────────────┐
80
+ │ ResearchState │
81
+ │ ┌─────────────┬──────────────┬───────────────┬──────────────┐ │
82
+ │ │ query │ hypotheses │ conflicts │ next_step │ │
83
+ │ │ (string) │ (list) │ (list) │ (enum) │ │
84
+ │ └─────────────┴──────────────┴───────────────┴──────────────┘ │
85
+ └─────────────────────────────────────────────────────────────────┘
86
+
87
+
88
+ ┌─────────────────────────────────────────────────────────────────┐
89
+ │ StateGraph │
90
+ │ │
91
+ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
92
+ │ │ SEARCH │────▶│ JUDGE │────▶│ RESOLVE │ │
93
+ │ │ Node │ │ Node │ │ Node │ │
94
+ │ └──────────┘ └──────────┘ └──────────┘ │
95
+ │ ▲ │ │ │
96
+ │ │ ▼ │ │
97
+ │ │ ┌──────────┐ │ │
98
+ │ └──────────│SUPERVISOR│◀──────────┘ │
99
+ │ │ Node │ │
100
+ │ └──────────┘ │
101
+ │ │ │
102
+ │ ▼ │
103
+ │ ┌──────────┐ │
104
+ │ │SYNTHESIZE│ │
105
+ │ │ Node │ │
106
+ │ └──────────┘ │
107
+ └─────────────────────────────────────────────────────────────────┘
108
+ ```
109
+
110
+ ---
111
+
112
+ ## 4. Technical Specification
113
+
114
+ ### 4.1 State Schema
115
+
116
+ **File:** `src/agents/graph/state.py`
117
+
118
+ ```python
119
+ """Structured state for LangGraph research workflow."""
120
+ from typing import Annotated, TypedDict, Literal
121
+ import operator
122
+ from langchain_core.messages import BaseMessage
123
+
124
+
125
+ class Hypothesis(TypedDict):
+     """A research hypothesis with evidence tracking."""
+     id: str
+     statement: str
+     status: Literal["proposed", "validating", "confirmed", "refuted"]
+     confidence: float  # 0.0 - 1.0
+     supporting_evidence_ids: list[str]
+     contradicting_evidence_ids: list[str]
+
+
+ class Conflict(TypedDict):
+     """A detected contradiction between sources."""
+     id: str
+     description: str
+     source_a_id: str
+     source_b_id: str
+     status: Literal["open", "resolved"]
+     resolution: str | None
+
+
+ class ResearchState(TypedDict):
+     """The cognitive state shared across all graph nodes.
+
+     Uses Annotated with operator.add for list fields to enable
+     additive updates (append) rather than replacement.
+     """
+     # Immutable context
+     query: str
+
+     # Cognitive state (the "blackboard")
+     hypotheses: Annotated[list[Hypothesis], operator.add]
+     conflicts: Annotated[list[Conflict], operator.add]
+
+     # Evidence links (actual content in ChromaDB)
+     evidence_ids: Annotated[list[str], operator.add]
+
+     # Chat history (for LLM context)
+     messages: Annotated[list[BaseMessage], operator.add]
+
+     # Control flow
+     next_step: Literal["search", "judge", "resolve", "synthesize", "finish"]
+     iteration_count: int
+     max_iterations: int
168
+ ```
169
+
170
+ ### 4.2 Graph Nodes
171
+
172
+ Each node is a function of the shared state that returns a partial state update: `(state: ResearchState) -> dict`
173
+
174
+ **File:** `src/agents/graph/nodes.py`
175
+
176
+ ```python
177
+ """Graph node implementations."""
178
+ import asyncio
+
+ from langchain_core.messages import HumanMessage, AIMessage
+
+ from src.agents.graph.state import ResearchState
179
+ from src.tools.pubmed import search_pubmed
180
+ from src.tools.clinicaltrials import search_clinicaltrials
181
+ from src.tools.europepmc import search_europepmc
182
+
183
+
184
+ async def search_node(state: ResearchState) -> dict:
185
+ """Execute search across all sources.
186
+
187
+ Returns partial state update (additive via operator.add).
188
+ """
189
+ query = state["query"]
190
+ # Reuse existing tools
191
+ results = await asyncio.gather(
192
+ search_pubmed(query),
193
+ search_clinicaltrials(query),
194
+ search_europepmc(query),
195
+ )
196
+ new_evidence_ids = [...] # Store in ChromaDB, return IDs
197
+ return {
198
+ "evidence_ids": new_evidence_ids,
199
+ "messages": [AIMessage(content=f"Found {len(new_evidence_ids)} papers")],
200
+ }
201
+
202
+
203
+ async def judge_node(state: ResearchState) -> dict:
204
+ """Evaluate evidence and update hypothesis confidence.
205
+
206
+ Key responsibility: Detect conflicts and flag them.
207
+ """
208
+ # LLM call to evaluate hypotheses against evidence
209
+ # If contradiction found: add to conflicts list
210
+ return {
211
+ "hypotheses": updated_hypotheses, # With new confidence scores
212
+ "conflicts": new_conflicts, # Any detected contradictions
213
+ "messages": [...],
214
+ }
215
+
216
+
217
+ async def resolve_node(state: ResearchState) -> dict:
218
+ """Handle open conflicts via tie-breaker logic.
219
+
220
+ Triggers targeted search or reasoning to resolve.
221
+ """
222
+ open_conflicts = [c for c in state["conflicts"] if c["status"] == "open"]
223
+ # For each conflict: search for decisive evidence or make judgment call
224
+ return {
225
+ "conflicts": resolved_conflicts,
226
+ "messages": [...],
227
+ }
228
+
229
+
230
+ async def synthesize_node(state: ResearchState) -> dict:
231
+ """Generate final research report.
232
+
233
+ Only uses confirmed hypotheses and resolved conflicts.
234
+ """
235
+ confirmed = [h for h in state["hypotheses"] if h["status"] == "confirmed"]
236
+ # Generate structured report
237
+ return {
238
+ "messages": [AIMessage(content=report_markdown)],
239
+ "next_step": "finish",
240
+ }
241
+
242
+
243
+ def supervisor_node(state: ResearchState) -> dict:
244
+ """Route to next node based on state.
245
+
246
+ This is the "brain" - uses LLM to decide next action
247
+ based on STRUCTURED STATE (not just chat).
248
+ """
249
+ # Decision logic:
250
+ # 1. If open conflicts exist -> "resolve"
251
+ # 2. If hypotheses need more evidence -> "search"
252
+ # 3. If evidence is sufficient -> "judge"
253
+ # 4. If all hypotheses confirmed -> "synthesize"
254
+ # 5. If max iterations -> "synthesize" (forced)
255
+ return {"next_step": decided_step, "iteration_count": state["iteration_count"] + 1}
256
+ ```
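+
+ The supervisor's decision logic above is only sketched in comments. One possible deterministic reading of those five rules is shown below; the helper name `_needs_more_evidence` and the 0.6 threshold are illustrative assumptions, not part of this spec, and an LLM call could replace any individual rule:
+
+ ```python
+ from src.agents.graph.state import ResearchState
+
+
+ def _route(state: ResearchState) -> str:
+     """Apply the five supervisor rules in priority order (sketch)."""
+     if state["iteration_count"] >= state["max_iterations"]:
+         return "synthesize"  # Rule 5: forced synthesis at the iteration cap
+     if any(c["status"] == "open" for c in state["conflicts"]):
+         return "resolve"  # Rule 1: open conflicts take priority
+     if not state["evidence_ids"] or _needs_more_evidence(state):
+         return "search"  # Rule 2: gather more evidence
+     if any(h["status"] in ("proposed", "validating") for h in state["hypotheses"]):
+         return "judge"  # Rule 3: evaluate evidence against hypotheses
+     return "synthesize"  # Rule 4: all hypotheses confirmed or refuted
+
+
+ def _needs_more_evidence(state: ResearchState) -> bool:
+     # Placeholder heuristic; a real implementation could weigh confidence scores
+     return any(h["confidence"] < 0.6 for h in state["hypotheses"])
+ ```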
257
+
258
+ ### 4.3 Graph Definition
259
+
260
+ **File:** `src/agents/graph/workflow.py`
261
+
262
+ ```python
263
+ """LangGraph workflow definition."""
264
+ from langgraph.graph import StateGraph, END
265
+ from langgraph.checkpoint.sqlite import SqliteSaver
266
+
267
+ from src.agents.graph.state import ResearchState
268
+ from src.agents.graph.nodes import (
269
+ search_node,
270
+ judge_node,
271
+ resolve_node,
272
+ synthesize_node,
273
+ supervisor_node,
274
+ )
275
+
276
+
277
+ def create_research_graph(checkpointer=None):
278
+ """Build the research state graph.
279
+
280
+ Args:
281
+ checkpointer: Optional SqliteSaver/MongoDBSaver for persistence
282
+ """
283
+ graph = StateGraph(ResearchState)
284
+
285
+ # Add nodes
286
+ graph.add_node("supervisor", supervisor_node)
287
+ graph.add_node("search", search_node)
288
+ graph.add_node("judge", judge_node)
289
+ graph.add_node("resolve", resolve_node)
290
+ graph.add_node("synthesize", synthesize_node)
291
+
292
+ # Define edges (supervisor routes based on state.next_step)
293
+ graph.add_edge("search", "supervisor")
294
+ graph.add_edge("judge", "supervisor")
295
+ graph.add_edge("resolve", "supervisor")
296
+ graph.add_edge("synthesize", END)
297
+
298
+ # Conditional routing from supervisor
299
+ graph.add_conditional_edges(
300
+ "supervisor",
301
+ lambda state: state["next_step"],
302
+ {
303
+ "search": "search",
304
+ "judge": "judge",
305
+ "resolve": "resolve",
306
+ "synthesize": "synthesize",
307
+ "finish": END,
308
+ },
309
+ )
310
+
311
+ # Entry point
312
+ graph.set_entry_point("supervisor")
313
+
314
+ return graph.compile(checkpointer=checkpointer)
315
+ ```
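+
+ A minimal usage sketch of the compiled graph with checkpointing (assumes the context-manager form of `SqliteSaver.from_conn_string` from `langgraph-checkpoint-sqlite`; the database filename, thread id, and query are illustrative):
+
+ ```python
+ from langgraph.checkpoint.sqlite import SqliteSaver
+
+ from src.agents.graph.workflow import create_research_graph
+
+ with SqliteSaver.from_conn_string("research_checkpoints.db") as checkpointer:
+     app = create_research_graph(checkpointer)
+     config = {"configurable": {"thread_id": "run-001"}}  # one thread per research session
+
+     initial_state = {
+         "query": "testosterone therapy and libido",
+         "hypotheses": [],
+         "conflicts": [],
+         "evidence_ids": [],
+         "messages": [],
+         "next_step": "search",
+         "iteration_count": 0,
+         "max_iterations": 10,
+     }
+
+     # Each superstep is checkpointed under the thread_id; invoking again with the
+     # same config resumes from the last saved state instead of starting over.
+     final_state = app.invoke(initial_state, config=config)
+ ```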
316
+
317
+ ### 4.4 Orchestrator Integration
318
+
319
+ **File:** `src/orchestrators/langgraph_orchestrator.py`
320
+
321
+ ```python
322
+ """LangGraph-based orchestrator with structured state."""
323
+ from collections.abc import AsyncGenerator
324
+ from langgraph.checkpoint.sqlite.aio import AsyncSqliteSaver
325
+
326
+ from src.agents.graph.workflow import create_research_graph
327
+ from src.agents.graph.state import ResearchState
328
+ from src.orchestrators.base import OrchestratorProtocol
329
+ from src.utils.models import AgentEvent
330
+
331
+
332
+ class LangGraphOrchestrator(OrchestratorProtocol):
333
+ """State-driven research orchestrator using LangGraph."""
334
+
335
+ def __init__(
336
+ self,
337
+ max_iterations: int = 10,
338
+ checkpoint_path: str | None = None,
339
+ ):
340
+ self._max_iterations = max_iterations
341
+ self._checkpoint_path = checkpoint_path
342
+
343
+ async def run(self, query: str) -> AsyncGenerator[AgentEvent, None]:
344
+ """Execute research workflow with structured state."""
345
+ # Setup checkpointer (SQLite for dev, MongoDB for prod)
346
+ checkpointer = None
347
+ if self._checkpoint_path:
348
+ checkpointer = AsyncSqliteSaver.from_conn_string(self._checkpoint_path)
349
+
350
+ graph = create_research_graph(checkpointer)
351
+
352
+ # Initialize state
353
+ initial_state: ResearchState = {
354
+ "query": query,
355
+ "hypotheses": [],
356
+ "conflicts": [],
357
+ "evidence_ids": [],
358
+ "messages": [],
359
+ "next_step": "search",
360
+ "iteration_count": 0,
361
+ "max_iterations": self._max_iterations,
362
+ }
363
+
364
+ yield AgentEvent(type="started", message=f"Starting research: {query}")
365
+
366
+ # Stream through graph
367
+ async for event in graph.astream(initial_state):
368
+ # Convert graph events to AgentEvents
369
+ yield self._convert_event(event)
370
+ ```
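+
+ `_convert_event` is referenced above but not specified. One possible shape for the method on `LangGraphOrchestrator`, assuming `graph.astream(...)` yields one `{node_name: partial_state_update}` mapping per completed node (the `"progress"`/`"completed"` event types are illustrative, not an existing `AgentEvent` contract):
+
+ ```python
+     def _convert_event(self, event: dict) -> AgentEvent:
+         """Map one LangGraph stream event to a UI AgentEvent (sketch)."""
+         node_name, update = next(iter(event.items()))
+         if node_name == "synthesize":
+             return AgentEvent(type="completed", message=update["messages"][-1].content)
+         return AgentEvent(type="progress", message=f"Node '{node_name}' finished")
+ ```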
371
+
372
+ ---
373
+
374
+ ## 5. Dependencies
375
+
376
+ ### Required Packages
377
+
378
+ ```toml
379
+ # pyproject.toml additions
380
+ [project.optional-dependencies]
381
+ langgraph = [
382
+ "langgraph>=0.2.50",
383
+ "langchain>=0.3.9",
384
+ "langchain-core>=0.3.21",
385
+ "langchain-huggingface>=0.1.2",
386
+ "langgraph-checkpoint-sqlite>=2.0.0",
387
+ ]
388
+ ```
389
+
390
+ ### Installation
391
+
392
+ ```bash
393
+ # Development
394
+ uv add langgraph langchain langchain-huggingface langgraph-checkpoint-sqlite
395
+
396
+ # Production (add MongoDB checkpointer)
397
+ uv add langgraph-checkpoint-mongodb
398
+ ```
399
+
400
+ ### HuggingFace Model Integration
401
+
402
+ ```python
403
+ # Using Llama 3.1 via HuggingFace Inference API
404
+ from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
405
+
406
+ llm = HuggingFaceEndpoint(
407
+ repo_id="meta-llama/Llama-3.1-70B-Instruct",
408
+ task="text-generation",
409
+ max_new_tokens=2048,
410
+ huggingfacehub_api_token=settings.hf_token,
411
+ )
412
+ chat = ChatHuggingFace(llm=llm)
413
+ ```
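+
+ Once wrapped, the model is called through the standard LangChain chat interface, so graph nodes stay provider-agnostic (the prompt content below is illustrative):
+
+ ```python
+ from langchain_core.messages import HumanMessage
+
+ reply = chat.invoke([HumanMessage(content="Summarize the evidence for hypothesis H1.")])
+ print(reply.content)
+ ```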
414
+
415
+ ---
416
+
417
+ ## 6. Implementation Plan (TDD)
418
+
419
+ ### Phase 1: State Schema (2 hours)
420
+
421
+ 1. Create `src/agents/graph/__init__.py`
422
+ 2. Create `src/agents/graph/state.py` with TypedDict schemas
423
+ 3. Write `tests/unit/graph/test_state.py` (see the sketch after this list):
424
+ - Test reducer behavior (operator.add)
425
+ - Test state initialization
426
+ - Test hypothesis/conflict type validation
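+
+ A sketch of the reducer test from step 3 (the stub node and values are illustrative; only the file path comes from the plan above):
+
+ ```python
+ # tests/unit/graph/test_state.py (sketch)
+ from langgraph.graph import END, StateGraph
+
+ from src.agents.graph.state import ResearchState
+
+
+ def test_evidence_ids_reducer_appends_instead_of_replacing():
+     graph = StateGraph(ResearchState)
+     graph.add_node("fake_search", lambda state: {"evidence_ids": ["pmid:2"]})
+     graph.set_entry_point("fake_search")
+     graph.add_edge("fake_search", END)
+
+     result = graph.compile().invoke(
+         {
+             "query": "q",
+             "hypotheses": [],
+             "conflicts": [],
+             "evidence_ids": ["pmid:1"],
+             "messages": [],
+             "next_step": "search",
+             "iteration_count": 0,
+             "max_iterations": 1,
+         }
+     )
+
+     assert result["evidence_ids"] == ["pmid:1", "pmid:2"]  # operator.add appended
+ ```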
427
+
428
+ ### Phase 2: Graph Nodes (4 hours)
429
+
430
+ 1. Create `src/agents/graph/nodes.py`
431
+ 2. Adapt existing tool calls (pubmed, clinicaltrials, europepmc)
432
+ 3. Write `tests/unit/graph/test_nodes.py`:
433
+ - Test each node in isolation (mock LLM)
434
+ - Test state update format
435
+
436
+ ### Phase 3: Workflow Graph (2 hours)
437
+
438
+ 1. Create `src/agents/graph/workflow.py`
439
+ 2. Wire up StateGraph with conditional edges
440
+ 3. Write `tests/integration/graph/test_workflow.py`:
441
+ - Test routing logic
442
+ - Test end-to-end with mocked nodes
443
+
444
+ ### Phase 4: Orchestrator (2 hours)
445
+
446
+ 1. Create `src/orchestrators/langgraph_orchestrator.py`
447
+ 2. Update `src/orchestrators/factory.py` to include "langgraph" mode
448
+ 3. Update `src/app.py` UI dropdown
449
+ 4. Write `tests/e2e/test_langgraph_mode.py`
450
+
451
+ ### Phase 5: Gradio Integration (1 hour)
452
+
453
+ 1. Add "God Mode" option to Gradio dropdown
454
+ 2. Test streaming events
455
+ 3. Verify checkpointing (pause/resume)
456
+
457
+ ---
458
+
459
+ ## 7. Migration Strategy
460
+
461
+ 1. **Parallel Implementation:** Build as new mode alongside existing "simple" and "magentic"
462
+ 2. **UI Dropdown:** Add "God Mode (Experimental)" option
463
+ 3. **Feature Flag:** Use `settings.enable_langgraph_mode` to control availability (see the factory sketch after this list)
464
+ 4. **Deprecation Path:** Once stable, deprecate "magentic" mode (Q1 2026)
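+
+ A hedged sketch of how the feature flag from step 3 could gate the new mode in the factory (the factory function name, signature, and `src.config` import path are assumptions; only the `"langgraph"` mode string and `settings.enable_langgraph_mode` come from this spec):
+
+ ```python
+ # src/orchestrators/factory.py (sketch)
+ from src.config import settings
+ from src.orchestrators.langgraph_orchestrator import LangGraphOrchestrator
+
+
+ def create_orchestrator(mode: str):
+     if mode == "langgraph":
+         if not settings.enable_langgraph_mode:
+             raise ValueError("LangGraph mode is disabled (enable_langgraph_mode=false)")
+         return LangGraphOrchestrator(max_iterations=10)
+     # ... existing "simple" and "magentic" branches stay unchanged ...
+     raise ValueError(f"Unknown orchestrator mode: {mode}")
+ ```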
465
+
466
+ ---
467
+
468
+ ## 8. Acceptance Criteria
469
+
470
+ - [ ] `ResearchState` TypedDict defined with all fields
471
+ - [ ] All 4 nodes (search, judge, resolve, synthesize) implemented
472
+ - [ ] Supervisor routing logic works based on structured state
473
+ - [ ] Checkpointing enables pause/resume
474
+ - [ ] Works with HuggingFace Inference API (no OpenAI required)
475
+ - [ ] Integration tests pass with mocked LLM
476
+ - [ ] E2E test passes with real API call
477
+
478
+ ---
479
+
480
+ ## 9. References
481
+
482
+ ### Primary Sources
483
+ - [LangGraph Official Docs](https://docs.langchain.com/oss/python/langgraph)
484
+ - [LangGraph Persistence Guide](https://docs.langchain.com/oss/python/langgraph/persistence)
485
+ - [MongoDB + LangGraph Integration](https://www.mongodb.com/docs/atlas/ai-integrations/langgraph/)
486
+
487
+ ### Research & Analysis
488
+ - [LangGraph Multi-Agent Orchestration 2025](https://latenode.com/blog/langgraph-multi-agent-orchestration-complete-framework-guide-architecture-analysis-2025)
489
+ - [LangChain vs LangGraph Comparison](https://kanerika.com/blogs/langchain-vs-langgraph/)
490
+ - [Building Deep Research Agents](https://towardsdatascience.com/langgraph-101-lets-build-a-deep-research-agent/)
491
+ - [Mem0 + LangGraph Integration](https://blog.futuresmart.ai/ai-agents-memory-mem0-langgraph-agent-integration)
492
+ - [AI Memory Wars Benchmark](https://guptadeepak.com/the-ai-memory-wars-why-one-system-crushed-the-competition-and-its-not-openai/)