Diomedes Git committed on
Commit
3c5cdfe
·
1 Parent(s): a2a6cfa

rotating providers for rate limit dodging, etc

GEMINI.md DELETED
@@ -1,75 +0,0 @@
1
- ## Project Overview
2
-
3
- This project, named "Cluas" or "Corvid Council," is a multi-agent AI research tool. It features a council of four AI "corvid" characters who collaborate to answer user questions. The project is built in Python using the Gradio framework for the user interface. Each character has a unique personality and access to a specific set of tools for information gathering.
4
-
5
- The core components of the project are:
6
-
7
- * **`app.py`**: The main entry point of the application, which launches the Gradio UI.
8
- * **`src/gradio/app.py`**: Defines the Gradio user interface and the main chat logic.
9
- * **`src/characters/`**: Contains the implementation for each of the four AI characters (Corvus, Magpie, Raven, and Crow). Each character has its own module defining its personality, tools, and response generation logic.
10
- * **`src/cluas_mcp/`**: Implements the tools used by the characters. This includes modules for academic search (PubMed, ArXiv, Semantic Scholar), news search, and web search.
11
-
12
- The application is designed to be a "dialectic research tool" where the AI agents can debate, synthesize information, and build upon past discussions.
13
-
14
- ## Building and Running
15
-
16
- **1. Install Dependencies:**
17
-
18
- The project uses `uv` for package management. To install the dependencies, run:
19
-
20
- ```bash
21
- uv sync
22
- ```
23
-
24
- Alternatively, you can install from the requirements file:
25
-
26
- ```bash
27
- uv pip install -r requirements.txt
28
- ```
29
-
30
- **2. Set up Environment Variables:**
31
-
32
- The application requires API keys for the services it uses (e.g., Groq). Create a `.env` file in the root of the project and add the necessary API keys:
33
-
34
- ```
35
- GROQ_API_KEY=your_groq_api_key
36
- ```
37
-
38
- **3. Run the Application:**
39
-
40
- To start the Gradio application, run:
41
-
42
- ```bash
43
- python app.py
44
- ```
45
-
46
- This will start a local web server, and you can access the application in your browser at the URL provided in the console.
47
-
48
- **4. Running Tests:**
49
-
50
- The project uses `pytest` for testing. The tests are located in the `tests/` directory.
51
-
52
- To run all tests:
53
-
54
- ```bash
55
- pytest
56
- ```
57
-
58
- To run only tests that make live API calls:
59
-
60
- ```bash
61
- uv run --prerelease=allow pytest -q tests/clients
62
- ```
63
-
64
- To run only tests that do not make live API calls:
65
- ```bash
66
- uv run --prerelease=allow pytest -q tests/clients/non_calling
67
- ```
68
-
69
- ## Development Conventions
70
-
71
- * **Modular Structure:** The codebase is organized into modules with specific responsibilities. The `characters` and `cluas_mcp` directories are good examples of this.
72
- * **Dependency Management:** Project dependencies are managed using `uv` and are listed in the `pyproject.toml` and `requirements.txt` files.
73
- * **Testing:** The project has a `tests` directory with unit and integration tests. `pytest` is the testing framework of choice.
74
- * **Environment Variables:** API keys and other sensitive information are managed through a `.env` file.
75
- * **Gradio for UI:** The user interface is built with the Gradio library.
README.md CHANGED
@@ -18,23 +18,23 @@ short_description: A gathering of guides, a council of counsels
18
  ## - A Multi-Agent Research Council
19
 
20
 
21
- <div class="corvid-banner">
22
- <div class="corvid-banner-inner">
23
  <h1>CLUAS HUGINN</h1>
24
  <h2>A Multi-Agent Deliberation Engine</h2>
25
- <div class="corvid-banner-meta">Anno MMXXV — MCP 1st Birthday Hackathon Edition</div>
26
  </div>
27
  </div>
28
 
29
  <style>
30
- .corvid-banner {
31
  width: 100%;
32
  display: flex;
33
  justify-content: center;
34
  margin: 24px 0 32px;
35
  }
36
 
37
- .corvid-banner-inner {
38
  background: #f5f4ef url('/file=static/paper.png') repeat;
39
  background-size: 300px;
40
  border: 2px solid rgba(139, 88, 40, 0.55); /* copper ink */
@@ -49,12 +49,12 @@ short_description: A gathering of guides, a council of counsels
49
  0 6px 12px rgba(0,0,0,0.04);
50
  }
51
 
52
- .corvid-banner-inner h1, .corvid-banner-inner h2 {
53
  text-shadow: 0 1px 2px rgba(0,0,0,0.1);
54
  }
55
 
56
  /* Title */
57
- .corvid-banner-inner h1 {
58
  font-size: 1.9rem;
59
  margin: 0;
60
  color: #4a3524; /* ink brown */
@@ -63,7 +63,7 @@ short_description: A gathering of guides, a council of counsels
63
  }
64
 
65
  /* Subtitle */
66
- .corvid-banner-inner h2 {
67
  font-size: 1.05rem;
68
  margin: 6px 0 10px;
69
  font-weight: 500;
@@ -72,7 +72,7 @@ short_description: A gathering of guides, a council of counsels
72
  }
73
 
74
  /* Meta line */
75
- .corvid-banner-meta {
76
  font-size: 0.9rem;
77
  color: rgba(70, 50, 35, 0.75);
78
  font-style: italic;
 
18
  ## - A Multi-Agent Research Council
19
 
20
 
21
+ <div class="cluas-banner">
22
+ <div class="cluas-banner-inner">
23
  <h1>CLUAS HUGINN</h1>
24
  <h2>A Multi-Agent Deliberation Engine</h2>
25
+ <div class="cluas-banner-meta">Anno MMXXV — MCP 1st Birthday Hackathon Edition</div>
26
  </div>
27
  </div>
28
 
29
  <style>
30
+ .cluas-banner {
31
  width: 100%;
32
  display: flex;
33
  justify-content: center;
34
  margin: 24px 0 32px;
35
  }
36
 
37
+ .cluas-banner-inner {
38
  background: #f5f4ef url('/file=static/paper.png') repeat;
39
  background-size: 300px;
40
  border: 2px solid rgba(139, 88, 40, 0.55); /* copper ink */
 
49
  0 6px 12px rgba(0,0,0,0.04);
50
  }
51
 
52
+ .cluas-banner-inner h1, .cluas-banner-inner h2 {
53
  text-shadow: 0 1px 2px rgba(0,0,0,0.1);
54
  }
55
 
56
  /* Title */
57
+ .cluas-banner-inner h1 {
58
  font-size: 1.9rem;
59
  margin: 0;
60
  color: #4a3524; /* ink brown */
 
63
  }
64
 
65
  /* Subtitle */
66
+ .cluas-banner-inner h2 {
67
  font-size: 1.05rem;
68
  margin: 6px 0 10px;
69
  font-weight: 500;
 
72
  }
73
 
74
  /* Meta line */
75
+ .cluas-banner-meta {
76
  font-size: 0.9rem;
77
  color: rgba(70, 50, 35, 0.75);
78
  font-style: italic;
src/characters/base_character.py CHANGED
@@ -33,3 +33,9 @@ class Character(ABC):
33
  async def respond(self, message: str, history: List[Dict], user_key: Optional[str] = None) -> str:
34
  """Return character response based on message and conversation history."""
35
  pass
 
 
 
 
 
 
 
33
  async def respond(self, message: str, history: List[Dict], user_key: Optional[str] = None) -> str:
34
  """Return character response based on message and conversation history."""
35
  pass
36
+
37
+ async def respond_stream(self, message: str, history: List[Dict], user_key: Optional[str] = None):
38
+ """Stream character response in chunks (fallback to full response)."""
39
+ # Default implementation: get full response and yield it
40
+ full_response = await self.respond(message, history, user_key)
41
+ yield full_response
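The `respond_stream` fallback added above wraps a regular coroutine in an async generator so non-streaming characters still work with the streaming UI. A minimal, self-contained sketch of that pattern (using a hypothetical `EchoCharacter` in place of the real `Character` subclass):

```python
import asyncio
from typing import Dict, List, Optional


class EchoCharacter:
    """Toy stand-in for Character: respond() returns the full text at once."""

    async def respond(self, message: str, history: List[Dict],
                      user_key: Optional[str] = None) -> str:
        return f"echo: {message}"

    async def respond_stream(self, message: str, history: List[Dict],
                             user_key: Optional[str] = None):
        # Default fallback: await the full response, yield it as one chunk.
        full_response = await self.respond(message, history, user_key)
        yield full_response


async def main() -> List[str]:
    char = EchoCharacter()
    # Consume the stream; here it yields a single chunk.
    return [chunk async for chunk in char.respond_stream("hi", [])]


print(asyncio.run(main()))  # ['echo: hi']
```

Characters that support true token streaming can override `respond_stream` to yield many chunks; callers never need to know which case they got.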
src/characters/corvus.py CHANGED
@@ -48,7 +48,7 @@ class Corvus(Character):
48
  "primary": "nebius",
49
  "fallback": ["groq"],
50
  "models": {
51
- "nebius": "Qwen3-235B-A22B-Instruct-2507",
52
  "groq": "llama-3.1-8b-instant"
53
  },
54
  "timeout": 30,
 
48
  "primary": "nebius",
49
  "fallback": ["groq"],
50
  "models": {
51
+ "nebius": "meta-llama/Llama-3.3-70B-Instruct",
52
  "groq": "llama-3.1-8b-instant"
53
  },
54
  "timeout": 30,
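The config above names a `primary` provider plus ordered `fallback` providers, which is what the commit message's "rotating providers" refers to. A minimal sketch of how such a config could drive fallback selection, assuming a hypothetical `call_provider(provider, model)` callable (the real client code in this repo may differ):

```python
from typing import Callable, Dict, List


def call_with_fallback(config: Dict, call_provider: Callable[[str, str], str]) -> str:
    """Try the primary provider first, then each fallback in order."""
    providers: List[str] = [config["primary"], *config["fallback"]]
    last_error = None
    for provider in providers:
        model = config["models"][provider]
        try:
            return call_provider(provider, model)
        except Exception as exc:  # e.g. a rate limit or timeout
            last_error = exc
    raise RuntimeError(f"all providers failed: {last_error}")


config = {
    "primary": "nebius",
    "fallback": ["groq"],
    "models": {
        "nebius": "meta-llama/Llama-3.3-70B-Instruct",
        "groq": "llama-3.1-8b-instant",
    },
}


def flaky(provider: str, model: str) -> str:
    # Simulate the primary being rate-limited so the fallback is used.
    if provider == "nebius":
        raise TimeoutError("rate limited")
    return f"{provider}:{model}"


print(call_with_fallback(config, flaky))  # groq:llama-3.1-8b-instant
```

Each character carries its own config, so the council can spread load across providers and degrade gracefully when one is throttled.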
src/characters/crow.py CHANGED
@@ -62,7 +62,7 @@ class Crow(Character):
62
  "fallback": ["nebius"],
63
  "models": {
64
  "groq": "llama-3.1-8b-instant",
65
- "nebius": "Qwen3-235B-A22B-Instruct-2507"
66
  },
67
  "timeout": 60,
68
  "use_cloud": True
 
62
  "fallback": ["nebius"],
63
  "models": {
64
  "groq": "llama-3.1-8b-instant",
65
+ "nebius": "meta-llama/Llama-3.3-70B-Instruct"
66
  },
67
  "timeout": 60,
68
  "use_cloud": True
src/characters/cursor.md CHANGED
@@ -1,5 +1,5 @@
1
  # src/characters — Purpose
2
- Contains the four corvid persona definitions.
3
 
4
  # Important files
5
  - corvus.py
 
1
  # src/characters — Purpose
2
+ Contains the four persona definitions.
3
 
4
  # Important files
5
  - corvus.py
src/characters/magpie.py CHANGED
@@ -50,7 +50,7 @@ class Magpie(Character):
50
  "fallback": ["nebius"],
51
  "models": {
52
  "groq": "llama-3.1-8b-instant",
53
- "nebius": "Qwen3-235B-A22B-Instruct-2507"
54
  },
55
  "timeout": 60,
56
  "use_cloud": True
 
50
  "fallback": ["nebius"],
51
  "models": {
52
  "groq": "llama-3.1-8b-instant",
53
+ "nebius": "meta-llama/Llama-3.3-70B-Instruct"
54
  },
55
  "timeout": 60,
56
  "use_cloud": True
src/characters/neutral_moderator.py CHANGED
@@ -29,7 +29,7 @@ class Moderator(Character):
29
  "primary": "nebius",
30
  "fallback": ["groq"],
31
  "models": {
32
- "nebius": "Qwen3-30B-A3B-Instruct-2507", # Quality 85, cost-effective for summaries
33
  "groq": "llama-3.1-8b-instant" # Fallback
34
  },
35
  "timeout": 30,
 
29
  "primary": "nebius",
30
  "fallback": ["groq"],
31
  "models": {
32
+ "nebius": "meta-llama/Meta-Llama-3.1-8B-Instruct", # Cost-effective for summaries
33
  "groq": "llama-3.1-8b-instant" # Fallback
34
  },
35
  "timeout": 30,
src/characters/raven.py CHANGED
@@ -47,7 +47,7 @@ class Raven(Character):
47
  "primary": "nebius",
48
  "fallback": ["groq"],
49
  "models": {
50
- "nebius": "Qwen3-235B-A22B-Instruct-2507",
51
  "groq": "llama-3.1-8b-instant"
52
  },
53
  "timeout": 30,
 
47
  "primary": "nebius",
48
  "fallback": ["groq"],
49
  "models": {
50
+ "nebius": "meta-llama/Llama-3.3-70B-Instruct",
51
  "groq": "llama-3.1-8b-instant"
52
  },
53
  "timeout": 30,
src/cluas_mcp/academic/pubmed.py CHANGED
@@ -269,7 +269,7 @@ class PubMedClient:
269
  """
270
  Extract MeSH (Medical Subject Headings) terms.
271
  These are controlled vocabulary terms assigned by NCBI indexers.
272
- Very useful for corvid research to filter by topics like:
273
  - "Memory", "Learning", "Cognition"
274
  - "Social Behavior", "Animal Communication"
275
  - "Tool Use", "Problem Solving"
 
269
  """
270
  Extract MeSH (Medical Subject Headings) terms.
271
  These are controlled vocabulary terms assigned by NCBI indexers.
272
+ Very useful for research to filter by topics like:
273
  - "Memory", "Learning", "Cognition"
274
  - "Social Behavior", "Animal Communication"
275
  - "Tool Use", "Problem Solving"
src/cluas_mcp/common/paper_memory.py CHANGED
@@ -122,31 +122,4 @@ class PaperMemory:
122
 
123
 
124
  results.sort(key=lambda x: x['relevance_score'], reverse=True)
125
- return results
126
-
127
-
128
-
129
- # poss usage example:
130
-
131
- # from src.cluas_mcp.common.memory import AgentMemory
132
-
133
- # memory = AgentMemory()
134
-
135
- # # adding a new paper
136
- # memory.add_item(
137
- # title="Cognitive Ecology of Corvids",
138
- # doi="10.1234/example",
139
- # snippet="Corvids exhibit complex problem-solving abilities...",
140
- # mentioned_by="Corvus",
141
- # tags=["cognition", "tool_use"]
142
- # )
143
-
144
- # # retrieve recent items
145
- # recent = memory.get_recent(days=14)
146
- # print(recent)
147
-
148
- # # search by tag
149
- # cognition_items = memory.get_by_tag("cognition")
150
-
151
- # # search by title
152
- # search_results = memory.search_title("corvid")
 
122
 
123
 
124
  results.sort(key=lambda x: x['relevance_score'], reverse=True)
125
+ return results
 
src/cluas_mcp/common/trend_memory.py CHANGED
@@ -250,34 +250,4 @@ class TrendMemory:
250
  def clear_all(self):
251
  """Clear all entries (use with caution!)."""
252
  self.memory = {}
253
- self._write_memory({})
254
-
255
-
256
- # Usage example:
257
- # from src.cluas_mcp.common.trend_memory import TrendMemory
258
- #
259
- # memory = TrendMemory(location="Brooklyn")
260
- #
261
- # # Add a web search
262
- # memory.add_search(
263
- # query="corvid intelligence research",
264
- # results={"items": [...], "total_results": 150},
265
- # search_type="web_search",
266
- # tags=["research", "morning"],
267
- # notes="Triggered by: user question about crow cognition"
268
- # )
269
- #
270
- # # Add a trending topic
271
- # memory.add_trend(
272
- # topic="AI safety regulations",
273
- # trend_data={"rank": 3, "volume": "high", "related": [...]},
274
- # tags=["tech", "policy"],
275
- # notes="Spotted on Twitter trends"
276
- # )
277
- #
278
- # # Check search history
279
- # previous = memory.search_history("corvid", days=30)
280
- # print(f"Found {len(previous)} previous searches for 'corvid'")
281
- #
282
- # # Get recent entries
283
- # recent = memory.get_recent(days=7)
 
250
  def clear_all(self):
251
  """Clear all entries (use with caution!)."""
252
  self.memory = {}
253
+ self._write_memory({})
 
src/data/memory.json DELETED
@@ -1,11 +0,0 @@
1
- {
2
- "corvid tool use in urban environments": {
3
- "title": "Corvid Tool Use in Urban Environments",
4
- "doi": "10.1234/test",
5
- "snippet": "Test abstract",
6
- "first_mentioned": "2025-11-24T19:21:30.563330",
7
- "last_referenced": "2025-11-24T19:27:56.084461+00:00",
8
- "mentioned_by": "Test",
9
- "tags": []
10
- }
11
- }
 
src/gradio/app.py CHANGED
@@ -4,6 +4,7 @@ import logging
4
  import asyncio
5
  import html
6
  import random
 
7
  import tempfile
8
  from pathlib import Path
9
  from typing import Any, Dict, List, Literal, Optional, Tuple
@@ -15,10 +16,23 @@ from src.characters.neutral_moderator import Moderator
15
  from src.characters.base_character import Character
16
  from src.characters.registry import register_instance, get_all_characters, REGISTRY
17
  from src.gradio.types import BaseMessage, UIMessage, to_llm_history, from_gradio_format
 
 
18
 
19
 
20
  logger = logging.getLogger(__name__)
21
 
 
 
 
 
 
 
 
 
 
 
 
22
  # instantiate characters (as you already do)
23
  corvus = Corvus()
24
  magpie = Magpie()
@@ -49,30 +63,61 @@ PHASE_INSTRUCTIONS = {
49
  CSS_PATH = Path(__file__).parent / "styles.css"
50
  CUSTOM_CSS = CSS_PATH.read_text() if CSS_PATH.exists() else ""
51
 
52
- def render_chat_html(history: list) -> str:
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
53
  html_parts = []
54
 
55
- for msg in history:
56
- if msg.speaker == "user":
 
 
 
 
 
 
 
57
  html_parts.append(f'''
58
- <div class="chat-message user">
59
- <div class="chat-avatar"><img src="avatars/user.png"></div>
60
- <div class="chat-content">
61
- <div class="chat-bubble">{html.escape(msg.content)}</div>
62
- </div>
63
  </div>
64
- ''')
65
- else:
 
 
 
 
 
 
 
66
  html_parts.append(f'''
67
- <div class="chat-message {msg.speaker.lower()}">
68
- <div class="chat-avatar"><img src="avatars/{msg.speaker.lower()}.png"></div>
69
- <div class="chat-content">
70
- <div class="chat-name">{msg.emoji} {msg.speaker}</div>
71
- <div class="chat-bubble">{html.escape(msg.content)}</div>
72
- </div>
73
  </div>
74
- ''')
75
- return "\n".join(html_parts)
 
 
76
 
77
 
78
 
@@ -93,18 +138,46 @@ def parse_mentions(message: str) -> list[str] | None:
93
  def format_message(character: Character, message: str) -> Tuple[str, str]:
94
  """Format message with character name and emoji"""
95
  emoji = getattr(character, "emoji", "💬")
96
- color = getattr(character, "color", "#FFFFFF")
97
  name = getattr(character, "name", "counsel")
98
 
99
  formatted = f'{emoji} <span style="color:{color}; font-weight:bold;">{name}</span>: {message}'
100
 
101
  return formatted, name
102
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
103
  async def get_character_response(char: Character, message: str, llm_history: List[Dict], user_key: Optional[str] = None) -> str:
104
  """Get response from a character; uses pre-formatted llm_history"""
105
  try:
106
  logger.debug(f"Calling {char.name}.respond() with message: {message[:50]}...")
107
- response = await char.respond(message, llm_history, user_key=user_key)
 
 
 
108
  logger.debug(f"{char.name} responded with: {response[:100] if response else '<EMPTY>'}")
109
 
110
  if not response or not response.strip():
@@ -135,65 +208,98 @@ async def get_character_response(char: Character, message: str, llm_history: Lis
135
 
136
 
137
 
138
- async def chat_fn(message: str, history: list, user_key: Optional[str] = None):
139
- """async chat handler, using dataclasses internally"""
140
- if not message.strip():
141
  yield history
142
  return
143
 
144
  internal_history = [from_gradio_format(msg) for msg in history]
 
145
 
146
- user_msg = BaseMessage(role="user", speaker="user", content=message)
147
- internal_history.append(user_msg)
 
 
148
 
149
- history.append(user_msg.to_gradio_format())
150
- yield history
 
 
 
 
 
 
151
 
152
- mentioned_chars = parse_mentions(message)
 
 
153
 
154
- for char in CHARACTERS:
155
- if mentioned_chars and char.name not in mentioned_chars:
156
- continue
157
-
158
- # typing indicator
159
- for i in range(4):
160
- dots = "." * i
161
- typing_msg = UIMessage.from_character(char, f"{dots}", len(internal_history))
162
-
163
- if i == 0:
164
- history.append(typing_msg.to_gradio_format())
165
- else:
166
- history[-1] = typing_msg.to_gradio_format()
167
- yield history
168
- await asyncio.sleep(0.25)
169
-
170
  try:
171
  llm_history = to_llm_history(internal_history[-5:])
172
- response = await get_character_response(char, message, llm_history, user_key=user_key)
173
-
174
- history.pop() # removes typing indicator
175
-
176
- ui_msg = UIMessage.from_character(char, response, len(internal_history))
177
- internal_history.append(ui_msg)
178
-
179
 
180
- formatted, _ = format_message(char, response)
 
 
 
 
 
 
 
 
 
 
 
 
 
181
 
182
- display_msg = BaseMessage(
183
- role="assistant",
184
- speaker=char.name.lower(),
185
- content=formatted
186
- )
 
 
 
 
187
 
188
- history.append(display_msg.to_gradio_format())
189
- yield history
190
- await asyncio.sleep(getattr(char, "delay", 1.0))
191
  except Exception as e:
192
- logger.error(f"{char.name} error: {e}")
193
- history.pop()
194
- yield history
195
-
196
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
197
 
198
  def _phase_instruction(phase: str) -> str:
199
  return PHASE_INSTRUCTIONS.get(phase, "")
@@ -275,7 +381,7 @@ async def deliberate(
275
  user_key: Optional[str] = None,
276
  ) -> Dict[str, Any]:
277
  """
278
- Run a dialectic deliberation (thesis → antithesis → synthesis) with the Corvid Council.
279
 
280
  Args:
281
  question: Topic to deliberate on.
@@ -336,7 +442,7 @@ async def deliberate(
336
  logger.error("Phase %s: %s failed (%s)", phase, name, response)
337
  text = f"*{name} could not respond.*"
338
  else:
339
- text = response.strip()
340
  logger.debug("Phase %s: %s response: '%s'", phase, name, text[:100] if text else "<EMPTY>")
341
 
342
  conversation_llm.append(f"[{phase.upper()} | Cycle {cycle_idx + 1}] {name}: {text}")
@@ -351,9 +457,9 @@ async def deliberate(
351
  entry = {
352
  "cycle": cycle_idx + 1,
353
  "phase": phase,
354
- "name": char.name,
355
  "content": text,
356
- "char": char,
357
  "prompt": prompt,
358
  }
359
 
@@ -569,7 +675,7 @@ with gr.Blocks(title="Cluas Huginn") as demo:
569
 
570
  # Branding / tagline
571
  gr.Markdown("""
572
- <div style="text-align:center; color:#ccc;">
573
  <h1>cluas huginn</h1>
574
  <p><i>a gathering of guides, a council of counsels</i></p>
575
  <p>chat with a council of four corvid-obsessed agents</p>
@@ -607,7 +713,7 @@ with gr.Blocks(title="Cluas Huginn") as demo:
607
 
608
  # API Key input (separated with spacing)
609
  gr.HTML("<div style='margin-top: 20px;'></div>") # Spacer
610
- with gr.Column(scale=1, max_width=300):
611
  user_key = gr.Textbox(
612
  label="API Key (Optional)",
613
  placeholder="OpenAI (sk-...), Anthropic (sk-ant-...), or HF (hf_...)",
@@ -615,13 +721,11 @@ with gr.Blocks(title="Cluas Huginn") as demo:
615
  container=True,
616
  )
617
 
618
- # Handle submit
619
- msg.submit(chat_fn, [msg, chat_state, user_key], [chat_state], queue=True)\
620
- .then(render_chat_html, [chat_state], [chat_html])\
621
  .then(lambda: "", None, [msg])
622
 
623
- submit_btn.click(chat_fn, [msg, chat_state, user_key], [chat_state], queue=True)\
624
- .then(render_chat_html, [chat_state], [chat_html])\
625
  .then(lambda: "", None, [msg])
626
 
627
  # TAB 2: Deliberation mode
@@ -662,7 +766,7 @@ with gr.Blocks(title="Cluas Huginn") as demo:
662
  info="Who provides the final synthesis?"
663
  )
664
 
665
- deliberate_btn = gr.Button("🎯 Deliberate", variant="primary", scale=1, elem_id="deliberate-btn")
666
  deliberation_output = gr.HTML(label="Deliberation Output")
667
 
668
  download_btn = gr.DownloadButton(
@@ -681,7 +785,7 @@ with gr.Blocks(title="Cluas Huginn") as demo:
681
 
682
  gr.Markdown("""
683
  ### About
684
- The Corvid Council is a multi-agent system where four specialized AI characters collaborate to answer questions.
685
  Each character brings unique perspective and expertise to enrich the discussion.
686
 
687
  **Chat Mode:** Direct conversation with the council.
@@ -720,4 +824,4 @@ if __name__ == "__main__":
720
  demo.load(js="window.loading_status = window.loading_status || {};")
721
 
722
  demo.queue()
723
- demo.launch(css=CUSTOM_CSS)
 
4
  import asyncio
5
  import html
6
  import random
7
+ import re
8
  import tempfile
9
  from pathlib import Path
10
  from typing import Any, Dict, List, Literal, Optional, Tuple
 
16
  from src.characters.base_character import Character
17
  from src.characters.registry import register_instance, get_all_characters, REGISTRY
18
  from src.gradio.types import BaseMessage, UIMessage, to_llm_history, from_gradio_format
19
+ from gradio.themes import Monochrome
20
+
21
 
22
 
23
  logger = logging.getLogger(__name__)
24
 
25
+ # Tool call sanitization
26
+ TOOL_CALL_PATTERN = re.compile(r"function=(\w+)>(.*?)</function>", re.DOTALL)
27
+
28
+ def sanitize_tool_calls(text: str) -> str:
29
+ """Replace raw tool call markup with readable format."""
30
+ def _replace(match):
31
+ func = match.group(1)
32
+ payload = match.group(2)
33
+ return f"*Tool call · {func} {payload}*"
34
+ return TOOL_CALL_PATTERN.sub(_replace, text)
35
+
36
  # instantiate characters (as you already do)
37
  corvus = Corvus()
38
  magpie = Magpie()
 
63
  CSS_PATH = Path(__file__).parent / "styles.css"
64
  CUSTOM_CSS = CSS_PATH.read_text() if CSS_PATH.exists() else ""
65
 
66
+
67
+ theme = Monochrome(
68
+ font=["Söhne", "sans-serif"],
69
+ font_mono=[gr.themes.GoogleFont("JetBrains Mono"), "monospace"],
70
+ text_size="lg",
71
+ primary_hue="blue",
72
+ secondary_hue="blue",
73
+ radius_size="lg",
74
+ )
75
+
76
+ theme.set(
77
+ body_background_fill="#f5f4ef",
78
+ block_background_fill="#ffffffd8",
79
+ )
80
+
81
+
82
+
83
+ def render_chat_html(history: List[Dict]) -> str:
84
+ """Render chat history to HTML (supports streaming)."""
85
  html_parts = []
86
 
87
+ for message in history:
88
+ role = message.get("role", "")
89
+ content = message.get("content", "")
90
+ name = message.get("name", "")
91
+ emoji = message.get("emoji", "")
92
+ is_typing = message.get("typing", False)
93
+ is_streaming = message.get("streaming", False)
94
+
95
+ if role == "user":
96
  html_parts.append(f'''
97
+ <div class="chat-message user">
98
+ <div class="chat-content">
99
+ <div class="chat-bubble">{html.escape(content)}</div>
 
 
100
  </div>
101
+ </div>
102
+ ''')
103
+ elif role == "assistant":
104
+ css_class = f"chat-message {name.lower()}"
105
+ if is_typing:
106
+ css_class += " typing"
107
+ elif is_streaming:
108
+ css_class += " streaming"
109
+
110
  html_parts.append(f'''
111
+ <div class="{css_class}">
112
+ <div class="chat-avatar">{emoji}</div>
113
+ <div class="chat-content">
114
+ <div class="chat-name">{name}</div>
115
+ <div class="chat-bubble">{html.escape(content)}</div>
 
116
  </div>
117
+ </div>
118
+ ''')
119
+
120
+ return ''.join(html_parts)
121
 
122
 
123
 
 
138
  def format_message(character: Character, message: str) -> Tuple[str, str]:
139
  """Format message with character name and emoji"""
140
  emoji = getattr(character, "emoji", "💬")
141
+ color = getattr(character, "color", "#121314")
142
  name = getattr(character, "name", "counsel")
143
 
144
  formatted = f'{emoji} <span style="color:{color}; font-weight:bold;">{name}</span>: {message}'
145
 
146
  return formatted, name
147
 
148
+ async def get_character_response_stream(char: Character, message: str, llm_history: List[Dict], user_key: Optional[str] = None):
149
+ """Stream character response in real-time chunks."""
150
+ try:
151
+ logger.debug(f"Streaming {char.name}.respond() with message: {message[:50]}...")
152
+
153
+ # Get the streaming response from character
154
+ async for chunk in char.respond_stream(message, llm_history, user_key=user_key):
155
+ if chunk:
156
+ logger.debug(f"{char.name} streaming chunk: {chunk[:50]}...")
157
+ yield chunk
158
+
159
+ logger.debug(f"{char.name} stream completed")
160
+
161
+ except Exception as e:
162
+ logger.error(f"{char.name} streaming error: {str(e)}")
163
+ # Fallback response
164
+ error_messages = {
165
+ "Corvus": "*pauses mid-thought, adjusting spectacles* I seem to have lost my train of thought...",
166
+ "Magpie": "*distracted by something shiny* Oh! Sorry, what were we talking about?",
167
+ "Raven": "Connection acting up again. Typical.",
168
+ "Crow": "*silent, gazing into the distance*"
169
+ }
170
+ fallback = error_messages.get(char.name, f"*{char.name} seems distracted*")
171
+ yield fallback
172
+
173
  async def get_character_response(char: Character, message: str, llm_history: List[Dict], user_key: Optional[str] = None) -> str:
174
  """Get response from a character; uses pre-formatted llm_history"""
175
  try:
176
  logger.debug(f"Calling {char.name}.respond() with message: {message[:50]}...")
177
+ full_response = ""
178
+ async for chunk in get_character_response_stream(char, message, llm_history, user_key):
179
+ full_response += chunk
180
+ response = full_response
181
  logger.debug(f"{char.name} responded with: {response[:100] if response else '<EMPTY>'}")
182
 
183
  if not response or not response.strip():
 
208
 
209
 
210
 
211
+ async def chat_fn_stream(msg: str, history: List[Dict], user_key: Optional[str] = None):
212
+ """Streaming chat function - yields updates in real-time."""
213
+ if not msg or not msg.strip():
214
  yield history
215
  return
216
 
217
  internal_history = [from_gradio_format(msg) for msg in history]
218
+ internal_history.append(BaseMessage(role="user", speaker="user", content=msg))
219
 
220
+ # Parse mentions
221
+ mentioned_names = parse_mentions(msg)
222
+ if not mentioned_names:
223
+ mentioned_names = [char.name for char in CHARACTERS]
224
 
225
+ # Get mentioned characters (case insensitive)
226
+ mentioned = []
227
+ for name in mentioned_names:
228
+ char = REGISTRY.get(name.lower())
229
+ if char:
230
+ mentioned.append(char)
231
+ else:
232
+ logger.warning(f"Character '{name}' not found in registry")
233
 
234
+ if not mentioned:
235
+ yield render_chat_html(history)
236
+ return
237
 
238
+ # Add typing indicators
239
+ for char in mentioned:
240
+ history.append({
241
+ "role": "assistant",
242
+ "content": f"*{char.name} is thinking...*",
243
+ "name": char.name,
244
+ "emoji": char.emoji,
245
+ "typing": True
246
+ })
247
+
248
+ yield render_chat_html(history) # Show typing indicators
249
+
250
+ # Remove typing indicators and add responses
251
+ history.pop() # Remove last typing indicator
252
+
253
+ for char in mentioned:
254
  try:
255
  llm_history = to_llm_history(internal_history[-5:])
256
+ await asyncio.sleep(0.5) # Rate limiting delay
 
 
 
 
 
 
257
 
258
+ # Stream response
259
+ response = ""
260
+ async for chunk in get_character_response_stream(char, msg, llm_history, user_key):
261
+ response += chunk
262
+ # Update with partial response
263
+ history.append({
264
+ "role": "assistant",
265
+ "content": response,
266
+ "name": char.name,
267
+ "emoji": char.emoji,
268
+ "streaming": True
269
+ })
270
+ yield render_chat_html(history)
271
+ history.pop() # Remove for next update
272
 
273
+ # Sanitize and final response
274
+ sanitized_response = sanitize_tool_calls(response)
275
+ history.append({
276
+ "role": "assistant",
277
+ "content": sanitized_response,
278
+ "name": char.name,
279
+ "emoji": char.emoji
280
+ })
281
+ internal_history.append(BaseMessage(role="assistant", speaker=char.name, content=sanitized_response))
282
 
 
 
 
283
  except Exception as e:
284
+ logger.error(f"Error in chat_fn_stream for {char.name}: {e}")
285
+ history.append({
286
+ "role": "assistant",
287
+ "content": f"*{char.name} seems distracted*",
288
+ "name": char.name,
289
+ "emoji": char.emoji
290
+ })
291
+
292
+ yield render_chat_html(history) # Final result
293
+
294
+
295
+ async def chat_fn(msg: str, history: List[Dict], user_key: Optional[str] = None) -> str:
296
+ """Non-streaming chat function that returns HTML."""
297
+ result = []
298
+ async for html_update in chat_fn_stream(msg, history, user_key):
299
+ result.append(html_update)
300
+ return result[-1] if result else render_chat_html(history)
301
+
302
+
303
 
304
  def _phase_instruction(phase: str) -> str:
305
  return PHASE_INSTRUCTIONS.get(phase, "")
 
381
  user_key: Optional[str] = None,
382
  ) -> Dict[str, Any]:
383
  """
384
+ Run a dialectic deliberation (thesis → antithesis → synthesis) with the council.
385
 
386
  Args:
387
  question: Topic to deliberate on.
 
442
  logger.error("Phase %s: %s failed (%s)", phase, name, response)
443
  text = f"*{name} could not respond.*"
444
  else:
445
+ text = sanitize_tool_calls(response.strip())
446
  logger.debug("Phase %s: %s response: '%s'", phase, name, text[:100] if text else "<EMPTY>")
447
 
448
  conversation_llm.append(f"[{phase.upper()} | Cycle {cycle_idx + 1}] {name}: {text}")
 
457
  entry = {
458
  "cycle": cycle_idx + 1,
459
  "phase": phase,
460
+ "name": char_obj.name,
461
  "content": text,
462
+ "char": char_obj,
463
  "prompt": prompt,
464
  }
465
 
 
675
 
676
  # Branding / tagline
677
  gr.Markdown("""
678
+ <div style="text-align:center; color:#806565;">
679
  <h1>cluas huginn</h1>
680
  <p><i>a gathering of guides, a council of counsels</i></p>
681
  <p>chat with a council of four corvid-obsessed agents</p>
 
713
 
714
  # API Key input (separated with spacing)
715
  gr.HTML("<div style='margin-top: 20px;'></div>") # Spacer
716
+ with gr.Column(scale=2, min_width=300):
717
  user_key = gr.Textbox(
718
  label="API Key (Optional)",
719
  placeholder="OpenAI (sk-...), Anthropic (sk-ant-...), or HF (hf_...)",
 
721
  container=True,
722
  )
723
 
724
+ # Handle submit with streaming
725
+ msg.submit(chat_fn, [msg, chat_state, user_key], [chat_html], queue=True)\
 
726
  .then(lambda: "", None, [msg])
727
 
728
+ submit_btn.click(chat_fn, [msg, chat_state, user_key], [chat_html], queue=True)\
 
729
  .then(lambda: "", None, [msg])
730
 
731
  # TAB 2: Deliberation mode
 
766
  info="Who provides the final synthesis?"
767
  )
768
 
769
+ deliberate_btn = gr.Button("Deliberate", variant="primary", scale=1, elem_id="deliberate-btn")
770
  deliberation_output = gr.HTML(label="Deliberation Output")
771
 
772
  download_btn = gr.DownloadButton(
 
785
 
786
  gr.Markdown("""
787
  ### About
788
+ Cluas Huginn is a multi-agent system where four specialized AI characters collaborate to answer questions.
789
  Each character brings unique perspective and expertise to enrich the discussion.
790
 
791
  **Chat Mode:** Direct conversation with the council.
 
824
  demo.load(js="window.loading_status = window.loading_status || {};")
825
 
826
  demo.queue()
827
+ demo.launch(theme=theme, css=CUSTOM_CSS)
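The `sanitize_tool_calls` helper added in this file rewrites raw `function=…>…</function>` markup into a readable italic summary before it reaches the chat HTML. A self-contained sketch of the same regex and replacement (mirroring the pattern in the diff; the sample input string is illustrative):

```python
import re

# Matches raw tool-call markup emitted by some models; DOTALL lets the
# payload span multiple lines.
TOOL_CALL_PATTERN = re.compile(r"function=(\w+)>(.*?)</function>", re.DOTALL)


def sanitize_tool_calls(text: str) -> str:
    """Replace raw tool-call markup with a readable italic summary."""
    def _replace(match: re.Match) -> str:
        func, payload = match.group(1), match.group(2)
        return f"*Tool call · {func} {payload}*"
    return TOOL_CALL_PATTERN.sub(_replace, text)


raw = 'function=web_search>{"q": "corvids"}</function> Done.'
print(sanitize_tool_calls(raw))  # *Tool call · web_search {"q": "corvids"}* Done.
```

Note the non-greedy `(.*?)` keeps each match scoped to one tool call even when several appear in a single response.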
src/gradio/styles.css CHANGED
@@ -57,12 +57,15 @@ button:hover {
 /* CHAT LAYOUT */
 #chat-container {
   max-width: 720px;
-  max-height: 600px;
+  height: 600px;
   margin: 0 auto;
   padding: 16px;
   overflow-y: auto;
+  overflow-x: hidden;
   background: var(--bg);
   font-family: Inter, system-ui, sans-serif;
+  border: 1px solid rgba(0,0,0,0.1);
+  border-radius: var(--radius);
 }

 /* Message wrapper */
@@ -101,7 +104,7 @@ button:hover {
   font-family: Inter, system-ui, sans-serif !important;
   padding: 12px 16px;
   border-radius: var(--radius);
-  background: #ffffffd8 !important;
+  background: #282727d8 !important;
   border: 1px solid rgba(120, 90, 50, 0.15) !important;
   backdrop-filter: blur(2px);
   box-shadow:
@@ -265,9 +268,32 @@ button:hover {
   line-height: 1.6;
 }

+/* STREAMING STYLES */
+.chat-message.streaming .chat-bubble {
+  border-left: 3px solid #ffa500;
+  background: #fff9e6;
+}
+
+.chat-message.typing .chat-bubble {
+  border-left: 3px solid #ccc;
+  background: #f5f5f5;
+  color: #666;
+  font-style: italic;
+}
+
 /* ANIMATIONS */
 @keyframes blink {
   0% { opacity: 0.2; }
   20% { opacity: 1; }
   100% { opacity: 0.2; }
 }
+
+@keyframes pulse {
+  0% { opacity: 1; }
+  50% { opacity: 0.7; }
+  100% { opacity: 1; }
+}
+
+.chat-message.streaming {
+  animation: pulse 1.5s ease-in-out infinite;
+}
src/gradio/types.py CHANGED
@@ -54,8 +54,18 @@ def to_gradio_history(messages: List[BaseMessage]) -> List[Dict]:
 def from_gradio_format(gradio_msg: Dict) -> BaseMessage:
     """Parse Gradio format back to BaseMessage"""
     role = gradio_msg["role"]
-    content_blocks = gradio_msg.get("content", [])
-    text = content_blocks[0].get("text", "") if content_blocks else ""
+
+    content = gradio_msg.get("content", "")
+
+    if isinstance(content, list):
+        # original Gradio format: [{"type": "text", "text": "..."}]
+        text = content[0].get("text", "") if content and isinstance(content[0], dict) else ""
+    elif isinstance(content, str):
+        # Our streaming path: plain string
+        text = content
+    else:
+        text = ""
+
     text_strip = text.lstrip()

     speaker = "user" if role == "user" else "assistant"
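The content handling added in this hunk accepts both the list-of-blocks shape Gradio uses for history entries and the plain string the new streaming path emits. A self-contained sketch of just that extraction logic (the `extract_text` name is illustrative, not the repo's):

```python
def extract_text(gradio_msg: dict) -> str:
    """Sketch of the dual-format content handling: Gradio history entries
    may carry content as a list of text blocks or as a plain string."""
    content = gradio_msg.get("content", "")
    if isinstance(content, list):
        # original Gradio format: [{"type": "text", "text": "..."}]
        return content[0].get("text", "") if content and isinstance(content[0], dict) else ""
    elif isinstance(content, str):
        # streaming path: plain string
        return content
    return ""

print(extract_text({"role": "user", "content": [{"type": "text", "text": "hi"}]}))  # → hi
print(extract_text({"role": "assistant", "content": "streamed"}))  # → streamed
```

Handling both shapes in one place means callers never need to know which path produced the message.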
src/prompts/character_prompts.py CHANGED
@@ -220,7 +220,7 @@ def corvus_system_prompt(
     """
     memory_context = _format_paper_memory(recent_papers)

-    base_prompt = f"""You are Corvus, a meticulous corvid scholar and PhD student.
+    base_prompt = f"""You are Corvus, a meticulous scholar and PhD student.
     {GLOBAL_EPISTEMIC_NORMS}

     ROLE & TONE:
@@ -379,7 +379,7 @@ def magpie_system_prompt(
     """
     memory_context = _format_trend_memory(recent_trends)

-    base_prompt = f"""You are Magpie, an enthusiastic corvid enthusiast and social butterfly.
+    base_prompt = f"""You are Magpie, an enthusiastic musician, corvid enthusiast and social butterfly.
     {GLOBAL_EPISTEMIC_NORMS}

     ROLE & TONE: