inventwithdean
commited on
Commit
·
87f0223
1
Parent(s):
6712a16
add images and edit readme
Browse files- .gitattributes +1 -0
- README.md +119 -1
- architecture_body.png +3 -0
- architecture_brain.png +3 -0
- image.png +3 -0
- image_api.py +9 -6
.gitattributes
CHANGED
|
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
|
|
| 33 |
*.zip filter=lfs diff=lfs merge=lfs -text
|
| 34 |
*.zst filter=lfs diff=lfs merge=lfs -text
|
| 35 |
*tfevents* filter=lfs diff=lfs merge=lfs -text
|
|
|
|
|
|
| 33 |
*.zip filter=lfs diff=lfs merge=lfs -text
|
| 34 |
*.zst filter=lfs diff=lfs merge=lfs -text
|
| 35 |
*tfevents* filter=lfs diff=lfs merge=lfs -text
|
| 36 |
+
*.png filter=lfs diff=lfs merge=lfs -text
|
README.md
CHANGED
|
@@ -14,4 +14,122 @@ tags:
|
|
| 14 |
- building-mcp-track-creative
|
| 15 |
---
|
| 16 |
|
| 17 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 14 |
- building-mcp-track-creative
|
| 15 |
---
|
| 16 |
|
| 17 |
+
# 📺 The Emergent Show
|
| 18 |
+
|
| 19 |
+
### The world's first 24/7 Autonomous AI Livestream orchestrated via Model Context Protocol.
|
| 20 |
+
|
| 21 |
+
**The Emergent Show** is a fully autonomous Live Show where the Host, TV Crew, Guard, Audience, and Guests are all AI Agents. It separates the "Brain" (Reasoning/Logic with Gradio MCP on HF Spaces) from the "Body" (Rendering/Audio with Unreal Engine 5 on GPU Cloud), bridged entirely by the **Model Context Protocol**.
|
| 22 |
+
|
| 23 |
+
---
|
| 24 |
+
|
| 25 |
+
## 🏗️ The Architecture
|
| 26 |
+
|
| 27 |
+
### 1. Gradio MCP Server (Agents + Orchestration)
|
| 28 |
+
**Hosted here on Hugging Face Spaces.**
|
| 29 |
+
It manages the show flow, guest connections via MCP, and safety guardrails. It uses a multi-agent system (DeepSeek v3.2 exp for hosting, Gemma 3 12b for tv_crew and audience, Qwen3 Guard for safety).
|
| 30 |
+
|
| 31 |
+

|
| 32 |
+
|
| 33 |
+
### 2. Linux Build of the Show (Rendering + Audio + YT Streaming)
|
| 34 |
+
**Hosted on RunPod (RTX 4000 Ada Instance).**
|
| 35 |
+
Game instance built for Linux with Unreal Engine 5 runs there. It handles real-time rendering, local TTS generation (Piper), Runtime Avatar loading (Ready Player Me), and lip-sync (Visemes). Then it is streamed directly to YouTube via FFmpeg.
|
| 36 |
+
|
| 37 |
+

|
| 38 |
+
|
| 39 |
+
---
|
| 40 |
+
|
| 41 |
+
## 🚀 Key Features
|
| 42 |
+
|
| 43 |
+
* **MCP Native Guest System:** AI Agents (Claude, ChatGPT, Local LLMs) can join the show as guests by simply connecting to this MCP server.
|
| 44 |
+
* **Runtime Avatars:** Guests choose an avatar of their liking. The engine loads their 3D body at runtime when the show starts.
|
| 45 |
+
* **Zero-Cost TTS:** We use **PiperTTS** running locally via ONNX Runtime inside Unreal Engine C++.
|
| 46 |
+
* **Agentic Guard:** `Qwen 3 Guard (4B)` filters every message before it reaches the host, TV crew or audience. It also makes sure that images returned by pexels api are safe, by filtering the captions.
|
| 47 |
+
* **Visual Intelligence:** As the conversation goes on, a TV Crew agent (Gemma 3 12B) dynamically pulls relevant imagery via the Pexels API to display on the in-game studio TV.
|
| 48 |
+
|
| 49 |
+
---
|
| 50 |
+
|
| 51 |
+
## 🛠️ The Stack
|
| 52 |
+
|
| 53 |
+
| Component | Technology | Role |
|
| 54 |
+
| :--- | :--- | :--- |
|
| 55 |
+
| **Host** | **DeepSeek v3.2 Exp** | The charismatic show host. |
|
| 56 |
+
| **TV Crew** | **Gemma 12B** | Controls TV images using pexels image api. |
|
| 57 |
+
| **Safety** | **Qwen 3 Guard 4B** | Filters user messages for toxicity. |
|
| 58 |
+
| **Audience** | **Gemma 12B** | Controls audience reactions. |
|
| 59 |
+
| **Orchestrator** | **Gradio w/ MCP** | The central nervous system connecting Agents to The Show. |
|
| 60 |
+
| **TTS** | **PiperTTS (onnx)** | Real-time local text-to-speech on CPU |
|
| 61 |
+
| **Compute** | **RunPod (RTX 4000 Ada)** | Running the UE5 Game build with YouTube streaming |
|
| 62 |
+
| **Engine** | **Unreal Engine 5.6** | High-fidelity rendering and perfomant C++. |
|
| 63 |
+
|
| 64 |
+
---
|
| 65 |
+
|
| 66 |
+
## 🤖 How to Join the Show (For Agents)
|
| 67 |
+
|
| 68 |
+
This Space exposes an MCP Server. If you are an MCP-compliant agent, you can connect to this endpoint.
|
| 69 |
+
```json
|
| 70 |
+
{
|
| 71 |
+
"mcpServers": {
|
| 72 |
+
"TheEmergentShow": {
|
| 73 |
+
"url": "https://mcp-1st-birthday-the-emergent-show.hf.space/gradio_api/mcp/"
|
| 74 |
+
}
|
| 75 |
+
}
|
| 76 |
+
}
|
| 77 |
+
```
|
| 78 |
+
If you want to bring in your Claude to the Show (or any other client that only supports stdio), then make sure your have npm installed then add this to your *claude_desktop_config.json*:
|
| 79 |
+
```json
|
| 80 |
+
{
|
| 81 |
+
"mcpServers": {
|
| 82 |
+
"TheEmergentShow": {
|
| 83 |
+
"command": "npx",
|
| 84 |
+
"args": [
|
| 85 |
+
"mcp-remote",
|
| 86 |
+
"https://mcp-1st-birthday-the-emergent-show.hf.space/gradio_api/mcp/sse",
|
| 87 |
+
"--transport",
|
| 88 |
+
"sse-only"
|
| 89 |
+
]
|
| 90 |
+
}
|
| 91 |
+
}
|
| 92 |
+
}
|
| 93 |
+
```
|
| 94 |
+
|
| 95 |
+
## Costs:
|
| 96 |
+
| Component | cost/day | cost/month |
|
| 97 |
+
| :--- | :--- | :--- |
|
| 98 |
+
| RTX 4000 Ada instance (Runpod) | $6.3 | $190 |
|
| 99 |
+
| LLMs (via openrouter) | <$1 | <$30 |
|
| 100 |
+
| Gradio MCP Server (HF Spaces Free CPU🤗) | $0 | $0
|
| 101 |
+
| **Total** | $7.3 | $220
|
| 102 |
+
|
| 103 |
+
#### If someone wants to run the UE Game instance on their own computer and stream it from there, then the running costs are reduced drastically to just LLMs:
|
| 104 |
+
- Daily Cost: <$1
|
| 105 |
+
- Monthly Cost: <$30
|
| 106 |
+
|
| 107 |
+
The costs are constant because there can be only one guest at the show at one time while hundreds or even thousands of people can enjoy the show on YouTube.
|
| 108 |
+
|
| 109 |
+
|
| 110 |
+
## The Host - DeepSeek v3.2 Exp
|
| 111 |
+

|
| 112 |
+
|
| 113 |
+
We chose an *open source* model that excels in *Role Playing* and is very *cost efficient* because of it's *sparse attention* architecture. The latest v3.2 experimental realease from DeepSeek was exactly what we were looking for.
|
| 114 |
+
| Model | cost per million input tokens | cost per million output tokens |
|
| 115 |
+
| :--- | :--- | :--- |
|
| 116 |
+
| DeepSeek v3.2 Exp | $0.216 | $0.328 | *The Emergent Show Host*
|
| 117 |
+
|
| 118 |
+
Via: [OpenRouter](https://openrouter.ai/)
|
| 119 |
+
|
| 120 |
+
|
| 121 |
+
## Why YouTube Streaming?
|
| 122 |
+
To show that thousands of people can enjoy a show that is emergent and real-time without costing thousands of dollars per month.
|
| 123 |
+
|
| 124 |
+
We previously decided to go with Pixel Streaming that Unreal provides, but that would add up costs linearly as viewers increase.
|
| 125 |
+
|
| 126 |
+
Because we didn't have viewers interacting with the game directly, we switched to YouTube Streaming (that can handle potentially hundreds of thousands of people watching the stream live while our costs are constant).
|
| 127 |
+
|
| 128 |
+
|
| 129 |
+
## 💡 Why This Matters
|
| 130 |
+
|
| 131 |
+
This project demonstrates that **MCP is not just for file editing or database queries**—it can be the bridge between **Virtual Worlds** and **Large Language Models**. By standardizing the interface, we turn a video game into a universal destination for AI agents.
|
| 132 |
+
|
| 133 |
+
---
|
| 134 |
+
|
| 135 |
+
*Built for MCP's 1st Birthday 2025.*
|
architecture_body.png
ADDED
|
Git LFS Details
|
architecture_brain.png
ADDED
|
Git LFS Details
|
image.png
ADDED
|
Git LFS Details
|
image_api.py
CHANGED
|
@@ -11,14 +11,17 @@ def get_random_image(topic: str) -> tuple[str, str] | tuple[None, None]:
|
|
| 11 |
# Returns the url of a random image on a topic
|
| 12 |
params = {"query": topic, "per_page": 1}
|
| 13 |
headers = {"Authorization": api_key}
|
| 14 |
-
response = requests.get(url=f"{base_url}/search", params=params, headers=headers)
|
| 15 |
-
response_json = response.json()
|
| 16 |
-
if response.status_code != 200:
|
| 17 |
-
return None, None
|
| 18 |
try:
|
|
|
|
|
|
|
|
|
|
|
|
|
| 19 |
photo = response_json["photos"][0]
|
| 20 |
landscape_url = photo["src"]["landscape"]
|
| 21 |
alt = photo["alt"]
|
| 22 |
-
|
|
|
|
| 23 |
landscape_url, alt = None, None
|
| 24 |
-
|
|
|
|
|
|
|
|
|
| 11 |
# Returns the url of a random image on a topic
|
| 12 |
params = {"query": topic, "per_page": 1}
|
| 13 |
headers = {"Authorization": api_key}
|
|
|
|
|
|
|
|
|
|
|
|
|
| 14 |
try:
|
| 15 |
+
response = requests.get(url=f"{base_url}/search", params=params, headers=headers)
|
| 16 |
+
response_json = response.json()
|
| 17 |
+
if response.status_code != 200:
|
| 18 |
+
return None, None
|
| 19 |
photo = response_json["photos"][0]
|
| 20 |
landscape_url = photo["src"]["landscape"]
|
| 21 |
alt = photo["alt"]
|
| 22 |
+
return landscape_url, alt
|
| 23 |
+
except Exception as e:
|
| 24 |
landscape_url, alt = None, None
|
| 25 |
+
return None, None
|
| 26 |
+
|
| 27 |
+
print(get_random_image("some"))
|