Make Analytics accordion dynamically update with exact MockTraceMind content
Changed the Analytics tab accordion to show chart-specific explanations
matching MockTraceMind exactly. The accordion now updates dynamically
when the user selects a different chart type.
Changes:
- Created get_chart_explanation() function with exact MockTraceMind text
- Updated update_analytics() to return both chart and explanation
- Wired viz_type.change and app.load to update viz_explanation
- Each chart type now has its own focused explanation:
* Performance Heatmap: Color coding and metrics explanation
* Speed vs Accuracy: Quadrants, bubble interpretation, sweet spots
* Cost Efficiency: Cost bands, efficiency metrics, ROI guidance
All explanation text is copied verbatim from the get_visualization_explanation()
function in MockTraceMind/app.py (lines 795-859).
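The wiring pattern behind this change is worth seeing in isolation: one handler returns a `(chart, markdown)` tuple, and Gradio fans that tuple out to the two components listed in `outputs`. Below is a minimal, self-contained sketch of the pattern rather than the app's real code: the radio labels, the stub explanations, and the empty `go.Figure()` placeholder stand in for `create_performance_heatmap()` and the other chart builders.

```python
import gradio as gr
import plotly.graph_objects as go

# Illustrative labels only; the real app defines its own choices.
CHOICES = ["🔥 Performance Heatmap", "⚡ Speed vs Accuracy", "💰 Cost Efficiency"]

def get_chart_explanation(viz_type: str) -> str:
    # One markdown blurb per chart type; unknown keys fall back to the first.
    explanations = {c: f"#### {c}\n\nHow to read this chart..." for c in CHOICES}
    return explanations.get(viz_type, explanations[CHOICES[0]])

def update_analytics(viz_type: str):
    # Stand-in for the real chart builders (heatmap/scatter/efficiency).
    fig = go.Figure()
    # Returning a tuple updates both outputs registered below.
    return fig, get_chart_explanation(viz_type)

with gr.Blocks() as demo:
    viz_type = gr.Radio(CHOICES, value=CHOICES[0], label="Visualization")
    analytics_chart = gr.Plot(show_label=False)
    with gr.Accordion("💡 How to Read This Chart", open=False):
        viz_explanation = gr.Markdown()

    # Same event wiring as the diff: the change handler and the initial
    # page load both refresh the chart and its explanation together.
    viz_type.change(update_analytics, inputs=[viz_type],
                    outputs=[analytics_chart, viz_explanation])
    demo.load(update_analytics, inputs=[viz_type],
              outputs=[analytics_chart, viz_explanation])

if __name__ == "__main__":
    demo.launch()
```

Registering the same function on both `viz_type.change` and `app.load` is what keeps the accordion from showing stale default text on first render.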
@@ -597,16 +597,88 @@ def load_trends():
     return fig
 
 
+def get_chart_explanation(viz_type):
+    """Get explanation text for the selected chart type"""
+    explanations = {
+        "🔥 Performance Heatmap": """
+#### 🔥 Performance Heatmap
+
+**What it shows:** All models compared across all metrics in one view
+
+**How to read it:**
+- 🟢 **Green cells** = Better performance (higher is better)
+- 🟡 **Yellow cells** = Average performance
+- 🔴 **Red cells** = Worse performance (needs improvement)
+
+**Metrics displayed:**
+- Success Rate (%), Avg Duration (ms), Total Cost ($)
+- CO2 Emissions (g), GPU Utilization (%), Total Tokens
+
+**Use it to:** Quickly identify which models excel in which areas
+""",
+
+        "⚡ Speed vs Accuracy": """
+#### ⚡ Speed vs Accuracy Trade-off
+
+**What it shows:** The relationship between model speed and accuracy
+
+**How to read it:**
+- **X-axis** = Average Duration (log scale) - left is faster
+- **Y-axis** = Success Rate (%) - higher is better
+- **Bubble size** = Total Cost - larger bubbles are more expensive
+- **Color** = Agent Type (tool/code/both)
+
+**Sweet spot:** Top-left quadrant = ✅ **Fast & Accurate** models
+
+**Quadrant lines:**
+- Median lines split the chart into 4 zones
+- Models above/left of medians are better than average
+
+**Use it to:** Find models that balance speed and accuracy for your needs
+""",
+
+        "💰 Cost Efficiency": """
+#### 💰 Cost-Performance Efficiency
+
+**What it shows:** Best value-for-money models
+
+**How to read it:**
+- **X-axis** = Total Cost (log scale) - left is cheaper
+- **Y-axis** = Success Rate (%) - higher is better
+- **Bubble size** = Duration - smaller bubbles are faster
+- **Color** = Provider (blue=API, green=GPU/local)
+- **⭐ Stars** = Top 3 most efficient models
+
+**Cost bands:**
+- 🟢 **Budget** = < $0.01 per run
+- 🟡 **Mid-Range** = $0.01 - $0.10 per run
+- 🔴 **Premium** = > $0.10 per run
+
+**Efficiency metric:** Success Rate ÷ Cost (higher is better)
+
+**Use it to:** Maximize ROI by finding models with best success-to-cost ratio
+"""
+    }
+
+    return explanations.get(viz_type, explanations["🔥 Performance Heatmap"])
+
+
 def update_analytics(viz_type):
-    """Update analytics chart based on visualization type"""
+    """Update analytics chart and explanation based on visualization type"""
     df = data_loader.load_leaderboard()
 
+    # Get chart
     if "Heatmap" in viz_type:
-        return create_performance_heatmap(df)
+        chart = create_performance_heatmap(df)
     elif "Speed" in viz_type:
-        return create_speed_accuracy_scatter(df)
+        chart = create_speed_accuracy_scatter(df)
     else:
-        return create_cost_efficiency_scatter(df)
+        chart = create_cost_efficiency_scatter(df)
+
+    # Get explanation
+    explanation = get_chart_explanation(viz_type)
+
+    return chart, explanation
 
 
 def generate_card(top_n):
@@ -1065,8 +1137,6 @@ with gr.Blocks(title="TraceMind-AI", theme=theme) as app:
 - 🎨 **Visual Design**: Gradient cards with model logos and performance metrics
 - 🔍 **Filters**: Use agent type, provider, and sorting controls above
 - 📊 **Sort Options**: Click "Sort By" to order by success rate, cost, duration, or tokens
-- 🖱️ **Click to Drill Down**: Click any model card to view detailed run information
-- 🎯 **Quick Comparison**: Select 2+ runs and click "Compare" button
 
 **Performance Indicators:**
 - 🟢 Green metrics = Excellent performance
@@ -1235,7 +1305,7 @@ with gr.Blocks(title="TraceMind-AI", theme=theme) as app:
             )
             analytics_chart = gr.Plot(label="Interactive Chart", show_label=False)
 
-            # Explanation panel in accordion
+            # Explanation panel in accordion (dynamically updates based on chart selection)
             with gr.Accordion("💡 How to Read This Chart", open=False):
                 viz_explanation = gr.Markdown("""
 #### 🔥 Performance Heatmap
@@ -1252,34 +1322,6 @@ with gr.Blocks(title="TraceMind-AI", theme=theme) as app:
 - CO2 Emissions (g), GPU Utilization (%), Total Tokens
 
 **Use it to:** Quickly identify which models excel in which areas
-
----
-
-#### ⚡ Speed vs Accuracy
-
-**What it shows:** Trade-off between execution speed and success rate
-
-**How to read it:**
-- Each bubble represents a model
-- X-axis: Average duration (lower is faster)
-- Y-axis: Success rate (higher is better)
-- Bubble size: Total cost (smaller bubbles = cheaper)
-- Bubble color: CO2 emissions (greener = more eco-friendly)
-
-**Look for:** Models in top-left corner (fast + accurate)
-
----
-
-#### 💰 Cost Efficiency
-
-**What it shows:** Cost per successful test case
-
-**How to read it:**
-- Bar chart showing cost efficiency (lower is better)
-- Calculated as: Total Cost / Successful Tests
-- Helps identify best value models
-
-**Use it to:** Choose models that balance quality and budget
 """, elem_id="viz-explanation")
 
         with gr.TabItem("📥 Summary Card"):
@@ -1617,13 +1659,13 @@ with gr.Blocks(title="TraceMind-AI", theme=theme) as app:
     viz_type.change(
         fn=update_analytics,
         inputs=[viz_type],
-        outputs=[analytics_chart]
+        outputs=[analytics_chart, viz_explanation]
     )
 
     app.load(
         fn=update_analytics,
        inputs=[viz_type],
-        outputs=[analytics_chart]
+        outputs=[analytics_chart, viz_explanation]
    )
 
    generate_card_btn.click(
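One detail worth noting in `get_chart_explanation()`: the `dict.get(key, default)` lookup means an unrecognized radio label degrades gracefully to the heatmap text instead of raising `KeyError`. A quick illustration with shortened, hypothetical labels:

```python
explanations = {"heatmap": "How to read the heatmap...", "speed": "..."}

# Known key: returns the matching explanation.
print(explanations.get("speed", explanations["heatmap"]))      # "..."

# Unknown key (e.g., a renamed radio choice): falls back to the heatmap text.
print(explanations.get("new chart", explanations["heatmap"]))  # heatmap text
```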