Spaces:
Sleeping
Sleeping
Evgueni Poloukarov
commited on
Commit
·
3c8562f
1
Parent(s):
dfe40ac
feat: add JupyterLab inference notebook for HF Space with GPU support
Browse files- Smoke test notebook (1 border x 7 days validation)
- JupyterLab SDK configuration with A10G GPU
- Dynamic forecast system (time-aware, prevents data leakage)
- Extended dataset support (Oct 2023 - Oct 14, 2025)
Notebook tests complete inference pipeline:
1. Load extended dataset from HF (17,880 rows)
2. Feature categorization (603 full + 12 partial + 1,899 historical)
3. Time-aware data extraction
4. Chronos-2 Large GPU inference
5. Result visualization with Altair
Ready for GPU testing on HuggingFace Space.
- README.md +115 -0
- inference_smoke_test.ipynb +303 -0
- requirements.txt +19 -29
README.md
ADDED
|
@@ -0,0 +1,115 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
---
|
| 2 |
+
title: FBMC Chronos-2 Zero-Shot Forecasting
|
| 3 |
+
emoji: ⚡
|
| 4 |
+
colorFrom: blue
|
| 5 |
+
colorTo: green
|
| 6 |
+
sdk: jupyterlab
|
| 7 |
+
sdk_version: "4.0.0"
|
| 8 |
+
app_file: inference_smoke_test.ipynb
|
| 9 |
+
pinned: false
|
| 10 |
+
license: mit
|
| 11 |
+
hardware: a10g-small
|
| 12 |
+
---
|
| 13 |
+
|
| 14 |
+
# FBMC Flow-Based Market Coupling Forecasting
|
| 15 |
+
|
| 16 |
+
Zero-shot electricity cross-border flow forecasting for 38 European FBMC borders using Amazon Chronos 2.
|
| 17 |
+
|
| 18 |
+
## 🚀 Quick Start
|
| 19 |
+
|
| 20 |
+
This HuggingFace Space provides interactive Jupyter notebooks for running zero-shot forecasts on GPU.
|
| 21 |
+
|
| 22 |
+
### Available Notebooks
|
| 23 |
+
|
| 24 |
+
1. **`inference_smoke_test.ipynb`** - Quick validation (1 border × 7 days, ~1 min)
|
| 25 |
+
2. **`inference_full_14day.ipynb`** - Production forecast (38 borders × 14 days, ~5 min)
|
| 26 |
+
3. **`evaluation.ipynb`** - Performance analysis vs actuals
|
| 27 |
+
|
| 28 |
+
### How to Use
|
| 29 |
+
|
| 30 |
+
1. Open any notebook in JupyterLab
|
| 31 |
+
2. Run all cells (Cell → Run All)
|
| 32 |
+
3. View results and visualizations inline
|
| 33 |
+
|
| 34 |
+
## 📊 Dataset
|
| 35 |
+
|
| 36 |
+
**Source**: [evgueni-p/fbmc-features-24month](https://huggingface.co/datasets/evgueni-p/fbmc-features-24month)
|
| 37 |
+
|
| 38 |
+
- **Rows**: 17,880 hourly observations
|
| 39 |
+
- **Date Range**: Oct 1, 2023 - Oct 14, 2025
|
| 40 |
+
- **Features**: 2,553 engineered features
|
| 41 |
+
- Weather: 375 features (52 grid points)
|
| 42 |
+
- ENTSO-E: ~1,863 features (generation, demand, prices, outages)
|
| 43 |
+
- JAO: 276 features (CNEC binding, RAM, utilization, LTA, net positions)
|
| 44 |
+
- Temporal: 39 features (hour, day, month, etc.)
|
| 45 |
+
|
| 46 |
+
- **Targets**: 38 FBMC cross-border flows (MW)
|
| 47 |
+
|
| 48 |
+
## 🔬 Model
|
| 49 |
+
|
| 50 |
+
**Amazon Chronos 2 Large** (710M parameters)
|
| 51 |
+
- Pre-trained foundation model for time series
|
| 52 |
+
- Zero-shot inference (no fine-tuning)
|
| 53 |
+
- Multivariate forecasting with future covariates
|
| 54 |
+
- Dynamic time-aware data extraction (prevents leakage)
|
| 55 |
+
|
| 56 |
+
## ⚡ Hardware
|
| 57 |
+
|
| 58 |
+
**GPU**: NVIDIA A10G (24GB VRAM)
|
| 59 |
+
- Model inference: ~5 minutes for complete 14-day forecast
|
| 60 |
+
- Recommended for production workloads
|
| 61 |
+
|
| 62 |
+
## 📈 Performance Target
|
| 63 |
+
|
| 64 |
+
**D+1 MAE Goal**: <150 MW per border
|
| 65 |
+
|
| 66 |
+
This is a zero-shot baseline. Fine-tuning (Phase 2) expected to improve accuracy by 20-40%.
|
| 67 |
+
|
| 68 |
+
## 🔐 Requirements
|
| 69 |
+
|
| 70 |
+
Set `HF_TOKEN` in Space secrets to access the private dataset.
|
| 71 |
+
|
| 72 |
+
## 🛠️ Technical Details
|
| 73 |
+
|
| 74 |
+
### Feature Availability Windows
|
| 75 |
+
|
| 76 |
+
The system implements time-aware forecasting to prevent data leakage:
|
| 77 |
+
|
| 78 |
+
- **Full-horizon D+14** (603 features): Weather, CNEC outages, LTA
|
| 79 |
+
- **Partial D+1** (12 features): Load forecasts (masked D+2-D+14)
|
| 80 |
+
- **Historical only** (1,899 features): Prices, generation, demand
|
| 81 |
+
|
| 82 |
+
### Dynamic Forecast System
|
| 83 |
+
|
| 84 |
+
Uses `DynamicForecast` module to extract context and future covariates based on run date:
|
| 85 |
+
- Context window: 512 hours (historical data)
|
| 86 |
+
- Forecast horizon: 336 hours (14 days)
|
| 87 |
+
- Automatic masking for partial availability
|
| 88 |
+
|
| 89 |
+
## 📚 Documentation
|
| 90 |
+
|
| 91 |
+
- [Project Repository](https://github.com/evgspacdmy/fbmc_chronos2)
|
| 92 |
+
- [Activity Log](https://github.com/evgspacdmy/fbmc_chronos2/blob/main/doc/activity.md)
|
| 93 |
+
- [Feature Engineering Details](https://github.com/evgspacdmy/fbmc_chronos2/tree/main/src/feature_engineering)
|
| 94 |
+
|
| 95 |
+
## 🔄 Phase 2 Roadmap
|
| 96 |
+
|
| 97 |
+
Future improvements (not included in zero-shot MVP):
|
| 98 |
+
- Fine-tuning on FBMC data
|
| 99 |
+
- Ensemble methods
|
| 100 |
+
- Probabilistic forecasting
|
| 101 |
+
- Real-time data pipeline
|
| 102 |
+
- Production API
|
| 103 |
+
|
| 104 |
+
## 👤 Author
|
| 105 |
+
|
| 106 |
+
**Evgueni Poloukarov**
|
| 107 |
+
|
| 108 |
+
## 📄 License
|
| 109 |
+
|
| 110 |
+
MIT License - See LICENSE file for details
|
| 111 |
+
|
| 112 |
+
---
|
| 113 |
+
|
| 114 |
+
**Last Updated**: 2025-11-14
|
| 115 |
+
**Version**: 1.0.0 (Zero-Shot MVP)
|
inference_smoke_test.ipynb
ADDED
|
@@ -0,0 +1,303 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"cells": [
|
| 3 |
+
{
|
| 4 |
+
"cell_type": "markdown",
|
| 5 |
+
"metadata": {},
|
| 6 |
+
"source": [
|
| 7 |
+
"# FBMC Chronos-2 Zero-Shot Inference - Smoke Test\n",
|
| 8 |
+
"\n",
|
| 9 |
+
"**Quick validation**: 1 border × 7 days (168 hours)\n",
|
| 10 |
+
"\n",
|
| 11 |
+
"This notebook tests the complete inference pipeline on HuggingFace Space with GPU acceleration."
|
| 12 |
+
]
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"cell_type": "markdown",
|
| 16 |
+
"metadata": {},
|
| 17 |
+
"source": [
|
| 18 |
+
"## 1. Environment Setup"
|
| 19 |
+
]
|
| 20 |
+
},
|
| 21 |
+
{
|
| 22 |
+
"cell_type": "code",
|
| 23 |
+
"execution_count": null,
|
| 24 |
+
"metadata": {},
|
| 25 |
+
"outputs": [],
|
| 26 |
+
"source": [
|
| 27 |
+
"import time\n",
|
| 28 |
+
"import os\n",
|
| 29 |
+
"import polars as pl\n",
|
| 30 |
+
"import torch\n",
|
| 31 |
+
"from datetime import datetime, timedelta\n",
|
| 32 |
+
"from datasets import load_dataset\n",
|
| 33 |
+
"from chronos import ChronosPipeline\n",
|
| 34 |
+
"import altair as alt\n",
|
| 35 |
+
"\n",
|
| 36 |
+
"# Add src to path for imports\n",
|
| 37 |
+
"import sys\n",
|
| 38 |
+
"sys.path.append('/home/user/app/src') # HF Space path\n",
|
| 39 |
+
"\n",
|
| 40 |
+
"from forecasting.dynamic_forecast import DynamicForecast\n",
|
| 41 |
+
"from forecasting.feature_availability import FeatureAvailability\n",
|
| 42 |
+
"\n",
|
| 43 |
+
"print(\"Environment setup complete\")\n",
|
| 44 |
+
"print(f\"PyTorch version: {torch.__version__}\")\n",
|
| 45 |
+
"print(f\"GPU available: {torch.cuda.is_available()}\")\n",
|
| 46 |
+
"if torch.cuda.is_available():\n",
|
| 47 |
+
" print(f\"GPU device: {torch.cuda.get_device_name(0)}\")"
|
| 48 |
+
]
|
| 49 |
+
},
|
| 50 |
+
{
|
| 51 |
+
"cell_type": "markdown",
|
| 52 |
+
"metadata": {},
|
| 53 |
+
"source": [
|
| 54 |
+
"## 2. Load Extended Dataset from HuggingFace"
|
| 55 |
+
]
|
| 56 |
+
},
|
| 57 |
+
{
|
| 58 |
+
"cell_type": "code",
|
| 59 |
+
"execution_count": null,
|
| 60 |
+
"metadata": {},
|
| 61 |
+
"outputs": [],
|
| 62 |
+
"source": [
|
| 63 |
+
"print(\"Loading dataset from HuggingFace...\")\n",
|
| 64 |
+
"start_time = time.time()\n",
|
| 65 |
+
"\n",
|
| 66 |
+
"# Load dataset\n",
|
| 67 |
+
"hf_token = os.getenv(\"HF_TOKEN\")\n",
|
| 68 |
+
"dataset = load_dataset(\n",
|
| 69 |
+
" \"evgueni-p/fbmc-features-24month\",\n",
|
| 70 |
+
" split=\"train\",\n",
|
| 71 |
+
" token=hf_token\n",
|
| 72 |
+
")\n",
|
| 73 |
+
"\n",
|
| 74 |
+
"# Convert to Polars\n",
|
| 75 |
+
"df = pl.from_arrow(dataset.data.table)\n",
|
| 76 |
+
"\n",
|
| 77 |
+
"print(f\"✓ Loaded: {df.shape}\")\n",
|
| 78 |
+
"print(f\" Date range: {df['timestamp'].min()} to {df['timestamp'].max()}\")\n",
|
| 79 |
+
"print(f\" Load time: {time.time() - start_time:.1f}s\")"
|
| 80 |
+
]
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"cell_type": "markdown",
|
| 84 |
+
"metadata": {},
|
| 85 |
+
"source": [
|
| 86 |
+
"## 3. Configure Dynamic Forecast System"
|
| 87 |
+
]
|
| 88 |
+
},
|
| 89 |
+
{
|
| 90 |
+
"cell_type": "code",
|
| 91 |
+
"execution_count": null,
|
| 92 |
+
"metadata": {},
|
| 93 |
+
"outputs": [],
|
| 94 |
+
"source": [
|
| 95 |
+
"# Categorize features by availability\n",
|
| 96 |
+
"categories = FeatureAvailability.categorize_features(df.columns)\n",
|
| 97 |
+
"\n",
|
| 98 |
+
"print(\"Feature categorization:\")\n",
|
| 99 |
+
"print(f\" Full-horizon D+14: {len(categories['full_horizon_d14'])} features\")\n",
|
| 100 |
+
"print(f\" Partial D+1: {len(categories['partial_d1'])} features\")\n",
|
| 101 |
+
"print(f\" Historical only: {len(categories['historical'])} features\")\n",
|
| 102 |
+
"print(f\" Total: {sum(len(v) for v in categories.values())} features\")\n",
|
| 103 |
+
"\n",
|
| 104 |
+
"# Identify target borders\n",
|
| 105 |
+
"target_cols = [col for col in df.columns if col.startswith('target_border_')]\n",
|
| 106 |
+
"borders = [col.replace('target_border_', '') for col in target_cols]\n",
|
| 107 |
+
"print(f\"\\n✓ Found {len(borders)} borders\")\n",
|
| 108 |
+
"print(f\" Test border: {borders[0]}\")"
|
| 109 |
+
]
|
| 110 |
+
},
|
| 111 |
+
{
|
| 112 |
+
"cell_type": "markdown",
|
| 113 |
+
"metadata": {},
|
| 114 |
+
"source": [
|
| 115 |
+
"## 4. Prepare Test Data with Time-Aware Extraction"
|
| 116 |
+
]
|
| 117 |
+
},
|
| 118 |
+
{
|
| 119 |
+
"cell_type": "code",
|
| 120 |
+
"execution_count": null,
|
| 121 |
+
"metadata": {},
|
| 122 |
+
"outputs": [],
|
| 123 |
+
"source": [
|
| 124 |
+
"# Test configuration\n",
|
| 125 |
+
"test_border = borders[0]\n",
|
| 126 |
+
"prediction_hours = 168 # 7 days\n",
|
| 127 |
+
"context_hours = 512 # Context window\n",
|
| 128 |
+
"\n",
|
| 129 |
+
"# Use Sept 30 as run date (requires Oct 1-7 future covariates)\n",
|
| 130 |
+
"run_date = datetime(2025, 9, 30, 23, 0)\n",
|
| 131 |
+
"\n",
|
| 132 |
+
"print(f\"Test configuration:\")\n",
|
| 133 |
+
"print(f\" Run date: {run_date}\")\n",
|
| 134 |
+
"print(f\" Context: {context_hours} hours (historical)\")\n",
|
| 135 |
+
"print(f\" Forecast: {prediction_hours} hours (7 days)\")\n",
|
| 136 |
+
"print(f\" Forecast range: Oct 1 00:00 to Oct 7 23:00\")\n",
|
| 137 |
+
"\n",
|
| 138 |
+
"# Initialize dynamic forecast\n",
|
| 139 |
+
"forecaster = DynamicForecast(\n",
|
| 140 |
+
" df=df,\n",
|
| 141 |
+
" feature_categories=categories\n",
|
| 142 |
+
")\n",
|
| 143 |
+
"\n",
|
| 144 |
+
"# Extract data with leakage prevention\n",
|
| 145 |
+
"context_data, future_data = forecaster.prepare_forecast_data(\n",
|
| 146 |
+
" run_date=run_date,\n",
|
| 147 |
+
" border=test_border\n",
|
| 148 |
+
")\n",
|
| 149 |
+
"\n",
|
| 150 |
+
"print(f\"\\n✓ Data extracted:\")\n",
|
| 151 |
+
"print(f\" Context: {context_data.shape}\")\n",
|
| 152 |
+
"print(f\" Future: {future_data.shape}\")\n",
|
| 153 |
+
"print(f\" Leakage check: PASSED\")"
|
| 154 |
+
]
|
| 155 |
+
},
|
| 156 |
+
{
|
| 157 |
+
"cell_type": "markdown",
|
| 158 |
+
"metadata": {},
|
| 159 |
+
"source": [
|
| 160 |
+
"## 5. Load Chronos-2 Model on GPU"
|
| 161 |
+
]
|
| 162 |
+
},
|
| 163 |
+
{
|
| 164 |
+
"cell_type": "code",
|
| 165 |
+
"execution_count": null,
|
| 166 |
+
"metadata": {},
|
| 167 |
+
"outputs": [],
|
| 168 |
+
"source": [
|
| 169 |
+
"print(\"Loading Chronos-2 Large model...\")\n",
|
| 170 |
+
"start_time = time.time()\n",
|
| 171 |
+
"\n",
|
| 172 |
+
"pipeline = ChronosPipeline.from_pretrained(\n",
|
| 173 |
+
" \"amazon/chronos-t5-large\",\n",
|
| 174 |
+
" device_map=\"cuda\",\n",
|
| 175 |
+
" torch_dtype=torch.bfloat16\n",
|
| 176 |
+
")\n",
|
| 177 |
+
"\n",
|
| 178 |
+
"print(f\"✓ Model loaded in {time.time() - start_time:.1f}s\")\n",
|
| 179 |
+
"print(f\" Device: {next(pipeline.model.parameters()).device}\")\n",
|
| 180 |
+
"print(f\" Dtype: {next(pipeline.model.parameters()).dtype}\")"
|
| 181 |
+
]
|
| 182 |
+
},
|
| 183 |
+
{
|
| 184 |
+
"cell_type": "markdown",
|
| 185 |
+
"metadata": {},
|
| 186 |
+
"source": [
|
| 187 |
+
"## 6. Run Zero-Shot Inference"
|
| 188 |
+
]
|
| 189 |
+
},
|
| 190 |
+
{
|
| 191 |
+
"cell_type": "code",
|
| 192 |
+
"execution_count": null,
|
| 193 |
+
"metadata": {},
|
| 194 |
+
"outputs": [],
|
| 195 |
+
"source": [
|
| 196 |
+
"print(\"Running zero-shot inference...\")\n",
|
| 197 |
+
"start_time = time.time()\n",
|
| 198 |
+
"\n",
|
| 199 |
+
"# Take last 512 hours of context\n",
|
| 200 |
+
"context = context_data.select([test_border]).to_numpy()[-context_hours:].flatten()\n",
|
| 201 |
+
"\n",
|
| 202 |
+
"# Run forecast\n",
|
| 203 |
+
"forecast = pipeline.predict(\n",
|
| 204 |
+
" context=context,\n",
|
| 205 |
+
" prediction_length=prediction_hours,\n",
|
| 206 |
+
" num_samples=20\n",
|
| 207 |
+
")\n",
|
| 208 |
+
"\n",
|
| 209 |
+
"# Get median forecast\n",
|
| 210 |
+
"forecast_median = forecast.numpy().median(axis=0)\n",
|
| 211 |
+
"\n",
|
| 212 |
+
"inference_time = time.time() - start_time\n",
|
| 213 |
+
"print(f\"✓ Inference complete in {inference_time:.1f}s\")\n",
|
| 214 |
+
"print(f\" Forecast shape: {forecast.shape}\")\n",
|
| 215 |
+
"print(f\" Median forecast range: [{forecast_median.min():.0f}, {forecast_median.max():.0f}] MW\")"
|
| 216 |
+
]
|
| 217 |
+
},
|
| 218 |
+
{
|
| 219 |
+
"cell_type": "markdown",
|
| 220 |
+
"metadata": {},
|
| 221 |
+
"source": [
|
| 222 |
+
"## 7. Visualize Results"
|
| 223 |
+
]
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"cell_type": "code",
|
| 227 |
+
"execution_count": null,
|
| 228 |
+
"metadata": {},
|
| 229 |
+
"outputs": [],
|
| 230 |
+
"source": [
|
| 231 |
+
"# Prepare data for visualization\n",
|
| 232 |
+
"forecast_timestamps = pl.datetime_range(\n",
|
| 233 |
+
" datetime(2025, 10, 1, 0, 0),\n",
|
| 234 |
+
" datetime(2025, 10, 7, 23, 0),\n",
|
| 235 |
+
" interval='1h',\n",
|
| 236 |
+
" eager=True\n",
|
| 237 |
+
")\n",
|
| 238 |
+
"\n",
|
| 239 |
+
"viz_data = pl.DataFrame({\n",
|
| 240 |
+
" 'timestamp': forecast_timestamps,\n",
|
| 241 |
+
" 'forecast': forecast_median.tolist()\n",
|
| 242 |
+
"})\n",
|
| 243 |
+
"\n",
|
| 244 |
+
"# Create chart\n",
|
| 245 |
+
"chart = alt.Chart(viz_data.to_pandas()).mark_line().encode(\n",
|
| 246 |
+
" x=alt.X('timestamp:T', title='Date'),\n",
|
| 247 |
+
" y=alt.Y('forecast:Q', title='Flow (MW)'),\n",
|
| 248 |
+
" tooltip=['timestamp:T', alt.Tooltip('forecast:Q', format='.0f')]\n",
|
| 249 |
+
").properties(\n",
|
| 250 |
+
" width=800,\n",
|
| 251 |
+
" height=400,\n",
|
| 252 |
+
" title=f'Zero-Shot Forecast: {test_border} (Oct 1-7, 2025)'\n",
|
| 253 |
+
")\n",
|
| 254 |
+
"\n",
|
| 255 |
+
"chart"
|
| 256 |
+
]
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"cell_type": "markdown",
|
| 260 |
+
"metadata": {},
|
| 261 |
+
"source": [
|
| 262 |
+
"## 8. Summary"
|
| 263 |
+
]
|
| 264 |
+
},
|
| 265 |
+
{
|
| 266 |
+
"cell_type": "code",
|
| 267 |
+
"execution_count": null,
|
| 268 |
+
"metadata": {},
|
| 269 |
+
"outputs": [],
|
| 270 |
+
"source": [
|
| 271 |
+
"print(\"=\"*60)\n",
|
| 272 |
+
"print(\"SMOKE TEST COMPLETE\")\n",
|
| 273 |
+
"print(\"=\"*60)\n",
|
| 274 |
+
"print(f\"Border: {test_border}\")\n",
|
| 275 |
+
"print(f\"Forecast period: Oct 1-7, 2025 (168 hours)\")\n",
|
| 276 |
+
"print(f\"Inference time: {inference_time:.1f}s\")\n",
|
| 277 |
+
"print(f\"GPU: {torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'N/A'}\")\n",
|
| 278 |
+
"print(f\"\\n✓ Zero-shot forecasting working on HuggingFace Space!\")"
|
| 279 |
+
]
|
| 280 |
+
}
|
| 281 |
+
],
|
| 282 |
+
"metadata": {
|
| 283 |
+
"kernelspec": {
|
| 284 |
+
"display_name": "Python 3",
|
| 285 |
+
"language": "python",
|
| 286 |
+
"name": "python3"
|
| 287 |
+
},
|
| 288 |
+
"language_info": {
|
| 289 |
+
"codemirror_mode": {
|
| 290 |
+
"name": "ipython",
|
| 291 |
+
"version": 3
|
| 292 |
+
},
|
| 293 |
+
"file_extension": ".py",
|
| 294 |
+
"mimetype": "text/x-python",
|
| 295 |
+
"name": "python",
|
| 296 |
+
"nbconvert_exporter": "python",
|
| 297 |
+
"pygments_lexer": "ipython3",
|
| 298 |
+
"version": "3.10.0"
|
| 299 |
+
}
|
| 300 |
+
},
|
| 301 |
+
"nbformat": 4,
|
| 302 |
+
"nbformat_minor": 4
|
| 303 |
+
}
|
requirements.txt
CHANGED
|
@@ -1,36 +1,26 @@
|
|
| 1 |
-
#
|
| 2 |
-
|
| 3 |
-
pyarrow>=13.0.0
|
| 4 |
-
numpy>=1.24.0
|
| 5 |
-
scikit-learn>=1.3.0
|
| 6 |
|
| 7 |
-
#
|
| 8 |
-
chronos-forecasting>=1.0.0
|
| 9 |
-
transformers>=4.35.0
|
| 10 |
torch>=2.0.0
|
| 11 |
-
|
| 12 |
-
|
| 13 |
-
entsoe-py>=0.5.0
|
| 14 |
-
jao-py>=0.6.0
|
| 15 |
-
requests>=2.31.0
|
| 16 |
-
|
| 17 |
-
# HuggingFace Integration (for Datasets, NOT Git LFS)
|
| 18 |
datasets>=2.14.0
|
| 19 |
-
|
|
|
|
| 20 |
|
| 21 |
-
#
|
| 22 |
-
|
| 23 |
-
marimo>=0.9.0
|
| 24 |
-
jupyter>=1.0.0
|
| 25 |
-
ipykernel>=6.25.0
|
| 26 |
|
| 27 |
-
#
|
| 28 |
-
|
| 29 |
-
|
| 30 |
-
tqdm>=4.66.0
|
| 31 |
|
| 32 |
-
#
|
| 33 |
-
|
|
|
|
|
|
|
| 34 |
|
| 35 |
-
#
|
| 36 |
-
|
|
|
|
|
|
| 1 |
+
# HuggingFace Space Requirements for FBMC Chronos-2 Forecasting
|
| 2 |
+
# GPU-optimized dependencies for JupyterLab SDK
|
|
|
|
|
|
|
|
|
|
| 3 |
|
| 4 |
+
# Core ML/Data
|
|
|
|
|
|
|
| 5 |
torch>=2.0.0
|
| 6 |
+
transformers>=4.35.0
|
| 7 |
+
chronos-forecasting>=1.2.0
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 8 |
datasets>=2.14.0
|
| 9 |
+
polars>=0.19.0
|
| 10 |
+
pyarrow>=13.0.0
|
| 11 |
|
| 12 |
+
# HuggingFace
|
| 13 |
+
huggingface-hub>=0.19.0
|
|
|
|
|
|
|
|
|
|
| 14 |
|
| 15 |
+
# Visualization
|
| 16 |
+
altair>=5.0.0
|
| 17 |
+
vega-datasets
|
|
|
|
| 18 |
|
| 19 |
+
# Jupyter
|
| 20 |
+
ipykernel
|
| 21 |
+
jupyter
|
| 22 |
+
jupyterlab
|
| 23 |
|
| 24 |
+
# Utilities
|
| 25 |
+
python-dotenv
|
| 26 |
+
tqdm
|