SolarHive

SolarHive E4B Ollama — Edge Solar Energy Intelligence

LoRA fine-tuned Gemma 4 E4B (8B), merged to 16-bit safetensors for edge deployment via Ollama. Community energy data never leaves the neighborhood.

This repository contains merged safetensors (base model + LoRA adapters baked together). Ollama can import the safetensors directly via `ollama create --experimental`, with no GGUF conversion needed. For comparison, the official base model is also available as `gemma4:e4b` (GGUF, 9.6 GB).

Built for the Gemma 4 Good Hackathon (Google DeepMind x Kaggle).

| Field | Value |
|---|---|
| Base Model | google/gemma-4-e4b-it |
| Architecture | Dense + PLE, 8B total / 4.5B effective |
| Fine-Tuning | LoRA via Unsloth (BF16) |
| Training Data | 1,029 community solar energy examples |
| Converged Loss | 0.952 |
| Benchmark | 7/8 (5/5 domain Q&A + 2/3 tool calling) |
| Training Time | 282 seconds (~4.7 minutes) |
| Compute | Google Colab Pro |
| License | MIT (adapters) / Gemma Terms (base model) |

Model Overview

SolarHive E4B is the edge companion to SolarHive 26B A4B. While the 26B model powers cloud inference with full multimodal VQA, the E4B model is optimized for local deployment via Ollama on consumer hardware.

Privacy-first: running Gemma 4 locally means community energy data never leaves the neighborhood. There is no cloud dependency, no internet requirement, and no third-party data exposure. A village in rural India, a suburb in Michigan, and a coastal town recovering from a hurricane all get the same intelligence.

This repository contains the fully merged model (base + LoRA baked together) — no separate base model download needed.
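Merging means each LoRA update is folded into the frozen base weights ahead of time, so inference needs no adapter files. Per target layer the arithmetic is W' = W + (alpha/r) * B @ A. A toy numeric sketch with made-up 2x2 weights (the real layers are vastly larger):

```python
# LoRA merge: W' = W + (alpha / r) * (B @ A), folded into each target linear layer.
# Toy dimensions for illustration only; values below are made up.
alpha, r = 16, 16          # matches this model's config, so scale = 1.0
scale = alpha / r

W = [[1.0, 0.0],
     [0.0, 1.0]]           # frozen base weight (2x2)
A = [[0.5, 0.5]]           # LoRA "down" projection, shape (rank=1, 2) in this toy
B = [[0.2],
     [0.4]]                # LoRA "up" projection, shape (2, rank=1)

# B @ A produces a low-rank update with the same shape as W
update = [[sum(B[i][k] * A[k][j] for k in range(len(A)))
           for j in range(len(A[0]))]
          for i in range(len(B))]

W_merged = [[W[i][j] + scale * update[i][j]
             for j in range(len(W[0]))]
            for i in range(len(W))]
print([[round(x, 6) for x in row] for row in W_merged])  # [[1.1, 0.1], [0.2, 1.2]]
```

After this fold, the merged tensors are saved as ordinary safetensors, which is why no separate adapter download is needed.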


Training Details

| Parameter | Value |
|---|---|
| Method | LoRA via Unsloth FastVisionModel (BF16, RTX PRO 6000 96 GB) |
| LoRA rank | 16 |
| LoRA alpha | 16 |
| LoRA dropout | 0 |
| Target modules | All linear layers |
| Learning rate | 2e-4 |
| Optimizer | AdamW 8-bit |
| Warmup steps | 5 |
| Epochs | 3 |
| Max sequence length | 2048 |
| Precision | BF16 |
| Seed | 3407 |
| Trainable parameters | 41.2M / 8.0B (0.51%) |
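For reference, the same hyperparameters as a plain Python dict (illustrative key names, not any specific library's API), plus a sanity check of the quoted trainable-parameter fraction:

```python
# Hyperparameters from the table above, as a plain config dict (illustrative keys)
lora_config = {
    "r": 16,                     # LoRA rank
    "lora_alpha": 16,            # scaling numerator; effective scale = alpha / r = 1.0
    "lora_dropout": 0.0,
    "target_modules": "all-linear",
    "learning_rate": 2e-4,
    "optimizer": "adamw_8bit",
    "warmup_steps": 5,
    "num_train_epochs": 3,
    "max_seq_length": 2048,
    "seed": 3407,
}

# Sanity check: 41.2M trainable out of 8.0B total parameters
trainable, total = 41.2e6, 8.0e9
pct = 100 * trainable / total
print(f"{pct:.3f}% trainable")  # 0.515%, which the table rounds to 0.51%
```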

Training Loss

| Metric | Value |
|---|---|
| Converged loss (last 20 steps) | 0.952 |
| Final step loss | 1.088 |
| Minimum loss | 0.455 |
| Total steps | 195 |
| Training time | 282 seconds |
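As the table notes, the converged loss is averaged over the last 20 steps rather than quoting the noisier final step (1.088). A sketch of that metric on a hypothetical loss curve:

```python
def converged_loss(losses, window=20):
    """Mean training loss over the final `window` steps,
    a noise-robust alternative to quoting the last single step."""
    tail = losses[-window:]
    return sum(tail) / len(tail)

# Hypothetical decaying loss curve over the run's 195 steps (not the real log)
losses = [2.4 * 0.4 ** (step / 60) + 0.9 for step in range(195)]
print(round(converged_loss(losses), 3))
```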

Training Data

Same 1,029 examples as the 26B A4B model:

  • 413 hand-crafted examples spanning 15+ US cities and 9 energy domains
  • 516 API-grounded examples from live Open-Meteo, PVWatts, OWM, and EIA data
  • 100 tool-calling examples (50 with tools, 50 without)

See the SolarHive Dataset for full documentation.
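For illustration, a plain Q&A record and a tool-calling record might look like the following (field names are hypothetical, not the dataset's exact schema):

```python
import json

# Hypothetical record shapes, for illustration only
qa_example = {
    "messages": [
        {"role": "user", "content": "Our shared battery is at 35% SOC and a storm "
                                    "is forecast tonight. Should we keep exporting?"},
        {"role": "assistant", "content": "Stop exporting and prioritize charging: ..."},
    ]
}

tool_example = {
    "messages": [
        {"role": "user", "content": "What's the current battery state?"},
        {"role": "assistant", "content": None,
         "tool_calls": [{"name": "get_battery_state", "arguments": {}}]},
    ],
    "tools": [{"name": "get_battery_state",
               "description": "Return SOC, charge rate, and temperature "
                              "for the shared battery."}],
}

# Both shapes serialize to JSON lines for training
print(json.dumps(qa_example)[:40], "...")
```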

Hardware

  • GPU: NVIDIA RTX PRO 6000 Blackwell Server Edition (96 GB GDDR7)
  • Platform: Google Colab Pro (G4 VM)

Benchmark Results

Domain Q&A (5/5)

| Question | Result |
|---|---|
| Solar production when humidity exceeds 80%? | Correct |
| Battery SOC threshold for grid export? | Correct |
| Home #3 underperforming 22% — diagnostic checklist? | Correct |
| Winter snow on panels — prioritize actions? | Correct |
| Grid frequency 59.8 Hz — microgrid implications? | Correct |

Tool Calling (2/3)

| Question | Expected | Called | Status |
|---|---|---|---|
| Current battery state? | get_battery_state | get_battery_status | Fail |
| Solar production in Seattle? | get_solar_production or get_weather | get_solar_production | Pass |
| General panel maintenance tips? | None | None | Pass |
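The near-miss on the first row (get_battery_status vs. get_battery_state) fails under strict name matching. A sketch of how such a benchmark can be scored, with hypothetical helper names; each row lists the set of acceptable tool names, where None means no tool should be called:

```python
def score_tool_calls(rows):
    """Each row is (expected, called): expected is a set of acceptable tool names
    (or {None} when no call should be made); called is the tool actually invoked."""
    passes = sum(1 for expected, called in rows if called in expected)
    return passes, len(rows)

benchmark = [
    ({"get_battery_state"}, "get_battery_status"),                      # near-miss name -> fail
    ({"get_solar_production", "get_weather"}, "get_solar_production"),  # pass
    ({None}, None),                                                     # correctly no tool -> pass
]
passed, total = score_tool_calls(benchmark)
print(f"{passed}/{total}")  # 2/3
```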

How to Use

Loading with transformers

```python
from transformers import AutoModelForCausalLM, AutoProcessor
import torch

model = AutoModelForCausalLM.from_pretrained(
    "Truthseeker87/solarhive-e4b-ollama",  # This repo (merged safetensors)
    dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(
    "Truthseeker87/solarhive-e4b-ollama",
    trust_remote_code=True,
)
```

Edge Deployment via Ollama

Ollama supports Gemma 4 safetensors import via the `--experimental` flag; no GGUF conversion is needed.

```shell
# 1. Clone or download this repo
git clone https://huggingface.co/Truthseeker87/solarhive-e4b-ollama
cd solarhive-e4b-ollama

# 2. Create a Modelfile
cat > Modelfile << 'EOF'
FROM .
SYSTEM "You are SolarHive, an AI energy advisor for a community of 12 homes with rooftop solar and shared battery storage in Ann Arbor, Michigan. Use the available tools to get real-time data before answering. Be specific, reference actual data, and keep responses concise (3-5 sentences)."
PARAMETER temperature 1.0
PARAMETER top_p 0.95
PARAMETER top_k 64
PARAMETER num_ctx 4096
EOF

# 3. Import the model (--experimental required for Gemma 4 safetensors)
ollama create solarhive --experimental -f Modelfile

# 4. Run it
ollama run solarhive "What's the best time to run my dishwasher today?"
```
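Once imported, the model can also be queried programmatically through Ollama's local REST API (POST /api/generate on the default port 11434). A minimal sketch using only the standard library; it assumes `ollama serve` is running, so the actual network call is left commented out:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> bytes:
    """JSON body for Ollama's /api/generate endpoint (stream=False for one-shot replies)."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def ask_solarhive(prompt: str) -> str:
    """POST the prompt to the locally served model and return its reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload("solarhive", prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running `ollama serve` with the model imported as above:
# print(ask_solarhive("What's the best time to run my dishwasher today?"))
```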

Note: The official base (non-fine-tuned) E4B is also available as a pre-built GGUF on ollama.com/library/gemma4:e4b (9.6 GB, Q4_K_M). This fine-tuned version adds 1,029 examples of community solar domain expertise.
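A back-of-envelope way to compare footprints is parameter count times bytes per parameter (real files add tokenizer and metadata overhead, and quantized GGUFs mix precisions, so treat this as a rough estimate):

```python
def approx_size_gb(n_params: float, bytes_per_param: float) -> float:
    """Back-of-envelope model file size: parameters x bytes per parameter."""
    return n_params * bytes_per_param / 1e9

# This repo: 8B params merged at BF16 (2 bytes each) -> roughly 16 GB on disk.
# The published Q4_K_M GGUF of the base model is 9.6 GB by comparison.
print(f"~{approx_size_gb(8.0e9, 2):.0f} GB")  # ~16 GB
```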


Companion Repositories

| Model | Repository | Purpose |
|---|---|---|
| SolarHive 26B A4B LoRA | solarhive-26b-a4b-lora | Cloud inference with full multimodal + function calling |
| SolarHive E4B Ollama | This repo | Edge deployment via Ollama (merged safetensors) |
| SolarHive Dataset | solarhive-community-solar-1k | 1,029 training examples |
| GitHub | the-gemma4-good-hackathon-solarhive | Full source code, notebooks, data principles |


Built with Gemma 4 in Ann Arbor, Michigan. April 2026.

Gemma is a trademark of Google LLC.
