TECHNICAL OVERVIEW

Purpose-built models. Not prompted.

Three domain-specific models trained from scratch on EV charging telemetry. No fine-tuned foundation models. No prompt engineering. Raw physics.

Why custom models?

  • Custom Transformer Architectures: Purpose-built encoder-decoder transformers with rotary position embeddings. Not fine-tuned foundation models — trained from scratch on EV charging telemetry.
  • Unsupervised Reconstruction: Models learn normal charging physics through reconstruction. Anomalies are sessions the model cannot faithfully reproduce — no labeled failure data needed.
  • Physics-Aware Feature Engineering: Input features encode domain physics — power curves, energy gradients, SoC trajectories, per-feature activations matched to physical constraints.
  • Per-Timestep Precision: Multi-scale temporal convolutions classify anomalies at individual timestep resolution. Pinpoints the exact moment and duration of each failure.
  • Sub-5ms Inference: The full 3-layer pipeline runs in under 5 ms per session. Millions of sessions per day on a single GPU.
  • Transfer Learning Pipeline: The Layer 1 encoder is frozen and reused as the backbone for Layer 2. Only the classification head trains — zero catastrophic forgetting.
  • Per-Client Training: A small model footprint (~1.4M params) means each client can have models trained exclusively on their own network data. Dedicated models, not shared weights.

Rule-based monitoring catches errors. We catch failures.

OCPP error codes tell you what the station reported. Our models tell you what actually happened — including the failures the station never noticed.
                 Rule-Based / OCPP       Solidstudio AI
  Detection      Error codes only        Learned reconstruction
  Coverage       Known failure modes     Silent failures included
  Granularity    Session-level           Per-timestep
  Speed          Reactive                Real-time (<5ms)
  Learning       Static rules            Per-station baselines
  Maintenance    Manual rule updates     Self-updating
SESSION · LAYER 1

CSAR v2

Charging Session Anomaly Recognition

Transformer autoencoder trained to reconstruct healthy charging sessions from engineered physics features. Uses rotary position embeddings (RoPE) for variable-length sequence handling. Anomaly score is derived from reconstruction error — sessions the model cannot faithfully reproduce are flagged.

       ╭──────────────────────────────╮
       │                              │
       │  ┌╮  ╭──╮      ╭─╮           │
 in ──┤│  │╰──╯  │  ╭╮  │ ╰──╮        ├── score
       │  │      ╰──╯╰──╯    │        │
       │  │                  ▓▓▓▓     │
       │  ╰────────────────────╯      │
       │          reconstruct         │
       ╰──────────────────────────────╯
  • Detects silent charging failures invisible to OCPP error codes
  • RoPE-based positional encoding — handles variable-length sessions natively, no padding
  • Per-feature output activations matched to physical constraints (rates, gradients, absolutes)
  • Encoder embeddings serve as transfer-learning backbone for Layer 2
  • Calibrated threshold derived from reconstruction error distribution on known-normal data
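A minimal sketch of how reconstruction-error scoring and a calibrated threshold could fit together. Function names, the MSE choice, and the percentile value are illustrative assumptions, not the production API:

```python
# Illustrative sketch: score a session by reconstruction error and compare
# against a threshold calibrated on known-normal data. Names are hypothetical.

def reconstruction_error(session, reconstruction):
    """Mean squared error between a session's features and its reconstruction."""
    assert len(session) == len(reconstruction)
    return sum((a - b) ** 2 for a, b in zip(session, reconstruction)) / len(session)

def calibrate_threshold(normal_errors, percentile=99.5):
    """Pick a threshold from the error distribution on known-normal sessions."""
    ordered = sorted(normal_errors)
    idx = min(int(len(ordered) * percentile / 100), len(ordered) - 1)
    return ordered[idx]

def score_session(session, reconstruction, threshold):
    err = reconstruction_error(session, reconstruction)
    return {"anomaly_score": err, "threshold": threshold, "is_anomaly": err > threshold}
```

Sessions the model reproduces well land below the threshold; anything it cannot faithfully reconstruct scores above it and is flagged.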
·  ·  ·
inference output
{
  "session_id": "sess_4829a7c",
  "is_anomaly": true,
  "anomaly_score": 0.0847,
  "threshold": 0.0312,
  "confidence": 0.72,
  "sequence_length": 482
}
production scenario
// 14:22 — Berlin, connector_2
// Driver charges BMW iX, session looks normal
// OCPP reports: status=Charging, no errors
// ───────────────────────────────────
Energy delivery dropped 40% at min 35
OCPP never reported this.
// → session flagged, forwarded to CSAR-C v2
SESSION · LAYER 2

CSAR-C v2

Per-Timestep Anomaly Classification

Multi-scale temporal convolution head trained on frozen CSAR v2 encoder representations. Consumes encoder embeddings, per-feature reconstruction error, and bypass features to produce per-timestep multi-label classification. The CSAR v2 backbone is completely frozen — zero catastrophic forgetting.
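The production head learns its convolution kernels; the intuition for why multiple temporal scales matter can be sketched with fixed moving averages (everything below is an illustrative stand-in, not the model):

```python
# Illustrative only: view a per-timestep error signal at several temporal
# scales. Short windows localize brief spikes; long windows surface
# sustained drifts. The real head uses learned multi-scale convolutions.

def moving_average(signal, window):
    """Centered moving average with edge clamping."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def multi_scale(signal, windows=(3, 9, 27)):
    """One smoothed view of the signal per temporal scale."""
    return {w: moving_average(signal, w) for w in windows}
```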

╭────────────────────────────────╮
│  t=   0···142━━━━168···T       │
│                                │
│  ep   ····▓▓▓▓▓▓▓····          │
│  kd   ··········▓▓···          │
│  gd   ···················      │
│  sp   ···················      │
│                                │
│  → 6 labels × T timesteps      │
╰────────────────────────────────╯
  • Multi-label classification across 6 anomaly categories at every timestep
  • Multi-scale convolutions capture patterns across different temporal windows simultaneously
  • Per-feature reconstruction error as explicit input — the model knows which features CSAR v2 struggled with
  • Bypass features preserve signed values the autoencoder clamps (critical for reverse energy flow detection)
  • Post-processing: temporal segment merging and confidence-based filtering
Anomaly categories: ENERGY_PIT · KWH_DECREASE · GHOST_DRAW · STUCK_POWER · CHARGE_DRIFT · CHARGING_EXPECTED
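The post-processing step could look roughly like this sketch, which merges consecutive flagged timesteps into ranges, bridges short gaps, and drops low-confidence segments (`max_gap` and `min_confidence` are assumed parameter names, not the production API):

```python
# Hypothetical sketch of temporal segment merging with confidence filtering
# for one anomaly label. Parameter names and defaults are assumptions.

def merge_segments(flags, confidences, max_gap=2, min_confidence=0.5):
    """flags: per-timestep bool; confidences: per-timestep float.
    Returns dicts with inclusive start/end indices and mean confidence."""
    segments, start, gap = [], None, 0
    for t, on in enumerate(flags):
        if on:
            if start is None:
                start = t
            gap = 0
        elif start is not None:
            gap += 1
            if gap > max_gap:  # gap too long: close the open segment
                segments.append((start, t - gap))
                start, gap = None, 0
    if start is not None:
        segments.append((start, len(flags) - 1 - gap))
    out = []
    for s, e in segments:
        conf = sum(confidences[s:e + 1]) / (e - s + 1)
        if conf >= min_confidence:  # drop weak detections
            out.append({"start_index": s, "end_index": e, "confidence": conf})
    return out
```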
·  ·  ·
inference output
{
  "session_id": "sess_4829a7c",
  "has_anomalies": true,
  "labels": ["energy_pit", "kwh_decrease"],
  "anomaly_ranges": [
    {
      "type": "energy_pit",
      "start_index": 142,
      "end_index": 168,
      "confidence": 0.91
    }
  ]
}
production scenario
// 14:22 — same session, escalated from Layer 1
// Scanned 482 timesteps in 1.2ms
// ───────────────────────────────────
Found: energy_pit (6.5 min) + kwh_decrease
Inverter throttling at t=142
Counter rollback at t=165
Driver lost ~3.2 kWh without knowing
// → webhook → ops dashboard → ticket created
STATION · LAYER 3

SHADE

Station Health Anomaly Detection

Compact transformer autoencoder operating at the station-day level. Reconstructs 24-hour operational profiles from aggregated health metrics with RoPE temporal encoding. Uses directional scoring — an asymmetric error function that penalizes under-delivery of energy while tolerating harmless over-delivery.

╭────────────────────────────────╮
│                                │
│  ▁▂▃▅▇█▇▅▃▂▁    24h profile    │
│  ···········▼···               │
│            08:30               │
│                                │
│  score: 0.41    DEGRADED       │
│  ▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔▔     │
╰────────────────────────────────╯
  • Reconstructs full 24h station profile from aggregated health, utilization, and power metrics
  • Directional scoring: penalizes under-delivery of energy but not over-delivery
  • Activity masking forces reconstruction from health signals alone — prevents trivial shortcuts
  • Per-bucket, per-feature error breakdown enables precise temporal localization of degradation
  • Tracks multi-day score trends for predictive maintenance alerting
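Directional scoring can be sketched as an asymmetric weighted error. The function name and the weights below are assumptions for illustration; the production constants are not stated in this document:

```python
# Hypothetical sketch of directional (asymmetric) scoring over a 24h profile.
# A positive diff means the station delivered less than the model expected,
# which is weighted heavily; over-delivery is weighted lightly.

def directional_error(observed, expected, under_weight=1.0, over_weight=0.1):
    """Asymmetric squared error averaged over the day's buckets."""
    total = 0.0
    for obs, exp in zip(observed, expected):
        diff = exp - obs  # positive: under-delivery
        weight = under_weight if diff > 0 else over_weight
        total += weight * diff * diff
    return total / len(observed)
```

Under this scheme, a station that under-delivers by 0.5 kWh in two buckets scores an order of magnitude worse than one that over-delivers by the same amount.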
·  ·  ·
inference output
{
  "station_id": "CPO_BERLIN_A7",
  "date": "2026-02-28",
  "day_score": 0.41,
  "is_anomaly": true,
  "anomaly_buckets": [
    {
      "bucket": 34,  // 08:30–08:45
      "score": 0.87,
      "top_feature": "utilization_rate"
    }
  ]
}
production scenario
// End of day — Berlin A7 daily health check
// 47 sessions processed
// ───────────────────────────────────
8 anomalies found
Morning peak (08:30) worst — utilization cratered
3rd consecutive day of decline
connector_2 degrading, connector_1 fine
// → field team dispatched for inspection

Start monitoring your network today.

Free plan. No credit card. 100 sessions included.

Talk to us

Register your interest today

Registration opens soon. We'll be in touch within a few hours to demonstrate our models.

By submitting, you agree to our Privacy Policy.