EV Charging Optimization

Understand How Smart Charging Saves Money

Compare different charging strategies side by side. See the trade-offs and find the optimal approach for your plaza.

Three Algorithm Families

Each family takes a fundamentally different approach to the charging problem. Minimal code sketches of each appear after the cards below.

Rule-Based
Deterministic heuristics like FIFO (First In, First Out), Fair Share, and Valley Filling. Fast, predictable, and easy to understand — but they don’t optimize for cost or constraints.
MPC (Model Predictive Control)
Mathematical optimization over a rolling time horizon. Uses demand and price forecasts to minimize cost — but only as good as its predictions.
RL (Reinforcement Learning)
Learns optimal charging policies from experience. Handles uncertainty without forecasts — the frontier of smart charging research.
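
As a rough illustration of the rule-based family, here is a minimal Python sketch of two such heuristics (not the simulator's own code; the EV dictionaries, field names, and numbers are invented for illustration): FIFO serves EVs in arrival order until the shared site limit is exhausted, while Fair Share splits the limit evenly across plugged-in EVs.

```python
# Minimal sketch of two rule-based allocation heuristics (illustrative only).
# FIFO: earliest arrival gets power first; Fair Share: split the site limit evenly.

def fifo_allocation(evs, site_limit_kw):
    """Allocate power in arrival order until the site limit is used up."""
    remaining = site_limit_kw
    allocation = {}
    for ev in sorted(evs, key=lambda e: e["arrival_hour"]):
        power = min(ev["max_charge_kw"], remaining)
        allocation[ev["id"]] = power
        remaining -= power
    return allocation

def fair_share_allocation(evs, site_limit_kw):
    """Split the site limit evenly, capped by each EV's own charging limit."""
    if not evs:
        return {}
    share = site_limit_kw / len(evs)
    return {ev["id"]: min(ev["max_charge_kw"], share) for ev in evs}

if __name__ == "__main__":
    evs = [
        {"id": "EV1", "arrival_hour": 8, "max_charge_kw": 11},
        {"id": "EV2", "arrival_hour": 9, "max_charge_kw": 22},
        {"id": "EV3", "arrival_hour": 10, "max_charge_kw": 11},
    ]
    print(fifo_allocation(evs, site_limit_kw=30))        # earlier arrivals win; EV3 gets nothing
    print(fair_share_allocation(evs, site_limit_kw=30))  # every EV is capped at the 10 kW split
```

Valley Filling follows the same pattern but typically steers charging toward the hours where site load (or price) is lowest, flattening the demand curve rather than reacting to arrival order.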
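
For the MPC family, the sketch below is a deliberately simplified single-EV version of the rolling-horizon idea, with all numbers invented for illustration: at every hour it re-plans against the remaining price forecast and executes only the first step. A real controller would solve a constrained optimization covering many EVs, the site limit, solar, and storage; this toy version relies on the fact that, for a linear cost with one power cap and hourly steps, the cheapest plan is simply to charge in the cheapest remaining hours.

```python
# Toy rolling-horizon ("MPC-style") scheduler for a single EV, assuming a
# perfect hourly price forecast and 1-hour timesteps (so kW and kWh align).
# A real MPC would solve a constrained optimization over many EVs and limits.

def plan_horizon(price_forecast, energy_needed_kwh, max_kw):
    """Return an hourly plan that meets the energy need at minimum forecast cost."""
    plan = [0.0] * len(price_forecast)
    # Fill the cheapest hours first until the energy need is covered.
    for hour in sorted(range(len(price_forecast)), key=lambda h: price_forecast[h]):
        if energy_needed_kwh <= 0:
            break
        plan[hour] = min(max_kw, energy_needed_kwh)
        energy_needed_kwh -= plan[hour]
    return plan

def run_mpc(price_forecast, energy_needed_kwh, max_kw):
    """Re-plan every hour and execute only the first step (receding horizon)."""
    schedule = []
    for t in range(len(price_forecast)):
        plan = plan_horizon(price_forecast[t:], energy_needed_kwh, max_kw)
        schedule.append(plan[0])
        energy_needed_kwh -= plan[0]
    return schedule

if __name__ == "__main__":
    prices = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25]        # example price forecast per kWh
    schedule = run_mpc(prices, energy_needed_kwh=20, max_kw=11)
    print(schedule)                                       # charging lands in hours 3 and 4, the cheapest slots
    print(sum(p * e for p, e in zip(prices, schedule)))   # total cost of the executed schedule
```

Because the plan is recomputed at every step, forecast errors are partially corrected at the next re-plan, which is the core appeal of the receding-horizon approach.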
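
For the RL family, the toy tabular Q-learning agent below (again illustrative, not the simulator's implementation; all constants are invented) controls a single charger without ever seeing a price forecast: it only observes noisy realized prices and a penalty for missing its energy target, and learns over repeated episodes when charging tends to be cheap.

```python
# Toy tabular Q-learning for one charger (illustrative only).
# State = (hour, steps already charged); action = 0 (idle) or 1 (charge one step).
import random
from collections import defaultdict

HOURS, STEPS_NEEDED, STEP_KWH = 6, 3, 5
BASE_PRICE = [0.30, 0.28, 0.12, 0.10, 0.11, 0.25]  # hidden from the agent

def noisy_price(hour):
    """The agent only ever sees noisy realized prices, never a forecast."""
    return max(0.01, BASE_PRICE[hour] + random.gauss(0, 0.03))

def train(episodes=5000, alpha=0.1, gamma=1.0, epsilon=0.1):
    q = defaultdict(float)  # (hour, charged_steps, action) -> estimated value
    for _ in range(episodes):
        charged = 0
        for hour in range(HOURS):
            state = (hour, charged)
            if random.random() < epsilon:                       # explore
                action = random.choice([0, 1])
            else:                                               # exploit
                action = max([0, 1], key=lambda a: q[state + (a,)])
            reward = 0.0
            if action == 1 and charged < STEPS_NEEDED:
                reward = -noisy_price(hour) * STEP_KWH          # pay for the energy delivered
                charged += 1
            if hour == HOURS - 1 and charged < STEPS_NEEDED:
                reward -= 10.0 * (STEPS_NEEDED - charged)       # missed-target penalty
            next_best = 0.0 if hour == HOURS - 1 else max(
                q[(hour + 1, charged, a)] for a in (0, 1))
            q[state + (action,)] += alpha * (reward + gamma * next_best - q[state + (action,)])
    return q

if __name__ == "__main__":
    q = train()
    charged = 0
    for hour in range(HOURS):                                   # greedy rollout of the learned policy
        action = max([0, 1], key=lambda a: q[(hour, charged, a)])
        print(f"hour {hour}: {'charge' if action else 'idle'}")
        charged += action
    # The learned policy typically idles through the expensive early hours and
    # charges in the cheap window, without ever having been shown the prices.
```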

What You Can Do

Everything you need to explore, compare, and understand EV charging strategies.

Configure Scenarios
Set charger counts, power limits, EV arrival patterns, pricing profiles, solar, and battery storage (an example scenario sketch follows these cards).
Compare Side by Side
Run multiple algorithm families on the same scenario and see the differences instantly.
Visualize Results
Interactive charts: power flow, SOC curves, cost accumulation, and energy distribution.
Learn & Understand
Built as a teaching tool — understand why algorithms behave differently, not just that they do.
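
For a sense of what a scenario might contain, here is a hypothetical configuration sketch in Python; the field names and values are assumptions chosen for illustration, not the simulator's actual schema.

```python
# Hypothetical scenario definition (illustrative field names, not the real schema).
scenario = {
    "chargers": {"count": 8, "max_kw_each": 22},
    "site_limit_kw": 120,                        # grid connection cap for the whole plaza
    "evs": [                                     # one day's arrival pattern
        {"arrival_hour": 8, "departure_hour": 17, "energy_needed_kwh": 35},
        {"arrival_hour": 9, "departure_hour": 13, "energy_needed_kwh": 20},
    ],
    "pricing": "time_of_use",                    # or an explicit 24-value price curve
    "solar_kwp": 50,                             # on-site PV capacity; 0 disables solar
    "battery": {"capacity_kwh": 100, "max_kw": 50},
}
```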
Cumulative Cost Over 24 Hours
Same plaza, same day — see how each algorithm family accumulates cost differently

Rule-Based: 325 (baseline)
MPC: 156 (52% cheaper than Rule-Based)
RL: 105 (68% cheaper than Rule-Based)

Compare Charging Strategies in Seconds

Three steps. No setup headaches. Just pick, run, and see.

1. Pick a Scenario
Choose a preset or build your own: set charger count, EV schedules, pricing, solar, and battery.
2. Run All Three
One click simulates Rule-Based, MPC, and RL on the exact same scenario.
3. See the Difference
Interactive charts reveal where each approach wins and why — cost, peak demand, constraint compliance.

Algorithm Comparison

How the three families stack up across key dimensions.

Dimension | Rule-Based | MPC | RL
Needs forecasts? | No | Yes | No
Handles uncertainty? | No | Partially | Yes
Optimality | Low–Medium | High (with good data) | High
Constraint handling | Manual | Built-in | Learned
Computation | Instant | Medium | Slow to train, fast at inference
Best for | Quick baselines | Known environments | Uncertain, real-world conditions

Ready to Optimize?

Launch the simulator and see how different algorithms perform on your charging plaza configuration.

Open Simulator