A month ago I built AiUsage — a simple web dashboard to track my AI usage across providers. It had a calendar you could click, two provider integrations, and a sample chart. It worked. But after stumbling across CodexBar, a macOS menu bar app that monitors token usage and rate limits across 24+ AI tools, I realized my dashboard was barely scratching the surface.
So I tore the whole thing down and rebuilt it from scratch. Same concept, completely different execution. The result is AiUsage v2.0.
What Changed and Why
The original version tracked two things: which days you used AI, and how many prompts you sent. That's useful for building a habit, but it tells you nothing about where your money is going or which providers are eating through your quotas. CodexBar solved this elegantly on macOS with dual-bar meters showing session and weekly limits. I wanted that same visibility, but in a browser.
The v2.0 rewrite changed almost everything:
- 24 providers instead of 2. Claude, OpenAI, Gemini, Cursor, Copilot, Codex, DeepSeek, Mistral, Windsurf, JetBrains AI, Augment, Amp, Perplexity, and more. Each with its own color, category, and quota settings.
- Dual quota meters. Every provider now shows session usage and weekly usage as separate progress bars. When you're at 70%, the bar turns amber. Past 90%, it goes red. You can see at a glance who's running hot.
- Real cost tracking. Every entry logs tokens and cost. The dashboard aggregates it into monthly totals, 30-day trend charts, and per-provider breakdowns.
- GitHub-style heatmap. The old calendar was a clickable grid. The new one is a full-year contribution graph with five intensity levels based on token volume. Click any cell to see what you did that day.
- Analytics page. Cost history charts, provider breakdowns, token usage by model, and plan utilization meters. Daily, weekly, and monthly views.
- Settings that matter. Display mode (percentage vs. pace vs. both), data export/import as JSON, and theme toggling.
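The dual-meter color logic described above fits in a few lines of vanilla JavaScript. This is a minimal sketch, not the app's actual code — the function and field names (`meterColor`, `provider.usage`, `provider.quota`) are assumptions for illustration:

```javascript
// Color a quota bar by utilization: healthy below 70%,
// amber from 70%, red past 90% — matching the thresholds above.
function meterColor(used, quota) {
  const ratio = quota > 0 ? used / quota : 0;
  if (ratio >= 0.9) return "red";
  if (ratio >= 0.7) return "amber";
  return "teal"; // healthy, matching the dashboard's accent color
}

// Build both bars for a provider's session and weekly windows.
function renderMeters(provider) {
  return ["session", "weekly"].map((window) => {
    const used = provider.usage[window];
    const quota = provider.quota[window];
    const pct = Math.min(100, Math.round((used / quota) * 100));
    return { window, pct, color: meterColor(used, quota) };
  });
}
```

Keeping the threshold check in one pure function means the heatmap, the provider cards, and the analytics page can all agree on what "running hot" looks like.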
Design Decisions
I went with a "mission telemetry" aesthetic — deep dark backgrounds, glassmorphic cards with subtle blur, teal accents for healthy metrics, amber for costs, and red for limits. The intent was to feel like a control room, not a toy. Every number uses JetBrains Mono. Every label is uppercase monospace. The data should feel precise.
The sidebar is permanent on desktop and slides in on mobile. Navigation uses SVG icons instead of emoji. Cards have backdrop-filter blur so they feel layered against the dot-grid background. Small things, but they add up to something that feels considered rather than thrown together.
I kept the stack exactly the same: vanilla HTML, CSS, and JavaScript. No React, no build step, no dependencies. The entire app is three files. I find there's something satisfying about building a fully interactive dashboard with zero npm packages. It forces you to think about what you actually need.
The Data Model
The v1 data model stored usage days as booleans and entries with prompts and minutes. The v2 model is substantially richer:
Each entry now tracks provider, model, tokens, cost, prompts, and date. Each provider stores its own session quota, weekly quota, reset timestamps, plan type, and 30-day cost history. Usage days aggregate total tokens, total cost, and which providers were active. Everything persists in localStorage and can be exported to JSON for backup.
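The shapes described above look roughly like the following. These are illustrative sketches — the field names and the `aiusage-state` storage key are assumptions, not the app's actual schema:

```javascript
// One usage entry: a single logged interaction with a provider.
const exampleEntry = {
  provider: "claude",
  model: "claude-sonnet", // hypothetical model id
  tokens: 18200,
  cost: 0.41,
  prompts: 12,
  date: "2026-01-15",
};

// Per-provider state: quotas, reset times, plan, and cost history.
const exampleProvider = {
  id: "claude",
  sessionQuota: 500000,
  weeklyQuota: 2500000,
  sessionResetAt: "2026-01-15T17:00:00Z",
  weeklyResetAt: "2026-01-19T00:00:00Z",
  plan: "pro",
  costHistory: [], // last 30 days of daily cost totals
};

// Persistence: the whole app state lives in localStorage as one
// JSON blob, which is also what the export/import feature moves around.
function saveState(state) {
  localStorage.setItem("aiusage-state", JSON.stringify(state));
}

function loadState() {
  const raw = localStorage.getItem("aiusage-state");
  return raw ? JSON.parse(raw) : null;
}
```

Because everything serializes to one JSON value, export is just a download of that blob and import is a parse-and-replace.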
On first load, the app seeds 30 days of realistic demo data across five providers so the dashboard isn't empty. This was important — a monitoring tool with blank charts doesn't communicate its value.
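A seeding routine of that kind might look like this — a hedged sketch, with the provider list and token/cost ranges chosen for illustration rather than taken from the app:

```javascript
// Generate ~30 days of plausible demo entries across five providers,
// skipping some provider/day pairs so the heatmap has texture.
function seedDemoData(
  providers = ["claude", "openai", "gemini", "cursor", "copilot"],
  days = 30
) {
  const entries = [];
  const today = new Date();
  for (let d = 0; d < days; d++) {
    const date = new Date(today);
    date.setDate(today.getDate() - d);
    const iso = date.toISOString().slice(0, 10);
    for (const provider of providers) {
      if (Math.random() < 0.4) continue; // not every provider every day
      const tokens = 5000 + Math.floor(Math.random() * 45000);
      // Illustrative flat rate; real per-token pricing varies by model.
      const cost = +(tokens * 0.000008).toFixed(4);
      entries.push({ provider, date: iso, tokens, cost });
    }
  }
  return entries;
}
```

Seeding on first load only — and clearing the demo data the moment a real entry arrives — keeps the charts honest without leaving the dashboard blank.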
Deploying on Vercel
The original version lived on GitHub Pages. For v2, I added Vercel deployment as the primary host. For a static site it's trivial — just a vercel.json with security headers and an output directory pointing to root. Push to main and it's live in seconds. GitHub Pages still works as a mirror.
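For reference, a static-site config of that shape might look like the following. The exact header set here is illustrative, not the project's actual file:

```json
{
  "outputDirectory": ".",
  "headers": [
    {
      "source": "/(.*)",
      "headers": [
        { "key": "X-Content-Type-Options", "value": "nosniff" },
        { "key": "X-Frame-Options", "value": "DENY" },
        { "key": "Referrer-Policy", "value": "strict-origin-when-cross-origin" }
      ]
    }
  ]
}
```

With no build step, that one file is the entire deployment configuration.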
What I Took from CodexBar
CodexBar is a native Swift app with 300+ source files, WidgetKit support, and a CLI tool. I'm not competing with that. But its core insight — that developers need at-a-glance visibility into AI quota consumption across multiple providers — translates perfectly to a web dashboard. The dual session/weekly meter concept, the provider-dense layout, the cost-first analytics: those ideas are universal regardless of platform.
The biggest lesson: a monitoring tool should show you something useful the moment you open it. No setup wizard, no onboarding flow. Just data. The demo seed and pre-configured providers make that possible.
What's Next
There's room to go further. Real API integration to pull actual usage data from providers. Budget alerts when daily spend crosses a threshold. Trend comparison between periods. Maybe even a browser extension that logs prompts automatically. But for now, the dashboard does what I need: it gives me one place to understand my AI spending across every tool I use.
Check it out at ai-usage-opal.vercel.app or browse the source on GitHub.