I use AI every single day. Between writing code, debugging pipelines, drafting documentation, and just thinking through problems, tools like ChatGPT, Claude, and Gemini have become part of my daily workflow. But here's the thing: I had absolutely no idea how much I was actually using them. Not a clue.
Then one month, I looked at my billing across providers and the numbers genuinely surprised me. Not in a catastrophic way, but enough to make me wonder where all those tokens were going. Which provider was I leaning on the most? Was my usage steady, or were there spikes I wasn't aware of? I didn't have answers to any of it.
Why I Needed This
The problem is simple. Every AI provider gives you a usage page buried somewhere in their dashboard. OpenAI has one, Anthropic has one, Google has one. But none of them talk to each other. If you want to understand your total AI consumption across all the tools you use, you're left manually checking three different dashboards and trying to piece together the picture yourself.
I wanted a single place to see everything. Daily token counts, cost breakdowns, trends over time. Not because I was trying to cut costs necessarily, but because I believe you should have visibility into anything you're spending money on regularly. As a data engineer, the idea of operating without a dashboard for something I use this heavily felt wrong.
How It Works
So I built AiUsage. It's a web dashboard that pulls usage data from OpenAI, Anthropic, and Google, then visualizes it in one unified view. You can see your token consumption broken down by provider and by day. You can see cost estimates based on each provider's pricing. And you can spot patterns, like whether you tend to burn through more tokens on certain days of the week, or whether one provider is slowly becoming your default without you realizing it.
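To give a flavor of the cost-estimate piece, here's a minimal sketch. The record shape and the prices are my own placeholders, not any provider's actual schema or current pricing, which varies by model and changes often:

```typescript
// Hypothetical normalized usage record. Field names are placeholders,
// not any provider's actual API shape.
interface DailyUsage {
  provider: "openai" | "anthropic" | "gemini";
  date: string;          // ISO date, e.g. "2024-06-01"
  inputTokens: number;
  outputTokens: number;
}

// Illustrative per-million-token prices only. Real pricing differs by
// model and changes over time, so these numbers are placeholders.
const PRICES_PER_MTOK: Record<string, { input: number; output: number }> = {
  openai:    { input: 2.5,  output: 10 },
  anthropic: { input: 3,    output: 15 },
  gemini:    { input: 1.25, output: 5 },
};

// Estimate the dollar cost of one day's usage for one provider.
function estimateCost(u: DailyUsage): number {
  const p = PRICES_PER_MTOK[u.provider];
  return (u.inputTokens * p.input + u.outputTokens * p.output) / 1_000_000;
}
```

The point isn't the arithmetic, which is trivial, but that once everything is in one schema, the same function works for every provider.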
The stack is straightforward. It's a web app that fetches data from each provider's API, normalizes it into a common format, and renders it with charts and summary cards. Nothing fancy architecturally, but the value is in the aggregation. Having everything in one place changes how you think about your usage.
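The aggregation step the paragraph above describes can be sketched in a few lines. The record shape is an assumption on my part; the idea is just to group normalized records by day and sum tokens across providers before charting:

```typescript
// Assumed common record shape after normalization.
interface DailyUsage {
  provider: string;
  date: string;        // ISO date
  inputTokens: number;
  outputTokens: number;
}

// Sum total tokens per day across all providers, ready to feed a chart.
function totalTokensByDay(records: DailyUsage[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of records) {
    const prev = totals.get(r.date) ?? 0;
    totals.set(r.date, prev + r.inputTokens + r.outputTokens);
  }
  return totals;
}
```

Once the data is in one format, every view in the dashboard is some variation of this group-and-sum.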
I deliberately kept the design minimal. I didn't want another bloated analytics tool. I wanted something I could open, glance at, and immediately understand. A few key numbers, a couple of charts, and that's it.
What I Learned
Building this taught me a few things. First, API usage data is inconsistent across providers. Each one structures its usage endpoints differently, returns different granularity, and uses different terminology. Normalizing all of that into a single schema took more effort than I expected.
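To illustrate the kind of mismatch I mean, here's a sketch with two invented raw payload shapes. These are not the providers' real response formats, just stand-ins showing why a per-provider adapter into one schema is necessary:

```typescript
// Invented raw shapes standing in for two differently-structured
// usage endpoints. Real provider payloads differ from these.
type ProviderARow = {
  aggregation_timestamp: number;    // Unix seconds
  context_tokens: number;
  generated_tokens: number;
};
type ProviderBRow = {
  date: string;                     // already an ISO date
  input_tokens: number;
  output_tokens: number;
};

// The single schema everything gets normalized into.
interface DailyUsage {
  provider: string;
  date: string;
  inputTokens: number;
  outputTokens: number;
}

// One small adapter per provider hides the differences.
function fromProviderA(r: ProviderARow): DailyUsage {
  return {
    provider: "provider-a",
    date: new Date(r.aggregation_timestamp * 1000).toISOString().slice(0, 10),
    inputTokens: r.context_tokens,
    outputTokens: r.generated_tokens,
  };
}

function fromProviderB(r: ProviderBRow): DailyUsage {
  return {
    provider: "provider-b",
    date: r.date,
    inputTokens: r.input_tokens,
    outputTokens: r.output_tokens,
  };
}
```

Even in this toy version you can see the friction: one side reports Unix timestamps and its own token vocabulary, the other reports ISO dates and different field names, and the adapters have to reconcile both.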
Second, I was surprised by my own usage patterns. I assumed I used Claude and ChatGPT roughly equally. Turns out I was heavily skewed toward one provider for coding tasks and the other for writing. That kind of insight is exactly what I was hoping to surface.
Third, building tools for yourself is genuinely one of the best ways to learn. There's no ambiguity about requirements when you're the user. You know exactly what's missing, what's annoying, and what actually matters. Every feature decision was instant because I just had to ask myself what I'd want to see.
What's Next
Right now AiUsage does what I need it to do, but there's room to grow. I'd like to add budget alerts so I get notified if my daily spend crosses a threshold I set. I'm also thinking about adding support for more providers as I start experimenting with newer models. And there's an interesting angle around tracking not just how much I use these tools, but how effectively I use them, though that's a harder problem.
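The budget-alert idea is simple enough to sketch. This is a hypothetical design, not something AiUsage ships today, and the notify hook is a placeholder for whatever channel (email, push, etc.) would actually deliver the alert:

```typescript
// Hypothetical daily budget check. The threshold and notify hook are
// placeholders; this feature doesn't exist in AiUsage yet.
function checkBudget(
  dailySpendUsd: number,
  thresholdUsd: number,
  notify: (msg: string) => void
): boolean {
  if (dailySpendUsd > thresholdUsd) {
    notify(
      `Daily AI spend $${dailySpendUsd.toFixed(2)} exceeded budget ` +
      `$${thresholdUsd.toFixed(2)}`
    );
    return true;
  }
  return false;
}
```

Run once a day against the aggregated spend, this is all an alert really needs; the hard part is the delivery channel, not the check.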
If you use multiple AI providers and have ever been surprised by a bill, or just want to understand your habits better, feel free to check out the repo on GitHub. It's open source and I'd love to hear what others would want to track.