SpendLens gives engineering teams real visibility into AI coding tool costs — per-developer attribution, team budgets, and full audit trail across Claude Code, Cursor, and Copilot.
The numbers most engineering leaders discover only after the monthly bill arrives.
A transparent proxy between your developers' AI tools and every LLM provider. Zero code changes required.
Developers run `eval $(spendlens connect)` or launch the interactive shell. A local header-injection proxy transparently redirects AI tool traffic through SpendLens — no code changes, no config files to edit.
The proxy extracts token usage from every response — including SSE streaming chunks. SpendLens computes cost from a 17-model pricing table and attributes it to the exact developer, team, and tool in real time.
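As a rough sketch, per-request cost attribution reduces to a rate lookup and a multiply. The model names and per-million-token rates below are hypothetical placeholders, not SpendLens's actual 17-model pricing table:

```python
# Illustrative pricing table -- rates here are made up, not SpendLens's real data.
PRICING_PER_MTOK = {
    # model: (input USD per 1M tokens, output USD per 1M tokens)
    "claude-sonnet-4": (3.00, 15.00),
    "gpt-4o": (2.50, 10.00),
}

def compute_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Look up per-million-token rates and return request cost in USD."""
    in_rate, out_rate = PRICING_PER_MTOK[model]
    return input_tokens / 1_000_000 * in_rate + output_tokens / 1_000_000 * out_rate

# 12,000 input + 3,500 output tokens: 0.036 + 0.0525 = 0.0885 USD
cost = compute_cost("claude-sonnet-4", 12_000, 3_500)
```

The attributed developer, team, and tool are attached to this per-request cost downstream; only the arithmetic is shown here.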
OPA policy checks run before every request reaches the LLM. SpendLens enforces team budgets, per-developer usage caps, and model restrictions at the gateway — overspend is blocked, not just reported after the fact.
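The real checks are OPA/Rego policies evaluated at the gateway; the Python sketch below only illustrates the equivalent decision logic, and its field names (`budget_usd`, `cap_usd`, `allowed_models`) are invented for the example, not SpendLens's actual policy schema:

```python
# Hypothetical preflight decision, mirroring what an OPA policy would enforce.
def preflight_allow(model: str, cost_est: float, team: dict, dev: dict) -> bool:
    if model not in team["allowed_models"]:
        return False                       # model restriction
    if team["spent_usd"] + cost_est > team["budget_usd"]:
        return False                       # team budget exceeded
    if dev["spent_usd"] + cost_est > dev["cap_usd"]:
        return False                       # per-developer cap exceeded
    return True                            # request may proceed to the LLM

team = {"allowed_models": {"claude-sonnet-4"}, "spent_usd": 980.0, "budget_usd": 1000.0}
dev = {"spent_usd": 49.5, "cap_usd": 50.0}

preflight_allow("claude-sonnet-4", 0.30, team, dev)   # True: within both limits
preflight_allow("claude-sonnet-4", 0.75, team, dev)   # False: developer cap hit
```

Because the check runs before the request is forwarded, an over-budget call is rejected at the proxy rather than showing up on next month's invoice.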
Every request emits an ECS-compliant NDJSON audit event with SHA-256 body hashes — never raw prompts, never API keys. Queryable in Elasticsearch, ClickHouse, or your existing SIEM within seconds.
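A sketch of what one such NDJSON line could look like. The ECS-style field layout here is illustrative and the real event schema may differ; the key property is that only a SHA-256 digest of the request body is recorded, never the body itself:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_event(body: bytes, developer: str, model: str, cost_usd: float) -> str:
    """Serialize one audit event as a single NDJSON line (illustrative schema)."""
    event = {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "event": {"kind": "event", "action": "llm_request"},
        "user": {"name": developer},
        "spendlens": {"model": model, "cost_usd": cost_usd},
        # Hash of the request body only -- the prompt text is never logged.
        "hash": {"sha256": hashlib.sha256(body).hexdigest()},
    }
    return json.dumps(event, separators=(",", ":"))

line = audit_event(b'{"messages": []}', "alice", "claude-sonnet-4", 0.0885)
```

One JSON object per line is exactly the shape Elasticsearch's bulk API and ClickHouse's JSONEachRow input expect, which is what makes the events queryable within seconds.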
See exactly what each developer costs across Claude Code, Cursor, Copilot, and any LLM-powered tool. Token-level tracking with cost computation from a 17-model pricing table — attributed to the specific developer, team, and session.
Real-time dashboards show AI spend by team, developer, model, and tool. SpendLens extracts usage from both standard JSON responses and SSE streaming chunks — with burn rate tracking and runway estimates built in.
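For illustration, pulling a usage object out of an SSE stream can be sketched as below. The chunk shape mirrors common LLM streaming APIs, but exact field names vary by provider, so treat this as an assumption-laden sketch rather than SpendLens's parser:

```python
import json

def usage_from_sse(stream: str) -> dict:
    """Scan SSE 'data:' lines and return the last usage object seen.
    Providers typically attach usage only to the final chunk."""
    usage = {}
    for line in stream.splitlines():
        if not line.startswith("data:"):
            continue                      # skip comments, event: lines, blanks
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break                         # end-of-stream sentinel
        chunk = json.loads(payload)
        if chunk.get("usage"):
            usage = chunk["usage"]
    return usage

sse = (
    'data: {"choices":[{"delta":{"content":"Hi"}}]}\n'
    'data: {"usage":{"input_tokens":12,"output_tokens":4}}\n'
    'data: [DONE]\n'
)
usage_from_sse(sse)   # {'input_tokens': 12, 'output_tokens': 4}
```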
Budget enforcement at the gateway — not after the bill arrives. OPA preflight checks block over-budget requests before they reach the LLM. Set per-team budgets, per-developer caps, and model restrictions in version-controlled Rego policies.
ECS-compliant audit events flow into Elasticsearch and ClickHouse for real-time Kibana dashboards. See cost by developer, team, model, and tool — with burn rate tracking, runway estimates, and budget utilization alerts.
SpendLens gives each stakeholder the view they need to understand and control developer AI costs.
Your developers adopted Claude Code and Cursor overnight. Now you need to know the cost. SpendLens gives you per-developer, per-team attribution — with budget controls that prevent surprise bills without slowing anyone down.
Right now, AI coding tool costs show up as a single line on the Anthropic or OpenAI invoice. SpendLens breaks it down by team, developer, and model — so you can forecast accurately and set budgets that engineering actually follows.
See who on your team uses which AI tools and how much it costs. Identify high-spend developers, compare across tools (Claude Code vs Cursor vs Copilot), and make informed decisions about which AI tools to standardize on.
Join the early access list and finally know what your engineering team's AI tools actually cost.