Token Tamer started as a personal need to make hidden token costs visible. Under the brutalist surface it's a stack of deliberate choices—from Next.js and React to WebAssembly parsing and local embeddings—that keep every analysis private, fast, and explainable.

Tree-sitter JSON parser

A WebAssembly build of Tree-sitter validates payloads locally and reports precise line and column errors, so messy JSON never leaves your browser.
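As a rough sketch of how that validation step can look in the browser, the snippet below assumes the web-tree-sitter package (roughly the 0.20.x API, where Parser is the default export) and a hosted tree-sitter-json.wasm grammar; the file path and the exact export shape are assumptions and vary by version.

```ts
// Minimal sketch: validating JSON with a WASM Tree-sitter build in the browser.
// Assumes web-tree-sitter ~0.20.x and a hosted tree-sitter-json.wasm grammar.
import Parser from "web-tree-sitter";

export interface JsonIssue {
  line: number;   // 1-based line of the offending node
  column: number; // 1-based column
  message: string;
}

export async function validateJson(source: string): Promise<JsonIssue[]> {
  await Parser.init();
  const parser = new Parser();
  parser.setLanguage(await Parser.Language.load("/wasm/tree-sitter-json.wasm"));

  const tree = parser.parse(source);
  const issues: JsonIssue[] = [];

  // Walk the syntax tree and collect ERROR nodes, which carry precise
  // start positions (row/column are 0-based in Tree-sitter).
  const stack = [tree.rootNode];
  while (stack.length > 0) {
    const node = stack.pop()!;
    if (node.type === "ERROR") {
      issues.push({
        line: node.startPosition.row + 1,
        column: node.startPosition.column + 1,
        message: `Unexpected input near "${node.text.slice(0, 20)}"`,
      });
    }
    stack.push(...node.children);
  }
  return issues;
}
```

Tree-sitter reports zero-based positions, so the sketch converts them to the one-based line and column numbers editors usually display.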
LenML tokenizers mirror the official vocabularies for GPT-4o, Claude 3, Gemma 3, Qwen 3, and other modern models, so the counts you see match what APIs bill against.
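For the counting itself, here is a hedged sketch assuming the @lenml/tokenizer-gpt4o package and its fromPreTrained() helper (a transformers.js-style tokenizer, per the LenML tokenizers project); the package name and method shapes are assumptions and differ per model.

```ts
// Minimal sketch of client-side token counting with a LenML tokenizer.
// Package name and API are assumptions based on the LenML tokenizers project.
import { fromPreTrained } from "@lenml/tokenizer-gpt4o";

const tokenizer = fromPreTrained();

export function countTokens(payload: unknown): number {
  // Serialize exactly what would be sent to the API, then encode it with
  // the model's own vocabulary so the count matches billed usage.
  const text = JSON.stringify(payload);
  return tokenizer.encode(text).length;
}

// Example: compare a verbose key against a compact one.
console.log(countTokens({ customer_full_legal_name: "Ada" }));
console.log(countTokens({ name: "Ada" }));
```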
Similar-key insights run entirely on-device. Compact Gemma 3n embeddings pair with a lightweight k-means pass to cluster redundant field names in milliseconds.
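The clustering side needs nothing more than plain arrays. Below is a self-contained k-means sketch over precomputed embedding vectors; the embedding step itself (running a compact Gemma-family model on-device) is assumed and not shown.

```ts
// Minimal sketch of the on-device clustering pass: given one embedding per
// field name, group similar names with a few k-means iterations.
type Vec = number[];

const dist = (a: Vec, b: Vec) =>
  Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0));

export function kmeans(vectors: Vec[], k: number, iters = 10): number[] {
  // Seed centroids from the first k vectors (fine for a quick pass).
  let centroids = vectors.slice(0, k).map((v) => [...v]);
  let labels = new Array<number>(vectors.length).fill(0);

  for (let it = 0; it < iters; it++) {
    // Assignment step: each field name goes to its nearest centroid.
    labels = vectors.map((v) => {
      let best = 0;
      for (let c = 1; c < k; c++) {
        if (dist(v, centroids[c]) < dist(v, centroids[best])) best = c;
      }
      return best;
    });
    // Update step: move each centroid to the mean of its members.
    centroids = centroids.map((old, c) => {
      const members = vectors.filter((_, i) => labels[i] === c);
      if (members.length === 0) return old;
      return old.map(
        (_, d) => members.reduce((s, m) => s + m[d], 0) / members.length
      );
    });
  }
  return labels; // labels[i] is the cluster index of field name i
}
```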
Token accounting runs inside a dedicated web worker that streams results back to the UI, letting you prune branches in milliseconds without blocking the main thread.
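A minimal sketch of that hand-off follows, with illustrative message shapes and hypothetical countTokens()/updateRow() helpers standing in for the app's real ones.

```ts
// Hedged sketch of offloading token accounting to a dedicated worker and
// streaming partial results back; names and message shapes are illustrative.
declare function countTokens(text: string): number;            // e.g. the LenML sketch above
declare function updateRow(path: string, tokens: number): void; // hypothetical UI hook

// token-worker.ts (runs off the main thread)
const ctx = self as unknown as Worker;
ctx.onmessage = (e: MessageEvent<{ entries: [string, string][] }>) => {
  for (const [path, text] of e.data.entries) {
    const tokens = countTokens(text);
    // Stream each branch's count back as soon as it is ready, instead of
    // waiting for the whole payload.
    ctx.postMessage({ path, tokens });
  }
  ctx.postMessage({ done: true });
};

// main thread
const worker = new Worker(new URL("./token-worker.ts", import.meta.url), {
  type: "module",
});
worker.onmessage = (e: MessageEvent) => {
  if (e.data.done) return;
  // Update one tree row at a time without blocking the main thread.
  updateRow(e.data.path, e.data.tokens);
};
worker.postMessage({ entries: Object.entries({ user: "Ada", role: "admin" }) });
```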
Large arrays render the first 200 children with expand-on-demand controls. That keeps the brutalist tree snappy even when you feed it megabyte-scale payloads.
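A hedged sketch of the capped rendering, using a hypothetical TreeNode shape and the 200-item page size from the description above:

```tsx
// Minimal sketch of expand-on-demand child rendering; TreeNode and ChildList
// are illustrative names, not the app's real components.
import { useState } from "react";

const PAGE_SIZE = 200;

interface TreeNode {
  key: string;
  children: TreeNode[];
}

export function ChildList({ node }: { node: TreeNode }) {
  const [limit, setLimit] = useState(PAGE_SIZE);
  const visible = node.children.slice(0, limit);

  return (
    <ul>
      {visible.map((child) => (
        <li key={child.key}>{child.key}</li>
      ))}
      {node.children.length > limit && (
        <li>
          {/* Expand on demand: reveal the next page only when asked. */}
          <button onClick={() => setLimit(limit + PAGE_SIZE)}>
            Show {Math.min(PAGE_SIZE, node.children.length - limit)} more
          </button>
        </li>
      )}
    </ul>
  );
}
```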
Thick borders, uppercase typography, and reusable components like BrutalistSelect and GuideLink keep every surface consistent—from the optimizer to the long-form guides.
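As a rough illustration only (the real component's props are not shown here), a BrutalistSelect along these lines bakes the thick border and uppercase label into a single reusable control:

```tsx
// Hypothetical sketch of the shared styling idea; props, styles, and names
// are illustrative and may differ from the actual BrutalistSelect.
export function BrutalistSelect({
  label,
  options,
  onChange,
}: {
  label: string;
  options: string[];
  onChange: (value: string) => void;
}) {
  return (
    <label style={{ textTransform: "uppercase", fontWeight: 700 }}>
      {label}
      <select
        style={{ border: "4px solid #000", borderRadius: 0 }}
        onChange={(e) => onChange(e.target.value)}
      >
        {options.map((o) => (
          <option key={o} value={o}>
            {o}
          </option>
        ))}
      </select>
    </label>
  );
}
```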
This is why token efficiency matters: when I built Token Tamer, I wanted to make these hidden costs visible, because once you see them, you can't unsee them. Every verbose field name, every redundant structure, every unnecessary nesting level—they all add up. And in the age of ubiquitous AI, what adds up eventually matters.
So what should Token Tamer do next? You tell me: drop the capability you want into the contact form and help steer the roadmap.