Case study: Audius Blog
You can't fix what you can't see.
Why I rebuilt the Audius blog, and what owning the stack let us fix.
The Audius blog was getting hammered by bot traffic, and nobody could tell us why.
It ran on Webflow at the time. Monthly visitors hovered where they always did. Usage numbers, the kind Webflow bills on, climbed steadily upward. The two didn’t add up. For every real visit, the platform was clocking gigabytes of traffic. The only explanation that made the numbers work was bot abuse, and the only people positioned to confirm it were Webflow support.
We spent a long time troubleshooting with them. They pointed in different directions. Nothing they pointed at fixed the problem. The account kept getting auto-upgraded to more expensive tiers because of the overages. The bill crossed $400 a month, paying for traffic we didn’t have.
The rebuild
I rebuilt the blog from scratch. Self-hosted on Vercel, content on Sanity, both on free tiers. The new stack cost effectively nothing per month. I redesigned the editorial layer along the way. Typography, image treatment, infinite scroll over pagination, the things you’d expect from a redesign. The bigger architectural choice was making the content machine-readable as a first-class concern. Markdown mirrors of every post. Per-category llms.txt indexes for LLM agents. Plain-text routes for any system that needs to consume the content programmatically. The blog was rebuilt for two audiences from the start: human readers and AI agents.
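The per-category llms.txt index can be sketched as a small function over post metadata. The `Post` shape, `buildLlmsTxt` name, and URL scheme here are illustrative assumptions, not the actual Audius code; the point is that each entry links to the markdown mirror rather than the HTML page.

```typescript
// Illustrative sketch: build a category's llms.txt from post metadata.
// The Post shape and URL layout are assumptions, not the real schema.
interface Post {
  title: string;
  slug: string;
  category: string;
  summary: string;
}

function buildLlmsTxt(category: string, posts: Post[], baseUrl: string): string {
  const lines = [`# ${category}`, ""];
  for (const post of posts) {
    if (post.category !== category) continue;
    // Point agents at the markdown mirror, not the rendered HTML page.
    lines.push(`- [${post.title}](${baseUrl}/${post.slug}.md): ${post.summary}`);
  }
  return lines.join("\n") + "\n";
}
```

Serving this from a plain-text route means an agent can discover and fetch every post in a category without ever parsing HTML.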
Then the bot pattern reappeared on the new site.
The fix
This time I could see it. Owning the stack meant I had access to every request, every response, every cache hit. With the help of AI I could read the patterns and direct the fixes without needing to be a security engineer.
The interesting tension was that the site was explicitly designed for AI agents to read. The LLM endpoints were a feature, not a bug. So the defense couldn’t just block bots. It had to distinguish wanted bot traffic from unwanted bot traffic.
The fix worked in layers. Aggressive caching at the edge so repeat requests didn’t reach the origin. Bot detection that throttled unverified bots instead of blocking them, with rate limits tight enough to make abuse expensive but loose enough that legitimate AI crawlers could keep working. Hardened inputs so anything malformed got rejected before it hit the origin. And monitoring so the next anomaly would be visible immediately.
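The throttle-instead-of-block layer can be sketched as a fixed-window rate limiter with two budgets: a generous one for crawlers that identify as known AI agents, a tight one for everything else. The limits, user-agent list, and function names here are assumptions for illustration; in particular, real crawler verification uses reverse-DNS lookup of the caller's IP, since a user-agent string alone is trivially spoofable.

```typescript
// Sketch of throttling unverified bots instead of blocking them.
// Budgets and the user-agent check are illustrative, not Audius's rules.
type Verdict = "allow" | "throttle";

const WINDOW_MS = 60_000;
// Requests allowed per window (assumed values).
const LIMITS = { verified: 120, unverified: 10 };

const hits = new Map<string, { count: number; windowStart: number }>();

function isVerifiedBot(userAgent: string): boolean {
  // Real verification would confirm the IP via reverse DNS;
  // matching the user-agent string is only a stand-in here.
  return /Googlebot|GPTBot|ClaudeBot/i.test(userAgent);
}

function check(ip: string, userAgent: string, now: number): Verdict {
  const limit = isVerifiedBot(userAgent) ? LIMITS.verified : LIMITS.unverified;
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    // New caller, or the previous window expired: start a fresh count.
    hits.set(ip, { count: 1, windowStart: now });
    return "allow";
  }
  entry.count += 1;
  return entry.count <= limit ? "allow" : "throttle";
}
```

A throttled response (a 429 with a Retry-After header, say) keeps abuse expensive without cutting off the legitimate crawlers the site was built to serve.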
What it cost
The rebuild paid for itself in the first month. Webflow had been costing more than $400 per month. The new stack costs effectively nothing.
The harder thing to put a number on is the rest of it. The bot problem was unfixable inside the vendor relationship because the vendor couldn't see what was happening either. Owning the stack didn't just save money; it made the fix possible. It also opened up architectural decisions a managed platform would never have allowed: designing for AI agents as readers, structuring the security model around that audience, and building the observability to debug it all.