Most takes on AI and agency work fall into one of two camps: doom scrolling about job loss, or vendor pitches dressed up as insight. Neither reflects what’s actually happening inside a working studio. This is the ground-level version, from someone who has written code professionally for over a decade and runs a small UI/UX, development, and digital marketing agency in Bali.
The honest summary: AI automation didn’t replace agency work. It changed which parts of the work are worth paying for.
## What Actually Compressed, and What Didn’t
Here is the split as it looks from inside the studio. These tasks got dramatically faster:
- First-draft copy and content
- Component scaffolding and boilerplate
- Research synthesis from interview transcripts and competitor scans
- Performance reporting and dashboard prep
- Initial UI variations and design exploration
These tasks barely moved:
- Information architecture decisions
- Edge-case handling in production code
- Stakeholder alignment and discovery
- Brand voice and creative direction
- Judgment calls about what to build next
If your role was 80% in the first list, you’re feeling the pressure. If your role was 80% in the second list, you’re probably busier than before. The first bucket got cheap, so clients now want more strategic work inside the same engagement.
## A Real Project: Before and After
A typical marketing site rebuild two years ago looked like this:
| Phase | Then | Now |
|---|---|---|
| Discovery | 1 week | 1 week |
| Wireframes | 1 week | 2 days |
| Visual design | 2 weeks | 1 week |
| Frontend build | 2 weeks | 1 week |
| QA and launch | 1 week | 3 days |
| Total | ~7 weeks | ~3 weeks |
That’s a 50%+ reduction in calendar time. But the framing is misleading if you stop there. The team didn’t shrink. Two things happened instead: more scope got added inside the same engagement (more iterations, more A/B tests, more polish on the parts users actually touch), and the mix of senior and junior time rebalanced. The hours spent hammering out a navbar mostly disappeared. The hours spent figuring out why an auth flow drops 12% of users expanded.

## The Stack in Daily Use
No buzzwords. Here is what actually runs the work:
### Engineering
- Cursor + Claude for component scaffolding and refactors
- Custom `AGENTS.md` files per project so the AI has context on project conventions
- LibreChat as an internal AI gateway: auditable, multi-model, no data leaking to consumer accounts
- Core stack unchanged: `Node.js`, `Python`, `React`, `Go`
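For a sense of what per-project context looks like, here is a minimal `AGENTS.md` sketch. The contents are illustrative, not the studio’s actual file; the point is that conventions the AI would otherwise guess at are written down once:

```markdown
# Project conventions

- Framework: React 18 + TypeScript, Vite build
- Styling: Tailwind utility classes only; no inline styles
- Components live in src/components/<Feature>/, one component per file
- All API calls go through src/lib/api.ts; never fetch directly in a component
- Tests: every new component ships with at least a render test
```

Kept short on purpose: a long rulebook gets ignored by humans and models alike.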
### Design and Content
- AI for first-pass copy variations against tight briefs (the briefs are the bottleneck, not the model)
- Figma with AI plugins for repetitive layout work
- Human editorial pass before anything reaches the client
### Operations
- Automated weekly client reports: analytics pulled, summary drafted by AI, strategist edits before send
- AI-assisted intake forms that pre-qualify leads before discovery calls
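The weekly reporting flow can be sketched as a small pipeline. The analytics pull and the AI draft are stubbed with fixed values here (function names and figures are hypothetical); in production they would call an analytics API and an LLM gateway. The part that matters is structural: nothing sends until a human flips the approval flag.

```python
from dataclasses import dataclass

@dataclass
class Report:
    client: str
    draft: str
    approved: bool = False  # a strategist must flip this before send

def pull_metrics(client: str) -> dict:
    # Stub: the real version queries the analytics API for the week.
    return {"sessions": 4210, "conversions": 96, "conv_rate": 0.0228}

def draft_summary(metrics: dict) -> str:
    # Stub: the real version prompts a model with the metrics and a brief.
    return (f"Sessions: {metrics['sessions']}, "
            f"conversions: {metrics['conversions']} "
            f"({metrics['conv_rate']:.1%}).")

def prepare_report(client: str) -> Report:
    return Report(client=client, draft=draft_summary(pull_metrics(client)))

def send(report: Report) -> str:
    if not report.approved:
        raise RuntimeError("Not approved by a strategist; refusing to send.")
    return f"Sent weekly report to {report.client}"

report = prepare_report("Acme Co")
report.draft += " Paid search drove most of the lift."  # the human edit
report.approved = True
print(send(report))
```

The gate in `send` is the whole design: AI drafts, a person signs off, and the default path is “blocked,” not “sent.”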
The pattern is consistent across all three areas: AI handles volume, humans handle direction. Reverse that order and you ship garbage faster.
## ⚠️ Four Traps Worth Naming
These are the failure modes that show up most often in agencies struggling with AI adoption:
### 1. Treating AI as a margin grab
The agency cuts internal production time by 60% and keeps charging the same rates without expanding scope or improving outcomes. The client captures none of the benefit. This works for about one renewal cycle, then the client figures it out.
### 2. Automating broken processes
If your discovery process is broken, automating it produces broken discovery faster. AI is a multiplier on whatever’s underneath it. Diagnose the process before you automate it.
### 3. Tool stacking without integration
Some agency pitches list 15+ AI tools. In practice, 13 of them aren’t connected to each other and the team uses 3. What matters is reliable end-to-end workflows, not the size of the logo grid.
### 4. Removing the human in the loop entirely
AI-generated content with no editorial pass is the agency equivalent of shipping `console.log` to production. It mostly works. Until it doesn’t, very publicly.

## The Pricing Problem Nobody Has Solved
Traditional agency retainers are priced on time: N hours per month, defined scope. That model assumes time is the constraint. AI broke that assumption.
If the same output now takes 10 hours instead of 40, there are three options:
- Option A: Charge for 10 hours at the old rate. The client wins, the agency loses 75% of revenue on that engagement.
- Option B: Charge for 10 hours at 4x the rate. The client revolts unless the outcome clearly justifies it.
- Option C: Charge for the outcome. Pipeline generated, conversion improved, ship date hit.
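The arithmetic behind Options A and B is simple enough to spell out. The $150/hr rate below is an illustrative assumption, not the agency’s actual rate:

```python
# Work that used to take 40 hours now takes 10, at a hypothetical $150/hr.
OLD_HOURS, NEW_HOURS, RATE = 40, 10, 150

baseline = OLD_HOURS * RATE       # revenue under the old time-based model
option_a = NEW_HOURS * RATE       # bill fewer hours at the old rate
option_b = NEW_HOURS * RATE * 4   # quadruple the rate to hold revenue

print(f"Baseline: ${baseline}")   # $6000
print(f"Option A: ${option_a}, "
      f"{1 - option_a / baseline:.0%} of revenue gone")  # $1500, 75% gone
print(f"Option B: ${option_b}, revenue held but the rate is now 4x")
# Option C has no hourly formula: price attaches to the measured outcome.
```

Option C doesn’t appear in the math because it decouples price from hours entirely, which is exactly why it’s both attractive and risky.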
The agency has moved selectively toward Option C on engagements where the outcome is measurable and there’s enough signal to predict delivery. It’s better for clients and significantly riskier for the agency. Vague scopes don’t protect anyone in outcome-based pricing.
For engineers thinking about freelancing or starting an agency right now: pick your pricing model early, and price the value, not the hours. The hours metric is becoming structurally misleading.
## What This Means for Developers at Agencies
The most exposed parts of an agency developer’s job are the most templated ones: landing pages with no novel state, CRUD admin panels, glue-code integrations. Those have been collapsing for a while now.
The less exposed parts are the ones that require holding the whole system in your head: API design that anticipates the next three features, refactors that don’t break six other things, performance work, security review, debugging in production. None of that has gotten meaningfully cheaper.
Three practical moves worth making:
- Get fluent with AI tooling, but don’t outsource your judgment to it. The engineers thriving right now treat AI like a fast junior dev: useful, needs supervision.
- Push toward work that requires system-level reasoning. That’s where compensation is heading.
- Learn enough product and business context to participate in scoping conversations. “I just build what’s specced” is a shrinking job description.
## The Takeaway
AI didn’t kill agency work. It killed the cheap version of agency work. What’s left is the part that requires judgment, context, and ownership of outcomes. That’s harder to do and harder to fake. From inside the change, it also looks more sustainable.
The line between judgment and execution shifted. It didn’t disappear.