
The AI Tax: Why Agencies Are Bleeding Margins (And How to Stop It)

  • Writer: Nikolaos Lampropoulos
  • Dec 11, 2025
  • 11 min read

Agencies are currently caught in a value trap: investing heavily in AI capabilities to stay relevant, while simultaneously facing fee compression from clients who expect that technology to lower their bills.


The value you're creating with AI is being immediately captured through pricing pressure, while the costs keep piling up on your P&L. This isn't sustainable, and it's getting worse.


A typical agency employee now juggles subscriptions to ChatGPT Plus, Midjourney, specialized analytics tools, and whatever enterprise platform their holding company mandates. Multiply that by headcount, and you're looking at $200-500 per employee per month in AI costs alone. For a 50-person agency, that's $120,000 to $300,000 a year in subscriptions alone.
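The arithmetic is worth making explicit. A minimal sketch, using the illustrative per-employee figures from the paragraph above (the inputs are assumptions, not benchmarks):

```python
# Back-of-envelope annual AI subscription spend for an agency.
def annual_ai_spend(headcount: int, monthly_cost_per_employee: float) -> float:
    """Total yearly AI subscription cost across the whole agency."""
    return headcount * monthly_cost_per_employee * 12

# Illustrative range for a 50-person agency at $200-500/employee/month
low = annual_ai_spend(50, 200)   # conservative end of the range
high = annual_ai_spend(50, 500)  # top end of the range
print(f"50-person agency: ${low:,.0f} to ${high:,.0f} per year")
```

Running the same function against your own headcount and subscription list is usually the fastest way to surface the real number, since finance teams rarely see these costs aggregated in one place.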


Meanwhile, in pitch after pitch, you're hearing the same refrain: "Your competitors are using AI, so we expect this to cost less." The economics are brutal and backwards. Agencies that don't adopt AI lose pitches. Agencies that do adopt AI see margins compress faster. There's no obvious path forward, and most agencies are making it worse through scattered, reactive technology adoption.


But this isn't inevitable. The problem isn't AI itself—it's how agencies are approaching it. Let's explore a different path.


The Four Failure Modes


1. Strategy Vacuum


Most agencies are adopting AI tactically rather than strategically. There's no operating model that answers fundamental questions:

  • What does our agency do differently because AI exists?

  • Which capabilities become AI-augmented, which become AI-native, and which stay human-led?

  • What's our defendable position in a market where everyone has access to the same tools?


Without this clarity, every department buys tools reactively:

  • Creative team sees a demo of an AI image generator and signs up

  • Strategy team needs help with research and subscribes to a different AI tool

  • Media team finds a new AI optimization platform that will make them stand out

  • Account teams discover new AI client servicing tools that promise to fix broken processes and stop things from falling through the cracks, especially on bigger accounts


The result: You're paying for AI, but you don't have an AI strategy. You have a collection of tools, not a transformation.


2. Silo Proliferation


Walk through your agency and ask people what AI tools they're using. You'll be shocked by the answer—not because there are too few, but because there are too many, and nobody knows the full list.


Creative has Midjourney and Runway. Strategy has ChatGPT Enterprise and Perplexity. Media has three different optimization platforms. Analysts have their own set of tools. Account teams discovered another set. The production team found something else entirely.


Nobody's talking to each other. Nobody's sharing learnings. Nobody's negotiating enterprise deals. Your data can't flow between systems. Shadow IT becomes a threat. Your finance team can't track ROI. Your leadership has no visibility into total AI spend or impact.


You have zero enterprise negotiating power because you're buying retail, one subscription at a time. You're paying full price for capabilities you're probably duplicating across multiple platforms.


3. Random Acts of AI


Agencies are implementing "patches"—isolated AI experiments that solve point problems but don't connect to business outcomes.


A copywriter uses Claude to draft social posts, saving 30 minutes a day. An analyst uses ChatGPT for research synthesis, speeding up deck creation. A strategist uses AI to generate consumer personas more quickly.


These create marginal productivity gains for individuals but don't transform how the agency delivers value or differentiates in the market. They're efficiency improvements that clients immediately capture through scope reduction or pricing pressure. The agency bears the cost, the client captures the benefit, and nothing fundamentally changes about your competitive position.


Meanwhile, the subscription costs accumulate month after month, year after year, without corresponding margin improvement or revenue growth. You're running faster to stay in the same place, except now you're paying for the privilege.


4. No Innovation Discipline


There's no framework for evaluating which AI use cases actually matter. No process for testing hypotheses, measuring impact, deciding what to scale versus sunset, or deploying solutions systematically.


Everyone experiments with everything. Some experiments show promise, but there's no mechanism to scale them. Other experiments clearly fail, but the subscriptions continue because nobody's tracking them. The breakthrough use cases—the ones that could actually transform your business—get lost among the noise.


You need discipline. You need a framework that separates signal from noise, winners from losers, strategic investments from tactical distractions.


The Strategic Framework: From Cost Center to Competitive Advantage


Here's how to turn the AI tax into a strategic advantage through a disciplined, value-focused approach.


Step 1: Define Your AI-Powered Value Proposition


Before buying another tool or renewing another subscription, stop and answer these questions:


What client outcomes improve because we use AI? Not "we're faster"—what specific business results get better? Does their brand strategy become more insight-driven? Do their campaigns perform better? Do they get to market faster with higher quality? If you can't articulate improved outcomes, you're just doing the same work cheaper.


Which of our services become defensibly better with AI versus just incrementally cheaper? This is the crucial distinction. If AI just makes you faster at the same work, clients will demand price cuts. If AI makes your output demonstrably better—more strategic, more data-informed, more effective—you can maintain or increase pricing.


What new offerings can we provide that weren't economically viable before? Real-time creative optimization used to require teams of people. AI-powered predictive modeling was only available to enterprises with data science teams. Brand strategy informed by continuous consumer sentiment analysis was impossible at scale. What can you now offer that creates new revenue streams?


Where does human judgment remain our differentiator versus where AI becomes the product? Not everything should be augmented. Some capabilities—deep strategic thinking, nuanced creative judgment, client relationship management—are human-led with AI as support. Others might become AI-native with human oversight. Be clear about which is which.


Practical Exercise: Map Your Service Portfolio


Create a 2x2 matrix with your current service offerings:

  • X-axis: Human judgment value (low to high)

  • Y-axis: AI augmentation potential (low to high)


This reveals four zones:

Automate Zone (low human judgment, high AI potential): Systematize with AI and redeploy talent to higher-value work. Examples: routine reporting, basic media plans, standard creative formats.


Augment Zone (high human judgment, high AI potential): This is your strategic focus—where AI makes your expertise more valuable, not redundant. Examples: brand strategy informed by AI-analyzed data, creative concepts generated through AI-human collaboration, media planning that combines AI optimization with strategic insight.


Preserve Zone (high human judgment, low AI potential): Keep investing in human excellence. Examples: client relationship management, complex stakeholder navigation, transformational creative thinking.


Evaluate Zone (low human judgment, low AI potential): Question whether you should offer this at all. These are commodity services with no AI advantage and no human differentiation. Either transform them or stop offering them.


This exercise forces clarity about where AI actually creates value in your business model, not just where it creates activity.


Step 2: Build Your Use Case Portfolio


Not all AI use cases are created equal. Stop treating every AI application as equally important. Instead, categorize opportunities across three tiers:


Tier 1: Foundation (Enable the Business)

These are table stakes—the AI capabilities you need to remain competitive but that won't differentiate you in the market. Focus on consolidation and cost efficiency:

  • Content generation acceleration

  • Research synthesis and insight extraction

  • Meeting documentation and follow-up

  • Basic creative ideation support

  • Standard reporting automation


Investment principle: Standardize on enterprise platforms. Negotiate volume pricing. Make these universally available but don't expect major differentiation. These are cost-of-entry capabilities that keep you in the game.


Tier 2: Differentiation (Win More Business)

These are capabilities that make your agency demonstrably better at specific things clients care about and are willing to pay for:

  • Predictive campaign performance modeling

  • Automated creative testing and optimization at scale

  • Real-time audience insight generation

  • Strategic scenario planning with AI-powered analysis

  • Brand health monitoring with AI-synthesized signals

  • Competitive intelligence platforms with continuous tracking


Investment principle: Be selective. Choose 3-5 use cases where AI transforms your delivery quality or speed in ways clients will notice and value. Build deep capability here rather than shallow capability everywhere. These become your competitive advantages—the reasons clients choose you over others.


Tier 3: Innovation (Create New Revenue)

These are net-new offerings that weren't possible or economical before AI:

  • AI-powered brand strategy tools you license to clients

  • Continuous creative optimization services with real-time feedback

  • Predictive media planning platforms

  • Custom AI models trained on client data

  • AI-augmented consumer research methodologies

  • Automated campaign creation with strategic guardrails


Investment principle: Treat these as R&D with defined budgets, timelines, and success metrics. Most will fail. That's expected and healthy. Focus on learning velocity and calculated risk-taking. The winners become new service lines that generate incremental revenue and differentiate your positioning.


Step 3: Implement Enterprise AI Governance


Create a lightweight but disciplined process that prevents the chaos while enabling innovation:


AI Investment Committee (Meets Regularly)


Membership:

  • Representative from each major department (Creative, Strategy, Media, Account)

  • Finance lead to track spend and ROI

  • IT/Operations for technical evaluation and integration

  • Leadership for strategic alignment and final approval


This isn't about creating bureaucracy. It's about creating shared visibility, coordinated decision-making, and accountability for results.


Evaluation Framework


Any AI tool request—whether it's a $10/month individual subscription or a $250K enterprise platform—must answer these six questions:


1. Use Case Clarity: What specific business outcome improves? How do we measure it? "This will save time" isn't enough. "This will reduce pitch deck creation time by 40%, allowing us to respond to 30% more RFPs" is specific and measurable.


2. Strategic Alignment: Does this fit our Tier 1/2/3 portfolio strategy? If it's not in one of these categories, why are we considering it? Are we adding a new strategic focus, or is this scope creep?


3. Build vs. Buy: Is this a commodity capability everyone can access (buy) or a potential differentiator (consider building or heavily customizing)? Building takes longer but creates defensible advantage. Buying is faster but creates no moat.


4. Integration Requirements: Does this connect with our existing systems or create another data silo? Can we get insights out and back into our workflows, or is this a standalone tool that fragments our operations further?


5. Total Cost of Ownership: License cost is just the beginning. What about training? Integration? Maintenance? Support? The real cost is often 2-3x the license fee once you factor in everything required to make it work.


6. Success Criteria: What metrics prove this is working after 90 days? Be specific. "People like it" isn't success criteria. "80% weekly active usage rate and 25% reduction in task completion time" is.


This creates appropriate friction without creating gridlock. Small experiments move fast. Strategic investments get scrutiny.
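The six questions work well as a literal checklist. A sketch of how a committee might encode the gate, with field names that are illustrative rather than prescriptive:

```python
from dataclasses import dataclass

@dataclass
class ToolRequest:
    """One AI tool request, gated on the six evaluation questions."""
    use_case_clarity: bool      # specific, measurable outcome stated
    strategic_alignment: bool   # fits the Tier 1/2/3 portfolio
    build_vs_buy_decided: bool  # commodity vs. differentiator call made
    integrates: bool            # connects to existing systems, not a new silo
    tco_estimated: bool         # full cost beyond the license fee
    success_criteria: bool      # 90-day metrics defined up front

    def approved(self) -> bool:
        # All six questions must have a credible answer before approval.
        return all(vars(self).values())

# Example: a request that has everything except an integration plan
request = ToolRequest(True, True, True, False, True, True)
print("Approved" if request.approved() else "Needs work")
```

In practice the answers are judgment calls, not booleans, but forcing a yes/no on each question before the committee meets keeps the discussion focused on the weak answers.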


Step 4: Test, Measure, Decide


Stop letting AI tools accumulate without accountability. Implement a disciplined innovation cycle that treats every AI investment as a hypothesis to be validated or disproven:


Phase 1: Hypothesis

  • Define the problem clearly: Not "we need to be faster" but "pitch deck creation takes 40 hours and delays our response time, causing us to decline 30% of RFP opportunities."

  • State the expected outcome: "This tool will reduce pitch deck creation to 15 hours, allowing us to respond to 95% of RFP opportunities."

  • Identify success metrics: Usage rate, time savings, quality scores, win rate impact—whatever matters for this specific use case.

  • Determine minimum viable test: What's the smallest experiment that can validate or disprove the hypothesis? Usually 5-10 users, 30-60 days.

  • Get committee approval for pilot budget and timeline.


Phase 2: Pilot

  • Limited user group: Carefully selected power users who will give honest feedback and push the tool to its limits.

  • Structured feedback collection: Weekly surveys, bi-weekly interviews, continuous usage tracking. Don't rely on informal feedback or anecdotes.

  • Quantitative impact measurement: Track the metrics you defined. If you said it would save 25 hours, prove it saved 25 hours. Time tracking, output measurement, quality assessments—measure what matters.

  • Weekly check-ins with the executive sponsor to address issues, make adjustments, and maintain momentum.


Phase 3: Evaluation

Honest, data-driven assessment:

  • Did we hit success criteria? Yes/no for each metric. Be rigorous. "Sort of" isn't good enough.

  • What was the actual cost? Include license fees, training time, integration work, support burden, opportunity cost of user time.

  • Is this scalable? What works for 10 people might break for 100. Does the value persist at scale, or were we just measuring novelty effects?

  • Does ROI justify broader deployment? Calculate the total investment versus the measured value creation. Is the payback period acceptable?
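
The payback question at the end of Phase 3 reduces to simple arithmetic. A sketch, with hypothetical figures rather than numbers from any real pilot:

```python
# Payback period for an AI tool: months until measured value covers the investment.
def payback_months(total_investment: float, monthly_value: float) -> float:
    """Total investment includes license, training, integration, and support."""
    if monthly_value <= 0:
        return float("inf")  # never pays back; a clear sunset signal
    return total_investment / monthly_value

# e.g. $30K all-in investment against $5K/month of measured time savings
# and avoided outsourcing (illustrative numbers)
print(f"Payback: {payback_months(30_000, 5_000):.1f} months")
```

Whether a given payback period is acceptable is a leadership call, but computing it consistently for every pilot makes the scale/iterate/sunset decision far less political.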


Phase 4: Decision

Make one of three calls:

Scale: Roll out enterprise-wide with proper training, integration, and support. Negotiate better pricing for volume. Create enablement materials. Track ongoing usage and impact. This becomes part of your operating model.


Iterate: Extend the pilot with specific modifications based on learnings. Maybe the tool works but needs different workflows. Maybe it works for some roles but not others. Define what you're testing in the iteration and set a timeline for the next decision point.


Sunset: Kill it decisively and reallocate the budget. This isn't failure—it's learning. Document the learnings so you don't repeat the mistake.


Critical mindset shift: Celebrate killing bad ideas as much as scaling good ones. The point isn't to have a 100% success rate. The point is learning velocity—rapid experimentation, clear measurement, honest evaluation, and decisive action.


Most agencies keep tools running indefinitely because nobody wants to admit the experiment didn't work. That's how you end up with scattered subscriptions delivering marginal value.


Step 5: Track the Four Value Dimensions


Create a monthly dashboard that monitors AI impact across four dimensions. This becomes your single source of truth for AI ROI and guides future investment decisions.


1. Cost Savings


Track direct cost reductions:

  • Third-party vendor cost reduction: Replacing stock photo subscriptions with AI generation, reducing outsourced research, eliminating commodity creative production vendors.

  • Decreased reliance on outsourced services: Work you used to send to freelancers or production shops that you now handle in-house with AI augmentation.

  • Lower recruitment needs through productivity gains: If AI-driven productivity means you need 45 people instead of 50 to deliver the same revenue, that's $500K+ in avoided costs.


Calculate this monthly. Be specific. "We saved money" is meaningless. "We reduced stock asset costs by $2,400/month and eliminated $8,000/month in basic design outsourcing" is actionable.


2. Productivity Gains


Measure efficiency improvements:

  • Time saved on defined tasks: Use time tracking data. How long did pitch decks take before? How long now? How much time does research synthesis take with AI versus without?

  • Increased throughput: Campaigns delivered per FTE, clients served per account person, creative concepts generated per designer. Volume metrics matter.

  • Reduced turnaround times: Pitch prep time, campaign execution cycles, client deliverable creation. Speed creates competitive advantage when it doesn't compromise quality.


Critical question: Are productivity gains being captured as margin improvement or given away through client pricing pressure? Track this ruthlessly. If you're 30% more efficient but clients are paying 30% less, you haven't improved your economics—you've just maintained them while adding AI costs.
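The "captured versus given away" question can be checked with a simple before/after margin comparison. A sketch using illustrative numbers that mirror the 30%-efficient, 30%-cheaper scenario above:

```python
# Compare agency margin before and after AI adoption under fee pressure.
def margin(revenue: float, cost: float) -> float:
    """Margin as a fraction of revenue."""
    return (revenue - cost) / revenue

before = margin(100_000, 70_000)  # 30% margin on a hypothetical engagement
# After: a 30% efficiency gain cuts delivery cost, but the client demands
# 30% lower fees, and AI subscriptions add $2K of new cost (all assumed figures).
after = margin(100_000 * 0.7, 70_000 * 0.7 + 2_000)
print(f"Before: {before:.1%}  After: {after:.1%}")
```

In this scenario margin actually falls despite the efficiency gain, which is exactly the trap the paragraph above describes: the client captured the productivity, and the agency kept the subscription bill.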


3. Revenue Enhancement


Measure top-line impact:

  • Win rate improvement: Are you winning more pitches? Can you attribute any of that to AI-powered capabilities? Track pitches where AI tools were featured versus not featured.

  • New service revenue: Tier 3 innovations that become billable offerings. AI-powered brand tracking, continuous creative optimization, predictive analytics services—track revenue from offerings that didn't exist before.

  • Expansion revenue: Existing clients buying more because of AI capabilities. Upsells, cross-sells, scope expansions that happen because you can now offer things you couldn't before.

  • Premium pricing: Are clients willing to pay more for AI-augmented services? Track pricing premiums on proposals that feature AI capabilities.


This is where the real transformation happens. Cost savings and productivity gains are defensive. Revenue enhancement is offensive—it's how you turn AI from a cost burden into a growth engine.


4. Strategic Positioning


Harder to quantify but track qualitatively through quarterly reviews:

  • Are we being invited to different kinds of conversations? Are clients asking us about innovation, transformation, advanced capabilities versus just execution?

  • Are clients asking for our AI capabilities specifically? Do RFPs mention our AI tools or methodologies? Are we winning because of AI, not despite the cost?

  • Are we attracting different caliber talent? Do job candidates mention AI capabilities as a draw? Are we able to recruit from tech companies or other innovation-forward agencies?

  • Are we being featured in industry conversations? Speaking opportunities, press mentions, case study requests, partnership inquiries—signals that we're seen as a leader.

  • Are competitors following our moves? If agencies are copying your AI strategy, you're leading the market.


This dimension doesn't show up in the monthly P&L, but it determines your long-term position in the market.


The Strategic Choice


The AI tax is real, but it's not inevitable. It's the cost of reactive technology adoption without strategic intent.


Agencies that continue adopting AI this way—scattered tools, no coordination, no measurement, no strategy—will see margins compress further as costs accumulate without corresponding value capture. The math doesn't work, and hope isn't a strategy.


Agencies that approach AI strategically will transform it from a cost burden into a competitive moat:

  • Clear operating models that define what changes and what doesn't

  • Disciplined investment frameworks that separate strategic bets from tactical distractions

  • Rigorous measurement that proves value creation and guides resource allocation

  • Portfolio management that balances foundation, differentiation, and innovation

  • Governance structures that enable fast experimentation with appropriate oversight


The question isn't whether to invest in AI. You're already doing that, probably more than you realize when you add up all the subscriptions across your organization.


The question is whether that investment is strategic or accidental, coordinated or fragmented, measured or assumed, value-creating or cost-generating.


The agencies that figure this out—that approach AI as a strategic transformation rather than a technology adoption exercise—will define the next decade of the industry.


The rest will become cautionary tales about technology adoption without strategic thinking, about cost accumulation without value creation, about running faster while losing ground.


Which kind of agency will you be?
