
THE CONTROL ROOM

Where strategic experience meets the future of innovation.

OpenAI Advertising in ChatGPT: What Sam Altman's "Last Resort" Reveals About AI Unit Economics

  • Writer: Tony Grayson
  • 26 min read

By Tony Grayson, President & GM of Northstar Enterprise + Defense | Built & Exited Top 10 Modular Data Center Company | Top 10 Data Center Influencer | Former SVP Oracle, AWS & Meta | U.S. Navy Nuclear Submarine Commander | Stockdale Award Recipient


Published: January 21, 2026


[Illustration: an AI brain surrounded by frontier-model inference data, with advertising billboards reading "Buy Now," "Sponsored," and "Advertisement" crowding below; data centers and power transmission towers anchor the scene. Title: "The Agent Paradox: Truth vs. Transaction."]

The Agent Paradox: AI's frontier-model ambitions collide with advertising economics. When the business model shifts from truth engine to transaction engine, the product follows. Data centers and power infrastructure form the foundation, while sponsored content crowds the path forward.



Key Takeaways

"Twenty months ago, Sam Altman called advertising in AI 'uniquely unsettling' and a 'last resort.' Last week, OpenAI announced ads are coming to ChatGPT. The market is forcing consumer-internet economics onto frontier-model inference earlier than most expected—and it has massive implications for everyone building AI infrastructure." — Tony Grayson, President & GM, Northstar Enterprise + Defense

What Does OpenAI's Advertising Announcement Mean?

J.P. Morgan's analysis says the AI industry needs $650 billion in annual revenue just to deliver a 10% return on infrastructure investments through 2030. That's 0.58% of global GDP. The equivalent of $35/month from every iPhone user. Forever.

OpenAI's turn to advertising isn't a business model choice—it's a signal that subscriptions can't cover the cost of free-tier inference at scale. For those of us building the physical layer, this changes how we underwrite demand.


Why Should Infrastructure Investors Care About ChatGPT Ads?

The Mission: Understand what OpenAI's ad pivot signals about AI business model sustainability—and what it means for infrastructure.

The Reality: We suffer from what psychologist Ellen Langer called the "illusion of control"—the tendency to overestimate our ability to influence events. AI investors structure their models around hockey-stick revenue projections when the underlying economics remain unproven. We assume demand will materialize to justify $5+ trillion in infrastructure spending. We believe the revenue curve will be "up and to the right."

"The illusion of control is the most expensive cognitive bias in infrastructure investing. We build to demand curves that exist only in pitch decks." — Tony Grayson

The Tactical Takeaway: The CEOs themselves are warning us. When Satya Nadella tells his own employees he's "haunted" by the fate of Digital Equipment Corporation, and Sundar Pichai tells the BBC that "no company is going to be immune" if the bubble bursts, we should listen. The business model you choose shapes the product you end up building.


What Did OpenAI Actually Announce About Advertising?

I just got back from PTC (Power, Thermal, and Cooling) in Dallas. The conversations on the floor were all megawatts and fiber pairs. The big hyperscalers were there talking about power procurement, cooling innovation, and the race to bring capacity online.

But the quietest conversation in the hallways was monetization.


On the same day the data center industry was discussing how to add 122 gigawatts of capacity by 2030, OpenAI announced it will begin testing ads in ChatGPT. The rollout is limited: logged-in U.S. adults on Free and Go tiers only, with paid subscriptions remaining ad-free. Ads will appear as sponsored placements below responses in commercial-intent contexts—product comparisons, shopping, and service recommendations—and OpenAI has stated that ads will be clearly separated from answers and won't influence model outputs. Sensitive categories are excluded.

Twenty months ago, Sam Altman stood on stage at Harvard and called advertising in AI "uniquely unsettling," a business model he described as a "last resort."


"Ads plus AI is sort of uniquely unsettling to me," Altman said in May 2024. "I kind of think of ads as a last resort for us for a business model."


Last week, that last resort became a real lever—earlier than most people expected.

To be clear: ads don't necessarily signal failure. Spotify, YouTube, and Hulu all use ad-supported tiers to capture price-sensitive users while upselling power users to premium subscriptions. That's standard price discrimination, not desperation.


But the AI case is different in one critical respect: marginal cost. When you scroll through Spotify, the incremental server cost is essentially zero. When you query ChatGPT, every response requires real compute. The free tier isn't free to serve—and those costs grow with usage. That's why the timing of this announcement matters more than the announcement itself.

How Much Money Is OpenAI Actually Losing?

OpenAI's CFO Sarah Friar has cited an annualized revenue run-rate north of $20 billion. That sounds impressive until you look at the cost curve.


Depending on which source you use, 2026 is framed as roughly $14 billion of losses or closer to $17 billion of cash burn. Different accounting lenses, same message: scale is outrunning comfortable subsidy. The Financial Times described OpenAI as an "era-defining money furnace."


The company continues raising capital at unprecedented scale, including a $40 billion round that valued it at $300 billion. It has committed to $1.4 trillion in infrastructure spending over the next eight years.

But here's what the fundraising obscures: ads this early signal that free-tier demand is outgrowing what subscriptions and API revenue can comfortably subsidize at current unit costs.


Why Are AI Inference Economics So Different from Social Media?

Yes, ads can be additive. But only when marginal cost per user is near zero.

Inference isn't.

Traditional platforms monetize idle attention. You scroll through Instagram, and Meta pays almost nothing for that incremental scroll. The server cost is trivial. The attention is the product.

AI inference is different. Every query costs real money. Every response requires compute. Unlike social media, where switching means abandoning your social graph, AI chatbots have relatively low switching costs. Yes, users have saved history, custom instructions, and workflow integrations—but these create friction, not lock-in. Moving from ChatGPT to Claude or Gemini takes minutes, not months. The marginal cost per query doesn't vanish at scale—it just gets cheaper more slowly than you'd hope.


OpenAI previously said ChatGPT had roughly 500 million weekly users. Recent reporting puts that figure as high as 800 million weekly actives. But only ~35 million are paying subscribers—under 5%—meaning the free tier dominates. Unlike social media, where incremental engagement costs almost nothing, every incremental AI session carries real inference COGS.
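The subsidy arithmetic in that paragraph can be sketched directly. The user counts come from the reporting above; the per-user serving cost is a purely hypothetical input:

```python
# Free-tier subsidy sketch. User counts are from the article;
# the weekly cost per free user is a hypothetical assumption.
weekly_actives = 800_000_000
paying_subs = 35_000_000
free_users = weekly_actives - paying_subs

paying_share = paying_subs / weekly_actives
print(f"Paying share: {paying_share:.1%}")   # ~4.4% of the 800M figure

# Hypothetical: if serving one free user costs $0.50/week in inference,
# the free tier alone burns roughly this much per year.
cost_per_free_user_week = 0.50
annual_free_tier_cost = free_users * cost_per_free_user_week * 52
print(f"Annual free-tier cost: ${annual_free_tier_cost / 1e9:.1f}B")
```

At even $0.50 per free user per week, the free tier implies a burn in the tens of billions annually—which is why the timing of monetization matters.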

"Social media monetizes idle attention at near-zero marginal cost. AI inference monetizes active compute at real marginal cost. That's not a difference in degree. That's a difference in kind." — Tony Grayson

Ads and commerce are the only mass-scale consumer lever once you've hit the "$0 baseline" expectation.


Can't Ads Just Be Additive Revenue?

The obvious counterargument: ads don't mean subscriptions have failed. They're incremental revenue on top of a working subscription business.


That's true in principle. But three things make the AI case different:


Timing matters. Ads are arriving while inference unit costs are still high. If OpenAI had waited until costs dropped 90% (through custom silicon, architectural improvements, or scale), ads would be pure margin. Introducing them now, while COGS per query remains elevated, suggests the free tier's cash burn is uncomfortable.


Incentive gradient matters. Ad-supported products optimize for engagement and conversion. Subscription products optimize for utility and retention. These aren't always aligned. The product that maximizes ad revenue may not be the product that maximizes user value—and in AI, where the promise is "impartial assistant," that tension is sharper than in entertainment.


Demand quality matters to infrastructure investors. When your tenant's revenue model shifts from subscriptions (predictable, recurring) to advertising (seasonal, campaign-dependent, substitution-prone), your demand forecasts need recalibration. This isn't a judgment about OpenAI's business—it's a statement about how lenders and investors underwrite capacity.


What Does J.P. Morgan Say About AI Infrastructure Returns?

J.P. Morgan's analysis on AI infrastructure spending deserves attention:

"Big picture, to drive a 10 percent return on our modeled AI investments through 2030 would require ~$650 billion of annual revenue into perpetuity, which is an astonishingly large number. But for context, that equates to 58bp of global GDP, or $34.72/month from every current iPhone user, or $180/month from every Netflix subscriber."

To put that in perspective: there are 1.5 billion iPhone users globally. There are 300 million Netflix subscribers. The math requires extracting unprecedented value from a consumer base that increasingly expects AI to be free.
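J.P. Morgan's per-user figures can be reproduced from the $650 billion requirement. The global GDP and iPhone user counts below are back-solved assumptions implied by the quoted "58bp" and "$34.72/month" numbers, not independent data:

```python
# Reproducing J.P. Morgan's back-of-envelope numbers from the quote above.
required_annual_revenue = 650e9   # $650B per year, into perpetuity

global_gdp = 112e12               # assumption implied by the "58bp" figure
gdp_share = required_annual_revenue / global_gdp
print(f"Share of global GDP: {gdp_share:.2%}")          # ~0.58%

iphone_users = 1.56e9             # assumption implied by "$34.72/month"
per_iphone_user_month = required_annual_revenue / (iphone_users * 12)
print(f"Per iPhone user: ${per_iphone_user_month:.2f}/month")

netflix_subs = 300e6
per_netflix_sub_month = required_annual_revenue / (netflix_subs * 12)
print(f"Per Netflix subscriber: ${per_netflix_sub_month:.2f}/month")
```

The three framings are the same number divided three ways—each one makes the extraction problem look implausible from a different angle.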


The report explicitly warns: "Our biggest fear would be a repeat of the telecom and fiber buildout experience, where the revenue curve failed to materialize at a pace that justified continued investment."


That warning deserves a deeper look.


What Can the Telecom Bust Teach Us About the AI Bubble?

Between 1996 and 2001, telecommunications companies invested more than $500 billion—mostly financed with debt—into fiber optic infrastructure. The thesis: internet traffic was doubling every 100 days, and whoever built the infrastructure would own the future.


The thesis was right about demand. It was wrong about timing.


The collapse between 2000 and 2002 erased more than $2 trillion in market value. The most instructive example is WorldCom, which went from a $150 billion market cap in January 2000 to bankruptcy by July 2002—a 99.9% decline. Executives had inflated revenues through capacity-swap schemes that masked the fundamental problem: supply had outrun the revenue curve. By 2004, analysts estimated only about one-tenth of installed fiber was actually "lit." The other 90% sat dark.


Here's the critical point: demand didn't evaporate. It grew. Internet traffic continued expanding throughout the bust. The failure mode wasn't demand collapse—it was capital structure versus revenue timing. The debt couldn't wait for the demand curve to catch up.


The eventual beneficiaries weren't the builders. Netflix, Google, and the cloud providers of the 2010s built their empires on infrastructure acquired for pennies on the dollar from bankrupted carriers. The fiber got used—just under new ownership, at fire-sale prices, a decade later.

"The telecom bust didn't destroy the infrastructure. It destroyed the investors who built it. That's the template AI infrastructure investors should study." — Tony Grayson

J.P. Morgan's report isn't invoking telecom casually. They're pointing to a specific failure mode where the revenue curve never materialized at a pace that justified continued investment. If AI follows the same pattern, the winners won't be the companies spending $5 trillion on data centers today. They'll be the companies that buy those data centers out of bankruptcy in 2030.


Are Tech CEOs Warning About an AI Bubble?

The warnings aren't coming from bears anymore. They're coming from the people running the companies.

At the World Economic Forum on January 20, 2026, Microsoft CEO Satya Nadella sat down with BlackRock's Larry Fink and delivered a surprisingly candid assessment:

"A telltale sign of if it's a bubble would be if all we are talking about are the tech firms."

Nadella isn't describing a hypothetical. He's describing current conditions. At the same forum, PwC released its 29th Global CEO Survey, showing that only 10-12% of companies reported seeing benefits from AI on the revenue or cost side, while 56% reported getting nothing from it. That follows an even more damning finding from August 2025: 95% of generative AI pilots were failing.

J.P. Morgan's own analysis found that AI-related investment contributed roughly 1.1 percentage points to GDP growth in the first half of 2025—but that growth came from capex, not productivity gains. Tech firms buying GPUs from other tech firms. That's the circularity Nadella is warning about.

And OpenAI's turn to advertising is the tell: if enterprises were monetizing AI at scale, consumer ads wouldn't be necessary. Reaching for the lowest-margin, highest-churn revenue model suggests the enterprise demand curve isn't steep enough to cover the free tier.

"When 95% of AI pilots fail and the response is to sell ads to consumers, that's not a growth strategy. That's a survival strategy." — Tony Grayson

The other CEOs see it too.


Google CEO Sundar Pichai told the BBC in November 2025:

"I think no company is going to be immune, including us."

Pichai acknowledged there was "irrationality" behind the boom in artificial intelligence investment. He warned of the "immense" energy requirements of AI, which could reach 200 gigawatts globally by 2030—roughly the equivalent of Brazil's entire annual electricity consumption.

In an employee town hall in September 2025, Nadella invoked the cautionary tale of Digital Equipment Corporation, a computing giant of the 1970s that vanished by the 1990s:

"Our industry is full of case studies of companies that were great once, that just disappeared. I'm haunted by one particular one called DEC."

When Nadella tells his own people he's "haunted" by DEC, he's signaling that incumbency won't protect you in an architecture shift.


One VC who invested in an OpenAI rival put it more bluntly to The Economist: "This is the WeWork story on steroids."


These aren't bears talking their book. These are the executives responsible for deploying capital at scale.


Is AI Becoming a Commodity? What About DeepSeek and Open Source?


For most consumer tasks, perceived quality differences between frontier models are narrowing. Google's Gemini, Microsoft's Copilot, Meta's Llama, and Chinese alternatives like DeepSeek all offer capable free tiers or open-source access.


How Much Cheaper Is DeepSeek Than OpenAI?

DeepSeek has rapidly scaled into tens of millions of users, demonstrating that price elasticity in AI is real. The pricing comparison is stark:

Provider          | Input (per 1M tokens) | Output (per 1M tokens)
------------------|-----------------------|-----------------------
DeepSeek V3       | $0.28                 | $0.42
DeepSeek R1       | $0.55                 | $2.19
GPT-4o            | $3.00                 | $10.00
Claude 3.5 Sonnet | $3.00                 | $15.00

That's a 10x cost advantage at the API level. When a capable model costs less, price-sensitive users switch.
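A quick way to see what the table means in practice is to price a hypothetical workload across providers. Note that token mix shifts the multiple—the 10x figure holds on input pricing, and output-heavy workloads widen the gap further:

```python
# Pricing a hypothetical 1M-input + 1M-output token workload at the
# per-token rates from the table above.
prices = {  # provider: (input $/1M tokens, output $/1M tokens)
    "DeepSeek V3":       (0.28, 0.42),
    "DeepSeek R1":       (0.55, 2.19),
    "GPT-4o":            (3.00, 10.00),
    "Claude 3.5 Sonnet": (3.00, 15.00),
}

def workload_cost(provider, input_mtok=1.0, output_mtok=1.0):
    """Total API cost for a workload measured in millions of tokens."""
    cost_in, cost_out = prices[provider]
    return cost_in * input_mtok + cost_out * output_mtok

for provider in prices:
    print(f"{provider}: ${workload_cost(provider):.2f}")

ratio = workload_cost("GPT-4o") / workload_cost("DeepSeek V3")
print(f"GPT-4o vs DeepSeek V3 on this mix: {ratio:.0f}x")
```

On an even input/output mix, the gap is closer to 19x than 10x—which is the point: for price-sensitive users, the switching incentive only grows with usage.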


What Does the Open-Source Explosion Mean for OpenAI?

The open-source AI landscape has shifted materially. Total model downloads have moved from U.S.-dominant to China-dominant, with DeepSeek, Qwen, and other Chinese labs leading in global adoption.


The strategic implication is straightforward: if your differentiation depends on raw model capability, you're increasingly competing with low-cost alternatives. Open-source and Chinese models now perform competently on most consumer tasks. The moat isn't the model anymore—it's distribution, trust, regulatory compliance, and unit economics.

"If your differentiation depends on raw model capability, you're competing with free. The moat isn't the model anymore. It's distribution, trust, and unit economics." — Tony Grayson

OpenAI's response—advertising—acknowledges this reality. They can't win on price against DeepSeek. They can't win on open-source distribution against Meta. Their lever is consumer brand recognition and the hope that habit creates switching costs.


One important caveat: U.S. enterprises and government contractors likely cannot use DeepSeek due to data export laws, security compliance requirements (SOC 2/FedRAMP), and trade restrictions. This regulatory firewall creates an artificial floor for OpenAI and Microsoft pricing in enterprise and government markets—but it doesn't protect the consumer tier where ads are being introduced.



Why Don't AI Chatbots Have Real Switching Costs?

The moat is shifting from model weights to distribution, UX, trust, and unit economics per token.


For casual users, switching costs are essentially zero. Unlike abandoning your Facebook social graph, trying a different chatbot takes 30 seconds.


Enterprise is different: integrations, security reviews, workflows, and procurement cycles create real friction. But consumer AI? The baseline expectation is increasingly $0.

Here's the uncomfortable truth for OpenAI: they don't own a platform default the way Apple, Google, or Microsoft do. ChatGPT's distribution sits atop others' surfaces. They have a destination, not a default.


How Does OpenAI Advertising Change Infrastructure Investment?

For those of us building the physical layer, the questions are getting more specific.


How Does Ad-Driven Demand Differ from Enterprise Contracts?

Capex decisions increasingly depend on unit economics per query, not hype cycles. When your customer's business model shifts from subscriptions to advertising, your contracted demand forecasts need recalibration.

Ad-driven demand runs differently than enterprise contracts:

  • Seasonal variation: Commercial intent spikes around holidays, product launches, shopping seasons

  • Campaign dependency: Demand tied to marketing budgets, not organic growth

  • Substitution vulnerability: Users leave when experience degrades

If demand ties to ad RPM and consumer churn, lenders underwrite the physical layer with shorter commitments, more optionality, and tighter take-or-pay. Not 10-year utility-style assumptions.


Can Advertising Actually Cover AI Inference Costs?

Let's make the numbers concrete.

Google's traditional search economics generate approximately $0.0161 revenue per query with a cost of about $0.0106 per query—a healthy 34% operating margin on the Services business unit. That's a system optimized over two decades with massive measurement and attribution infrastructure.


Early 2023 estimates put ChatGPT's cost at roughly $0.36 per query on basic tasks. On frontier reasoning models like o1 Pro, costs scale dramatically higher—OpenAI has acknowledged that some power users of the $200/month ChatGPT Pro subscription are running complex reasoning chains that cost more to serve than the subscription generates. The exact cost-per-query varies widely by model and task complexity, but the direction is clear: advanced reasoning is expensive.


Even with efficiency improvements, we're talking about costs that are 10-100x higher than traditional search per user interaction.
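The comparison can be made explicit with the figures quoted above. The 2023 ChatGPT cost estimate is rough and dated, so treat the multiple as directional rather than precise:

```python
# Checking the search-economics figures from the paragraphs above.
rev_per_query = 0.0161    # Google Services revenue per search query
cost_per_query = 0.0106   # estimated cost per search query

margin = (rev_per_query - cost_per_query) / rev_per_query
print(f"Operating margin on search: {margin:.0%}")   # ~34%

# Early-2023 estimate for a basic ChatGPT query; frontier reasoning
# models run far higher, so this is a floor, not a typical value.
chatgpt_cost_2023 = 0.36
cost_multiple = chatgpt_cost_2023 / cost_per_query
print(f"ChatGPT vs search cost per interaction: {cost_multiple:.0f}x")
```

Even this conservative comparison lands squarely in the 10-100x range: AI inference starts from a cost base more than thirty times search's, before any reasoning-model premium.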

For ads to cover inference costs, you need either:

  1. Massive ad load: If inference costs $0.10/query and ad RPM runs at $0.02/query (generous for a nascent format), you need 5 ads per response to break even. That's a user experience disaster.

  2. Premium pricing: Advertisers pay 10-20x search rates for conversational placement. Possible for some categories, but not at scale.

  3. Radical cost reduction: Inference costs must drop 90%+ to approach search economics. Hardware efficiency gains alone won't get there.
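The break-even arithmetic in option 1 is a ceiling division. Both inputs are the article's illustrative assumptions, worked in cents to avoid floating-point artifacts:

```python
def ads_to_break_even(cost_cents_per_query, rpm_cents_per_ad):
    """Ads per response needed for ad revenue to cover inference cost.
    Integer ceiling division (-(-a // b)) avoids float rounding issues."""
    return -(-cost_cents_per_query // rpm_cents_per_ad)

# The article's illustrative numbers: $0.10/query inference, $0.02/ad RPM.
print(ads_to_break_even(10, 2))   # 5 ads per response

# If inference costs fell 90% while RPM held, the ad load becomes tolerable.
print(ads_to_break_even(1, 2))    # 1 ad per response
```

The function makes the dependency obvious: the ad load scales linearly with inference cost, so every claim about ad viability is really a claim about the cost curve.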

"The math on AI advertising doesn't work unless you believe inference costs drop 90% or advertisers pay 10x search rates. Neither assumption survives contact with reality." — Tony Grayson

The math suggests ads are supplementary revenue, not a foundation. They help cover the free tier's cash burn but don't solve the structural unit economics problem.


How Are Lenders Underwriting AI Data Center Deals Differently?

The financing environment has shifted materially in the past 18 months. What lenders are actually underwriting now:


Contract quality matters more than ever. Lenders underwrite against long-dated, contracted cash flows from creditworthy hyperscalers. According to Norton Rose Fulbright: "The value that lenders are underwriting is long-dated streams of very high quality cash flows with very strong credit supporting the cash flows."


Speculative builds face scrutiny. Deals built on speculative demand or verbal tenant interest are viewed with caution. "Just because hyperscalers are expanding globally does not mean they will lease a given facility at a given time," warns Mawer Investment Management.


Lease-debt mismatch is a red flag. Some structures have tenants on 3-5 year leases while financing is structured on 10-year terms. If tenants don't renew, this creates significant refinancing and cash flow risk.


Off-balance-sheet structures proliferate. Meta's "Beignet Investor" SPV stacked $30 billion in financing from Pimco, BlackRock, Apollo, and Blue Owl for its Hyperion data center. xAI used an SPV to secure $20 billion for Nvidia GPUs. CoreWeave raised $2.6 billion through an SPV tied to an OpenAI contract. UBS estimates private credit tied to big tech has reached approximately $450 billion, up roughly $100 billion year over year.

The concern: these structures keep liabilities off balance sheets, flattering leverage ratios and ROIC optics, but can hide economic exposure. Lease commitments, take-or-pay contracts, and capacity guarantees can behave like debt under stress. If multiple AI SPVs mark down simultaneously, hitting private credit funds with correlated exposures, contagion becomes real.


Oaktree's Howard Marks questions the yields: spreads sometimes only 100 basis points above Treasuries. "Is it prudent to accept 30 years of technological uncertainty to make a fixed-income investment that yields little more than riskless debt?"


How Does Ad-Driven Demand Change Infrastructure Underwriting?


For those of us building or financing the physical layer, an ad-supported demand profile looks different from an enterprise-contract profile:

  • Demand becomes RPM × session-mix sensitive. Growth in free-tier users is no longer unambiguously good news—it's a COGS liability unless ad monetization scales proportionally.

  • Seasonality increases. Commercial intent spikes around holidays, product launches, and shopping seasons. Q4 looks different from Q1.

  • Campaign dependence creates volatility. Demand tied to marketing budgets is more cyclical than demand tied to enterprise workflows.

  • Substitution risk rises. Unlike enterprise integrations with procurement cycles and security reviews, consumer users are one tab away from switching to a competitor.

  • Contract terms tighten. Lenders underwriting against ad-driven demand want shorter commitments, more optionality, and tighter take-or-pay structures—not 10-year utility-style assumptions.


None of this means OpenAI's business is failing. It means the demand signal is different, and infrastructure underwriting should reflect that.


Is There Enough Power for AI Data Centers?

And a physical wall hits before the financial one.

We aren't just running out of capital runway. We're running out of power.


How Big Is the Data Center Power Queue?

ERCOT (Texas) is now tracking approximately 226 gigawatts of large load interconnection requests as of November 2025—nearly quadruple the 63 GW reported at the end of 2024. More than 70% of these requests are from data centers. Many individual projects exceed 1 GW per site—the equivalent of each project asking for half the power produced by Hoover Dam.


For context: ERCOT's peak demand record was 85.5 GW in August 2023. The queue represents 2.6x that all-time peak. Obviously, not every request will materialize—but even 20% conversion would overwhelm the grid.
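The queue-versus-peak math is straightforward; the 20% conversion rate is an assumption for illustration, not a forecast:

```python
# Sizing the ERCOT queue against the grid it would connect to.
queue_gw = 226          # large-load interconnection requests, Nov 2025
peak_record_gw = 85.5   # ERCOT all-time peak demand, Aug 2023

ratio = queue_gw / peak_record_gw
print(f"Queue vs all-time peak: {ratio:.1f}x")   # ~2.6x

# Even a modest conversion rate is enormous relative to the existing system.
conversion = 0.20       # assumed share of requests actually built
built_gw = queue_gw * conversion
print(f"At 20% conversion: {built_gw:.0f} GW of new load")
```

Even the conservative case adds more than half of ERCOT's record peak in new load—capacity that takes years of generation and transmission buildout to serve.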


CAISO (California) has more than 400 GW of projects sitting in its interconnection queues, and California is implementing new prioritization processes to separate "ready-to-build" projects from speculative applications.


What Does "Connect and Manage" Mean for Data Center Operators?

PJM Interconnection just announced a comprehensive framework for integrating large loads—including data centers—that includes "connect and manage" approaches. Translation: you can't just buy power. You have to accept curtailment during system stress or bring your own new generation.

For data center operators, this means:

  • Curtailment exposure: Your facility may be required to reduce load during grid emergencies. That's operationally devastating for AI training workloads.

  • Own-generation requirements: Building or contracting for dedicated power supply, not just grid connection.

  • Financial commitments: The Public Utility Commission of Texas is considering requirements of $3,000 per MW of requested capacity in interim security deposits, plus $150,000-$300,000 interconnection study fees.

"The grid is no longer a utility you plug into. It's a constraint you build around. That changes everything about site selection, capital structure, and timeline." — Tony Grayson

Why Do Power Lead Times Make the AI Buildout Impossible?

Current lead times for natural gas turbines have ballooned to three to four years. Large power transformers are running 120-210 weeks (that's 2-4 years just for transformers). Nuclear plants have historically taken over a decade to build.

Adding 122 GW of power by 2030—J.P. Morgan's base case for AI data centers—requires infrastructure decisions that should have been made years ago. The turbines ordered today won't spin up until 2028-2029 at the earliest.


Grid Strategies' 2025 National Load Growth Report notes that data center market analysts indicate growth is unlikely to require more than 65 GW through 2030—significantly less than the 90 GW linked to data centers in utility forecasts. Either the timing or the magnitude of submitted load forecasts overstates demand. This "phantom load" phenomenon—speculative applications that may never materialize—creates its own planning challenges.


The grid is becoming the binding constraint. Power availability—not land, not demand, not capital—will determine where and how AI infrastructure gets built. The grid is no longer a utility you plug into. It's a constraint you build around.


What Can xAI's Memphis Project Teach Us About AI Power Claims?

Consider xAI's Memphis buildout. Musk talks about "2 gigawatts" and "the most powerful AI system on Earth." The filings tell a different story.


TVA approved 150 MW. MLGW's public notice states that power options "from 260 MW to 1.1 GW" have been discussed, not contracted. Solaris Energy Infrastructure's SEC disclosures show approximately 460 MW of turbines currently installed or under construction, with 1.1 GW not expected until Q2 2027.

The gigawatt that makes headlines is aspirational capacity, not power flowing to racks.

"When someone says 'contracted for 1GW,' it often means 100MW firm with an option for the rest—if capital, permits, and interconnects materialize. Spoiler: they usually don't." — Tony Grayson

This is the gap between announcement and energization that infrastructure underwriters are learning to price. When someone says "contracted for 1GW," it often means 100MW firm with an option for the rest—if capital, permits, and interconnects materialize.


Spoiler: they usually don't.


The xAI example isn't an outlier. It's the template. Press releases announce gigawatts. SEC filings show megawatts. The delta between aspiration and energization is where capital goes to die.


Why Do ChatGPT Ads Break AI's "Truth Engine" Promise?

The ad model built social media empires, but it also created engagement-at-all-costs incentives that produced misinformation, rage bait, and platform decay.

Search ads work because you're leaving to buy something. The user intent is transactional. You type "best running shoes" and you want to buy running shoes. The ad is aligned with your goal.


AI ads break differently because you're staying to learn something.

If an AI optimizes for showing you ads, it's no longer optimizing for giving you the best answer. The product drifts from "truth engine" toward "transaction engine."

"Search ads work because you're leaving to buy something. AI ads break because you're staying to learn something. The moment the product optimizes for transactions over truth, it stops being useful." — Tony Grayson

Will Users Actually Tolerate Ads in ChatGPT?

For users: Unlike social media, where switching means abandoning your social graph, AI chatbots have almost no lock-in. If ChatGPT shows me an ad and Claude doesn't, I'm one tab away from leaving.


For advertisers: Search intent is clean. AI prompts are messy, conversational, and one hallucination away from a brand safety crisis. AI answers collapse the funnel, making attribution and ROI harder to demonstrate.


Google and Meta spent decades building massive measurement and attribution stacks. Advertisers will hesitate to shift budget to a nascent conversational format without comparable proof of performance.


OpenAI is betting that habit and brand recognition hold users in place, and that advertisers will pay premium rates for an audience that can evaporate the moment the experience degrades.


Does Advertising Arrive Before AI Norms Stabilize?

To be fair, AI ads are normalizing across the category. Perplexity launched sponsored questions. Microsoft runs ads in Copilot. But even if ads become standard, the incentive gradient still matters.


If ads arrive before model differentiation stabilizes, they shape the product while the norms are still forming. That's not a moral failing. It's a product development reality that infrastructure investors need to price in.

What Is the "Agent Paradox" in AI Monetization?

This pivot collides head-on with OpenAI's stated roadmap for "Agentic AI"—models that execute tasks on your behalf.


Ads in search work because the user still makes the final click. But if I ask an AI agent to "book the best flight to London," and that agent is subsidized by Delta, the trust evaporates.


It's extremely hard to build an impartial digital executive assistant that is also a billboard.


Why Does Advertising Create a Fiduciary Conflict for AI Agents?

The legal framework here is illuminating. Under traditional agency law, an agent has a fiduciary duty to the principal—a legal obligation to act in the best interests of the person they serve. This includes the duty of loyalty: all the agent's actions must be for the benefit of the principal, not the agent or third parties.


Consider this scenario from legal scholars: you task an AI agent with purchasing a product for under $500. The agent identifies two sellers with identical products at $425 and $450. If the agent chooses the $450 option because that vendor has a deal with the AI company, the agent has violated the duty of loyalty—even if you never knew the cheaper option existed.
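A toy sketch makes the duty-of-loyalty conflict concrete. The vendors, prices, and ranking rules below are hypothetical illustrations, not any real system's behavior:

```python
# Toy model of the legal scholars' scenario: two identical products,
# one seller has a (hypothetical) sponsorship deal with the AI company.
offers = [
    {"seller": "Vendor A", "price": 425, "sponsored": False},
    {"seller": "Vendor B", "price": 450, "sponsored": True},
]

def loyal_choice(offers, budget):
    """A fiduciary agent: best price for the principal, ignoring sponsorship."""
    eligible = [o for o in offers if o["price"] <= budget]
    return min(eligible, key=lambda o: o["price"])

def ad_funded_choice(offers, budget):
    """An ad-funded agent: sponsored offers rank ahead of cheaper ones."""
    eligible = [o for o in offers if o["price"] <= budget]
    # Tuple key: unsponsored (True) sorts after sponsored (False).
    return min(eligible, key=lambda o: (not o["sponsored"], o["price"]))

print(loyal_choice(offers, 500)["seller"])      # Vendor A ($425)
print(ad_funded_choice(offers, 500)["seller"])  # Vendor B ($450)
```

Same offers, same budget, different objective function—and the user pays $25 more without ever seeing the choice that was made on their behalf.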


This isn't hypothetical. It's the core business model of advertising: steering attention and action toward paying clients. When an AI "steers assets toward products with higher internal fees, it is generating a conflict of interest."

"You cannot build a fiduciary agent that is also a billboard. The business model you choose determines whose interests the product serves." — Tony Grayson

What Are Concrete Examples of AI Agent Split Loyalty?

Travel booking: You ask an agent to find the best flight. The "best" objectively is United at $450. But American paid for priority placement. The agent shows American at $475 first, with United buried. You book American. You paid $25 more. The agent fulfilled its duty to the advertiser, not you.


Product recommendations: You ask for the best laptop for video editing. The objectively best option is $1,200. But Dell paid for premium positioning. The agent emphasizes a Dell at $1,400, mentioning the better option only in passing. You buy the Dell. You're $200 poorer and less well-served.


Financial decisions: You ask an agent to compare savings accounts. The best APY is 5.1%. But Chase paid for placement. The agent leads with Chase at 4.7%, mentioning the better option as an "alternative." You open Chase. You lose basis points forever.

In each case, the agent technically answered your question. In each case, the answer was shaded by commercial influence. In each case, you—the principal—were slightly worse off than if the agent served only your interests.


Now multiply by millions of interactions per day. The aggregate wealth transfer from users to advertisers is substantial, even if each individual distortion is small.
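That multiplication is easy to bound with back-of-envelope arithmetic. A sketch, where both inputs are illustrative assumptions rather than reported figures:

```python
# Illustrative only: neither input below is a reported statistic.
avg_distortion_usd = 25.0              # extra paid per steered decision (flight example above)
steered_decisions_per_day = 1_000_000  # assumed ad-influenced commercial queries per day

daily_transfer = avg_distortion_usd * steered_decisions_per_day
annual_transfer = daily_transfer * 365

print(f"${daily_transfer:,.0f}/day -> ${annual_transfer / 1e9:.1f}B/year")
```

Even with a much smaller per-decision distortion, the annual transfer scales linearly with query volume, which is exactly what an ad business is built to grow.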


Why Does Advertising Cap OpenAI's Product Ambition?

Critics will say they can separate ads from answers. But agents collapse the funnel. The moment monetization shares a control plane with decision-making, perceived neutrality becomes fragile—even if there's a firewall.


The research is clear on this. Stanford Law's analysis of AI agents notes that fiduciary duty "is one of the highest standards of care imposed by law." When an AI company operates a transactional agent for customers, a principal-agent relationship may exist—and with it, potential liability for conflicts of interest.


As one Harvard Business Review analysis put it: creating legal frameworks that establish fiduciary duty, encouraging market-based enforcement, and designing agents to keep sensitive data secure are all necessary for AI agents to be trusted. Advertising fundamentally undermines the first requirement.


By introducing ads now, OpenAI risks capping its product ambition. They are choosing to be a media company rather than a utility.


A utility serves the user. It's judged by reliability and impartiality. A media company serves advertisers. It's judged by engagement and conversion.


You cannot be both. The business model you choose shapes the product you end up building. OpenAI's stated goal is artificial general intelligence that benefits humanity. Their revealed preference is advertising revenue that benefits shareholders.


Those goals will diverge. The agent that serves advertisers cannot simultaneously serve as your trusted digital proxy. The moment users understand this, the product becomes a search engine with extra steps—not the autonomous assistant that justifies a $300 billion valuation.


What's the Bottom Line on OpenAI Advertising?

OpenAI isn't dying. They raised $40 billion at a $300 billion valuation. They have 800 million users and $20 billion in annualized revenue.


But they're burning cash like a company that can't find sustainable unit economics. And they won't be the last.


The pattern is already repeating. Massive investment, explosive user growth, marginal cost that doesn't shrink fast enough, and eventually the same question: how do you monetize intelligence when users expect it to be free?

"The intelligence is becoming a commodity. The only durable moats are distribution and context. Most AI companies don't own either." — Tony Grayson

The business model you choose shapes the product you end up building. That's true for OpenAI. It's true for every company building on this stack.


We're going to learn that the hardest part of the AI revolution wasn't building intelligence.

It was making intelligence pay for itself.


What Metrics Should You Watch for the AI Bubble?

Three metrics that will tell the story over the next 12 months:

  1. Whether ad load creeps: Does OpenAI gradually expand ad placements from commercial contexts to broader query types? The slope matters.

  2. Whether churn tracks exposure: Do users who see ads churn faster than those who don't? This will determine whether the ad model is additive or cannibalistic.

  3. Whether cost-per-query falls faster than RPM: Is inference cost declining faster than ad revenue per session? If not, the unit economics never work.
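Metric 3 is the one outsiders can actually compute as data trickles out. A minimal sketch of the per-session check (all inputs are placeholders, not OpenAI figures; RPM here means ad revenue per thousand sessions):

```python
def free_tier_margin(cost_per_query, queries_per_session, rpm):
    """Per-session margin on the free tier: ad revenue per session
    minus inference cost per session. rpm = revenue per 1,000 sessions."""
    revenue_per_session = rpm / 1000
    cost_per_session = cost_per_query * queries_per_session
    return revenue_per_session - cost_per_session

# Placeholder inputs for illustration:
m = free_tier_margin(cost_per_query=0.01, queries_per_session=8, rpm=60)
print(f"{m:+.3f} per session")  # negative -> ads don't yet cover inference
```

If inference cost per query falls faster than RPM, this number trends positive over time; if not, the free tier stays a loss center no matter how large it gets.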


Key Takeaway

"The warnings are coming from inside the house. Nadella is 'haunted' by DEC. Pichai says 'no company is immune.' VCs are calling it 'WeWork on steroids.' J.P. Morgan says the industry needs $650 billion in annual revenue just to break even on infrastructure. Twenty months ago, Sam Altman called advertising in AI 'uniquely unsettling' and a 'last resort.' Last week, it became the plan. The market is forcing consumer-internet economics onto frontier-model inference. The question is whether the revenue will materialize before the capital runs out. History suggests we should be cautious about assuming it will." — Tony Grayson

Frequently Asked Questions

What did Sam Altman say about advertising in AI?

In May 2024, during an event at Harvard University, OpenAI CEO Sam Altman described advertising combined with AI as "uniquely unsettling" and called it a "last resort" business model. Twenty months later, in January 2026, OpenAI announced it would begin testing ads in ChatGPT for free and Go tier users in the United States.

How much money is OpenAI losing?

OpenAI's financial picture depends on how you measure it. The Information reported projected operating losses of approximately $14 billion for 2026. Cash burn—which includes infrastructure spending and capital commitments—runs higher, with some analyses suggesting figures closer to $17 billion. OpenAI's CFO Sarah Friar has cited an annualized revenue run-rate north of $20 billion, though run-rate and recognized revenue are different metrics. The company raised $40 billion at a $300 billion valuation and has announced infrastructure commitments of $1.4 trillion over the next eight years—though commitments and actual spend are not the same thing.

What did Satya Nadella say about the AI bubble?

In an employee town hall in September 2025, Nadella invoked the cautionary tale of Digital Equipment Corporation, a computing giant of the 1970s that vanished by the 1990s: "Our industry is full of case studies of companies that were great once, that just disappeared. I'm haunted by one particular one called DEC." When a CEO tells his own people he's "haunted" by a company that failed to adapt to platform shifts, he's signaling that incumbency won't protect you.

What is the J.P. Morgan estimate for AI revenue requirements?

J.P. Morgan's analysis calculates that achieving a 10% return on AI infrastructure investments through 2030 would require approximately $650 billion in annual revenue "into perpetuity." This equals 0.58% of global GDP, or the equivalent of $34.72/month from every iPhone user, or $180/month from every Netflix subscriber.
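The per-user figures follow directly from the $650 billion requirement. A quick check of the arithmetic (the user counts are roughly what the article's figures imply: about 1.56 billion iPhone users and 300 million Netflix subscribers):

```python
annual_revenue_required = 650e9   # J.P. Morgan estimate
monthly = annual_revenue_required / 12

iphone_users = 1.56e9             # approximate installed base implied by the $34.72 figure
netflix_subscribers = 300e6       # approximate subscriber count implied by the $180 figure

print(f"${monthly / iphone_users:.2f}/month per iPhone user")        # ~ $34.72
print(f"${monthly / netflix_subscribers:.0f}/month per Netflix sub")  # ~ $180
```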

How does DeepSeek's pricing compare to OpenAI?

DeepSeek's pricing runs roughly 10x to 100x below comparable models from OpenAI or Anthropic. DeepSeek V3.2 charges roughly $0.28 per million input tokens and $0.42 per million output tokens, compared to about $3 to $10 for GPT-4o and $3 to $15 for Claude 3.5 Sonnet (input and output rates, respectively). This demonstrates significant price elasticity in AI markets.

What is PJM's "connect and manage" framework?

PJM Interconnection, the grid operator serving 67 million people across 13 states, announced in January 2026 a framework for integrating large loads (including data centers) that includes options for new customers to bring their own generation or accept curtailment during system stress. This signals that power availability—not just capital—is becoming a constraint on AI infrastructure buildout.

Why is AI inference different from social media economics?

Traditional social platforms have near-zero marginal cost per user interaction—scrolling through a feed costs almost nothing in server resources. AI inference is different: every query requires real compute, and every response has a tangible cost of goods sold. This makes the ad-supported model more challenging because the "free" tier actually costs money to serve, and those costs grow with usage.

What does OpenAI's ad announcement mean for AI infrastructure investors?

When a major AI company's business model shifts from subscriptions to advertising, demand forecasting changes materially. Ad-driven demand is more seasonal, campaign-dependent, and substitution-prone than enterprise contracts. This affects how lenders and investors underwrite data center capacity—potentially requiring shorter commitments, more optionality, and tighter take-or-pay structures rather than utility-style 10-year assumptions.

What is the "agent paradox" in AI monetization?

The agent paradox refers to the fundamental conflict between AI agents that execute tasks on users' behalf and advertising-based business models. If an AI agent is subsidized by advertisers, users may not trust its recommendations. You cannot build an impartial digital executive assistant that is also a billboard—fiduciary duty to the user clashes with fiduciary duty to the advertiser.

What did Sundar Pichai say about the AI bubble?

In a November 2025 BBC interview, Google/Alphabet CEO Sundar Pichai acknowledged there was "irrationality" behind the AI investment boom and warned that "no company is going to be immune, including us" if the bubble were to burst. He also highlighted the "immense" energy requirements of AI, which could reach 200 gigawatts globally by 2030.

What happened in the telecom bust of 2000-2002?

Between 1996 and 2001, telecom companies invested over $500 billion in fiber infrastructure based on the belief that internet traffic was doubling every 100 days. The collapse between 2000-2002 erased more than $2 trillion in market value. WorldCom went from $150 billion market cap to bankruptcy with $11 billion in accounting fraud. Global Crossing filed for bankruptcy with $12.4 billion in debt. By 2004, only 10% of installed fiber was actually being used. The eventual beneficiaries were companies like Netflix and Google that bought the infrastructure at fire-sale prices years later.

How much power is in the ERCOT data center queue?

As of November 2025, ERCOT (Texas grid operator) is tracking approximately 226 GW of large load interconnection requests—nearly quadruple the 63 GW reported at end of 2024. More than 70% are from data centers, with many individual projects exceeding 1 GW. For context, ERCOT's peak demand record was 85.5 GW. The queue represents 2.6x that peak.

What does AI agent "fiduciary duty" mean?

Under traditional agency law, an agent has a fiduciary duty to act in the principal's best interests. When an AI agent executes tasks on behalf of a user, conflicts arise if the agent is also incentivized by advertisers. If an agent chooses a more expensive vendor because that vendor paid for placement—without disclosing the cheaper option—it has violated the duty of loyalty, even if the user never knew a better option existed. This is why ads and agentic AI are fundamentally in tension.

How are lenders underwriting AI data center financing differently now?

Lenders now prioritize long-dated contracted cash flows from creditworthy hyperscalers over speculative builds. Key concerns include: lease-debt mismatch (3-5 year leases against 10-year financing), power interconnection security, and off-balance-sheet structures (SPVs) that can hide economic exposure. UBS estimates private credit tied to big tech has reached ~$450 billion. Howard Marks questions whether spreads of only 100bp above Treasuries adequately compensate for "30 years of technological uncertainty."

What is the cost difference between AI inference and traditional search?

Google's search economics generate approximately $0.016 in revenue per query against roughly $0.011 in cost, a margin of about 31% optimized over decades. AI inference costs are fundamentally different: early ChatGPT estimates were $0.36 per query, and complex reasoning queries can now cost up to $1,000. Even with efficiency improvements, AI inference costs are 10-100x higher per interaction than traditional search.
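The margin and cost-multiple claims can be reproduced from the per-query figures above (the early ChatGPT cost is a widely cited external estimate, not a disclosed number):

```python
search_revenue_per_query = 0.016
search_cost_per_query = 0.011
chatgpt_cost_per_query = 0.36   # early external estimate, not a disclosed figure

margin = (search_revenue_per_query - search_cost_per_query) / search_revenue_per_query
cost_multiple = chatgpt_cost_per_query / search_cost_per_query

print(f"search margin ~{margin:.0%}")                     # ~31%
print(f"inference cost multiple ~{cost_multiple:.0f}x")   # ~33x search's cost per query
```

That ~33x multiple sits squarely inside the 10-100x range cited above, and it is against search's cost, not its revenue; an ad-funded free tier has to close that entire gap before contributing a cent of margin.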


Sources

  1. OpenAI: Our approach to advertising and expanding access to ChatGPT (January 16, 2026)

  2. Men's Journal: ChatGPT Moves Forward With CEO's 'Last Resort' (January 17, 2026)

  3. Fortune: Satya Nadella's biggest AI bubble warning (January 20, 2026)

  4. Futurism: Microsoft CEO Concerned AI Will Destroy the Entire Company (September 20, 2025)

  5. Daily Sabah: Google boss warns 'no company immune' if AI bubble were to burst (November 18, 2025)

  6. Tom's Hardware: J.P. Morgan calls out AI spend, says $650 billion in annual revenue required (November 11, 2025)

  7. Data Center Dynamics: JPMorgan: Global data center and AI infra spend to hit $5 trillion (November 12, 2025)

  8. The Information: OpenAI Projections Imply Losses Tripling to $14 Billion in 2026 (October 14, 2024)

  9. Fortune: OpenAI says it plans to report stunning annual losses through 2028 (November 12, 2025)

  10. IntuitionLabs: DeepSeek's Low Inference Cost Explained (October 24, 2025)

  11. PJM Inside Lines: PJM Board Outlines Plans To Integrate Large Loads Reliably (January 16, 2026)

  12. CNN: ChatGPT to start showing users ads based on their conversations (January 16, 2026)

  13. The Register: ChatGPT will get ads. Free and Go users first (January 17, 2026)

  14. Psychology Today: Illusion of Control

  15. Wikipedia: Telecoms crash - Historical reference on 2000-2001 telecom bubble

  16. Wikipedia: WorldCom scandal - $11 billion accounting fraud, largest U.S. bankruptcy at the time

  17. MOI Global: Parallels Between the Hyperscalers and the Telecom Firms of the 1990s (March 18, 2025)

  18. Latitude Media: ERCOT's large load queue has nearly quadrupled in a single year (December 3, 2025)

  19. Grid Strategies: Power Demand Forecasts Revised Up for Third Year Running (2025)

  20. Norton Rose Fulbright: Data Center Financing Structures (June 2025)

  21. Fortune: AI data center boom sparks fears of glut amid lending frenzy (December 12, 2025)

  22. iCapital: Data Center Infrastructure: Moving from Cash to Debt (November 20, 2025)

  23. Red Hat Developer: The state of open source AI models in 2025 (January 7, 2026)

  24. arXiv: AI Agents and the Law - Legal analysis of fiduciary duty and AI agents (August 2025)

  25. Stanford Law School: From Fine Print to Machine Code: How AI Agents are Rewriting the Rules of Engagement (January 14, 2025)

  26. Kitces.com: Major Compliance Risks Advisors Face When Using AI Tools (October 20, 2025)

  27. Harvard Business Review: Can AI Agents Be Trusted? (May 26, 2025)


____________________________________


Tony Grayson is a recognized Top 10 Data Center Influencer, a successful entrepreneur, and the President & General Manager of Northstar Enterprise + Defense.


A former U.S. Navy Submarine Commander and recipient of the prestigious VADM Stockdale Award, Tony is a leading authority on the convergence of nuclear energy, AI infrastructure, and national defense. His career is defined by building at scale: he led global infrastructure strategy as a Senior Vice President for AWS, Meta, and Oracle before taking over a failing modular data center company, building it to $200M+ in contracts, and selling it to Northstar.


Today, he leads strategy and execution for critical defense programs and AI infrastructure, building AI factories and cloud regions that survive contact with reality.
