Is Nvidia the Next Cisco? The 25-Year Warning Explained
- Tony Grayson
- Dec 24, 2025
- 19 min read
Updated: Jan 6
By Tony Grayson, President & GM of Northstar Enterprise + Defense | Built & Exited Top 10 Modular Data Center Company | Top 10 Data Center Influencer | Former SVP Oracle, AWS & Meta | U.S. Navy Nuclear Submarine Commander | Stockdale Award Recipient
TL;DR:
"Is Nvidia the Next Cisco? Nvidia isn't going bust; the tech is real. The problem is valuation: mid-20x sales for what's becoming a utility business. Cisco proved the Internet was real, too; investors still waited 25 years to break even. When open standards (UALink) commoditize the interconnect and custom chips (TPUs, Trainium) erode margins, multiple compression follows." — Tony Grayson, President & GM Northstar Enterprise + Defense
In 30 seconds:
Nvidia is priced for perfection at $5T+ market cap. History shows that being right about the technology (like Cisco was about the Internet) doesn't save you if you overpay. The bear case isn't bankruptcy—it's margin compression as AI infrastructure becomes a utility.
COMMANDER'S INTENT: THE 25-YEAR WARNING
The Mission: Identify the inflection point where AI infrastructure shifts from high-margin innovation to commoditized utility.
The Reality: On December 10, 2025, Cisco finally reclaimed its split-adjusted March 2000 peak ($80.25). It took 25 years for the "plumbing" to catch up to the valuation.
The Tactical Takeaway: Don't confuse demand for moat. As open standards such as UALink commoditize interconnect and the Cisco Secure AI Factory simplifies deployment, NVIDIA faces the same "Cisco Moment": massive growth, but massive multiple compression.

DECEMBER 2025 MARKET SIGNAL
The 25-Year Event: On December 10, 2025, Cisco closed at $80.25—finally reclaiming its split-adjusted March 2000 dot-com peak for the first time in a quarter century. This milestone anchors everything you're about to read. When the "plumbing" of one revolution takes 25 years to justify its valuation, what does it mean for investors who are now paying 25x sales for the "plumbing" of the next one?
The Hidden Trap: Building vs. Selling
Most investors get the money part wrong because they confuse two different things:
Training (The Sunk Cost): Think of this like building a massive factory. Companies spend billions of dollars upfront to build AI models. That is pure expense.
The Problem: You only earn that money back through Inference (selling the model's answers). But if the price of selling those answers drops—which is happening right now—you never make back the billions you spent building the factory in the first place.
The problem? Using AI is getting cheaper fast. Competing models such as GPT-4, Claude, Gemini, and Llama are now mostly interchangeable, like Coke and Pepsi. Because they are all "good enough," prices for using them (inference) have crashed by over 80%.
The Cash Burn Reality: Reuters reports OpenAI is estimated to be burning roughly $8 billion this year while targeting $12.7 billion in revenue. They are spending cash to build models, but because the models are becoming commodities, they can’t charge a premium. Smart investors see this and are pivoting. They are betting on the boring stuff—infrastructure and workflow tools—rather than the flashy models.
The Disconnect
There are two ways to look at AI right now, and they don’t match:
Wall Street looks at “Monopoly Margins” (infinite profit).
Data Center Operators (the people building the facilities) consider physics: power bills, cooling limits, and hardware degradation.
The Unit Economics
The Scale Reality: It’s Not About Speed Anymore. Google already processes roughly 980 trillion tokens per month. That volume is staggering. When you operate at that scale, the game changes. You stop caring about having the "fastest" chip on the market; you only care about the cheapest one that can actually handle the workload.
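To see why scale flips the priority from speed to cost, here is a toy calculation. Only the token volume comes from the figure above; both per-token serving costs are hypothetical illustrations:

```python
# Toy unit-economics calculation. The token volume is the figure cited
# above; both per-token serving costs are hypothetical.
tokens_per_month = 980e12  # ~980 trillion tokens/month at Google scale

cost_fast = 0.10 / 1e6   # $/token on the "fastest" chip (hypothetical)
cost_cheap = 0.08 / 1e6  # $/token on a cheaper "good enough" chip (hypothetical)

monthly_delta = tokens_per_month * (cost_fast - cost_cheap)
print(f"Serving-cost difference: ~${monthly_delta / 1e6:.1f}M per month")
```

Under these assumed prices, a two-cent gap per million tokens is worth roughly $20 million a month. At that scale, "cheapest chip that can handle the workload" beats "fastest chip" every time.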
The Valuation Trap: Wall Street’s models are broken because they assume today’s high prices will last forever. They won't. As AI models start to look identical, buyers realize they have choices and stop paying extra. The tech stack stops looking like a luxury product and starts looking like a utility—just like electricity. That shift isn't just a theory; it is the dangerous gap between a multi-trillion-dollar valuation and reality.
Steel vs. Silicon: The Parallel That Matters
| Dimension | CISCO (2000) | NVIDIA (2025) |
| --- | --- | --- |
| What They Built | Hardware that built the pipes (routers, switches) | Hardware that builds the intelligence (GPUs, interconnects) |
| The Thesis | "Internet traffic will grow forever." | "AI compute demand will grow forever." |
| The Reality | Traffic grew 10,000x. The stock took 25 years to recover. | Demand is real. The valuation assumes monopoly margins in a commoditizing market. |
| The Moat Threat | Ethernet (open standard) replaced proprietary stacks | UALink (open standard) targeting NVLink proprietary lock-in |
| The Inventory Risk | $2.25B write-down from "demand mirage" | Watch for "Ghost Demand" in neocloud GPU orders |
| The Twist | Cisco is still profitable. The stock wasn't. | NVIDIA can dominate and still deliver poor returns if you overpay. |
What Is the Zero-to-One Trap—And Why Is Wall Street Reading It Backwards?
Investors love quoting Peter Thiel’s book Zero to One, but they are applying it wrong.
The "Zero to One" Trap. To understand the risk, you have to understand the difference between inventing something and copying it.
Zero to One (The Breakthrough): This is the first iPhone or the first search engine. It’s a monopoly moment. Because there is no competition, you can charge whatever you want. High valuations make sense here.
One to N (The Copycat): This is the 10th search engine or the 100th chatbot. Now you are just scaling a utility. Competition kicks in, margins get crushed, and you are left fighting for scraps.
The Reality Check: Wall Street is valuing Nvidia and OpenAI as if we are still in the "Zero to One" invention phase. We aren't. The invention already happened. We are now in the "One to N" phase—just optimizing chips and trying to make the process cheaper.
Efficiency is great, but you don’t pay 20x sales just for an efficiency upgrade. We are moving from "Magic" to "Utility," and utilities simply do not get tech valuations.
Why Doesn't Being the 'Fastest' Chip Matter Anymore?
There is a massive difference between being the "fastest" and being the "most profitable." Analysts keep missing this distinction.
The "Ferrari" Problem: Think of Nvidia’s H100 chips like Ferraris. They are incredible machines—fast, expensive, and high-maintenance. Startups love them because they look great in a pitch deck and impress investors. But here is the reality check: If you are running a business at "planet scale" (like Amazon or Google), you don’t need a Ferrari to deliver a pizza. You need a fleet of reliable, fuel-efficient hybrids.
The Reality on the Ground: I saw this firsthand at Oracle. We cut our deployment times by 70%—not by buying faster hardware, but just by fixing our sloppy processes. At scale, raw speed takes a backseat to unit economics (cost per task). That is why AWS and Google are building their own chips. They aren't trying to beat Nvidia on a spec sheet; they are simply trying to lower the monthly bill.
Let’s Do the Math
Here is why custom chips are killing Nvidia’s moat. Imagine a standard large data center (100 Megawatts), assuming roughly continuous load.
If you use a custom chip that is just 20% more power-efficient at $0.08/kWh, you save $14 million a year in electricity. If blended power rises ~20%, you save ~$17 million a year.
That is pure profit. That is why every major tech giant is designing their own chips. They aren’t trying to beat Nvidia on speed; they are trying to beat them on the electric bill.
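Those savings figures can be sanity-checked directly. A minimal sketch, assuming the facility draws its full 100 MW continuously (real sites run below nameplate, so treat this as an upper bound):

```python
HOURS_PER_YEAR = 8760  # 24 hours x 365 days

def annual_power_savings(capacity_mw: float, efficiency_gain: float,
                         price_per_kwh: float) -> float:
    """Annual electricity savings ($) from an efficiency_gain reduction
    in energy use, assuming continuous load at full capacity."""
    kwh_per_year = capacity_mw * 1_000 * HOURS_PER_YEAR  # MW -> kWh/yr
    return kwh_per_year * price_per_kwh * efficiency_gain

print(annual_power_savings(100, 0.20, 0.08))   # ~$14.0M/yr at $0.08/kWh
print(annual_power_savings(100, 0.20, 0.096))  # ~$16.8M/yr if power is ~20% pricier
```

The numbers match the back-of-envelope figures above: a 20% efficiency edge on a 100 MW facility is worth roughly $14-17 million a year in electricity alone.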
What Do Data Center Operators See That Wall Street Doesn't?
While Wall Street looks at spreadsheets, data center operators are looking at concrete and copper. Here is the physical nightmare of building AI capacity right now.
You want a transformer? Get in line. The wait is 12–24 months, depending on the location. It’s not just the equipment, either; it’s the building itself. The new AI racks (NVL-class) are so heavy that we need to consult structural engineers to determine whether the concrete floor can support them before we can discuss software. Then there is the heat. We are running out of chilled-water capacity. And when things break—which they do when running 24/7—you can’t just call a local plumber. This is high-pressure liquid cooling; the parts are rare, and finding someone who actually knows how to fix a leak in a live server cluster is like finding a unicorn.
The "Power Line" Problem. This is the ultimate bottleneck: Utility Substations. In many regions, the wait time to get power from the grid is now 3 to 5 years. You cannot buy your way to the front of this line. Utilities operate at the speed of government regulation, not Silicon Valley hype.
The Takeaway: These are physical laws. You can double Nvidia’s stock price tomorrow, but that won’t make the concrete dry faster, and it certainly won't make the power grid upgrade itself overnight.
Why Did Cisco Investors Wait 25 Years to Break Even?
This brings us back to the ghost of the dot-com bubble.
The History Lesson
Cisco proved the Internet was real. They were the “plumbing” of the web, just like Nvidia is the plumbing of AI. But if you bought Cisco stock at its peak in March 2000, you didn’t break even until December 10, 2025—25 years to reclaim the dot-com peak on a split-adjusted price basis (nominal price; total return with dividends and inflation-adjusted return tell different stories).
Did the company fail? No. Cisco stayed profitable, paid dividends, and ran the internet the whole time.
The lesson: Being right about the technology (the Internet or AI) doesn’t save you if you overpay for the stock.
The Cisco Parallel—More Precisely
The Tech Didn't Fail, The Price Did: Cisco didn’t lose because its technology stopped working. They lost because the market got tired of paying a premium for a proprietary system. Eventually, the industry chose Ethernet—a standard that was open, cheap, and "good enough"—over Cisco's expensive, custom stack. The power shifted from the seller to the buyer.
History is repeating. We are seeing the exact same pattern right now with UALink. This is the industry’s attempt to do to Nvidia what Ethernet did to Cisco: turn the expensive, proprietary connecting wires into a cheap, standard commodity.
The Market Cap Reality
At its peak, Cisco represented roughly 5% of total U.S. market cap.
On October 29, 2025, Nvidia crossed $5 trillion in market cap—larger than the GDP of Germany or Japan, and the first company ever to reach that level.
The Open Standard Threat: UALink's 2025 Milestone
The industry has moved from "paper standard" to a real threat. In April 2025, the UALink Consortium released the UALink 200G 1.0 Specification—the first open standard interconnect designed specifically to challenge NVIDIA's NVLink monopoly.
The Numbers:
85+ member companies (up from the original 9 promoters in May 2024)
Board members: Alibaba, AMD, Apple, Astera Labs, AWS, Cisco, Google, HPE, Intel, Meta, Microsoft, and Synopsys
Specification: 200 Gbps per lane, up to 1,024 accelerators per AI pod
First hardware: Synopsys has a UALink IP solution scheduled for H2 2025
Why It Matters: UALink is based on the same 802.3 Ethernet PHY standard that already dominates the networking market. The industry learned from Cisco's commoditization—Ethernet beat proprietary, and history suggests "open and cheaper" usually wins against "closed and expensive."
Is NVIDIA's Moat Really Stronger Than Cisco's Was?
To be fair, Nvidia has built a fortress that Cisco never had. Three distinct moats:
The Software Moat (CUDA)
Nvidia has spent 10 years getting developers addicted to its software tools (CUDA, cuDNN, TensorRT, Triton). Switching away from Nvidia costs billions in re-coding. This is real lock-in.
The System-Integration Moat
Nvidia isn’t just selling a chip anymore; they are selling the whole rack (the NVL72). It includes the chips, the cooling, the power delivery, and the networking. You can’t just swap one part out; it’s an all-or-nothing system. Matching this system-level integration takes years.
The Network-Effects Moat
Everyone trains on the same stack. The ecosystem reinforces itself—more developers means more tools means more developers.
The “Yes, But…” Caveat
Each moat can remain valid while margins continue to compress. Look at Intel. They dominated the PC market for decades with x86, but competition eventually forced them to lower prices. You can keep the crown but lose the kingdom’s wealth.
Cisco also had operational whiplash—brutal inventory and order cycles during the bubble. Nvidia is operationally distinct: greater software attach, a more recurring-revenue mix, and integrated systems. However, operational excellence doesn’t prevent margin compression when the market becomes commoditized.
The Partnership Paradox: Cisco Inside the NVIDIA Stack
Here's the nuance that complicates the simple "bubble" narrative: Unlike 2000, Cisco and NVIDIA are now deeply integrated partners.
In March 2025, the companies unveiled the Cisco Secure AI Factory with NVIDIA—a turnkey reference architecture that packages NVIDIA compute, Cisco networking, and partner storage into an "Easy Button" for enterprise AI deployment. By October 2025, Cisco became the first partner to offer an NVIDIA Cloud Partner-compliant reference architecture.
What This Means:
| Signal | Interpretation |
| --- | --- |
| "Cisco inside NVIDIA" | The innovation premium is effectively over when the "disruptor" starts shipping inside the "legacy" box |
| Turnkey deployment | We've moved from Innovation (Zero-to-One) to Utility (One-to-N) |
| Enterprise simplification | The "Hard" is becoming the "Easy"—great for adoption, brutal for margins |
The Counter-Argument: This partnership could extend NVIDIA's dominance by making its stack the enterprise default. But it also signals exactly what happens at the end of a technology cycle: the pioneers and the incumbents merge, the rough edges get smoothed out, and the premium evaporates.
What Are the 4 Signals That NVIDIA's Premium Is Over?
If you want to know when the stock price will compress, watch these four signals:
The Shift to Inference: Training clusters have massive profit margins. Inference chips (the day-to-day workhorses) are commodities. As the world does more inference than training, Nvidia’s average price per chip drops.
The “Custom Chip” Leaks: Every time Amazon or Google announces they are moving a workload to their own chips (TPUs or Trainium), that is money leaving Nvidia’s pocket directly. Watch for disclosures showing more than 20-30% of new inference deployments on in-house silicon.
The Open Standard (UALink): If the UALink alliance actually ships working interconnects in volume (not just spec releases), Nvidia’s lock-in breaks.
Contract Re-negotiations: Watch for the moment when big companies start demanding discounts. That is margin compression in real time.
The “Circular Financing” Shell Game
This is the part that makes economists nervous. There is a weird loop of money happening right now:
Nvidia invests in a cloud company (like CoreWeave).
That cloud company uses the money to buy Nvidia chips.
Nvidia records it as “Revenue.”
It’s becoming hard to tell what is real organic growth and what is just money being shuffled around between a small group of friends. In at least one major case, public filings describe structured backstops requiring Nvidia to purchase unsold CoreWeave capacity under multi-year agreements extending into the early 2030s.
As MIT economist Daron Acemoglu put it, “These kinds of deals eventually reveal a house of cards.”
The Financial Reality: Reuters reports OpenAI is estimated to burn roughly $8 billion this year while targeting $12.7 billion in revenue. That is a razor-thin margin for a company that is supposed to be the future of tech.
The Submarine Commander’s View: The “Oxygen Vent” Problem
As Tony Grayson learned commanding a nuclear submarine: "If 40% of your oxygen comes from just two vents, you don't sleep well."
The Concentration Risk
Right now, two customer accounts represent nearly 40% of Nvidia’s total revenue, according to Nvidia’s Q2 SEC filing. Two.
If Amazon or Microsoft decides to shift just 15% of their traffic to their own chips, Nvidia’s growth story takes a massive hit. The industry hates “Single Points of Failure.” That is why UALink exists—not because companies love “Open Source,” but because relying on one supplier for 40% of your oxygen is suicide.
The Depreciation Time Bomb
Here is the math problem nobody wants to discuss:
The Chips: Last about 3–4 years competitively.
The Building: Depreciates over 20 years.
We are putting 3-year assets into 20-year buildings.
CNN reports that OpenAI’s CFO recently admitted their future depends on chips lasting “four or five years.” If they burn out faster (which they often do when run 24/7), the economics collapse. She even floated government backstops for their debt if depreciation accelerates.
Some executives have openly questioned whether current economics clear a profit hurdle at scale.
You can’t build a sustainable business if your expensive machinery becomes economically obsolete every ~36 months, unless you have massive margins—and as we established, those margins are shrinking.
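The mismatch is easy to quantify with straight-line depreciation. The capex split below is hypothetical, but the useful lives are the ones discussed above:

```python
# Straight-line depreciation sketch. Dollar figures are hypothetical;
# the useful lives (~3.5 yr for GPUs, 20 yr for the building) are
# the ones discussed above.
def annual_depreciation(capex: float, useful_life_years: float) -> float:
    """Annual expense under straight-line depreciation."""
    return capex / useful_life_years

gpu_expense = annual_depreciation(400e6, 3.5)        # $400M of accelerators
building_expense = annual_depreciation(400e6, 20.0)  # $400M of shell/power/cooling

print(f"GPUs: ${gpu_expense/1e6:.0f}M/yr vs building: ${building_expense/1e6:.0f}M/yr")
# Identical capex, but the silicon generates roughly 5.7x the annual expense.
```

Same dollars in, wildly different expense curves out. That is why accelerating chip obsolescence breaks the data center financing model.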
Where the Value Goes: The “Agentic” Thesis
Hardware eventually becomes a commodity. It happened to Intel, it happened to Cisco. So where does the profit go next? It moves up the stack.
The Difference Between a “Copilot” and an “Agent”
Copilot (The Helper): You ask it to draft an email. You read it, edit it, and send it. It makes you 10–20% faster, but it’s “Read-Only.” You are still liable for the work.
Problem: No lock-in. It’s easy to switch from ChatGPT to Claude.
Agent (The Worker): This software has “Write-Access.” It doesn’t just draft the email; it logs into the ERP system, orders the parts, files, and submits the permit, and updates the database.
Advantage: Massive lock-in. Once an AI agent is wired into your supply chain or safety systems, you can’t just rip it out. That is where the sustainable revenue lives.
The “Value Accrual” Ladder
Think of the AI industry like a 5-layer cake. The money eventually settles at the very bottom (Real Estate) or the very top (Software Lock-in). The middle gets squeezed.
Power and Land (The Bottom): Utilities, water rights, permits. 15–20 year lock-in. Safe rent.
Interconnect: Dark fiber, cables, network topology. 5–10 year advantage.
Operations: Keeping the servers running, uptime SLAs. 3–5 year advantage.
Governance: Compliance, security approvals (FedRAMP, SOC 2). 2–3 year advantage.
Workflow Agents (The Top): Software that actually runs the company. Indefinite lock-in. High margin.
The Squeeze: Cisco lived in Layer 2. Nvidia lives in Layers 2 and 3. History shows the “rent check” accrues to the Landlord (Layer 1) and the “profit margin” accrues to the Software Boss (Layer 5).
The Verdict: Bet on the Road, Not the Car
"The bear case for Nvidia isn't bankruptcy—it's multiple compression.
You don't pay mid-20s times sales for a utility company."
— Tony Grayson, former SVP Oracle, AWS, Meta
The bear case for Nvidia isn’t that it will go bankrupt. The technology is real. The demand is real. The risk is that Nvidia becomes a utility.
Utilities are great companies. They are stable, essential, and pay dividends. But you do not pay mid-20s times sales for a utility company. You don’t pay “Hyper-Growth” prices for a company that is about to face “Commodity” competition.
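What multiple compression does to a buyer at today's price can be sketched with hypothetical numbers: assume revenue keeps compounding at 15% while the price-to-sales multiple drifts from the mid-20s down to a utility-like single digit.

```python
# Hypothetical multiple-compression scenario (an illustration, not a forecast).
start_ps, end_ps = 25.0, 8.0  # price/sales: growth premium -> utility multiple
growth, years = 0.15, 5       # revenue compounds 15%/yr for 5 years

revenue_multiple = (1 + growth) ** years                 # ~2.0x revenue
price_multiple = revenue_multiple * (end_ps / start_ps)  # what the stake is worth
cagr = price_multiple ** (1 / years) - 1

print(f"Revenue roughly doubles; the shareholder compounds at {cagr:.1%}/yr")
```

Under these assumptions the business doubles its revenue and the peak buyer still loses roughly 8% a year: the Cisco pattern in miniature.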
Cisco is still here. It’s still essential. It just took 25 years for the stock price to recover because investors paid too much for the “plumbing.”
KEY TAKEAWAY: The bear case for Nvidia isn't bankruptcy—it's multiple compression. Cisco stayed profitable for 25 years while investors waited to break even. When open standards commoditize the moat and custom chips erode margins, you don't pay 20x sales for a utility.
Market Warning Signs
Oracle dropped 5% just because one data center deal fell through. Microsoft dropped 2% on rumors of missed AI sales targets.
The market is jumpy. That means the valuations are fragile.
If I’m Wrong
To be intellectually honest, here is how Nvidia could prove me wrong and keep its monopoly:
NVLink Stays King: If the open standard (UALink) fails and companies can’t connect different chips together, Nvidia keeps its lock-in.
Custom Chips Fail: If Google’s TPUs and Amazon’s Trainium turn out to be garbage, everyone has to go back to Nvidia.
Inference Gets Harder: If future AI models need super-complex “reasoning” that only Nvidia chips can handle, the commodity argument dies.
Agent Adoption Stalls: Liability concerns, integration complexity, and enterprise conservatism keep agents in pilot purgatory for a decade.
Any of these could extend Nvidia’s monopoly-margin era. The question is probability, not possibility.
The Investment Thesis
The so-called “Model War” is a race to the bottom, with margins for model creators eroded by competition and open-source alternatives. However, the infrastructure required to operate these commoditized models represents the true value proposition.
Whether a company uses OpenAI, Claude, or a private Llama instance, they all need the same thing: secure, power-dense, low-latency space to run that inference.
The rent check accrues to power, cooling, and specialized real estate. The defensible software margin accrues to agents integrated into systems of record.
Winners of the next decade won’t lay pipes. They’ll build agents that do actual work.
Tony Grayson is a recognized Top 10 Data Center Influencer, a successful entrepreneur, and the President & General Manager of Northstar Enterprise + Defense.
A former U.S. Navy Submarine Commander and recipient of the prestigious VADM Stockdale Award, Tony is a leading authority on the convergence of nuclear energy, AI infrastructure, and national defense. His career is defined by building at scale: he led global infrastructure strategy as a Senior Vice President for AWS, Meta, and Oracle before founding and selling a top-10 modular data center company.
Read more at: tonygraysonvet.com
Frequently Asked Questions
Is Nvidia overvalued in 2025?
At mid-20s times sales, Nvidia is priced far above the semiconductor peer set. That multiple doesn’t survive when open standards erode proprietary moats. Cisco correctly predicted internet growth—it still delivered 25 years of zero returns to peak buyers on a split-adjusted basis. Being right about demand doesn’t save you from multiple compression.
What is the Zero to One framework for AI investing?
Peter Thiel’s framework from his book Zero to One. Zero-to-One means breakthrough innovation with no competition—high multiples justified. One-to-N means scaling, commodity competition, and margin compression. Generative AI had its Zero-to-One moment in November 2022 with the launch of ChatGPT. Current spending is One-to-N. Pricing should match the phase, but Wall Street hasn’t adjusted.
Is the AI model layer commoditizing?
Yes. ChatGPT launched as a true Zero-to-One moment—nothing else could do what it did. Today, Claude, Gemini, and Llama offer good-enough substitutability for many enterprise use cases. GPT-4 API pricing has dropped over 80%. The moat is now distribution at massive scale, not technology. And distribution erodes when products become interchangeable. OpenAI is estimated to be spending ~$8B this year to stay ahead of free alternatives.
What is the difference between AI training and inference?
Training is largely upfront CapEx that must be amortized—you spend billions building the model. That investment only pays back when inference queries start flowing. Inference is where revenue happens: every API call, every token. But inference is commoditizing as models become interchangeable, so margins compress even as the meter runs. The amortization only works if inference pricing stays firm—and it isn’t.
How long did it take Cisco to recover from 2000?
25 years to reclaim the dot-com peak on a split-adjusted price basis. Cisco finally hit its March 2000 high again on December 10, 2025. The company stayed profitable throughout—paid dividends, generated cash, and remained essential to how the internet works. Didn’t matter if you bought at peak. Note: this is a nominal price; total return with dividends reinvested and inflation-adjusted return tell different stories.
What is the difference between AI copilots and agents?
Copilots are read-only: they suggest, you decide, you’re liable. Agents have write-access to operational systems—they execute autonomously and change records. Copilots add 10-20% speed. Agents eliminate functions entirely. The key difference is lock-in: switching copilots is easy, but ripping out an agent wired into your ERP, supply chain, or safety systems is extremely costly. That’s where competitive advantage lives.
Is AI in a bubble?
Infrastructure spending looks bubble-like—estimates range into the hundreds of billions annually, with projections of trillions through the end of the decade. Surveys suggest a large majority of enterprise AI pilots fail to reach measurable ROI. But the demand is real. The parallel isn’t “AI is fake.” It’s “AI is real, but valuations assume monopoly margins in a commodity market.” Cisco proved the Internet was real. Investors still lost 25 years.
Will Nvidia stock crash?
Not necessarily a “crash,” but more likely a “compression.” Nvidia could stay profitable, keep growing, and still deliver poor returns if you buy at mid-20s times sales for what’s becoming a utility business. That is exactly what happened to Cisco. The bear case isn’t bankruptcy—it’s multiple compression. UALink and custom silicon (TPUs, Trainium, Inferentia) are eroding the moat right now.
What is NVLink, and why does UALink matter?
NVLink is Nvidia’s proprietary GPU interconnect—a major lock-in source because clusters built around NVLink are painful to switch. UALink is an open standard formed in 2024 by AMD, Google, Intel, Meta, and Microsoft—the industry’s attempt to commoditize scale-up interconnect the same way Ethernet commoditized networking. If UALink succeeds and ships in volume, GPU clusters become interchangeable, and Nvidia loses pricing power. The same pattern killed Cisco when Ethernet beat proprietary alternatives.
What is the “circular financing” problem?
Nvidia invests in cloud companies like CoreWeave. Those companies use the investment to buy Nvidia chips. Nvidia records it as revenue. It’s becoming hard to distinguish organic growth from money shuffled among a small group of companies. SEC filings show Nvidia has structured backstops requiring it to purchase unsold CoreWeave capacity into the early 2030s. These arrangements inflate reported revenue while masking actual market demand.
Why do operators care about power and cooling so much?
Because those are the real constraints, not chip benchmarks. Getting utility power can take 3-5 years in many regions. Transformer lead times run 12-24+ months. Chilled water capacity is finite. High-density AI racks are so heavy that you have to re-check whether the floor can hold them. The supply chain for liquid cooling parts is immature, and skilled maintenance labor is scarce. You can’t solve physics problems by raising the stock price.
Who is Tony Grayson?
Tony Grayson is President and GM of Northstar Enterprise and Defense, building modular AI data centers. Previously SVP Physical Infrastructure at Oracle ($1.3B budget), with senior roles at AWS and Meta. Former Navy submarine commander (USS Providence SSN-719), Stockdale Award recipient, and Top 10 Data Center Influencer. Advisory roles with TerraPower and Holtec International. Writes at tonygraysonvet.com.
Why did it take Cisco 25 years to recover?
It wasn't a failure of technology—Internet traffic grew by 10,000x. It was a failure of valuation. Cisco's price-to-sales multiple of ~27x in 2000 took a quarter-century of actual growth to "fill." The company stayed profitable, paid dividends, and remained essential. The stock price had to wait for reality to catch up to the price tag. On December 10, 2025, it finally did.
Is the NVIDIA-Cisco partnership a "bubble sign"?
Not necessarily a bubble, but certainly a signal of commoditization. It represents the shift from Innovation to Utility. When the "Plumbing" (Cisco) and the "Engine" (NVIDIA) merge into a single "Easy Button" for enterprises, it signals we've moved from Zero-to-One (invention) to One-to-N (scaling). That's great for adoption. It's terrible for monopoly margins.
What is the $2.25B lesson from Cisco 2001?
The Inventory Write-down. A lack of demand didn't trigger Cisco's collapse—it was triggered by a "demand mirage" where buyers over-ordered, expecting infinite growth. When reality hit, Cisco wrote off $2.25 billion in unsellable inventory. Today, monitor "Ghost Demand" in neocloud GPU orders: Is every GPU purchase backed by a paying customer, or is it circular financing among a small group of well-funded friends?
What is UALink and why should investors care?
UALink is the industry's attempt to do to NVIDIA what Ethernet did to Cisco: turn expensive, proprietary connecting wires into a cheap, standard commodity. Launched in May 2024 and releasing its 1.0 specification in April 2025, the consortium now includes 85+ members—AMD, Apple, AWS, Cisco, Google, Intel, Meta, Microsoft, and more. If UALink succeeds, GPU clusters become interchangeable, and NVIDIA loses its interconnect lock-in.
Sources:
Open Standards & Technology
UALink Consortium Launch (May 2024) — Business Wire
NVIDIA CUDA Toolkit — NVIDIA Developer
Books & References
Zero to One by Peter Thiel — Amazon
All-In Podcast (Sacks on NVIDIA/Cisco) — YouTube
UALink 2025 Updates
UALink Consortium Official Site — 85+ members, 1.0 spec
UALink 1.0 Specification White Paper (April 2025) — Technical details
UALink 1.0 Specification Overview — Board members, timeline
UALink Releases 200G 1.0 Spec — Data Center Dynamics
UALink Adds Alibaba, Apple, Synopsys to Board — Business Wire
UALink vs NVLink Analysis — Network World
Cisco Secure AI Factory with NVIDIA
Cisco Secure AI Factory Launch (March 2025) — Cisco Newsroom
Cisco AI Innovations at GTC (Oct 2025) — First NVIDIA Cloud Partner architecture
Cisco + NVIDIA Security for Enterprise AI — NVIDIA Blog
Cisco Secure AI Factory FAQ — Cisco



