
From Chip Wars to Space Data Centers: The Next Decade of AI

AI-assisted

From chip wars to space data centers, from the SaaS life-or-death dilemma to the essence of investing — a deep dive into a high-density interview

GPUs, TPUs, & The Economics of AI Explained
Invest Like the Best podcast · Hosted by Patrick O'Shaughnessy · YouTube

Gavin Baker · Guest
Managing Partner & CIO, Atreides Management

Former Fidelity portfolio manager, 18 years of investment experience, founded Atreides Management in 2019

Investing is a pursuit of truth. If you find the truth first, and you're right, that's how you generate alpha. And it has to be a truth that others haven't yet seen.

This quote comes from Atreides Management founder Gavin Baker's interview on Patrick O'Shaughnessy's podcast "Invest Like the Best." Gavin is regarded as one of the most passionate and insightful investors in tech investing, and this nearly two-hour conversation covered GPUs, TPUs, AI economics, space data centers, the future of SaaS, and even his life transition from ski instructor to investor.

This interview is incredibly information-dense, packed with "stop and think" moments. Here are some of the most thought-provoking insights.


How to Track AI Development? Start by Spending $200

At the start, Patrick asked a very practical question: When a new model like Gemini 3 launches, how do you process this information?

Gavin's answer was direct: you have to use it yourself.

But the key isn't just "using it" — it's which version you use. He was surprised by investors who try the free version and conclude "AI is nothing special":

The free version is like dealing with a 10-year-old, and then based on this 10-year-old's performance, predicting what they'll be like at 35. You can pay — actually, you have to pay to get the highest tier membership, $200 per month. Those are the real 30-35-year-old adults.

This analogy is spot-on. Many people try a free model and conclude "AI is nothing special," but if you've used the top-tier models — Claude Opus 4.5, Gemini 3 Pro, or GPT-5.2 Reasoning — the experience is completely different.

On the topic of spending, I personally spend about $300/month on various AI subscriptions, with the bulk going to Claude Code Max ($250/month). If you're a developer with heavy coding needs, I strongly recommend subscribing directly to the official Claude Code ($125 minimum) rather than using mirror sites: you never know whether a mirror is actually serving the real model, and the Max plan is genuinely cost-effective — far more economical than pay-per-use.

As for information channels, Gavin's answer might surprise many: X (Twitter).

He says AI development largely "happens in real-time on X." There are probably 500 to 1,000 people on Earth who truly understand the AI frontier, a significant portion in China, and you need to closely follow these people. He specifically mentions Andrej Karpathy:

Every piece that Andrej Karpathy writes, you need to read it three times. Minimum.

Andrej Karpathy · Coined "Vibe Coding"
Founder, Eureka Labs

Stanford PhD, OpenAI founding member, former Tesla AI Director, founded AI education company Eureka Labs in 2024

As someone who also follows AI developments, I deeply resonate with this. AI discussions on Twitter are indeed more real-time and in-depth than any news outlet. Researchers from labs post directly about the latest developments, and even "argue" with each other — Gavin mentions that Meta's PyTorch team and Google's Jax team once had a public dispute on X, until both lab heads had to step in and declare: "Our people are not allowed to trash-talk the other lab."


Scaling Laws: Our "Ancient Egyptian Moment"

After Gemini 3's release, many focused on what it revealed about scaling laws. Gavin offered a perspective I'd never heard before:

Our understanding of pre-training scaling laws is probably like the ancient Egyptians' understanding of the sun. They could measure with extraordinary precision — the east-west axis of the Great Pyramid aligns perfectly with the equinoxes, as does Stonehenge. Perfect measurement. But they didn't understand orbital mechanics. They didn't know why the sun rises in the east and sets in the west.

This analogy made me pause for a long time. We can indeed predict very precisely: increase a model's compute by 10x and performance improves by a certain amount. But we don't know why. This isn't a "law" — it's an "empirical observation," one we measure with extraordinary precision but whose underlying principles we don't understand.
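The "measure precisely, understand nothing" point can be made concrete. Empirically, loss tends to follow a power law in training compute; the sketch below uses invented coefficients (not from any published fit) to show how precisely you can extrapolate the curve without the curve telling you *why* it holds.

```python
# Illustrative power-law scaling curve: loss(C) = a * C**(-alpha) + irreducible.
# All coefficients here are invented for illustration, not a published fit.

def loss(compute_flops: float, a: float = 50.0, alpha: float = 0.05,
         irreducible: float = 1.7) -> float:
    """Predicted loss as a power law in training compute."""
    return a * compute_flops ** (-alpha) + irreducible

base = 1e24  # hypothetical FLOPs for the current model
for scale in (1, 10, 100):
    print(f"{scale:>4}x compute -> predicted loss {loss(base * scale):.4f}")

# The curve predicts the *size* of the improvement from 10x compute with
# great precision -- but nothing in it explains the mechanism, any more
# than the pyramids' alignment explained orbital mechanics.
```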

So why does Gemini 3 matter? Because it proved this "empirical observation" still holds. At a time when Blackwell chips were delayed and everyone worried whether "scaling laws have hit a wall," Gemini 3 gave a clear answer: they haven't.

But what's more interesting is what Gavin said next: without the emergence of reasoning models, AI development from 2024 to 2025 would have stagnated.

Why? Because after xAI managed to get 200,000 Hopper GPUs working in concert, the next step required waiting for Blackwell chips. You can't keep more than roughly 200,000 Hopper GPUs "coherent" — working as a unified training system — and Blackwell was delayed.

Without reasoning models, from mid-2024 to now, there would have been zero progress in AI. Everything would have stalled. Can you imagine what that would have meant for markets? We'd be living in a completely different environment. Reasoning models in some ways saved AI, because they allowed progress without Blackwell.

This is a perspective I hadn't considered before: reasoning models (like o1) aren't just a new capability — they actually "saved" the entire AI industry's development trajectory.


Chip Wars: Google Is "Sucking Out the Oxygen"

On the GPU vs TPU competition, Gavin said something that stuck with me:

Google is currently the lowest-cost producer of tokens. What they've been doing, I would say, is "sucking the economic oxygen out of the AI ecosystem" — which is an extremely rational strategy for them.

As the low-cost producer, Google has been offering AI services at low prices (even at a loss), making life difficult for competitors. This is a classic tech industry play, but Gavin points out an interesting shift:

AI is the first time in my career where being the "low-cost producer" actually matters in tech. Apple isn't worth trillions because they're the low-cost producer of phones. Microsoft isn't worth trillions because they're the low-cost producer of software. NVIDIA isn't worth trillions because they're the low-cost producer of AI accelerators. It's never mattered before.

But in the AI era, when power becomes the limiting factor, tokens per watt becomes crucial. If you can produce 3-5x more tokens per watt, that's 3-5x the revenue. The price of compute becomes irrelevant because the bottleneck is power.
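Gavin's point reduces to one line of arithmetic: under a fixed power budget, revenue scales with tokens per watt, and the purchase price of the chips drops out entirely. A back-of-envelope sketch, with all numbers hypothetical:

```python
# Back-of-envelope: under a power-constrained buildout, revenue is set by
# tokens-per-watt, not by what the chips cost. All numbers are hypothetical.

def annual_token_revenue(power_budget_mw: float,
                         tokens_per_joule: float,
                         price_per_million_tokens: float) -> float:
    """Revenue from running a fixed power budget flat-out for a year."""
    joules_per_year = power_budget_mw * 1e6 * 365 * 24 * 3600
    tokens = joules_per_year * tokens_per_joule
    return tokens / 1e6 * price_per_million_tokens

POWER_MW = 100   # same grid hookup for both operators
PRICE = 0.50     # $ per million tokens, same market price

baseline = annual_token_revenue(POWER_MW, tokens_per_joule=1.0,
                                price_per_million_tokens=PRICE)
efficient = annual_token_revenue(POWER_MW, tokens_per_joule=4.0,
                                 price_per_million_tokens=PRICE)

print(f"baseline:       ${baseline:,.0f}/yr")
print(f"4x tokens/watt: ${efficient:,.0f}/yr")
print(f"revenue multiple: {efficient / baseline:.1f}x")  # 4.0x
```

Note that `POWER_MW` and `PRICE` cancel in the ratio: with power as the binding constraint, the efficiency multiple flows straight through to revenue.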

This landscape is about to change. Blackwell chips are finally being deployed, and Gavin predicts the first Blackwell-trained model will come from xAI:

Blackwell

NVIDIA's next-generation GPU architecture released in 2024, succeeding Hopper. Built on a custom TSMC 4NP process, Blackwell integrates 208 billion transistors per chip, supports FP4 precision, delivers ~5x AI inference performance improvement over Hopper, and 25x better energy efficiency. The first product, B200, entered mass production in late 2024.

Source: NVIDIA

According to Jensen, nobody builds data centers faster than Elon. Jensen has said this publicly.

Once Blackwell and the subsequent Rubin chips are deployed at scale, Google's advantage as the low-cost producer will evaporate. Will they still be willing to operate their AI business at -30% gross margins then? The math changes completely.


Space Data Centers: Crazy, But Right from First Principles

When Patrick asked about "any crazy ideas not being discussed enough," Gavin brought up space data centers. At first I thought he was joking, but after hearing his analysis, I realized this might be the most visionary part of the entire interview.

From a first-principles perspective, space data centers are superior to earth-based data centers in every dimension.

His argument:

1. Energy: In orbit, a satellite can see the sun nearly 24 hours a day, and without night, weather, or atmosphere, a panel collects roughly 6x the energy it would on the ground. Constant sunlight also means you don't need batteries — a significant portion of ground solar costs. So the lowest-cost energy source in the solar system is "space solar."

2. Cooling: On Earth, a large share of data center cost goes to cooling. But in space? Cooling is free. Put the radiators on the satellite's shaded side, facing deep space at temperatures approaching absolute zero.

3. Networking: In data centers, racks connect via fiber optic — essentially lasers through cables. What's the only thing faster? Lasers through vacuum. So laser-connected satellites in space would actually have faster networking than terrestrial data centers.

4. User experience: Currently, when you ask AI a question, the signal travels from your phone to a cell tower, through fiber, to some data center, gets processed, and returns the same way. But if satellites can communicate directly with phones (Starlink has already demonstrated direct-to-phone capability), the entire chain becomes much shorter.
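The energy argument in point 1 is mostly a capacity-factor argument, and it survives a rough sanity check. Ground solar peaks near 1 kW/m² but averages a ~20% capacity factor (night, weather, sun angle), while a well-chosen orbit sees the full ~1.36 kW/m² solar constant almost continuously. The figures below are approximations, not precise engineering numbers:

```python
# Sanity check on the "space solar" claim using rough public numbers.
# Capacity factors and flux values are approximations for illustration.

GROUND_PEAK_KW_M2 = 1.0        # ~1 kW/m2 at the surface, clear sky, noon
GROUND_CAPACITY_FACTOR = 0.20  # night, weather, sun angle (typical utility solar)
SPACE_FLUX_KW_M2 = 1.36        # solar constant above the atmosphere
SPACE_CAPACITY_FACTOR = 1.0    # a sun-synchronous orbit is lit ~continuously

ground_avg = GROUND_PEAK_KW_M2 * GROUND_CAPACITY_FACTOR  # kW/m2, annual average
space_avg = SPACE_FLUX_KW_M2 * SPACE_CAPACITY_FACTOR

print(f"ground average: {ground_avg:.2f} kW/m2")
print(f"space average:  {space_avg:.2f} kW/m2")
print(f"advantage: {space_avg / ground_avg:.1f}x")  # ~6.8x
```

The ratio lands in the same ballpark as the ~6x figure from the interview, which is why the claim is less crazy than it first sounds.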

Of course, this requires Starship mass launches to become reality — probably 5-6 more years. But Gavin points to an interesting convergence: Tesla, SpaceX, and xAI are converging. xAI will be the "intelligence module" for Optimus robots, SpaceX will build data centers in space to provide compute for AI — the three companies are forming a flywheel of mutually reinforcing competitive advantages.


SaaS's "Burning Platform"

If the previous sections made you excited about AI's future, this one might worry you about many existing companies.

Gavin states bluntly: application SaaS companies are making the exact same mistake that brick-and-mortar retailers made when facing e-commerce.

Brick-and-mortar retailers looked at Amazon and thought "e-commerce is a low-margin business — how could it be more efficient than us? Customers currently pay to come to our stores and carry products home themselves." They clearly saw customer demand but refused to invest because they didn't like e-commerce's margin structure. The result? Amazon's North American retail margins are now higher than many traditional retailers.

SaaS companies face the same situation now. Traditional software is written once and can be infinitely replicated and distributed, with gross margins reaching 80-90%. But AI is different — every use requires new computation, and good AI companies might only achieve 40% margins.
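The margin gap is mechanical once inference compute enters cost of goods sold. A toy comparison, with all figures invented for illustration:

```python
# Toy gross-margin comparison: classic SaaS vs. an AI-native product.
# All figures are invented for illustration.

def gross_margin(revenue: float, cogs: float) -> float:
    return (revenue - cogs) / revenue

# Classic SaaS: code is written once; serving a marginal user is nearly free.
saas_revenue = 100.0
saas_cogs = 15.0          # hosting, support

# AI product: every request burns GPU time, so COGS scales with usage.
ai_revenue = 100.0
ai_inference_cost = 55.0  # GPU time consumed serving requests
ai_other_cogs = 5.0

print(f"SaaS gross margin: {gross_margin(saas_revenue, saas_cogs):.0%}")
print(f"AI gross margin:   {gross_margin(ai_revenue, ai_inference_cost + ai_other_cogs):.0%}")
# Protecting the 85% line means refusing the 40% business -- the
# retailer-vs-ecommerce trap described above.
```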

If you want to build AI agents and you're unwilling to operate at less than 35% gross margins, you will never succeed. Because AI-native companies are operating at those margins. If you want to protect 80% gross margins, you are guaranteeing yourself failure in AI. Absolutely guaranteeing.

Gavin calls this a "life-or-death decision," and except for Microsoft, almost everyone is failing.

He references Nokia's famous "burning platform" memo: your platform is on fire. But there's actually a perfectly good new platform right next to you. Jump over, then go back and put out the fire on the old one. Now you have two platforms.

Salesforce, ServiceNow, HubSpot, GitLab, Atlassian — he believes all these companies can and should run this playbook: publicly disclose your AI revenue, publicly disclose your AI gross margins (low margins actually prove it's "real AI"), then point to your venture-backed competitors who are still losing money and say "I have something they don't: a business that generates cash flow."


An Investor's Origin Story

Near the end, Patrick asked a more personal question: how would you explain what you do to a young person?

Gavin's answer starts with "investing is a pursuit of truth," but the really interesting part is his life story.

His original plan was: teach skiing in winter, guide rafting in summer, rock climb in the off-season, while dabbling in novel writing and wildlife photography. This was his "life plan" in college, and his parents were fully supportive.

But his parents made one small request: could he find a professional internship — just one, anything?

The only internship he could find was at a brokerage firm's private wealth management division. The job was simple: whenever the firm published a research report, he'd check which clients held that stock, then mail them the report.

Then he started reading those reports.

I thought: "Oh my God, this is the most interesting thing I can imagine."

He understood investing as a "game of skill and luck," somewhat like poker. You can lose due to bad luck — say, a meteorite hits your company's headquarters — but most of the time, skill matters. And gaining an edge means having the deepest historical knowledge, combined with the most accurate understanding of the present world, to form a differentiated view of "what happens next."

That was day three of his internship. He went to a bookstore, bought Peter Lynch's book, and finished it in two days. Then he read Buffett, read Market Wizards, read Buffett's shareholder letters — twice. Then taught himself accounting. Back at school, he switched his major from English and History to History and Economics.

He also shared a formative experience from his time at the Alta ski resort, where he cleaned hotel rooms. Once, while cleaning, he noticed a guest reading the same book he was reading and said, "That's a great book — I'm about the same place as you." The guest looked at him like an alien, then asked with even more shock: "You read books?"

That experience permanently changed how I treat other people.


Epilogue: Whatever AI Needs, It Gets

Near the interview's close, Gavin said something I found most fascinating:

Over the past two years, whatever AI needed to keep developing, it got. Have you ever seen U.S. public opinion shift on any issue as fast as it shifted on nuclear power? It just happened. And it happened right when AI needed it to happen. Now we're hitting power constraints on Earth, and suddenly the discussion about space data centers appears. Every time something might slow AI down, everything accelerates instead.

This recalls Kevin Kelly's "technium" concept from What Technology Wants: technology as a whole seems to have its own will, wanting to become ever more powerful.

Maybe it's just coincidence. Maybe it's just many smart people solving problems. But the pattern Gavin observes — AI encounters an obstacle, and the obstacle somehow gets removed — is indeed worth pondering.
