STOCK TITAN

Everyone Is Wrong About the AI "Bubble" — Here's What I See Building These Systems

After nearly a decade building AI systems daily, here's what the endless bubble debates are missing

Key Numbers at a Glance

  • Data Center Power (2024): 415 TWh (~1.5% of global electricity)
  • Big Tech AI Capex (2025)**: $300B–$350B (combined guidance range)
  • Amazon Robots: 1,000,000+ (across 300+ facilities)
  • Grid Connection Wait: 7+ years (in some US regions*)

*Grid connection times vary significantly by region and project size. See Energy Reality section for details.
**Combined 2025 capex guidance from official company announcements: Microsoft ~$80B (Microsoft Blog), Alphabet ~$75B (CNBC), Meta $64–72B (Meta IR), Amazon $100B+ (CNBC). Sum: $319B–$327B; later reporting shows Amazon approaching $125B.

TL;DR: The Thesis in 60 Seconds

  • The bears are wrong: AI capability is genuinely unprecedented and still improving. The technology is not fake.
  • The bulls are wrong: Most current AI applications are poorly designed and will fail or commoditize. Today's winners won't all be tomorrow's winners.
  • The real insight: Value is migrating up what I call the "Scarcity Stack": from chips to energy to physical execution. The next wave is robotics and industrial automation, not chatbots.
  • What to watch: Inference costs, power deals, grid access, robotics deployments, and trust infrastructure (not hype cycles).

The AI Bears Are Wrong. The Bulls Are Wrong Too.

I keep seeing the same AI bubble arguments repeated. Some are reasonable. Most collapse different cycles into one confused story.

The bears look at failing chatbot startups and declare the whole thing is over. The bulls look at capability curves and declare every company needs AI or dies. Both camps are confusing a technology transition with a product cycle with a market cycle, three completely different things.

Here's what I've learned building AI systems since 2016: the bubble isn't in AI. The bubble is in the assumption that today's applications represent what AI will become.

Or, put more simply:

Pets.com dying didn't prove the internet was fake. AI wrappers dying won't prove AI is fake.

What most people miss: the bottleneck is starting to shift. Right now it's still chips (who has GPUs?), but energy is entering the conversation (who has power and grid access?). Eventually it becomes physical execution and trust (who can make AI do things in the real world, safely and accountably?). I call this progression the Scarcity Stack, and it's the key to understanding where value actually moves.

But let me start somewhere unexpected.

Let's Be Honest: You're Tired of AI

Let's be honest: you're exhausted.

You're tired of every SaaS product forcing a mediocre chatbot into your workflow that nobody asked for. You've watched companies slap "AI" on products that clearly don't need it, and you've seen another startup pitch claiming to be "the ChatGPT of [random industry]".

I'm tired too.

This exhaustion is real. And it's exactly how people felt in 1999.

Back then, companies were adding ".com" to their names just to pump stock prices. We had "internet-enabled" pet food delivery that lost money on every order. It was a mania of low-effort applications forced onto a public that wasn't ready.

The exhaustion you feel right now? That's a signal. But it's not the signal most people think it is.

Exhaustion isn't evidence of a bubble. It's evidence that we're in the awkward teenage phase of a genuinely transformative technology.

And if you check out now, you might miss what's actually happening underneath the noise.

Key Terms Explained Simply

Before we go deeper, let me define a few terms you'll see throughout this piece. If you're already familiar with AI infrastructure, feel free to skip ahead.

Training vs. Inference

  • Training = Teaching the AI. You feed it massive amounts of data so it learns patterns. This is expensive and happens occasionally (building the brain).
  • Inference = Using the AI. Once trained, the model answers questions, generates text, or makes decisions. This happens continuously and scales with users (using the brain).

Token
A chunk of text the AI reads and writes. Roughly, 1 token ≈ 4 characters or ¾ of a word. When you hear "cost per token", it means how much it costs for the AI to process or generate that chunk.
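To make the heuristic concrete, here is a tiny sketch of that arithmetic (the 4-characters-per-token ratio is only a rule of thumb; real tokenizers vary by model, and the price used below is purely illustrative):

```python
def estimate_tokens(text: str) -> int:
    """Rough token count using the ~4 characters per token heuristic."""
    return max(1, round(len(text) / 4))

def estimate_cost(text: str, usd_per_million_tokens: float) -> float:
    """Approximate cost to process `text` at a given per-million-token price."""
    return estimate_tokens(text) / 1_000_000 * usd_per_million_tokens

headline = "Acme Corp reports Q3 revenue of $120M, up 8% year over year."
print(estimate_tokens(headline))      # 15 tokens for this 60-character string
print(estimate_cost(headline, 2.0))   # a tiny fraction of a cent per headline
```

The point of the exercise: per-item costs look negligible, but they are multiplied by every user, every query, every day, which is why cost per token is the metric this article keeps returning to.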

PUE (Power Usage Effectiveness)
A data center efficiency ratio. PUE of 1.0 means all electricity goes to computing; PUE of 2.0 means half goes to cooling and overhead. Lower is better. Modern AI data centers target 1.1-1.3.
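The ratio itself is simple arithmetic; a quick sketch (facility numbers below are illustrative, not from any real data center):

```python
def pue(total_facility_energy: float, it_equipment_energy: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy."""
    return total_facility_energy / it_equipment_energy

def overhead_fraction(pue_value: float) -> float:
    """Share of energy going to cooling and overhead rather than computing."""
    return 1 - 1 / pue_value

# A facility drawing 12 MWh while its servers consume 10 MWh:
print(pue(12, 10))             # 1.2
print(overhead_fraction(1.2))  # ~0.167: about a sixth of the energy is overhead
```

Note how nonlinear the payoff is: improving PUE from 2.0 to 1.2 cuts the overhead share from 50% to about 17%, which is why hyperscalers chase every decimal point.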

PPA (Power Purchase Agreement)
A long-term contract to buy electricity at a fixed price, often from a specific power plant. Tech companies are signing 10-20 year PPAs to secure reliable power for data centers.

Agent Identity
Treating AI agents like users in a security system, with permissions, authentication, and audit logs. This becomes critical when AI can take actions (book meetings, execute trades, access data) rather than just answer questions.

Interconnection Queue
The line of projects waiting to connect to the electrical grid. New data centers must apply, get studied, and wait for approval. In some US regions, this queue stretches 5-10+ years.

What I've Learned Building AI Since 2016

I've been working with artificial intelligence since 2016.

Not talking about it. Not writing think pieces. Actually building systems: training models, debugging failures at 2 AM, watching projects that seemed promising collapse into nothing, and occasionally seeing something genuinely work.

When I founded StockTitan in 2019, AI was already at the core of what we built. Not as a marketing buzzword, but as the actual infrastructure processing financial data at a scale and speed that would be impossible otherwise.

In 2016, here's what AI could do:

  • Image classification that worked okay with millions of training examples
  • Sentiment analysis that broke whenever someone used sarcasm
  • Speech recognition that required quiet rooms and clear accents
  • Recommendation systems that were useful but fragile

Here's what AI can do now:

  • Systems that understand context, nuance, and intent across languages
  • Models that reason through multi-step problems
  • Vision systems that navigate complex real-world environments
  • Code generation that genuinely accelerates professional developers

This isn't incremental progress. This is a phase transition.

Where AI genuinely helps us at StockTitan:

  • Classification: "Is this about earnings, M&A, legal issues, or guidance?"
  • Summarization: "What changed? What matters?"
  • Extraction: "Pull the numbers, names, and deadlines"
  • Clustering: "Which of these 50 stories are actually the same story?"

Where AI predictably fails:

  • It hallucinates when you ask it to infer missing facts
  • It sounds confident even when it's completely uncertain
  • It can be gamed by repeated narratives
  • It misses nuance and sarcasm regularly

This is why the "AI is magic" crowd sounds detached to people who actually build. The real work isn't "add AI". The real work is defining tasks precisely, constraining outputs aggressively, verifying against primary sources, and measuring accuracy relentlessly.

Most companies right now are doing the equivalent of: "We added AI summaries. Ship it".

That's the current reality. It doesn't mean the technology is fake. It means the industry is learning where AI is worth the cost and where it isn't.

The Dot-Com Lesson Everyone Misremembers

In 1999, Pets.com raised $82.5 million to sell pet food online. They spent $17 million on advertising in a single quarter. They folded within a year of their IPO, becoming the poster child for dot-com excess.

Around the same time, Amazon (also selling things online) saw its stock price collapse from over $100 to under $10. That's a decline exceeding 90%. The company faced serious liquidity concerns and had to raise capital through a convertible bond offering in early 2000, just before the broader market crashed.

The narrative at the time was clear: "The internet was overhyped. These companies will never make money. The whole thing was speculative mania".

That narrative was partially correct. Pets.com was a terrible business. Webvan was a terrible business. Most dot-com companies were terrible businesses built on the assumption that "being online" was a business model.

But here's what that narrative missed: the internet was exactly as transformative as the hype suggested. Just not in the way people expected, and not on the timeline they predicted.

Amazon, at its lowest point in 2001, had a market cap in the low single-digit billions. Today it's worth on the order of two trillion dollars. The transformation was real, but it took far longer and looked far messier than anyone predicted.

The "Pets.com" of 2026

  • AI copywriting startups (features, not companies)
  • AI image generators with no moat
  • "Wrappers" around OpenAI that add no value
  • Products that were fine before AI was shoved in

The "Amazon" of 2026

  • Physical infrastructure builders
  • Robotics and industrial automation
  • Companies solving energy and grid constraints
  • Platforms that make AI execution trustworthy

The lesson isn't "bubbles are fake". The lesson is: transformative technologies create both real bubbles (in bad applications) and real value (in good applications) simultaneously.

The people screaming "bubble!" in 2001 were right about Pets.com and wrong about the internet. The people screaming "bubble!" today might be right about AI toothbrushes and wrong about AI itself.

Three Layers People Keep Confusing

When people argue about whether "AI is a bubble", they're usually talking past each other because they're conflating three completely different things:

Layer 1: Technology Adoption (What the World Can Do)

This is the real shift. AI systems keep getting more capable, cheaper, and easier to deploy. This layer is not a bubble. It's a genuine expansion of what's possible.

Layer 2: Product Reality (What Users Actually Want)

Most products fail. Many "AI features" are forced. Users get fatigued. Then a few use cases break through and become normal. This layer sees lots of failures, which is normal for any platform shift.

Layer 3: Market Valuation (What Investors Price In, and When)

Markets overshoot. They also undershoot. Great technologies can be paired with terrible timing. This layer is where bubble dynamics actually play out.

When someone says "AI is a bubble", they might be talking about Layer 2 or Layer 3, while you're thinking about Layer 1. That's why the debate feels endless.

The internet analogy that actually helps: in the early internet era, "internet-enabled" was printed on everything. It was mostly marketing. Many products were nonsense. Then, quietly, the internet became the default plumbing of everything that mattered.

AI is going through the same branding phase. The difference is speed.

The Scarcity Stack: Where Value Actually Moves

When a technology becomes widely available, value often shifts to whatever remains scarce and hard to replicate.

For AI, I think about value migration in layers, what I call the "Scarcity Stack":

The AI Scarcity Stack

  • 5. Physical Instantiation: robots, logistics, factories
  • 4. Energy & Cooling: power, grid access, siting
  • 3. Trust & Identity: who can do what, governance
  • 2. Orchestration: making compute useful at scale
  • 1. Compute: chips, data centers

Value tends to migrate upward as lower layers commoditize

How Value Migration Actually Works

The concept is simple: as lower layers become abundant, value migrates to whatever remains constrained.

But let me be clear about where we actually are:

2023-2025: The dominant conversation is still chips. "Who has GPUs? Who can get GPUs?" NVIDIA and AMD aren't going anywhere. The constraint remains Layer 1 (Compute), and value continues to accrue to whoever controls chip supply. This will remain true for years.

What's emerging in 2025: Energy is just starting to enter the conversation. "Who has power? Who can get grid connections?" We're seeing the first major power purchase agreements and nuclear deals. The four largest US hyperscalers (Microsoft, Google, Amazon, Meta) are now spending over $300 billion annually on infrastructure, and a growing portion of that is going toward securing power, not just chips. But this is the beginning, not a transition that already happened.

What comes next: Layer 5 (Physical Instantiation) and Layer 3 (Trust) will eventually become the key constraints. Who can make AI do things in the real world? Who can govern AI agents safely? We're not there yet, but the companies positioning now will have advantages when we arrive.

This is why single-layer narratives are fragile. If you only see the GPU story, you'll miss energy becoming a bottleneck. If you only see energy, you'll miss that physical execution is where defensibility ultimately emerges.

A Concrete Example

Consider Amazon. They're investing across multiple layers simultaneously:

  • Compute: Developing custom chips (Trainium, Inferentia)
  • Orchestration: AWS Bedrock for model deployment
  • Energy: $650 million acquisition of a Pennsylvania data center campus adjacent to a nuclear plant
  • Physical: Over 1 million robots in warehouses

That's not "betting on AI". That's positioning across the entire scarcity stack.

The question isn't "is AI a bubble?" It's: what becomes scarce next, and who is positioned for that scarcity?

How This Maps to Public Companies

Important: This is not a recommendation to buy or sell any security. This section maps research signals to public companies as examples of how to think about the scarcity stack framework. Always do your own research.

For those following AI infrastructure through public markets, here's how the scarcity stack maps to observable companies and the signals worth tracking:

Compute & Inference Economics

Examples: NVDA, AMD

Research signals to track:

  • Data center revenue growth rates (quarterly earnings)
  • Inference vs. training revenue mix (inference is the growing share)
  • Competitive chip announcements and benchmarks
  • Customer concentration in hyperscalers

Why inference matters: Training a model is expensive but happens once. Inference (using the model) happens billions of times daily and scales with users. As inference becomes the dominant cost, efficiency improvements and cost-per-token metrics become critical competitive factors.
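A toy model makes the asymmetry vivid (all numbers below are hypothetical, chosen only to show the shape of the economics): training is a one-off expense, while inference cost scales with usage, so total cost is quickly dominated by the per-token inference price.

```python
def total_cost(train_cost: float, queries: float,
               tokens_per_query: float, usd_per_million_tokens: float) -> float:
    """One-off training cost plus usage-scaled inference cost."""
    inference = queries * tokens_per_query / 1_000_000 * usd_per_million_tokens
    return train_cost + inference

# Hypothetical: a $100M training run serving 1B queries/day, ~1,000 tokens each,
# at an illustrative $2 per million tokens.
daily = total_cost(0, 1e9, 1_000, 2.0)        # $2M/day in inference alone
yearly = total_cost(100e6, 365e9, 1_000, 2.0)  # inference dwarfs training cost
print(daily, yearly)
```

Under these made-up numbers, a year of inference runs over seven times the training bill, which is why a plateau in cost per token would matter far more than any single training run's price tag.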

Orchestration & Platform

Examples: MSFT (Azure AI), GOOGL (Vertex AI), AMZN (Bedrock)

Research signals to track:

  • AI services revenue disclosure
  • Enterprise adoption metrics in earnings calls
  • Developer ecosystem growth
  • Model deployment tooling announcements

Trust & Identity Infrastructure

Examples: MSFT (Entra Agent ID), enterprise governance platforms, model risk management vendors

Research signals to track:

  • Agent identity documentation and enterprise features
  • Governance and audit tooling announcements
  • Enterprise security certifications for AI systems (SOC 2, ISO 27001 for AI workflows)
  • Regulated industry AI approval frameworks (FDA, SEC, FINRA guidance on AI use)

Why this matters: Imagine an AI agent authorized to execute trades on your behalf or submit SEC filings for your company. Who approved that action? What were its permissions? Can you prove it did what it was supposed to? Without identity infrastructure (authentication, permissions, audit logs, approval workflows, and regulatory compliance), enterprises can't deploy agents for anything consequential. This extends beyond any single vendor: banks need model risk management, healthcare needs FDA-compliant AI documentation, and any regulated workflow needs immutable audit trails. This is the "passport system" for AI.

Energy & Data Center Infrastructure

Examples: Utilities with AI load growth, Data Center REITs

Research signals to track:

  • Hyperscaler capex guidance (currently $300B–$350B annually across major players)
  • Power purchase agreement announcements (length, MW, source)
  • Regional interconnection queue data
  • Efficiency improvements (PUE metrics trending toward 1.1-1.2)

Physical Instantiation & Robotics

Examples: AMZN (1M+ robots), TSLA (Optimus program)

Research signals to track:

  • Robot deployment numbers in earnings
  • Warehouse/factory automation metrics
  • Humanoid robotics milestones and timelines
  • Manufacturing cost curves

Key insight: Watch for companies that are investing across multiple layers of the stack simultaneously. Single-layer exposure may be vulnerable to value migration.

The Energy Reality (It's Not What You Think)

If you've read any AI skeptic content lately, you've seen the energy argument: AI data centers will consume impossible amounts of electricity, the grid can't handle it, this proves the whole thing is unsustainable.

Let me share the actual numbers.

According to the International Energy Agency's 2025 report, data centers consumed about 415 terawatt-hours (TWh) of electricity in 2024. That's approximately 1.5% of global electricity consumption.

By 2030, the IEA projects this will roughly double to about 945 TWh, equivalent to Japan's current total electricity consumption. Nature reported on these projections, noting the significant role AI plays in driving growth.

That sounds dramatic. But let's put it in context.

Global electricity consumption in 2024 was approximately 30,000 TWh. The projected increase for data centers, roughly 530 TWh through 2030, is a meaningful but manageable share of expected global electricity demand growth over that period.
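You can check these shares directly from the figures above (the ~30,000 TWh denominator is rounded, which is why the share lands just under the IEA's ~1.5%):

```python
global_twh_2024 = 30_000  # approximate global electricity consumption, 2024
dc_twh_2024 = 415         # data center consumption, 2024 (IEA)
dc_twh_2030 = 945         # projected data center consumption, 2030 (IEA)

share_2024 = dc_twh_2024 / global_twh_2024  # just under 1.5%
growth_twh = dc_twh_2030 - dc_twh_2024      # 530 TWh added over six years
print(f"{share_2024:.1%} of global electricity in 2024")
print(f"+{growth_twh} TWh of data center demand by 2030")
```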

Is this growth significant? Yes. Is it a crisis that makes AI unsustainable? No.

The Real Constraint: It's the Plug, Not the Power

Here's what the doomsayers miss: the constraint is local and time-based, not global.

Imagine you want to build a data center today:

  • Construction time: You can build the server hall in 18 months
  • Grid connection time: In parts of the US, the interconnection queue (the line of projects waiting to connect to the electrical grid) now stretches 7 to 10 years for large loads

This varies dramatically by region. Some areas have shorter queues; others are essentially closed to new large loads. The constraint isn't "the world is out of electricity". It's "the permitting and transmission infrastructure can't keep up with demand in specific locations".

Ireland offers a stark example. The Central Statistics Office reported data centers accounted for 21% of total metered electricity consumption in 2023. EirGrid, the grid operator, warned about a possible "mass exodus" of data centers if new connection agreements couldn't be signed.

This isn't "the world is out of energy". It's "this region's grid and policy timeline matters".

Why Big Tech Is Buying Nuclear Plants

If energy were just a rounding error, you wouldn't see hyperscalers locking in long-duration power deals.

Microsoft signed a 20-year agreement with Constellation tied to the restart of Three Mile Island Unit 1, adding about 835 MW of carbon-free power.

Amazon Web Services entered a power purchase agreement with Talen Energy for up to 1,920 MW from the Susquehanna nuclear plant. Separately, AWS paid $650 million to acquire a data center campus in Pennsylvania specifically because it was adjacent to existing nuclear generation.

They're not doing this because they love nuclear power. They're doing it because those plants are already connected to the grid. They bought a VIP pass to the grid. While competitors wait years in a regulatory queue, they can turn on servers much faster.

Simple analogy: Think of AI compute like a fleet of trucks. GPUs are the trucks. Models are the cargo. Energy is the fuel. Grid connection is the road access permit. You can own the best trucks on Earth, but if you can't get fuel at the depot and permits for the roads, your trucks sit idle.

The energy "crisis" argument also ignores something important: the IEA projects that renewables will meet roughly half of data center electricity demand growth through 2030.

Is energy a consideration for AI infrastructure? Absolutely. Is it a reason to dismiss the entire technology? That's like saying cars were a bubble because horses didn't need gasoline.

From Digital AI to Physical AI

Here's what the bubble-callers are missing: most of the economic value from AI hasn't fully materialized yet.

Right now, most AI applications are digital-to-digital: AI helping you write emails, AI summarizing documents, AI generating images. These applications are useful but limited. They're also easy to replicate, which means they'll get commoditized fast.

The transformative applications, the ones that will actually justify the infrastructure buildout, are AI-to-physical: robots, manufacturing, logistics, scientific research, drug discovery, materials science.

We Haven't Even Started

Here's what most people don't grasp: the compute and energy required for physical AI dwarfs what we're using today.

Running ChatGPT means generating text tokens. That's computationally intensive, but it happens in discrete bursts: you ask a question, the model thinks, you get an answer.

Running a robot in real-time is a different beast entirely. You need continuous inference from multiple sensors (cameras, LIDAR, force sensors), real-time decision-making at millisecond latency, and constant motor control adjustments. All of this runs simultaneously, indefinitely, for every robot in the fleet.

When people debate whether we have enough GPUs and power to run today's chatbots, they're missing the scale of what's coming. Industrial automation at scale will require orders of magnitude more hardware and energy than anything we've deployed so far. We're not at the end of the infrastructure buildout. We're at the very beginning.

The Numbers Tell the Story

Morgan Stanley estimates the humanoid robotics market could reach $5 trillion by 2050. Goldman Sachs projects the market hitting $38 billion by 2035, with their forecast increasing sixfold from earlier estimates specifically because of AI progress.

Why? Because the bottleneck for robotics was never hardware. It was software: specifically, the ability for robots to understand and navigate unstructured environments, manipulate objects they've never seen before, and adapt to unexpected situations.

That's exactly what modern AI unlocks.

Amazon as a Case Study

Amazon announced it deployed its one millionth robot, with a robotics network spanning more than 300 facilities worldwide.

These aren't the "dumb" robots of the past that followed magnetic tape on the floor. These are AI-driven agents that "see" chaos, navigate around spilled boxes, and sort millions of unique items.

This is the moat.

You can copy a chatbot's code in a weekend. You cannot copy a logistics network of a million robots and hundreds of warehouses.

Digital AI (Commoditizing)

  • Write emails
  • Summarize documents
  • Generate images
  • Answer questions

Physical AI (Defensible)

  • Warehouse automation
  • Manufacturing optimization
  • Autonomous logistics
  • Scientific research acceleration

The era of "digital wrappers" is ending. The era of "industrial automation" is beginning.

What I See Building StockTitan

When you run a platform like StockTitan, you're handling:

  • High-volume text (news, SEC filings, press releases)
  • Real-time updates that can't wait
  • Messy, inconsistent data from dozens of sources
  • Constant topic drift as markets evolve

The daily reality of building with AI is humbling. Every model we deploy has failure modes. Every pipeline needs monitoring. Every "improvement" can introduce new edge cases.

One challenge we face constantly: getting neutral evaluations when your sources have built-in biases. A press release is designed to present the company in the best possible light. Bad quarterly results get reframed as "strategic repositioning". A failed product launch becomes "refocusing on core competencies". A lawsuit becomes "we believe the claims are without merit".

This is where AI can be dangerously naive. If you feed a language model a press release that skillfully reframes a 40% revenue decline as "challenging market conditions requiring temporary adjustments", the AI will often accept that framing at face value. It sounds confident. It uses professional language. It must be fine, right?

This is the sycophancy problem that plagues current LLMs. They're trained to be helpful and agreeable, which makes them susceptible to persuasive framing. Show them a well-written spin document, and they'll often echo the spin rather than cut through it. Unless you explicitly design systems to cross-reference claims, compare against historical data, and flag inconsistencies, the AI becomes an amplifier of whoever wrote the source material.
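A minimal sketch of that cross-referencing idea (the function name, figures, and threshold are entirely hypothetical; a production pipeline would be far more involved): instead of accepting a press release's framing, compare the narrative's implied magnitude against the numbers in the primary filings and flag the mismatch.

```python
def flag_spin(claimed_change_pct: float, reported_change_pct: float,
              tolerance_pct: float = 1.0) -> bool:
    """Flag when a narrative's claimed change diverges from the filed numbers."""
    return abs(claimed_change_pct - reported_change_pct) > tolerance_pct

# Press release frames results as a "temporary adjustment" (~-5%);
# the primary filings show revenue falling from $500M to $300M.
prior_revenue, current_revenue = 500.0, 300.0
reported = (current_revenue - prior_revenue) / prior_revenue * 100  # -40.0
print(flag_spin(claimed_change_pct=-5.0, reported_change_pct=reported))  # True
```

The design point is that the check anchors on primary-source numbers, not on language: a model can be talked out of a judgment, but it cannot be talked out of an arithmetic comparison it is forced to run.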

This is why I'm skeptical of both camps in the AI debate. The bears who've never built anything dismiss capabilities that are genuinely transformative. The bulls who've never shipped anything underestimate how hard it is to make AI reliable when your inputs are designed to deceive.

The truth is in the middle: AI is genuinely powerful and genuinely limited, often in the same system, on the same day.

What Could Prove Me Wrong

Intellectual honesty requires acknowledging what would change my view. Here's what I'm watching:

Signals That Would Weaken the AI Thesis

  • Inference costs stop falling. If the cost per token plateaus or rises, AI deployment economics break for many use cases. This is the single most important metric to watch.
  • Energy costs spike faster than efficiency improves. If power becomes genuinely scarce and expensive faster than PUE improves, infrastructure buildout stalls.
  • Regulation blocks deployment in key industries. Heavy-handed AI regulation (particularly in EU or US) could significantly slow adoption in healthcare, finance, and autonomous systems.
  • Robotics remains stuck in demos. If humanoid and industrial robots can't show clear ROI at scale within 3-5 years, the "physical AI" thesis weakens considerably.
  • Enterprises refuse agent autonomy. If liability concerns prevent AI agents from taking real actions (not just suggesting), the value ceiling drops dramatically.
  • Open source commoditizes everything. If models become truly commodity and no differentiation emerges in any layer, returns compress for everyone including infrastructure providers.

I don't expect any of these to fully materialize, but I'm watching all of them. The absence of these signals reinforces the thesis; their emergence would require reassessment.

Five Questions Worth Tracking

Instead of asking "Is AI a bubble?", here are five measurable questions you can track over time:

1. Is demand real, or just demos?

Look for durable signals: repeat usage, workflow integration, budget lines that persist beyond pilots. Demos are cheap. Renewals are evidence.

2. Are costs falling faster than usage is rising?

This is the "inference economics" question. Training is expensive but happens occasionally. Inference happens continuously and scales with users. If costs fall and usage rises, AI spreads.

3. Is power and grid access becoming a gating factor?

Track: long-term power deals (like Microsoft's Constellation PPA), capacity announcements with MW figures, regional interconnection queue data.

4. Is trust infrastructure becoming mandatory?

Track: agent identity tooling, governance and audit features, enterprise requirements that treat "agents" like users. Microsoft's Entra Agent ID documentation is a concrete signal this is being formalized.

5. Is AI leaving the screen?

Track: robotics deployment, automation in logistics and manufacturing, "digital twin" adoption, real-world performance (not just demos). Amazon's one million robot milestone is one public marker of physical scale.

This isn't a prediction framework. It's a way to avoid getting hypnotized by narratives.

The Bottom Line

If you force me to summarize nearly a decade of building with AI:

  • AI is not a fad. The underlying capability is genuinely unprecedented.
  • Many current AI products are bad and will die. The wrappers, the forced features, the marketing gimmicks: most disappear.
  • That failure is part of the learning curve. Bad applications failing is not the same as the technology failing.
  • The next big wave is physical, not purely digital. Robotics, manufacturing, logistics: hard to copy, defensible at scale.
  • Energy shapes strategy but isn't a hard stop. The constraint is local and time-based, not global.
  • Timing matters enormously. Being early is indistinguishable from being wrong until suddenly it isn't.

If you remember one thing: Don't confuse "AI is being misapplied in many places" with "AI is a bubble that will burst". The first is obviously true. The second fundamentally misunderstands what's happening.

Twenty-five years ago, people watched Pets.com implode and concluded the internet was a fad. Some of those people missed the entire digital transformation of society.

I believe we're at a similar moment. The bad applications are obvious and deserve to fail. The transformative applications are still being built, often by people too busy working to write hot takes.

The question isn't whether there's a bubble. There are always bubbles in new technologies, because capital is impatient and hype cycles are human nature.

The question is whether you can distinguish between the applications that are bubbles and the technology that's transformation.

From where I sit, in the code, in the data, in the daily work of building these systems, the transformation is unmistakably real.

And it's just getting started.

Frequently Asked Questions

What percentage of global electricity do AI data centers actually use?

According to the International Energy Agency, data centers (including AI) consumed approximately 415 TWh in 2024, which represents about 1.5% of global electricity consumption. This is projected to roughly double by 2030, but will still remain a single-digit percentage of global electricity use.

Is the AI bubble like the dot-com bubble?

There are similarities, but the lesson is often misremembered. The dot-com crash didn't prove the internet was a fad. It proved that bad applications fail while transformative technology continues. Many AI applications today will fail (like Pets.com), but the underlying technology is genuinely transformative (like Amazon). The question isn't whether there's a bubble, but whether you can distinguish between bubble applications and transformative technology.

Why are tech companies signing nuclear power deals?

Major tech companies like Microsoft and Amazon are signing long-term power purchase agreements with nuclear plants not because of a global energy shortage, but because of local grid connection constraints. Getting new high-voltage transmission lines approved can take 7-10 years in some US regions. By partnering with existing nuclear plants that are already connected to the grid, these companies gain faster access to reliable power while competitors wait in regulatory queues.

What's the difference between digital AI and physical AI?

Digital AI handles tasks like writing, summarizing, and generating content, things that happen entirely in software. Physical AI extends into the real world: robotics, manufacturing, logistics, and automation. Digital AI is easier to replicate and will likely commoditize quickly. Physical AI requires hardware, integration, and real-world expertise, making it much harder to copy and more defensible as a business.

How many robots does Amazon actually operate?

Amazon announced it has deployed over one million robots across more than 300 facilities worldwide. These are AI-driven systems that can navigate unstructured environments, sort unique items, and adapt to unexpected situations, a significant advancement over earlier automation that followed fixed paths.

What is inference in AI and why does it matter?

Training is teaching the AI (expensive, happens occasionally). Inference is using the AI to answer questions or make decisions (happens continuously, scales with users). As AI adoption grows, inference becomes the dominant cost. Falling inference costs enable wider deployment; rising costs would constrain it. This is why "cost per token" is a critical metric to watch.

Last updated: January 5, 2026. For corrections or updates, contact our editorial team.

About Mario Federico

Mario Federico is the founder of StockTitan, which he launched in 2019 after building and coding the platform's core infrastructure from the ground up.

With over 20 years of programming experience (since 2003) and 13+ years in the stock market (since 2012), Mario combines deep technical expertise with practical market knowledge. He has been working hands-on with AI and machine learning systems since 2016, well before the current wave of generative AI.

His background includes serving as Technical Manager for industrial automation companies, where he gained experience building mission-critical systems that operate in the real world, not just in demos.

The information provided in this article is for educational and informational purposes only. It does not constitute financial advice, investment recommendation, or an endorsement of any particular investment strategy. Past performance does not guarantee future results. Investors should conduct their own research and consult with a qualified financial advisor before making investment decisions.