OpenAI and Broadcom Team Up to Build 10GW of Custom AI Chips Starting in 2026

Ethan Cole

OpenAI just dropped another hardware bombshell. On October 13, the company announced a multi-year partnership with Broadcom to design and build custom AI chips — 10 gigawatts' worth of them. That’s a staggering amount of computing capacity, and it represents OpenAI’s most ambitious move yet to break free from its dependence on Nvidia’s GPUs.

Here’s how it works: OpenAI handles the chip design and system architecture, while Broadcom takes care of actually building the hardware and getting it deployed. The first racks are scheduled to go live in the second half of 2026, with the full rollout wrapping up by the end of 2029. It’s a massive undertaking, and it signals just how serious OpenAI is about controlling its own silicon destiny.

Breaking Up With Nvidia (Sort Of)

Right now, OpenAI runs almost entirely on Nvidia GPUs. They’re powerful, they’re everywhere, and they’re expensive. But OpenAI isn’t content to keep renting someone else’s hardware forever. These new Broadcom-built chips will be custom-designed from the ground up specifically for training and running OpenAI’s models.

The approach is fundamentally different from the GPU clusters everyone uses today. Instead of general-purpose graphics processors doing double duty for AI work, these will be purpose-built accelerators tailored exactly to what OpenAI needs. Think of it like the difference between renting a generic sedan versus custom-building a race car for a specific track.

OpenAI and Broadcom have actually been working together on this for over 18 months already. This announcement just makes it official. The companies are keeping most technical details under wraps for now, but they did confirm the systems will use Ethernet-based networking, which suggests they’re building for flexibility and scale rather than locking themselves into any one vendor’s ecosystem.

The deployment will happen in phases over several years, starting with those first racks in late 2026. That gradual approach makes sense — you don’t want to bet everything on untested silicon all at once.

OpenAI Is Hedging Its Bets Across Multiple Chip Partners

Here’s where things get really interesting: this Broadcom deal brings OpenAI’s total hardware commitments to around 26 gigawatts. That includes roughly 10 gigawatts of Nvidia infrastructure they’ve already got, plus an undisclosed chunk of AMD’s upcoming MI series chips.

So OpenAI isn’t ditching Nvidia entirely — they’re just making sure they have options. It’s smart strategy. By spreading their bets across multiple chip suppliers, they reduce risk, keep leverage in negotiations, and make sure they’re not stuck if one vendor hits production problems or falls behind on performance.

There was some confusion about whether OpenAI might be the mysterious $10 billion customer that Broadcom mentioned in a previous earnings call. Turns out, nope. During a CNBC interview, Broadcom’s semiconductor president Charlie Kawwas appeared with OpenAI’s Greg Brockman and joked about it directly: “I would love to take a $10 billion [purchase order] from my good friend Greg. He has not given me that PO yet.”

So while the deal is worth “multiple billions of dollars” according to the Wall Street Journal, it hasn’t reached that $10 billion mark. Still massive, just not quite that massive.

Why Broadcom Makes Sense for OpenAI

Broadcom isn’t exactly a household name like Nvidia, but they’re a heavyweight in custom chip design. They’ve been building specialized silicon for big tech companies for years — including Google’s TPU chips that power much of Google’s AI infrastructure.

That experience matters. Designing cutting-edge AI accelerators is brutally hard. You need teams of expert engineers, close relationships with chip fabrication plants, and years of accumulated knowledge about what works and what doesn’t. By partnering with Broadcom, OpenAI gets all of that without having to build a massive silicon engineering team from scratch.

Broadcom brings proven Ethernet networking tech and chiplet designs to the table. Chiplets are basically Lego blocks for processors — instead of one giant chip, you connect multiple smaller chips together. This approach can be more flexible and easier to manufacture than trying to build one enormous piece of silicon.
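
To see why, consider yield: defects land more or less at random across a wafer, so the odds of a flawless die shrink exponentially with die area. Here’s a minimal sketch of that math in Python, using a textbook Poisson yield model with an assumed, illustrative defect density (not a real foundry figure):

```python
import math

# Why chiplets are easier to manufacture: a toy yield model.
# Poisson yield: the chance a die has zero defects falls off
# exponentially with its area. The defect density is an assumption.
DEFECTS_PER_MM2 = 0.001  # i.e., 0.1 defects per cm^2, illustrative only

def die_yield(area_mm2: float) -> float:
    """Fraction of dies with zero defects under a Poisson model."""
    return math.exp(-DEFECTS_PER_MM2 * area_mm2)

def wafer_area_per_good_package(die_mm2: float, dies_needed: int) -> float:
    """Silicon spent per working package, testing dies before assembly."""
    return dies_needed * die_mm2 / die_yield(die_mm2)

mono = wafer_area_per_good_package(800, 1)   # one giant 800 mm^2 die
chip = wafer_area_per_good_package(200, 4)   # four 200 mm^2 chiplets
print(f"Monolithic: {mono:.0f} mm^2 of wafer per good package")  # ~1781
print(f"Chiplets:   {chip:.0f} mm^2 of wafer per good package")  # ~977
```

Because each chiplet can be tested before assembly, a defect scraps one small die rather than the whole processor, and in this toy example the chiplet approach spends roughly 45% less silicon per working package.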

For OpenAI, this partnership means they can focus on designing the architecture they need while Broadcom handles the nitty-gritty details of actually making it work in silicon. It’s a division of labor that plays to each company’s strengths.

The Industry Is Racing Away From Nvidia’s Monopoly

OpenAI isn’t alone in this custom chip quest. Amazon, Google, Meta, and Microsoft are all developing their own AI accelerators. The pattern is clear: the biggest AI players want hardware that’s optimized for their specific workloads, and they’re willing to invest billions to get it.

For Nvidia, this has to be concerning. The company absolutely dominates AI hardware right now, and their CUDA software platform is the industry standard. But watching your biggest customers invest heavily in alternatives can’t feel great, even if you’re still selling them tons of GPUs in the meantime.

The real question is whether any of these custom chips can match what Nvidia offers. It’s not just about raw computing power — Nvidia has spent nearly two decades building CUDA into an incredibly mature software ecosystem. Developers know how to use it, countless tools and libraries are built for it, and optimization is well understood.

Building competing hardware is one thing. Building a software ecosystem that developers actually want to use? That’s a much harder problem. Google has managed it with TPUs, but they’ve also had years to iterate and refine. We’ll have to wait and see whether Broadcom and OpenAI can pull off something similar.

[Image: The AI chip race, as Google, Meta, Microsoft, and Amazon build next-gen custom hardware to challenge Nvidia's CUDA-powered dominance.]

Key Details Are Still Missing

Despite the big announcement, there’s a lot we still don’t know. OpenAI and Broadcom haven’t said which chip manufacturer will actually fabricate these accelerators. That matters enormously — getting access to cutting-edge manufacturing capacity at TSMC or Samsung is extremely competitive, and production timelines depend heavily on who’s making your chips.

They also haven’t revealed details about packaging technology or memory architecture. These might sound like technical minutiae, but they’re actually critical to performance. AI workloads are often limited by memory bandwidth rather than pure processing power, so the memory choices will determine whether these chips hit their performance targets.
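
To make the bandwidth point concrete, here’s a back-of-the-envelope roofline calculation. Every hardware number in it is an assumption for illustration (neither company has disclosed specs); the takeaway is the shape of the math: a chip only reaches peak throughput when a workload does enough arithmetic per byte it pulls from memory.

```python
# Roofline-style check: is a workload compute-bound or memory-bound?
# All hardware numbers are illustrative assumptions, not disclosed specs.

PEAK_FLOPS = 1.0e15      # assumed peak math throughput: 1 PFLOP/s
MEM_BANDWIDTH = 4.0e12   # assumed memory bandwidth: 4 TB/s

def attainable_flops(intensity_flops_per_byte: float) -> float:
    """Best-case throughput for a kernel at a given arithmetic intensity
    (FLOPs performed per byte moved to or from memory)."""
    return min(PEAK_FLOPS, MEM_BANDWIDTH * intensity_flops_per_byte)

# Large training matmuls reuse data heavily -> high intensity -> compute-bound.
print(f"{attainable_flops(1000) / 1e15:.2f} PFLOP/s at 1000 FLOPs/byte")
# Inference decode steps stream the KV cache -> low intensity -> memory-bound.
print(f"{attainable_flops(2) / 1e15:.3f} PFLOP/s at 2 FLOPs/byte")
```

In this sketch, the bandwidth-starved decode step reaches less than 1% of the chip’s peak math throughput, which is why memory choices can make or break an accelerator.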

And packaging — how you physically connect multiple chips together — determines how well chiplet designs actually scale. Advanced packaging is its own specialized field, with limited capacity and long lead times.

With deployment starting in the second half of 2026, the clock is already ticking. Chip development typically takes several years from design to production, so OpenAI and Broadcom are working on compressed timelines — leaving precious little time to nail down these details and start getting silicon out the door.

What This Means for the AI Hardware Race

This partnership is a big deal not just for OpenAI, but for the entire AI industry. It shows how seriously major players are taking hardware strategy, and how much they’re willing to invest to get exactly what they need.

The economics make sense when you’re operating at OpenAI’s scale. Training cutting-edge models costs hundreds of millions of dollars in compute. Even small efficiency gains from custom hardware can translate to massive savings over time. And for inference — actually running the models to serve users — optimization becomes even more important as usage scales.
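
The back-of-the-envelope version of that argument, using entirely hypothetical figures (OpenAI does not disclose its compute spend), looks like this:

```python
# Hypothetical numbers only: OpenAI's actual compute spend is not public.
annual_compute_spend = 5_000_000_000   # assume $5B/year on training + inference
efficiency_gain = 0.10                 # assume custom chips cut cost/FLOP by 10%
rollout_years = 4                      # 2026 through 2029, per the announced plan

yearly_savings = annual_compute_spend * efficiency_gain
print(f"${yearly_savings / 1e9:.1f}B saved per year")                    # $0.5B
print(f"${yearly_savings * rollout_years / 1e9:.1f}B over the rollout")  # $2.0B
```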

The next few years will show whether OpenAI’s bet on custom silicon pays off. Can they design chips that genuinely outperform Nvidia’s best offerings? Can they build software tooling that makes these chips easy for their engineers to use? Can Broadcom deliver on schedule and at scale?

There’s real risk here. Custom chip development is expensive and unforgiving. If the chips underperform or arrive late, OpenAI will be stuck with its existing Nvidia infrastructure and a bunch of sunk costs. But if it works, they’ll have a significant competitive advantage in an industry where every bit of efficiency matters.

One thing’s for sure: the AI hardware landscape is getting a lot more interesting. Nvidia’s dominance isn’t going away overnight, but the emergence of multiple well-funded competitors pursuing custom solutions suggests the future might be more diverse than the present. And for an industry moving as fast as AI, that diversity could drive innovation in ways we haven’t seen yet.
