OpenAI has announced a major partnership with AMD to build six gigawatts of AI infrastructure using the chipmaker’s Instinct GPUs, in a deal AMD expects will generate tens of billions of dollars in revenue. The agreement also includes an option for OpenAI to acquire up to 10 percent of AMD’s equity through milestone-based share purchases.
The partnership comes just weeks after NVIDIA pledged a $100 billion investment in OpenAI, underscoring the intense competition among semiconductor manufacturers to secure major AI customers as demand for computational power continues to escalate.
AMD Positioned as Core Strategic Compute Partner

Under the agreement, AMD will serve as a core strategic compute partner for large-scale deployments of OpenAI’s technology. OpenAI will use AMD’s Instinct GPU lineup, beginning with the deployment of one gigawatt of Instinct MI450 GPUs in the second half of 2026.
“AMD’s leadership in high-performance chips will enable us to accelerate progress and bring the benefits of advanced AI to everyone faster,” Sam Altman, co-founder and CEO of OpenAI, said in a statement.
The deal structures AMD as a strategic alternative to NVIDIA in OpenAI’s infrastructure buildout, diversifying the AI company’s hardware supply chain while providing AMD with a flagship customer for its data center GPU products.
Equity Option Ties Partnership to Performance Milestones
The agreement includes an unusual equity component that could give OpenAI significant ownership in AMD. OpenAI gains the option to purchase 160 million AMD shares at one penny each, with shares vesting incrementally as deployment milestones are achieved.
The vesting schedule begins with the first one-gigawatt deployment and continues through subsequent buildout phases. If fully exercised, the share purchases would grant OpenAI approximately 10 percent ownership of AMD, creating long-term alignment between the companies beyond a typical supplier-customer relationship.
This equity structure incentivizes AMD to meet aggressive deployment timelines while giving OpenAI favorable terms on a substantial stake in a leading semiconductor company. The penny-per-share pricing represents a significant discount to AMD’s current market value, though the shares vest only as OpenAI commits to purchasing and deploying AMD hardware at scale.
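The option’s arithmetic can be sanity-checked with only the figures quoted in this article: 160 million shares, one penny per share, and roughly 10 percent ownership if fully vested. The sketch below is a back-of-the-envelope illustration, not a statement of the deal’s legal terms.

```python
# Back-of-the-envelope check of the equity option, using only figures
# quoted in the article: 160 million shares at one penny each,
# vesting toward roughly a 10 percent stake in AMD.
SHARES = 160_000_000
PRICE_CENTS_PER_SHARE = 1   # one penny per share
STAKE_PERCENT = 10          # ~10 percent ownership if fully exercised

# Total cost to exercise the full option.
total_cost_dollars = SHARES * PRICE_CENTS_PER_SHARE / 100

# If 160 million shares is ~10 percent, AMD's implied total share count is:
implied_shares_outstanding = SHARES * 100 // STAKE_PERCENT

print(f"Full exercise cost: ${total_cost_dollars:,.0f}")               # $1,600,000
print(f"Implied shares outstanding: ~{implied_shares_outstanding:,}")  # ~1,600,000,000
```

The striking result is the first line: fully exercised, the option would cost OpenAI only about $1.6 million for a stake in a company valued in the hundreds of billions, which is why the vesting milestones, not the purchase price, carry the economic weight of the deal.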
Competitive Landscape Intensifies Among Chip Manufacturers
OpenAI’s agreements with multiple chip suppliers reflect the AI industry’s voracious appetite for computational resources and the strategic importance of diversified hardware partnerships.
Beyond the AMD deal, OpenAI recently announced a $100 billion arrangement with NVIDIA to build at least 10 gigawatts of AI data center capacity. That investment will flow in stages timed to each new gigawatt of power deployed, with initial deployments also targeted for the second half of 2026.
The parallel timelines suggest OpenAI is orchestrating simultaneous infrastructure buildouts across multiple hardware platforms, potentially hedging against supply chain constraints while maximizing available computational capacity.
NVIDIA recently invested $5 billion in Intel to “seamlessly” connect “the strengths of NVIDIA’s AI and accelerated computing with Intel’s leading CPU technologies and x86 ecosystem.” That partnership tasks Intel with creating NVIDIA-custom x86 CPUs for the market and AI infrastructure platforms.
These interconnected deals illustrate how AI development has become inseparable from semiconductor manufacturing capacity, with major players forming complex partnership networks to secure hardware access and optimize infrastructure performance.
Microsoft Relationship Adds Strategic Complexity
OpenAI’s hardware partnerships occur against the backdrop of its existing relationship with Microsoft, which has invested over $13 billion in the AI company in exchange for 49 percent of its profits. The two companies are actively working on technology sharing arrangements.
Microsoft’s substantial financial stake and profit participation create potential tension as OpenAI makes independent infrastructure decisions. While Microsoft operates its own Azure cloud platform with established hardware relationships, OpenAI is securing direct chip supply agreements that could eventually power proprietary infrastructure separate from Microsoft’s cloud services.
The strategic rationale may involve ensuring OpenAI maintains flexibility in how it deploys computational resources, potentially avoiding over-dependence on any single partner—even one as significant as Microsoft.
Infrastructure Scale Reflects AI Computing Demands

The gigawatt-scale measurements in these announcements highlight the extraordinary power requirements of modern AI systems. A gigawatt equals one billion watts, enough to power approximately 750,000 homes. OpenAI’s plans to deploy 16 gigawatts across AMD and NVIDIA infrastructure would consume electrical power comparable to the total demand of a medium-sized country.
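The scale claim above follows from simple arithmetic on the numbers already quoted: 6 gigawatts with AMD, at least 10 gigawatts with NVIDIA, and the approximation of roughly 750,000 homes powered per gigawatt. A quick sketch:

```python
# Scale check using figures quoted in the article.
WATTS_PER_GW = 1_000_000_000   # one gigawatt = one billion watts
HOMES_PER_GW = 750_000         # approximate homes powered per gigawatt

amd_gw = 6
nvidia_gw = 10                 # "at least 10 gigawatts"
total_gw = amd_gw + nvidia_gw

total_watts = total_gw * WATTS_PER_GW
homes_powered = total_gw * HOMES_PER_GW

print(f"Combined buildout: {total_gw} GW ({total_watts:,} W)")
print(f"Roughly equivalent to powering {homes_powered:,} homes")  # 12,000,000
```

Sixteen gigawatts works out to the electricity demand of roughly 12 million homes, which is what puts the combined buildout in the range of a medium-sized country’s consumption.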
These power levels reflect the computational intensity of training and running large language models and other AI systems. Each generation of AI models requires more parameters, larger training datasets, and longer training runs, driving exponential increases in hardware and energy requirements.
The infrastructure investments also suggest OpenAI is planning for significant growth in both model development and deployment. Building 16 gigawatts of capacity indicates expectations for substantially expanded AI services, whether through existing products like ChatGPT or new applications yet to be announced.
Industry Implications and Competitive Dynamics
The wave of infrastructure announcements and partnership formations signals several important industry trends. First, AI development has become a capital-intensive enterprise requiring billions in hardware investment, creating high barriers to entry that favor well-funded organizations.
Second, semiconductor manufacturers are competing aggressively for AI workload dominance, with companies like AMD seeking to challenge NVIDIA’s current market leadership. OpenAI’s willingness to work with multiple suppliers creates opportunities for AMD and others to prove their capabilities on high-profile projects.
Third, the interconnected nature of recent partnerships—OpenAI with AMD and NVIDIA, NVIDIA with Intel, Microsoft with OpenAI—suggests the AI industry is forming into cooperative-competitive ecosystems rather than discrete vendor relationships.
These dynamics will likely accelerate as AI capabilities expand and computational requirements continue growing. The companies that secure reliable access to cutting-edge hardware while managing costs effectively will hold significant competitive advantages in developing and deploying AI systems at scale.