“The future will belong to those who understand how to combine intelligence with infrastructure.” - Jensen Huang, CEO, Nvidia
A strategic shift in AI hardware
OpenAI is set to produce its first in-house artificial intelligence (AI) chips in collaboration with Broadcom by 2026. The move marks OpenAI’s attempt to reduce its reliance on Nvidia’s GPUs, which currently dominate the AI hardware market. According to the Financial Times and Reuters, OpenAI will initially use these chips internally to power its own infrastructure for models like ChatGPT.
Broadcom steps into AI accelerator design
Broadcom will design and supply AI accelerators exclusively for OpenAI. The deal, reportedly worth over $10 billion in orders, places Broadcom in direct competition with Nvidia. Broadcom CEO Hock Tan noted that fiscal 2026 could see “significant” revenue growth driven by rising AI demand, particularly from clients like OpenAI.
A new chapter in AI chip diversification
The collaboration comes as AI developers worldwide face shortages and high costs of Nvidia chips. OpenAI’s partnership with Broadcom and Taiwan’s TSMC will allow it to build a more diverse and stable hardware supply chain while also exploring AMD alternatives. This helps OpenAI meet surging global demand for computing power while improving training efficiency.
The market’s response and implications
Following the announcement, Broadcom’s shares surged over 16%, boosting its market value by $218 billion. The company’s AI semiconductor revenue, already exceeding $5 billion, is projected to reach $6.2 billion in the next quarter. Investors view this partnership as a key development in reshaping the AI hardware ecosystem.
The broader AI infrastructure race
As OpenAI and Broadcom advance chip design, competitors from the US to China continue investing billions in AI data centers and custom processors. The collaboration signals a turning point in which software innovators like OpenAI are taking control of their hardware to optimize performance and cost.