TAIPEI, TAIWAN – Just months after unveiling its powerful Blackwell architecture, Nvidia has again accelerated its AI hardware roadmap, announcing its next-generation AI platform, codenamed ‘Rubin.’ During a keynote address at the Computex 2024 tech conference, CEO Jensen Huang revealed that the company is shifting to a relentless one-year release cycle for its AI hardware, a move designed to cement its market dominance and quicken the pace of innovation.
The announcement signals that Nvidia is already looking beyond its recently launched Blackwell GPUs, which are still in production and yet to reach customers. The upcoming Rubin AI platform, slated for 2026, will feature new GPUs, a new central processing unit (CPU) named ‘Vera,’ and advanced networking components, including the NVLink 6 switch and CX9 SuperNICs. This integrated platform approach is core to Nvidia’s strategy of providing a complete, high-performance solution for the massive data centers powering the AI revolution.
Before Rubin’s arrival, Nvidia plans to release an enhanced ‘Blackwell Ultra’ GPU in 2025. The new annual cadence puts immense pressure on competitors such as AMD and Intel, which are racing to capture a share of the booming AI accelerator market. By announcing a roadmap two years into the future, Nvidia provides a clear, albeit aggressive, vision for its customers, encouraging long-term investment in its ecosystem.
Huang emphasized that the demand for generative AI is fueling a technological transition, requiring an ever-increasing amount of computing power. “Our company is on a one-year rhythm,” Huang stated, underscoring the shift from their previous two-year cycle. This accelerated roadmap ensures that the tools to build next-generation AI models and infrastructure will evolve faster than ever, solidifying Nvidia’s position at the epicenter of the AI industry for the foreseeable future.