Louise Ai agent - David S. Nishimoto

Louise Ai agent - IBM vs Rigetti in the quantum computer race


Rigetti Computing (RGTI) and IBM are pursuing distinct but overlapping pathways toward scaling quantum computing, each with a different architecture and a different theoretical ceiling. Rigetti’s modular chiplet approach, currently demonstrated in its 36-qubit system and in 100+ qubit processors planned by the end of 2025, emphasizes building smaller, high-fidelity superconducting qubit modules that can be interconnected flexibly. This modular design, borrowed from classical semiconductor practice, reduces manufacturing complexity and cost, enabling incremental scale-up by snapping together reliable building blocks. In principle, the approach scales by adding more chiplets, each engineered for speed and precision, avoiding the yield losses that grow sharply with die size in large monolithic chips. In practice, Rigetti will likely face challenges beyond a few hundred qubits, since interconnecting many chiplets while maintaining coherence and low error rates remains difficult. The strategy is well suited to early commercial systems that prioritize speed, gate fidelity, and upgrade agility over ultra-large qubit counts in the near term.
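
To make the chiplet arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The tile size and fidelities are illustrative assumptions, not published Rigetti specifications; the point is simply that total qubit count grows linearly with chiplet count, while any operation that crosses a chiplet boundary pays an extra link-fidelity cost.

```python
# Back-of-envelope model of chiplet scaling. All numbers are illustrative
# assumptions, not published Rigetti specifications.

def chiplet_system(n_chiplets: int,
                   qubits_per_chiplet: int = 9,        # assumed tile size
                   intra_gate_fidelity: float = 0.998,  # assumed on-chip 2-qubit fidelity
                   link_fidelity: float = 0.98):        # assumed chip-to-chip link fidelity
    """Estimate total qubits and the fidelity of one cross-chiplet 2-qubit op."""
    total_qubits = n_chiplets * qubits_per_chiplet
    # A cross-chiplet operation pays the on-chip gate cost plus the link cost.
    cross_fidelity = intra_gate_fidelity * link_fidelity
    return total_qubits, cross_fidelity

# Under these assumptions, 4 chiplets give a 36-qubit system like the one
# described above; more chiplets add qubits linearly.
for n in (4, 12, 36):
    q, f = chiplet_system(n)
    print(f"{n:3d} chiplets -> {q:4d} qubits, cross-chiplet 2q fidelity ~ {f:.4f}")
```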

In contrast, IBM’s roadmap, epitomized by the Nighthawk processor launched in 2025 with 120 qubits arranged in a square lattice, targets large-scale, fault-tolerant quantum computing by 2029 and beyond. IBM’s theoretical scaling potential is greater: long-range chip-to-chip “L-couplers” interconnect multiple Nighthawk-like modules into systems reaching up to 1,080 physical qubits by 2027, and fault-tolerant architectures like Starling are planned to deliver 200+ logical qubits capable of running 100 million quantum gates by 2029. The square lattice topology, giving each qubit four nearest neighbors, enables more efficient quantum operations by cutting the SWAP gates needed to route interactions between distant qubits, thereby reducing circuit depth. IBM’s focus on error correction, quantum memory integration (the Quantum Kookaburra processor), and modular architectures that link nodes over meter-scale distances theoretically supports scaling toward thousands, and eventually tens of thousands, of qubits, creating a path to universal, fault-tolerant quantum computers at industrial scale. While Rigetti focuses on speed and modular practicality, IBM is working toward orders-of-magnitude larger systems with full error correction and unprecedented gate complexity.
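
The SWAP-overhead point can be illustrated with Qiskit’s transpiler. The sketch below routes a deliberately non-local test circuit onto a square lattice and onto a heavy-hex map (the lower-degree topology of IBM’s earlier processors) and counts the SWAPs inserted; the device sizes, seed, and test circuit are arbitrary choices for illustration, not a benchmark of any real machine.

```python
# Sketch: routing overhead on a square lattice vs. a heavy-hex coupling map.
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

def swap_count(coupling_map: CouplingMap, n_qubits: int, seed: int = 7) -> int:
    """Transpile an all-to-all entangling circuit and count inserted SWAPs."""
    qc = QuantumCircuit(n_qubits)
    for i in range(n_qubits):
        for j in range(i + 1, n_qubits):
            qc.cx(i, j)  # deliberately non-local circuit to stress routing
    routed = transpile(qc, coupling_map=coupling_map,
                       optimization_level=1, seed_transpiler=seed)
    return routed.count_ops().get("swap", 0)

square = CouplingMap.from_grid(4, 4)     # 16 qubits, up to 4 neighbors each
heavy_hex = CouplingMap.from_heavy_hex(3)  # 19 qubits, at most 3 neighbors

print("square lattice SWAPs:", swap_count(square, 16))
print("heavy-hex SWAPs:     ", swap_count(heavy_hex, 16))
```

With more neighbors per qubit, the router typically needs fewer SWAPs to bring interacting qubits together, which is the circuit-depth advantage the square lattice is claimed to offer.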

The key difference in scale potential rests on the intended target: Rigetti’s modular chiplets enable practical, incremental scaling, likely up to a few hundred qubits in the near term, prioritizing rapid gains in gate fidelity and speed for commercial utility. IBM’s approach is explicitly designed to scale into the thousands of qubits, targeting fully error-corrected, fault-tolerant quantum computing in the late 2020s and beyond. While both companies use modular designs, Rigetti’s strategy is more granular and manufacturing-centric, emphasizing high-quality small units combined efficiently, whereas IBM’s modularity incorporates sophisticated inter-module couplers and error-corrected logical qubits, aiming for much larger and more integrated quantum systems.
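
To see why fault tolerance dominates the qubit budget, here is a rough estimate using textbook surface-code scaling formulas. This is not IBM’s actual design (Starling is slated to use more efficient qLDPC codes), and the physical error rate and logical error budget below are assumptions chosen only for illustration.

```python
# Rough surface-code overhead estimate (textbook scaling, not IBM's design):
# how many physical qubits might 200 logical qubits require?
def required_distance(p_phys: float, p_logical_target: float,
                      p_threshold: float = 1e-2) -> int:
    """Smallest odd code distance d with ~0.1*(p/p_th)**((d+1)/2) <= target."""
    d = 3
    while 0.1 * (p_phys / p_threshold) ** ((d + 1) / 2) > p_logical_target:
        d += 2
    return d

p_phys = 1e-3     # assumed physical 2-qubit error rate
p_target = 1e-9   # assumed per-logical-qubit error budget for ~10^8 gates
d = required_distance(p_phys, p_target)
phys_per_logical = 2 * d ** 2          # data + ancilla qubits per logical patch
total = 200 * phys_per_logical

print(f"distance d = {d}")
print(f"~{phys_per_logical} physical per logical, ~{total} total for 200 logical")
```

Under these assumptions the estimate lands at distance 15, roughly 450 physical qubits per logical qubit and about 90,000 in total, which is why error-corrected machines need orders of magnitude more physical qubits than today’s processors provide.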


