Overview // what quantum computers actually do
A quantum computer is not a faster classical computer. It is a fundamentally different kind of machine that exploits the laws of quantum mechanics to perform certain computations that would take longer than the age of the universe on classical hardware.
Classical computers manipulate bits — each a definite 0 or 1. They run deterministic or probabilistic algorithms using logic gates. Quantum computers manipulate qubits using reversible unitary operations. The magic lies in three resources unavailable classically: superposition (a qubit encodes 0 and 1 simultaneously), entanglement (correlations between qubits with no classical analogue), and interference (paths to wrong answers cancel; paths to right answers reinforce). Used together, these resources allow exponential or polynomial speedups on specific problem classes.
The three killer applications are well-established: factoring large integers (Shor's algorithm, polynomial time vs. sub-exponential classical); unstructured search (Grover's algorithm, O(√N) vs. O(N) classical); and simulating quantum systems (molecules, materials, quantum field theories — problems that are exponentially hard classically by the nature of Hilbert space). Everything else is an open research question.
| Attribute | Classical Computer | Quantum Computer |
|---|---|---|
| Basic unit | Bit: 0 or 1 | Qubit: α|0⟩ + β|1⟩ |
| Operations | Logic gates (AND, OR, NOT…) | Unitary transforms (H, CNOT, T…) |
| Parallelism | Explicit, deterministic | Quantum superposition over 2ⁿ states |
| Readout | Deterministic | Probabilistic (Born rule) |
| Primary limits | Clock speed, memory, energy | Decoherence, gate noise, limited qubits |
| State of the art (2026) | Trillion-transistor CPUs/GPUs | IBM 1000+ qubit processors; Google Willow; IonQ Forte |
IBM's Heron and Condor processors have crossed the 1000-qubit mark. Google's Willow chip demonstrated below-threshold error correction in 2024, a landmark result. IonQ's Forte system targets 35 algorithmic qubits. None of these can yet run Shor's algorithm against RSA-2048 (that requires ~4000 fault-tolerant logical qubits) but the trajectory is clear: fault-tolerant quantum computing is an engineering challenge, not a physics impossibility.
Why quantum? // the speedup story
Quantum computers don't speed up everything. They offer proven exponential or polynomial advantages on a specific set of problems — and those problems happen to include the foundations of modern cryptography.
The three established speedups
- Factoring (Shor's algorithm). The best classical algorithm for factoring an n-bit integer (the General Number Field Sieve) runs in sub-exponential time — roughly exp(n^{1/3}). Shor's quantum algorithm factors in polynomial time, O(n³). RSA-2048 encryption rests entirely on the assumption that factoring is hard. A large fault-tolerant quantum computer would break it. Post-quantum cryptography (NIST's CRYSTALS-Kyber, Dilithium) exists for this reason.
- Unstructured search (Grover's algorithm). Searching an unsorted database of N items classically requires O(N) queries in the worst case. Grover's algorithm finds the target in O(√N) queries — a provably optimal quadratic speedup. For N = 10¹⁸ entries, that's a billion times fewer queries.
- Quantum simulation. Simulating a quantum system of n particles requires Hilbert space dimension 2ⁿ on a classical computer — exponential. A quantum computer can simulate quantum dynamics in polynomial time (Feynman's original insight, 1982). Applications: drug discovery, catalyst design, materials science, battery chemistry. This is considered the most commercially valuable near-term application.
The catch
Quantum computers are not general-purpose speedups. They require near-absolute-zero temperatures (superconducting qubits need ~15 millikelvin — colder than outer space), exquisitely low-noise environments, and complex control electronics. Most classical tasks — web servers, databases, video encoding, machine learning inference — have no known quantum speedup. The quantum advantage is narrow but profound where it exists.
Quantum computers do not try all answers simultaneously in a brute-force way. They use interference — precisely engineered cancellation of wrong-answer probability amplitudes — to amplify correct answers. Without a clever algorithm exploiting interference, a quantum computer gives you nothing useful except a random sample.
Quantum mechanics review // the physics underneath
Quantum computing rests on three postulates of quantum mechanics. You don't need to derive them from first principles, but you do need to feel comfortable with them.
Superposition
A qubit is a two-level quantum system. Its state lives in a two-dimensional complex vector space (a Hilbert space). The two computational basis states are |0⟩ and |1⟩ (pronounced "ket zero" and "ket one"). The most general single-qubit state is:
|ψ⟩ = α|0⟩ + β|1⟩

where:

- α, β — complex amplitudes. Their magnitudes squared give measurement probabilities. They are not directly observable.
- |0⟩, |1⟩ — computational basis vectors: |0⟩ = (1, 0)ᵀ and |1⟩ = (0, 1)ᵀ as column vectors.
- |α|² + |β|² = 1 — the normalization constraint. Total probability must equal 1.
Before measurement, the qubit is genuinely in both states simultaneously — this is not an epistemic statement about our ignorance. It is the physical reality described by quantum mechanics. Superposition is not classical probability: the amplitudes are complex numbers that can interfere.
Measurement
When you measure a qubit in state α|0⟩ + β|1⟩ in the computational basis, the Born rule gives the outcome: you observe |0⟩ with probability |α|² and |1⟩ with probability |β|². The measurement is irreversible — after measuring, the state collapses to the observed outcome and all other information is destroyed. This is why quantum algorithms must be designed carefully: you can't "peek" at intermediate results without destroying the computation.
You can never copy an unknown quantum state (No-Cloning Theorem). You can never learn α and β exactly from a single measurement — only a statistical distribution from many measurements. Quantum information is fundamentally different from classical information.
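The Born rule's statistical character is easy to see in a toy simulation (pure Python/numpy, not real hardware; the example amplitudes and shot count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example state alpha|0> + beta|1> with |alpha|^2 = 0.36.
alpha, beta = 0.6, 0.8j          # complex amplitudes; |0.6|^2 + |0.8j|^2 = 1
p0 = abs(alpha) ** 2             # Born rule: P(observe |0>) = |alpha|^2

# A single measurement yields one bit; only many repetitions reveal p0.
shots = 100_000
outcomes = rng.random(shots) < p0     # True -> observed |0>
p0_estimate = outcomes.mean()

print(round(p0, 2), round(p0_estimate, 2))
```

One shot tells you almost nothing about α; estimating |α|² to two decimal places took 100,000 identically prepared copies, which is exactly why a single unknown qubit cannot be read out.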
Dirac notation
Paul Dirac invented a compact notation for quantum mechanics that quantum computing inherits. A ket |ψ⟩ is a column vector representing a quantum state. A bra ⟨ψ| is the conjugate transpose (row vector). Their products:
The inner product ⟨φ|ψ⟩ measures overlap: if |φ⟩ and |ψ⟩ are orthogonal (e.g., |0⟩ and |1⟩), their inner product is 0. Orthogonal states are perfectly distinguishable. All of quantum circuit theory can be written in this notation — gates are unitary matrices acting on ket vectors.
Qubits & gates // the quantum hardware primitives
A qubit's state lives on the surface of the Bloch sphere. Gates are rotations of that sphere. Two-qubit gates create entanglement — the resource that makes quantum computing powerful.
The Bloch sphere
Every single-qubit pure state can be written as |ψ⟩ = cos(θ/2)|0⟩ + e^{iφ} sin(θ/2)|1⟩, where θ ∈ [0°, 180°] is the polar angle and φ ∈ [0°, 360°) is the azimuthal angle. This maps every pure qubit state to a unique point on the unit sphere — the Bloch sphere. The north pole (θ = 0°) is |0⟩; the south pole (θ = 180°) is |1⟩; the equator is equal superposition. Single-qubit gates are rotations of the sphere.
(Interactive Bloch sphere visualization omitted; the default view shows |0⟩ at the north pole: P(|0⟩) = 1, Bloch vector Z = 1.)
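The parameterization above translates directly into code — a small sketch (the function name is mine; angles are in radians rather than the degrees used in the text):

```python
import numpy as np

def bloch_state(theta, phi):
    """|psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>, angles in radians."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

north = bloch_state(0.0, 0.0)          # theta = 0   -> |0> (north pole)
south = bloch_state(np.pi, 0.0)        # theta = pi  -> |1> (south pole)
equator = bloch_state(np.pi / 2, 0.0)  # theta = pi/2 -> |+> (equator)

print(np.round(north, 6), np.round(south, 6), np.round(equator, 6))
```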
Single-qubit gates
Single-qubit gates are 2×2 unitary matrices (U†U = I). They rotate the Bloch sphere. Every single-qubit unitary can be decomposed into three Euler-angle rotations (up to a global phase).
| Gate | Matrix | Bloch action | Notes |
|---|---|---|---|
| X (NOT) | [[0, 1], [1, 0]] | 180° rotation about X axis | Flips |0⟩↔|1⟩; quantum NOT gate |
| Y | [[0, −i], [i, 0]] | 180° rotation about Y axis | Pauli-Y; combines bit and phase flip |
| Z | [[1, 0], [0, −1]] | 180° rotation about Z axis | Phase flip; leaves |0⟩ unchanged, |1⟩ → −|1⟩ |
| H (Hadamard) | 1/√2 · [[1, 1], [1, −1]] | 180° rotation about X+Z axis | |0⟩ → |+⟩, |1⟩ → |−⟩. Creates superposition from a basis state. |
| S | [[1, 0], [0, i]] | 90° rotation about Z axis | S = Z^{1/2}; phase gate |
| T | [[1, 0], [0, e^{iπ/4}]] | 45° rotation about Z axis | T = S^{1/2}; critical for universality with H+CNOT |
| Rx(θ) | [[cos θ/2, −i sin θ/2], [−i sin θ/2, cos θ/2]] | θ rotation about X axis | Continuous rotation; Rx(π) = −iX |
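These matrices can be checked mechanically — a short numpy sketch verifying unitarity and the Hadamard's action (nothing hardware-specific):

```python
import numpy as np

# Pauli and Hadamard matrices from the gate table.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2)

# Unitarity: U^dagger U = I for every gate.
for U in (X, Z, H):
    assert np.allclose(U.conj().T @ U, I2)

ket0 = np.array([1, 0], dtype=complex)
plus = H @ ket0        # H|0> = |+> = (|0> + |1>)/sqrt(2)
back = H @ plus        # H is self-inverse: the two paths to |1> cancel
print(np.round(plus, 3), np.round(back, 3))
```

The second application of H is a tiny interference demo: the amplitudes routed toward |1⟩ cancel exactly, returning the qubit to |0⟩ with certainty.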
Two-qubit gates and entanglement
Two-qubit gates act on a 4-dimensional Hilbert space (C² ⊗ C²). The CNOT (controlled-NOT) is the workhorse entangling gate. Its 4×4 matrix in the {|00⟩, |01⟩, |10⟩, |11⟩} basis is [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]] — identity on |00⟩ and |01⟩, and a swap of |10⟩ and |11⟩ (flip the target if and only if the control is |1⟩).
When the control qubit is in superposition (|0⟩ + |1⟩)/√2, CNOT creates an entangled state that cannot be written as a product of two independent qubits. Other essential two-qubit gates: CZ (controlled-Z), SWAP (exchanges two qubits, equivalent to 3 CNOTs), and general controlled-U (applies unitary U to target if control is |1⟩).
No single-qubit gate can create entanglement. Entanglement is purely a multi-qubit phenomenon. The ability to entangle qubits is what separates quantum computers from classical probabilistic computers. All quantum speedups ultimately trace back to entanglement + interference working together.
Quantum circuits // the programming model
The circuit model is the standard abstraction for quantum algorithms. It is the quantum equivalent of a logic circuit diagram.
The circuit model
In the circuit model, horizontal lines represent qubits (time flows left to right). Boxes on a line represent gates applied to that qubit. Vertical lines connecting two qubit lines represent two-qubit gates. A meter symbol at the end represents measurement. Initial states are usually |0⟩. Here is a minimal circuit creating a Bell state:
H on q₀ creates equal superposition; CNOT (●⊕ notation) entangles q₀ and q₁. After measurement both qubits are always equal — either both 0 or both 1.
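The Bell circuit is reproducible with plain linear algebra — a statevector sketch (the kron ordering, seed, and shot count are illustrative conventions, not a real backend):

```python
import numpy as np

rng = np.random.default_rng(1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start in |00>; q0 is the left tensor factor.
state = np.kron([1, 0], [1, 0])
state = np.kron(H, I2) @ state      # H on q0: (|00> + |10>)/sqrt(2)
state = CNOT @ state                # entangle: (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2          # over basis order 00, 01, 10, 11
samples = rng.choice(4, size=10_000, p=probs)
# Measured qubits always agree: only outcomes 00 (index 0) and 11 (index 3).
assert set(samples) <= {0, 3}
print(np.round(probs, 3))
```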
Universality
A gate set is universal if any n-qubit unitary can be approximated to arbitrary precision using circuits built from those gates. The set {H, T, CNOT} is universal — any quantum algorithm can be compiled into H, T, and CNOT gates — and the Solovay-Kitaev theorem guarantees the compilation is efficient, with only polylogarithmic overhead in the target precision. In practice, native gate sets depend on hardware (e.g., IBM uses ECR and RZ gates; ion-trap processors use Mølmer-Sørensen gates).
Circuit depth, width, and T-count
Width = number of qubits. Depth = length of the longest path through the circuit (number of sequential gate layers). Depth × width gives a rough measure of total computational work. For fault-tolerant quantum computing, the relevant cost metric is T-gate count: T gates are expensive to implement fault-tolerantly (they require state distillation) while Clifford gates (H, S, CNOT, CZ) are "cheap." Minimizing T-count is an active compiler research area.
The Clifford group (generated by H, S, CNOT) is efficiently simulable classically (Gottesman-Knill theorem). Adding T gates makes the circuit universal and classically hard. This is the formal sense in which T gates are the source of quantum advantage.
Core algorithms // the quantum canon
Four algorithms form the core curriculum of quantum computing. Together they illustrate superposition-based parallelism, amplitude amplification, period-finding, and the quantum Fourier transform.
Deutsch-Jozsa: constant vs. balanced
The simplest quantum speedup. Given a black-box function f: {0,1}ⁿ → {0,1}, promised either constant (all inputs → same output) or balanced (exactly half of inputs → 0, half → 1): determine which. Classically this requires 2ⁿ⁻¹ + 1 queries in the worst case. The quantum algorithm uses 1 query.
The trick: prepare a uniform superposition of all 2ⁿ inputs, query f once to write its value into a phase (the phase kickback trick), then interfere amplitudes. Constant functions leave all amplitudes in-phase; balanced functions cancel. One measurement decides.
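The phase-kickback-plus-interference recipe fits in a short statevector sketch (the function name and truth-table input format are my own framing):

```python
import numpy as np

def deutsch_jozsa(f_values):
    """Classify f, given as its 2^n-entry truth table, as constant or
    balanced using a single phase-oracle query (statevector simulation)."""
    N = len(f_values)
    n = int(np.log2(N))
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)                        # H on every qubit

    state = np.zeros(N); state[0] = 1              # |0...0>
    state = Hn @ state                             # uniform superposition
    state = (-1.0) ** np.array(f_values) * state   # one oracle query (phase kickback)
    state = Hn @ state                             # interfere
    p_all_zero = abs(state[0]) ** 2                # amplitude left on |0...0>
    return "constant" if p_all_zero > 0.5 else "balanced"

print(deutsch_jozsa([0, 0, 0, 0]))   # prints "constant"
print(deutsch_jozsa([0, 1, 1, 0]))   # prints "balanced"
```

For a constant f all amplitudes stay in phase and the state returns to |0…0⟩ with probability 1; for a balanced f that amplitude cancels to exactly zero.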
Grover's search: O(√N) oracle calls
Given an unsorted database of N items and an oracle that identifies the target, Grover's algorithm finds the target in O(√N) oracle calls — a provable quadratic speedup (proven optimal for quantum computers too). The idea is amplitude amplification:
- Oracle step: The oracle flips the phase of the target state: |x⟩ → −|x⟩ if x is the target, unchanged otherwise. This doesn't reveal which item is the target — it just marks it with a phase.
- Diffusion step: Apply a "reflection about the average" operator (H⊗ⁿ · (2|0⟩⟨0| − I) · H⊗ⁿ). This reflects all amplitudes about the mean, amplifying the marked state and suppressing the rest.
- Repeat ≈ π√N/4 times, then measure. The target appears with high probability.
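The oracle and diffusion steps above can be simulated directly on the amplitude vector — a toy sketch (the problem size and target index are arbitrary):

```python
import numpy as np

def grover(n, target):
    """Statevector sketch of Grover search over N = 2^n items."""
    N = 2 ** n
    state = np.full(N, 1 / np.sqrt(N))         # uniform superposition H^n|0>
    iterations = int(np.pi / 4 * np.sqrt(N))   # ~ pi*sqrt(N)/4 rounds
    for _ in range(iterations):
        state[target] *= -1                    # oracle: phase-flip the target
        state = 2 * state.mean() - state       # diffusion: reflect about the mean
    return np.abs(state) ** 2

probs = grover(6, target=42)                   # N = 64, 6 iterations
print(int(np.argmax(probs)), round(probs[42], 3))
```

The diffusion line is the operator H⊗ⁿ(2|0⟩⟨0| − I)H⊗ⁿ written out: reflecting every amplitude about the mean leaves the (negated) target amplitude amplified each round.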
Shor's algorithm: factoring in polynomial time
Shor's 1994 algorithm factors an n-bit integer N in O(n³) time — exponentially faster than the best classical algorithm. The reduction: factoring N reduces to finding the order r of a random integer a modulo N (i.e., the smallest r such that aʳ ≡ 1 mod N). The quantum subroutine uses the quantum Fourier transform to find r efficiently. Once r is found, gcd(a^{r/2} ± 1, N) gives the factors with high probability.
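The classical half of the reduction is easy to verify on small numbers — here order-finding is brute-forced, which is exactly the step the quantum Fourier transform replaces (N = 15, a = 7 are illustrative):

```python
from math import gcd

def order(a, N):
    """Smallest r with a^r = 1 (mod N). Brute force here; on a quantum
    computer this is the step done efficiently via the QFT."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(N, a):
    """Recover factors of N from the order r via gcd(a^{r/2} +/- 1, N)."""
    r = order(a, N)
    if r % 2:
        return None                          # odd order: retry with another a
    f1 = gcd(pow(a, r // 2) - 1, N)
    f2 = gcd(pow(a, r // 2) + 1, N)
    return r, f1, f2

print(shor_classical_part(15, 7))            # -> (4, 3, 5): 15 = 3 * 5
```

The order of 7 mod 15 is 4, so 7² = 49 gives gcd(48, 15) = 3 and gcd(50, 15) = 5 — the two prime factors.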
RSA-2048 requires approximately 4000 logical qubits (each protected by ~1000 physical qubits with surface codes) running for hours. No current hardware can do this, but post-quantum cryptography migration is urgent because adversaries can harvest-now-decrypt-later: record encrypted traffic today, decrypt it once quantum hardware matures.
Quantum Fourier Transform (QFT)
The QFT is the quantum analogue of the Discrete Fourier Transform (DFT). Given a quantum state Σⱼ xⱼ|j⟩, the QFT produces Σₖ yₖ|k⟩ where y = DFT(x). The classical FFT requires O(N log N) operations for N = 2ⁿ inputs. The QFT circuit requires only O(n²) gates — an exponential speedup in circuit size (though reading all N output amplitudes still requires N measurements).
The QFT is not by itself useful (you can't read the Fourier coefficients directly without O(N) measurements). Its power comes from using the Fourier domain structure inside a larger algorithm — as in Shor's period-finding, quantum phase estimation, and the hidden subgroup problem framework.
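The QFT's defining matrix can be checked against numpy's classical DFT directly (the helper name and the sign/normalization convention matching `np.fft.ifft` are my choices):

```python
import numpy as np

def qft_matrix(n):
    """The n-qubit QFT as an N x N unitary, N = 2^n:
    F[j, k] = exp(2*pi*i*j*k/N) / sqrt(N)."""
    N = 2 ** n
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

F = qft_matrix(3)                                   # 8x8 for 3 qubits
assert np.allclose(F.conj().T @ F, np.eye(8))       # unitary

# Acting on an amplitude vector reproduces the classical DFT
# (numpy's ifft matches this sign convention, up to normalization).
x = np.random.default_rng(0).normal(size=8) + 0j
x /= np.linalg.norm(x)
assert np.allclose(F @ x, np.fft.ifft(x) * np.sqrt(8))
print("QFT matrix matches the classical DFT")
```

Note the asymmetry: the matrix is N×N, but the circuit implementing it needs only O(n²) gates — the exponential compression lives in the circuit, not in what you can read out.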
Entanglement // the quantum resource
Entanglement is not magic. It is a precise mathematical property of multi-qubit states — and it is the resource that enables quantum teleportation, superdense coding, and much of quantum advantage.
Bell states
The four Bell states are the maximally entangled two-qubit states. They form an orthonormal basis for the 4-dimensional two-qubit Hilbert space:

|Φ⁺⟩ = (|00⟩ + |11⟩)/√2,  |Φ⁻⟩ = (|00⟩ − |11⟩)/√2,  |Ψ⁺⟩ = (|01⟩ + |10⟩)/√2,  |Ψ⁻⟩ = (|01⟩ − |10⟩)/√2
If two parties (Alice and Bob) each hold one qubit of a Bell pair, measuring Alice's qubit instantly determines the distribution of outcomes for Bob's qubit — regardless of the distance between them. This is not faster-than-light signalling (the individual outcomes are still random), but it is a provably non-classical correlation, verified by Bell inequality violations (Nobel Prize 2022: Aspect, Clauser, Zeilinger).
Quantum teleportation
Quantum teleportation transfers an unknown qubit state |ψ⟩ from Alice to Bob using 1 ebit (shared Bell pair) plus 2 classical bits of communication. The state is transferred perfectly — not copied (No-Cloning is not violated: Alice's original qubit is destroyed in the process).
Superdense coding
The dual protocol: Alice can send 2 classical bits to Bob by transmitting only 1 qubit, provided they share a Bell pair in advance. Alice applies I, X, Z, or iY to her qubit (encoding 00, 01, 10, or 11 respectively), sends her qubit to Bob. Bob measures in the Bell basis to recover both bits. One ebit shared in advance + one qubit sent = two classical bits received. This is the Holevo bound in action: 1 qubit carries at most 1 classical bit of information without entanglement, but 2 classical bits with it.
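The protocol can be verified with a 4-dimensional statevector — a sketch in which Bob's Bell-basis measurement is modeled as projection onto the four Bell states:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
iY = np.array([[0, 1], [-1, 0]], dtype=complex)
encodings = {"00": I2, "01": X, "10": Z, "11": iY}   # Alice's four choices

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # shared Phi+

# The Bell basis Bob measures in (basis order 00, 01, 10, 11).
bells = {
    "00": np.array([1, 0, 0, 1]) / np.sqrt(2),    # Phi+
    "01": np.array([0, 1, 1, 0]) / np.sqrt(2),    # Psi+
    "10": np.array([1, 0, 0, -1]) / np.sqrt(2),   # Phi-
    "11": np.array([0, 1, -1, 0]) / np.sqrt(2),   # Psi-
}

def superdense(bits):
    """Alice applies her encoding to the left qubit, sends it; Bob's
    Bell measurement identifies which of the four states resulted."""
    state = np.kron(encodings[bits], I2) @ bell
    return max(bells, key=lambda b: abs(np.vdot(bells[b], state)))

for bits in ("00", "01", "10", "11"):
    assert superdense(bits) == bits
print("all four 2-bit messages recovered from one transmitted qubit")
```

Because the four encodings map the shared Bell pair to four orthogonal states, Bob distinguishes them perfectly — two classical bits delivered by one qubit in flight.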
Quantum error correction // fighting decoherence
Quantum states are fragile. Any interaction with the environment introduces errors. Quantum error correction (QEC) encodes one logical qubit into many physical qubits, enabling detection and correction of errors without ever measuring — and thus collapsing — the logical state.
Sources of noise
Quantum errors fall into three main classes:
- Bit flip: |0⟩ → |1⟩ or vice versa (analogous to a classical bit flip). Caused by unwanted X-axis rotations from magnetic noise.
- Phase flip: |+⟩ → |−⟩ (no classical analogue). Caused by energy fluctuations that accumulate phase (dephasing). This is typically the dominant error mode in superconducting qubits.
- Depolarizing noise: The qubit state is replaced by the maximally mixed state I/2 with probability p. This is the catch-all model used in most theoretical analysis; it subsumes both bit and phase flips.
Decoherence occurs when entanglement builds up between the qubit and its environment, effectively measuring the qubit from the outside. The characteristic timescale is T₂ (dephasing time) — from ~100μs for superconducting qubits to ~1 second for trapped ions.
Shor's 9-qubit code
Peter Shor's 1995 QEC code was the first to correct both bit flips and phase flips — proving quantum error correction is possible in principle. The encoding:

|0_L⟩ = [(|000⟩ + |111⟩)/√2]^⊗3,  |1_L⟩ = [(|000⟩ − |111⟩)/√2]^⊗3
One logical qubit → 9 physical qubits. The outer 3-qubit repetition code corrects bit flips; the inner phase-encoded structure corrects phase flips. Syndrome measurement (measuring stabilizer operators without touching the logical state) reveals which qubit — if any — suffered an error, so it can be corrected. The key insight: you can measure which error occurred without measuring what state you're in.
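The bit-flip half of the idea — parity syndromes that locate an error without revealing the logical amplitudes — can be sketched for the 3-qubit repetition code alone (helper names and the qubit-index convention are mine):

```python
import numpy as np

def encode(a, b):
    """Logical qubit a|000> + b|111> as an 8-amplitude statevector."""
    state = np.zeros(8, dtype=complex)
    state[0b000], state[0b111] = a, b
    return state

def flip(state, qubit):
    """Apply X to one qubit (qubit 0 = leftmost bit of the index)."""
    out = np.zeros_like(state)
    for idx in range(8):
        out[idx ^ (1 << (2 - qubit))] = state[idx]
    return out

def syndrome(state):
    """Parities of qubit pairs (0,1) and (1,2). Both basis states in the
    support give the same parities, so this reveals the error location
    without revealing the amplitudes a and b."""
    idx = next(i for i in range(8) if abs(state[i]) > 0)
    bits = [(idx >> 2) & 1, (idx >> 1) & 1, idx & 1]
    return bits[0] ^ bits[1], bits[1] ^ bits[2]

a, b = 0.6, 0.8
damaged = flip(encode(a, b), qubit=1)     # error: X on the middle qubit
s = syndrome(damaged)                     # (1, 1) points at qubit 1
which = {(1, 0): 0, (1, 1): 1, (0, 1): 2}[s]
corrected = flip(damaged, qubit=which)    # undo the located error
assert np.allclose(corrected, encode(a, b))
print("syndrome", s, "located and corrected the flip")
```

Syndrome (0, 0) means no bit flip occurred; the sketch only demonstrates the single-error case the code is designed for.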
Surface codes
The leading practical QEC approach as of 2026. A surface code arranges qubits on a 2D lattice; logical qubits are encoded in the global topological properties of the lattice. Key properties:
- Threshold theorem: If the physical error rate per gate is below ~1% (the fault-tolerance threshold), encoding into a surface code suppresses logical error rates exponentially as the code distance d grows. Google's Willow result (2024) demonstrated below-threshold operation at distance 7.
- Overhead: A distance-d surface code uses d² data qubits per logical qubit (roughly 2d² physical qubits counting measurement ancillas). Breaking RSA-2048 requires ~4000 logical qubits at distance ~30, meaning millions of physical qubits. Current hardware has ~1000 physical qubits.
- Only local gates: Surface code operations require only nearest-neighbor two-qubit gates, making them compatible with 2D superconducting qubit layouts.
Hardware platforms // building a quantum computer
Multiple physical implementations compete to be the platform that scales to fault-tolerant quantum computing. Each has radically different engineering tradeoffs.
Superconducting qubits
The dominant platform in 2026 (IBM, Google, Rigetti). Qubits are superconducting LC circuits that behave quantum-mechanically at ~15 millikelvin — 200× colder than outer space. Gate times: 10–100 ns. T₂: 100–500 μs. Advantages: fast gates, CMOS-compatible fabrication, straightforward to scale in 2D. Challenges: extreme cooling requirements, frequency crowding as qubit counts grow.
Trapped ion qubits
Charged atoms (typically Yb⁺ or Ca⁺) held in electromagnetic traps and manipulated with lasers. The trap itself can sit at room temperature; only the ion's quantum state needs to be cold (laser-cooled to near absolute zero). Gate times: ~1 μs (much slower than superconducting). T₂: seconds to hours. Gate fidelity: among the best of any platform (~99.9% two-qubit). Platforms: IonQ Forte, Quantinuum H-Series. Challenge: slow gates limit circuit depth before decoherence.
Photonic and neutral atom platforms
Two emerging challengers:
- Photonic (PsiQuantum, Xanadu): Qubits are photons; gates are beamsplitters and phase shifters. Can operate at room temperature. The catch: photons don't interact easily, making deterministic two-qubit gates hard. PsiQuantum's approach uses measurement-based quantum computing with probabilistic fusion gates in silicon photonics at scale.
- Neutral atoms (QuEra, Pasqal, Atom Computing): Individual atoms held in reconfigurable optical tweezer arrays. Can be rearranged mid-circuit. T₂: ~seconds. Two-qubit gates via Rydberg blockade (~200 ns). QuEra's Aquila processor has 256 qubits. High connectivity and mid-circuit measurement make these competitive for near-term algorithms.
| Platform | Key players (2026) | Gate speed | T₂ coherence | 2-qubit fidelity | Temperature |
|---|---|---|---|---|---|
| Superconducting | IBM, Google, Rigetti | 10–100 ns | 100–500 μs | ~99.5% | ~15 mK |
| Trapped ion | IonQ, Quantinuum | ~1 μs | seconds–hours | ~99.9% | Room temp trap, laser-cooled ions |
| Neutral atom | QuEra, Pasqal, Atom Computing | ~200 ns (Rydberg) | ~1–10 s | ~99.5% | Room temp trap, laser-cooled atoms |
| Photonic | PsiQuantum, Xanadu | ps–ns | Limited by photon loss rather than dephasing | ~95–99% (probabilistic) | Room temp or 4K |
Near-term (NISQ) // what works today
NISQ — Noisy Intermediate-Scale Quantum — is the term John Preskill coined in 2018 for today's quantum hardware: 50–1000 qubits, no error correction, limited coherence. What, if anything, can these devices do usefully?
VQE: Variational Quantum Eigensolver
VQE is a hybrid quantum-classical algorithm for finding ground-state energies of molecular Hamiltonians — the #1 proposed NISQ application. A parameterized quantum circuit (the ansatz) prepares a trial state; the quantum processor evaluates the energy expectation value; a classical optimizer adjusts the parameters. Repeat until convergence. The goal: classically intractable molecules (FeMoco for nitrogen fixation, retinal for vision, battery electrolytes) that require ~100–200 error-corrected logical qubits. Current NISQ devices have too much noise to outperform classical methods on any interesting molecule, but the approach is correct in principle. The quantum advantage arrives with error correction.
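The hybrid loop can be illustrated with a deliberately tiny stand-in problem — a single-qubit Hamiltonian, a one-parameter ansatz, and a grid search in place of a real molecule and optimizer (all three are my illustrative simplifications):

```python
import numpy as np

# Stand-in "molecular" Hamiltonian: H = Z + 0.5 X (2x2, single qubit).
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
H = Z + 0.5 * X

def energy(theta):
    """Energy expectation <psi|H|psi> for the one-parameter ansatz
    |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return psi @ H @ psi

# Classical outer loop: a grid search stands in for gradient descent.
thetas = np.linspace(0, 2 * np.pi, 2001)
best = min(thetas, key=energy)

exact = np.linalg.eigvalsh(H)[0]          # true ground energy: -sqrt(1.25)
print(round(energy(best), 4), round(exact, 4))
```

On hardware the `energy` call is the quantum part (prepare the ansatz, sample Pauli expectation values) while the parameter update stays classical; the structure of the loop is unchanged.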
QAOA: Quantum Approximate Optimization Algorithm
QAOA targets combinatorial optimization problems (Max-Cut, traveling salesman, portfolio optimization). It alternates between a problem Hamiltonian (encoding the objective function) and a mixing Hamiltonian (driving transitions between states). At depth p → ∞, QAOA approaches exact optimal solutions. At low depth (NISQ-accessible), it produces approximate solutions. Whether QAOA beats the best classical heuristics (simulated annealing, branch-and-bound) on practically relevant problem instances is an open and actively disputed question. Honest assessment: no proven advantage on real problems as of 2026.
Quantum machine learning — what's real vs. hype
Quantum machine learning (QML) proposes using quantum computers to speed up ML tasks. The reality check:
- Real: quantum kernel methods. Quantum feature maps can express data in an exponentially large feature space. If the quantum kernel captures structure classical kernels miss, there is a provable speedup (but finding such problems is hard).
- Real: quantum linear algebra. HHL algorithm solves linear systems exponentially faster — but only under strong conditions (sparse, well-conditioned matrix; quantum input/output; no need to read out all solution components classically). These conditions rarely hold in practice.
- Hype: quantum neural networks. Parameterized quantum circuits training on classical data. No proven advantage over classical neural networks. Barren plateau problem (gradient vanishes exponentially in qubit count) makes training hard at scale.
- Hype: "quantum AI" chips for inference. Current classical GPUs vastly outperform quantum hardware for ML inference and training. There is no physical basis for quantum advantage on generic classical data at current hardware sizes.
No NISQ algorithm has demonstrated a practical quantum advantage over state-of-the-art classical algorithms on a problem of real-world relevance. The most credible near-term value is quantum simulation (chemistry, materials) once error-corrected qubits arrive in the ~100–1000 logical qubit range. That is likely 5–10 years away. The hype cycle has peaked and retracted; serious quantum computing work now focuses on the engineering path to fault tolerance.
Industry players // who is building quantum computers and how
Quantum computing hardware is no longer confined to university labs. A global cohort of companies — spanning superconducting transmons, trapped ions, photonic waveguides, neutral atoms, and exotic topological qubits — are competing to demonstrate fault-tolerant, commercially useful quantum processors. Here is a deep look at the major players, their technical approaches, and where each stands in mid-2026.
Raw qubit count is an unreliable metric. What matters is circuit layer operations per second (CLOPS), 2-qubit gate fidelity (how accurately a two-qubit unitary is applied), and coherence times T1 (energy relaxation) and T2 (dephasing). A 50-qubit processor with 99.9% 2-qubit fidelity is more powerful for deep circuits than a 1,000-qubit processor at 99.0%.
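The fidelity-versus-depth tradeoff follows from back-of-envelope arithmetic, assuming gate errors compound multiplicatively (a crude model that ignores error structure and correlations):

```python
# Rough success probability of a circuit with `gates` sequential 2-qubit
# gates, each applied with the given fidelity.
def circuit_success(fidelity, gates):
    return fidelity ** gates

for gates in (10, 100, 1000):
    print(gates,
          round(circuit_success(0.999, gates), 3),   # 99.9%-fidelity machine
          round(circuit_success(0.990, gates), 3))   # 99.0%-fidelity machine
```

At 1,000 gates the 99.9% machine still succeeds roughly a third of the time while the 99.0% machine essentially never does — which is why fewer, better qubits can beat more, noisier ones for deep circuits.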
IBM Quantum
Technology approach: Superconducting transmon qubits on dilution refrigerators cooled to ~15 mK. IBM has used fixed-frequency qubits coupled via cross-resonance gates, with nearest-neighbour connectivity on heavy-hex lattice topologies. The heavy-hex layout reduces error rates by limiting each qubit to at most three nearest-neighbour couplers, suppressing frequency-collision errors.
Qubit count and quality (2026): The flagship Heron r2 processor delivers 156 qubits at 99.9% median 2-qubit gate fidelity — a significant leap over the 433-qubit Osprey's 99.1%. IBM's philosophy since 2023 has deliberately traded raw qubit count for quality: fewer, better qubits connected with tunable couplers. T1 coherence times on Heron approach 300 µs; T2 exceeds 200 µs. The 1,121-qubit Condor remains a research showcase rather than a production system.
Key milestones: IBM launched the first publicly accessible quantum computer via cloud (IBM QX) in 2016, democratising quantum experimentation at a stroke. The 2022 Osprey and 2023 Condor systems demonstrated that a kilobit-scale chip was physically possible. In 2023, IBM published a landmark Nature paper using Heron to out-perform classical simulation on a 127-qubit kicked Ising model, offering a credible demonstration of quantum utility — though the claim remains debated. IBM's roadmap targets 100,000+ physical qubits in modular architectures by the late 2020s using quantum interconnects.
Cloud access and SDK: IBM Quantum Platform provides free and premium cloud access. Qiskit (open-source, Python) is the world's most-used quantum SDK with over 550,000 registered users. Qiskit Runtime executes workloads close to hardware with classical pre/post-processing, reducing I/O latency. Qiskit Transpiler optimises circuit depth for target topologies.
Business model: IBM offers a freemium model with open access to small processors and IBM Quantum Premium Plans for enterprise. Revenue is primarily from cloud compute time, system sales to national labs, and consulting. Target markets include financial services (portfolio optimisation, Monte Carlo), life sciences (molecular simulation), and logistics.
What makes IBM unique: The combination of Qiskit's ecosystem dominance, the IBM Quantum Network (200+ member organisations), and the heavy-hex topology that enables the highest qubit counts at useful fidelities in the superconducting space. No other player has come close to IBM's developer community or educational investment.
Assessment: IBM is the strongest ecosystem player by a wide margin. Qiskit and the IBM Quantum Network give it unmatched developer mindshare. Hardware quality has improved dramatically since 2022, and the modular architecture strategy is technically credible. The main challenge is the classic superconducting problem: scaling qubit count without degrading fidelity across a monolithic chip, and achieving the ~1,000:1 physical-to-logical qubit ratio needed for fault-tolerant Shor at cryptographic scales.
Google Quantum AI
Technology approach: Superconducting transmon qubits with tunable couplers, operating at ~10–15 mK. Google uses a 2D grid topology and is the pioneer of the cross-entropy benchmarking (XEB) metric for characterising circuit fidelity at scale. Tunable couplers allow gate times of 20–40 ns with high on/off ratio, enabling fast, high-fidelity 2-qubit operations.
Qubit count and quality (2026): The Willow chip (105 qubits, announced late 2024) achieves below-threshold quantum error correction: adding more physical qubits to a surface code logical qubit actually reduces the logical error rate — the first demonstration of this key QEC threshold condition. Willow's 2-qubit gate fidelity runs at ~99.7% with T1 values around 100 µs. Google has not publicly disclosed a 2026 successor beyond roadmap hints at 1,000+ qubit systems.
Key milestones: Google's 2019 "quantum supremacy" paper claimed 200 seconds for a task that would take Summit 10,000 years — a landmark instantly contested by IBM (who argued a better classical simulation could run in 2.5 days). Regardless, the Sycamore experiment was a genuine demonstration of quantum circuits outrunning any available classical simulation at that moment. The Willow threshold result is arguably more significant: it provides concrete evidence that the surface code approach to error correction works as theory predicts.
Cloud access and SDK: Cirq (open-source, Python) is Google's primary quantum SDK. Cloud access is restricted: Google Quantum AI hardware is not generally available on Google Cloud; select research partners get access via Google's internal programs. This is a strategic weakness versus IBM's open ecosystem.
Business model: Google funds quantum AI as a long-term strategic research bet, not primarily for near-term cloud revenue. The priority is demonstrating fault-tolerant advantage for real computational problems — molecular simulation is the stated first target. Revenue generation from quantum compute is a secondary concern to establishing technical leadership.
What makes Google unique: Willow's below-threshold QEC result is a genuine scientific milestone. Google has the deepest physics and materials science team in the field, and its tunable coupler architecture achieves some of the fastest and most accurate 2-qubit gates of any superconducting system. The XEB benchmarking framework they pioneered is now used across the industry.
Assessment: Google has the strongest published results on gate fidelity and error correction below threshold. The Willow paper is the most compelling QEC demonstration in the superconducting space. The closed-access model limits ecosystem growth and developer adoption, which is a structural disadvantage if broad quantum software development matters in the long run. Google vs. IBM is the superconducting arms race to watch.
IonQ
Technology approach: Trapped-ion qubits — individual ytterbium (Yb+) ions suspended in electromagnetic Paul traps and laser-cooled to near absolute zero. All-to-all connectivity: any two ions in a chain can directly interact via a shared motional mode without routing through intermediate qubits. This eliminates SWAP overhead that plagues superconducting architectures. Gate operations are slower (100 µs–1 ms range) than superconducting (~20–100 ns) but far more accurate.
Qubit count and quality (2026): IonQ uses Algorithmic Qubits (AQ) as its primary metric — a hardware-agnostic measure of how many qubits you can usefully run a full algorithm on, accounting for connectivity and fidelity. IonQ Forte Enterprise (launched 2024–2025) is rated at 35 AQ. In raw terms, the system uses 36 trapped ions with 2-qubit gate fidelity exceeding 99.9% for native Mølmer–Sørensen gates. T2 coherence times on trapped ions can reach seconds to minutes — orders of magnitude longer than superconducting qubits.
Key milestones: IonQ became the first publicly traded pure-play quantum company (NYSE: IONQ, 2021). It was the first to demonstrate 32 AQ capability and has consistently published best-in-class fidelity numbers. IonQ is developing barium (Ba+) ions as a second species, which emits photons in the visible range suitable for photonic interconnects between traps — a key step toward distributed trapped-ion systems at scale.
Cloud access and SDK: Available on Amazon Braket, Microsoft Azure Quantum, and Google Cloud. IonQ is cloud-platform agnostic, which is a strategic strength. Supports Qiskit, Cirq, and native IonQ SDK. IonQ has also partnered with the US Air Force Research Laboratory and Oak Ridge National Lab.
Business model: SaaS cloud quantum access; enterprise contracts with defence, finance, and pharma; government research contracts. IonQ is publicly traded and must balance R&D spend against quarterly reporting, which creates tension with long-horizon hardware investment.
What makes IonQ unique: The AQ metric is the most honest single-number summary of useful circuit capacity in the industry. All-to-all connectivity means a 35-AQ IonQ system can run circuits that would require hundreds of additional SWAP gates on a nearest-neighbour superconducting chip of the same size. Long coherence times make IonQ the natural choice for algorithms requiring many sequential gates.
Assessment: IonQ has the best gate fidelity of any commercially available system and the AQ metric gives a more honest picture of real algorithmic capability than raw qubit count. The challenge is scaling: ion chains become unstable beyond ~30–50 ions, so scaling requires shuttling ions between multiple traps or building photonic links between separate modules — both hard engineering problems. Clock speed is also slow; a 1,000-gate deep circuit may take hundreds of milliseconds.
Quantinuum
Technology approach: Trapped-ion using ytterbium ions in a surface-electrode "racetrack" trap architecture. Quantinuum's QCCD (Quantum Charge-Coupled Device) architecture allows ions to be physically shuttled through junctions within the trap chip, combining the long coherence of trapped ions with the scalability of modular racetrack designs. This is distinct from the static-chain architecture used by IonQ.
Qubit count and quality (2026): The H2-1 processor achieves 56 physical qubits with 2-qubit gate fidelity of 99.9%+ and a System Model H2 quantum volume of 2¹⁷ = 131,072 — the highest published quantum volume as of early 2026. T1 and T2 times extend to seconds. Quantinuum's gate fidelity benchmarks have been independently verified and are among the most credible in the industry.
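Quantum volume compresses width, depth, and fidelity into a single number: QV = 2ⁿ, where n is the largest size at which random n-qubit, n-layer model circuits still pass IBM's heavy-output test (heavy-output probability above 2/3). A minimal sketch:

```python
import math

def quantum_volume(largest_passing_n: int) -> int:
    """Quantum volume per IBM's definition: QV = 2**n, where n is the
    largest width == depth at which random model circuits pass the
    heavy-output benchmark (heavy-output probability > 2/3)."""
    return 2 ** largest_passing_n

qv = quantum_volume(17)
print(qv)                  # 131072, matching H2-1's reported 2**17
print(int(math.log2(qv)))  # 17 effective "square circuit" qubits
```

The exponential form means each increment of n doubles QV, which is why headline quantum volume figures grow so much faster than qubit counts.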
Key milestones: Quantinuum was formed in 2021 from the merger of Honeywell Quantum Solutions and Cambridge Quantum Computing. In 2023, H2 demonstrated the creation of non-Abelian anyons — exotic quasiparticles with topological properties relevant to topological quantum memory. In 2024, Quantinuum published results on real-time error correction with mid-circuit measurements and conditional logic on H2, a key capability for fault-tolerant computation. The TKET compiler (from Cambridge Quantum) is considered one of the most capable hardware-agnostic optimising compilers available.
Cloud access and SDK: Accessible via Azure Quantum and Quantinuum's own platform. The pytket SDK supports multiple backends including IBM, IonQ, and Quantinuum hardware. Quantinuum also develops InQuanto (quantum chemistry) and lambeq (quantum NLP) application libraries.
Business model: Premium enterprise and research access; Honeywell parent provides deep industrial application expertise in chemistry, materials, and process optimisation. Quantinuum targets pharmaceutical discovery and financial risk modelling as near-term high-value verticals.
What makes Quantinuum unique: The QCCD racetrack architecture is the most credible path to scaling trapped-ion systems beyond ~100 qubits while maintaining fidelity. Quantinuum is the only company with both world-class hardware and a full application software stack (InQuanto, lambeq, TKET) for domain-specific quantum advantage. The Honeywell parentage gives it exceptional manufacturing process control.
Assessment: Quantinuum has arguably the strongest combination of fidelity, quantum volume, and mid-circuit measurement capability in the industry. The QCCD architecture is technically superior to static ion chains for scaling. The main bottleneck is the same as IonQ: scaling to thousands of qubits while maintaining fidelity, and increasing gate speed to make deep circuits tractable in wall-clock time.
Microsoft Azure Quantum
Technology approach: Microsoft is pursuing topological qubits based on Majorana zero modes — exotic non-Abelian anyons predicted to arise at the ends of specially engineered semiconductor-superconductor (InAs-Al) nanowires. In theory, topological qubits are intrinsically far more stable against local noise than transmon or trapped-ion qubits, potentially requiring orders of magnitude fewer physical qubits per logical qubit. This is the highest-risk, highest-reward bet in quantum hardware.
Qubit count and quality (2026): In early 2025, Microsoft published a landmark Nature paper reporting the creation of a "topological gap" in InAs-Al nanowires consistent with a topological phase — a necessary (though not sufficient) condition for Majorana qubits. In 2025–2026, Microsoft demonstrated an 8-qubit "Topological Core" prototype with coherence properties consistent with topological protection. No large-scale universal topological quantum computer exists yet. On the practical side, Azure Quantum provides cloud access to IonQ, Quantinuum, Rigetti, and neutral-atom hardware from partners while the topological platform matures.
Key milestones: Microsoft's high-profile 2018 Nature paper claiming Majorana signatures was retracted in 2021 after the signatures were found to be measurement artefacts, a setback of years that damaged the program's credibility. The 2025 Nature paper and Topological Core demonstration represent a genuine rehabilitation, though independent experimental replication remains in progress. Microsoft's Q# programming language and Azure Quantum Development Kit are mature developer tools with good classical simulation infrastructure.
Cloud access and SDK: Azure Quantum aggregates multiple hardware providers under a single API. Q# is a domain-specific language optimised for quantum algorithms with tight integration into the Azure ecosystem. The Azure Quantum Resource Estimator is the most comprehensive publicly available tool for projecting physical resource requirements of fault-tolerant algorithms — invaluable for understanding how far away a useful quantum computer actually is.
Business model: Azure Quantum is integrated into Microsoft's cloud business. Near-term revenue comes from selling cloud compute time on partner hardware and quantum simulation. The long-term bet is that topological qubits will enable a generation of fault-tolerant systems that leapfrog competitors requiring 1,000:1 physical-to-logical qubit overhead.
What makes Microsoft unique: The topological qubit bet, if it works, would be transformative — the theoretical overhead reduction compared to surface codes is enormous. Azure Quantum's multi-vendor aggregation is also uniquely positioned for enterprise customers who want hardware optionality. The Resource Estimator is a genuinely differentiated tool that helps customers plan quantum roadmaps honestly.
Assessment: Microsoft's topological bet is bold and scientifically credible. If Majorana qubits work at scale, the overhead advantages are enormous. But the timeline is deeply uncertain, and every other player will have multiple generations of superconducting and trapped-ion systems deployed commercially before topological systems reach parity. Microsoft's cloud aggregation strategy is smart hedging. The Azure Quantum Resource Estimator is the best available tool for understanding what fault-tolerant quantum computing actually requires.
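The physical-to-logical overhead Microsoft is trying to sidestep can be sketched with the textbook surface-code scaling formula. The constants below (prefactor 0.1, threshold 1e-2, target logical error rate 5e-13) are illustrative assumptions, not Resource Estimator outputs:

```python
# Standard surface-code scaling sketch with illustrative constants.

def logical_error_rate(p: float, d: int, p_th: float = 1e-2, a: float = 0.1) -> float:
    """p_L ≈ A * (p / p_th) ** ((d + 1) / 2) for code distance d."""
    return a * (p / p_th) ** ((d + 1) / 2)

def physical_qubits_per_logical(d: int) -> int:
    """A distance-d surface code uses d*d data + (d*d - 1) ancilla qubits."""
    return 2 * d * d - 1

# Physical error rate 1e-3 (roughly the 99.9% fidelities quoted above),
# target logical error rate 5e-13 (deep fault-tolerant algorithms):
d = 3
while logical_error_rate(1e-3, d) > 5e-13:
    d += 2  # surface-code distances are odd

print(d)                               # 23
print(physical_qubits_per_logical(d))  # 1057: roughly the 1,000:1 overhead
```

Under these assumptions a single logical qubit consumes about a thousand physical qubits, which is the overhead a working topological qubit would shrink dramatically.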
Rigetti Computing
Technology approach: Superconducting transmon qubits with a proprietary multi-chip module (MCM) approach — bonding multiple smaller chips to create larger processors, similar to chiplet architectures in classical computing. Rigetti operates its own full-stack fabrication facility (Fab-1), giving it end-to-end control from chip design through software, which competitors using external fabs lack.
Qubit count and quality (2026): The Ankaa-3 system delivers 84 qubits with median 2-qubit (iSWAP) gate fidelity around 99.0–99.5%. T1 coherence times run ~60–80 µs. Rigetti lags IBM and Google on fidelity but has focused on improving CLOPS (circuit execution throughput) and fast parametric compilation — important for variational algorithms that require many repeated circuit evaluations with varying parameters.
Key milestones: Rigetti was among the earliest companies to offer cloud quantum access (Forest platform, 2017). It was the first company to fabricate a multi-chip quantum processor (Aspen series). Rigetti pioneered the parametric compilation approach that enables fast parameter sweeps for VQE and QAOA workloads. As a publicly traded company (RGTI), Rigetti has faced significant revenue pressure but remains one of the few companies with end-to-end vertical integration.
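The idea behind parametric compilation can be sketched in plain Python (a conceptual illustration, not pyQuil's actual API): compile the circuit once with free parameters, then bind new values cheaply on every execution instead of recompiling:

```python
# Conceptual sketch of parametric compilation: one-time compile,
# cheap per-run parameter binding. Gate strings are hypothetical.

def compile_parametric(template: list[str]):
    """One-time 'compilation' of a gate sequence with named parameters."""
    def run(**params: float) -> list[str]:
        # Binding is a cheap substitution, not a full recompile.
        return [gate.format(**params) for gate in template]
    return run

ansatz = compile_parametric(["RX({theta:.3f}) 0", "CZ 0 1", "RZ({phi:.3f}) 1"])

# A VQE-style parameter sweep reuses the compiled program:
for theta in (0.1, 0.2, 0.3):
    program = ansatz(theta=theta, phi=1.571)
```

For variational workloads that evaluate thousands of parameter settings, amortising compilation this way dominates wall-clock throughput, which is why Rigetti optimised for it.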
Cloud access and SDK: Rigetti QCS (Quantum Cloud Services) provides direct hardware access with low-latency QPU scheduling. pyQuil is Rigetti's Python SDK; circuits can also be submitted via Qiskit through a provider plugin. Rigetti also offers Quil (Quantum Instruction Language), a low-level assembly-like language for direct QPU control at the gate pulse level.
Business model: Cloud quantum access; on-premises system sales to government and defence customers; DARPA and DOE research contracts. Rigetti's vertical integration is a cost differentiator for government procurement where supply chain control matters.
What makes Rigetti unique: End-to-end fabrication control. Rigetti can iterate on qubit design, packaging, and control electronics in-house on a timescale that external-fab-dependent competitors cannot match. The MCM chiplet approach is a credible path to larger processor sizes without the yield problems of single monolithic chips at scale.
Assessment: Rigetti is the scrappiest of the major players — a lean team with genuine fabrication expertise that larger players outsource. The MCM approach to scaling is technically interesting and may prove important for modular architectures. The risk is falling further behind IBM and Google on fidelity, which makes it harder to win enterprise workloads requiring circuit depth. Rigetti's survival as a public company through 2026 has been notable given its small revenue base.
D-Wave Systems
Technology approach: Quantum annealing — a fundamentally different paradigm from gate-model quantum computing. D-Wave processors implement a physical Ising Hamiltonian that is slowly (adiabatically) deformed from one with an easily prepared ground state into one whose ground state encodes a combinatorial optimisation problem. The processor is not a universal quantum computer; it cannot run Shor's or Grover's algorithms. It is purpose-built for quadratic unconstrained binary optimisation (QUBO) problems.
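To make the QUBO framing concrete, here is a toy instance solved by brute force in plain Python (a hypothetical three-way one-hot constraint; a real problem of any size would go to the annealer or a hybrid solver):

```python
# Minimise x^T Q x over binary x. This tiny Q encodes "pick exactly
# one of three options": rewards on the diagonal, penalties off it.
from itertools import product

Q = {(0, 0): -1, (1, 1): -1, (2, 2): -1,
     (0, 1): 2, (0, 2): 2, (1, 2): 2}

def qubo_energy(x: tuple) -> int:
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

best = min(product((0, 1), repeat=3), key=qubo_energy)
print(best, qubo_energy(best))  # a one-hot assignment with energy -1
```

Picking zero options costs 0, picking two costs 0, picking one earns -1, so the minimum enforces the constraint. The annealer does exactly this minimisation physically, at thousands of variables rather than three.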
Qubit count and quality (2026): The Advantage2 system implements 4,400 qubits with the Zephyr topology (each qubit connected to up to 20 others). Coherence times are microseconds — extremely short compared to gate-model systems — because annealing doesn't require gate-level coherence. The relevant quality metric is solution quality on optimisation benchmarks relative to classical solvers, not gate fidelity. D-Wave also offers a gate-model prototype for exploratory work.
Key milestones: D-Wave sold the first commercial quantum computing system to Lockheed Martin in 2011 — the world's first commercial quantum computer sale, predating gate-model cloud access by five years. Google, NASA, and USRA jointly operated D-Wave systems (a D-Wave Two, later upgraded to a 2X) at the Quantum AI Lab from 2013 to 2017. Despite persistent academic debate over whether D-Wave's advantage was truly quantum, more recent Advantage2 benchmarks on structured hard instances show genuine quantum-enhanced sampling that classical solvers struggle to match on specific problem families.
Cloud access and SDK: Leap quantum cloud service; Ocean SDK (Python) provides problem formulation, hybrid solvers, and direct QPU access. D-Wave's Hybrid Solver Service breaks large problems into sub-problems that run on QPU and classical CPU, enabling optimisation over problems with millions of variables — far beyond what any gate-model system can handle today.
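The decomposition idea can be sketched in plain Python (a deliberately naive block-coordinate descent, not the Ocean SDK's actual algorithm): optimise one small block of variables at a time while the rest stay fixed, with the block solve standing in for a QPU sub-problem call:

```python
# Naive sketch of hybrid decomposition: split variables into blocks,
# brute-force each block (a QPU call in practice), iterate to converge.
from itertools import product

def block_descent(energy, n_vars: int, block: int = 2, sweeps: int = 3):
    x = [0] * n_vars
    for _ in range(sweeps):
        for start in range(0, n_vars, block):
            idx = range(start, min(start + block, n_vars))
            # Sub-problem: optimise just this block, rest held fixed.
            best = min(product((0, 1), repeat=len(idx)),
                       key=lambda bits: energy(assign(x, idx, bits)))
            x = assign(x, idx, best)
    return x

def assign(x, idx, bits):
    y = list(x)
    for i, b in zip(idx, bits):
        y[i] = b
    return y

# Toy energy: reward alternating bits on 6 variables.
energy = lambda x: sum(1 if x[i] == x[i + 1] else -1 for i in range(len(x) - 1))
print(block_descent(energy, 6))  # converges to an alternating pattern
```

Real hybrid solvers are far more sophisticated about carving out sub-problems and merging results, but the divide-solve-stitch loop is the same shape.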
Business model: Cloud subscriptions (Leap), enterprise hybrid solver contracts, on-premises system sales. D-Wave targets logistics (routing, scheduling), finance (portfolio optimisation), life sciences (protein folding sub-problems), and materials science. It is the only pure-play quantum company with a decade of enterprise revenue history.
What makes D-Wave unique: D-Wave is the only company delivering quantum hardware at a scale (4,400 qubits) and problem size where the quantum device operates on inputs that would be computationally infeasible to fully simulate classically. The hybrid solver approach is also the most mature integration of quantum and classical heuristics for real enterprise problems. Their commercial track record is unmatched.
Assessment: D-Wave occupies a unique niche. Annealing is not the path to universal fault-tolerant quantum computing, but it may deliver commercial value on specific optimisation problem classes sooner than gate-model systems. The hybrid approach that mixes QPU annealing with classical heuristics is pragmatic. The main risk is that advances in classical optimisation (GPU-accelerated solvers, learned heuristics) keep narrowing the gap D-Wave needs to demonstrate. Their honest framing — "we solve specific optimisation problems, not general computation" — is refreshingly grounded.
Amazon Braket
Technology approach: Amazon Braket is a quantum cloud aggregator rather than a hardware manufacturer. It provides unified API access to multiple hardware backends: IonQ (trapped ion), Rigetti (superconducting), OQC (superconducting, UK), QuEra (neutral atom / analog), and formerly D-Wave. Amazon's own hardware research arm (AWS Center for Quantum Computing at Caltech) is developing superconducting qubits, but no commercial AWS-native QPU was publicly available as of mid-2026.
Qubit count and quality (2026): Via hosted hardware partners. QuEra's Aquila neutral-atom processor (256 qubits, analog mode) is the largest system on Braket; it runs quantum simulation via programmable Rydberg atom arrays rather than discrete gate-model circuits. IonQ Forte and Rigetti Ankaa-3 provide gate-model access. OQC's Lucy superconducting processor rounds out the offering.
Key milestones: Braket launched in 2019 as the first major cloud provider to offer quantum hardware as a managed service. AWS introduced Amazon Braket Direct (dedicated QPU reservations) and Hybrid Jobs (co-locating classical EC2 compute with QPU calls to minimise round-trip latency). The Amazon Quantum Solutions Lab provides enterprise consulting for customers exploring quantum applications.
Cloud access and SDK: Amazon Braket SDK (Python, open-source) supports all hardware backends with a unified circuit model. Integration with SageMaker, Lambda, and S3 allows quantum workloads to be embedded in standard ML and data pipelines. Supports OpenQASM 3.0 for hardware-agnostic circuit description. Braket also runs a free simulator tier.
Business model: Pay-per-task QPU access (per-shot pricing), hybrid job compute time, and Braket Direct reservations. Amazon's quantum strategy is to be the infrastructure layer: capture quantum cloud revenue regardless of which hardware technology wins, while conducting its own hardware research as a long-term hedge.
What makes Amazon Braket unique: The tightest integration of quantum compute with classical cloud infrastructure of any provider. IAM roles, VPC isolation, CloudWatch metrics, and SageMaker integration mean enterprise security and observability requirements are met out of the box — something that standalone quantum cloud platforms cannot yet match. Braket's hardware-agnostic SDK abstracts away vendor lock-in.
Assessment: Braket's multi-vendor model gives customers hardware optionality and a single integration target. The lack of AWS-native QPU hardware means Amazon hasn't differentiated on quantum performance, but its classical cloud integration is a genuine operational advantage for enterprise deployments. If Amazon's internal superconducting program matures, Braket becomes a full-stack competitor overnight. For enterprises wanting to experiment with quantum without committing to a single vendor, Braket is the pragmatic choice.
Side-by-side comparison (mid-2026)
| Player | Qubit technology | Qubit count (2026) | 2-qubit gate fidelity | Cloud platform | Primary use case |
|---|---|---|---|---|---|
| IBM Quantum | Superconducting transmon (heavy-hex) | 156 (Heron r2) | ~99.9% | IBM Quantum Platform / Qiskit | Utility-scale simulation, finance, logistics |
| Google Quantum AI | Superconducting transmon (2D grid) | 105 (Willow) | ~99.7% | Research partners only (Cirq) | QEC threshold demos, molecular simulation |
| IonQ | Trapped ion (Yb+, static chain) | 36 / 35 AQ (Forte) | ~99.9%+ | Braket / Azure / GCP / IonQ Cloud | High-fidelity NISQ circuits, chemistry |
| Quantinuum | Trapped ion (Yb+, QCCD racetrack) | 56 (H2-1) | ~99.9%+ | Azure Quantum / Quantinuum Platform | Drug discovery, financial risk, QEC research |
| Microsoft | Topological (Majorana nanowire) + partners | 8 (Topological Core prototype) | TBD (topological) | Azure Quantum (multi-vendor) | Long-horizon fault-tolerant computing |
| Rigetti | Superconducting transmon (MCM chiplet) | 84 (Ankaa-3) | ~99.0–99.5% | Rigetti QCS | Variational algorithms, VQE, QAOA |
| D-Wave | Quantum annealing (flux qubits, Zephyr) | 4,400 (Advantage2) | N/A (annealing) | D-Wave Leap (Ocean SDK) | Combinatorial optimisation (QUBO) |
| Amazon Braket | Aggregator (IonQ, Rigetti, QuEra, OQC) | Via partners | Via partners | Amazon Braket SDK | Cloud-native quantum workflows, hybrid jobs |
No quantum computer in 2026 can solve a practically important problem that a well-resourced classical computer cannot. The gap is closing — IBM's quantum utility result and Google's Willow threshold result are genuine milestones — but the ~1,000× overhead required to turn NISQ physical qubits into fault-tolerant logical qubits remains the central unsolved engineering challenge. The companies that figure out how to manufacture high-fidelity qubits at scale, connect them with quantum interconnects, and run real-time error correction will define the quantum era. The race is genuinely open.