
What Quantum Computers Can and Cannot Solve: Complexity, Advantage, and Limits



Quantum complexity theory provides the rigorous mathematical framework for understanding which computational problems quantum computers can solve efficiently and which remain intractable even with unlimited quantum resources. The central object is the class BQP (Bounded-error Quantum Polynomial time): the set of decision problems solvable by a quantum computer in polynomial time with error probability at most $1/3$. Understanding BQP's relationship to P, NP, and other classical complexity classes is the deepest open question at the intersection of quantum physics and theoretical computer science.

The complexity class P contains problems solvable deterministically in polynomial time — sorting, matrix multiplication, linear programming. BPP (Bounded-error Probabilistic Polynomial time) extends this to randomized algorithms with bounded error. It is believed (but not proven) that BPP = P, meaning randomness does not help beyond polynomial factors. The class NP contains problems whose solutions can be verified in polynomial time — the satisfiability problem (SAT), graph coloring, the Travelling Salesman Problem. The million-dollar P vs NP question asks whether every efficiently verifiable problem is also efficiently solvable.

The current understanding is that $P \subseteq BPP \subseteq BQP \subseteq PSPACE$ and $P \subseteq NP \subseteq PSPACE$. The relationship between BQP and NP is unknown and deeply mysterious. Most complexity theorists believe $NP \not\subseteq BQP$ — that quantum computers cannot solve NP-complete problems in polynomial time — but this is unproven. The evidence is that Grover's algorithm gives only a quadratic speedup for unstructured search (the best quantum analog of brute-force NP solving), and no polynomial-time quantum algorithm is known for any NP-complete problem.

The class QMA (Quantum Merlin-Arthur) is the quantum analog of NP: problems where a quantum witness can be efficiently verified by a quantum verifier. The local Hamiltonian problem — determining whether the ground state energy of a local Hamiltonian falls below a threshold — is QMA-complete, the quantum analog of SAT being NP-complete. This problem is the quantum complexity analog of the many-body problem in physics; it is believed to be intractable even for quantum computers (though not proven).

Problems where quantum gives provable speedups form a short but important list. Integer factoring (Shor): exponential quantum speedup over best known classical algorithm, though not proven impossible classically. Discrete logarithm: similarly exponential. Unstructured search (Grover): provable quadratic speedup, proven optimal. Simulation of quantum systems: exponential speedup, essentially by definition. Parity of $n$ bits with query complexity: quantum needs $n/2$ queries, classical needs $n$ queries. For oracular problems (where the speedup is relative to a black-box), quantum separations are provably exponential for problems like Simon's and Bernstein-Vazirani.
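The quadratic character of Grover's speedup can be checked directly on a small instance. Below is a minimal state-vector sketch (the function name and the choice $N = 16$ are illustrative): the oracle flips the sign of the marked amplitude, and the diffusion step reflects every amplitude about the mean.

```python
import math

def grover_success_prob(n_qubits, marked, iterations):
    """Simulate Grover's algorithm on a dense state vector and
    return the probability of measuring the marked item."""
    N = 2 ** n_qubits
    amp = [1.0 / math.sqrt(N)] * N          # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]          # oracle: flip the marked sign
        mean = sum(amp) / N                 # diffusion: reflect about the mean
        amp = [2 * mean - a for a in amp]
    return amp[marked] ** 2

N = 16
k_opt = math.floor(math.pi / 4 * math.sqrt(N))   # optimal iteration count
p = grover_success_prob(4, marked=3, iterations=k_opt)
print(k_opt, round(p, 3))   # 3 iterations, success probability ≈ 0.96
```

After only $\lfloor \pi\sqrt{N}/4 \rfloor = 3$ iterations the marked item dominates the distribution, versus an expected $N/2 = 8$ classical probes.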

The quantum supremacy experiment of Google (Sycamore, 2019) claimed that a random-circuit sampling task completed in 200 seconds on its 53-qubit processor would require $10{,}000$ years on the fastest classical supercomputer. Classical simulation algorithms were subsequently improved (by Alibaba and others) to weeks and then days on classical hardware, highlighting how sensitive supremacy claims are to the assumed classical baseline. Subsequent sampling experiments on other platforms reported similar benchmarks. These experiments demonstrate that quantum hardware can perform specific contrived tasks beyond straightforward classical simulation, but whether they demonstrate practically useful quantum advantage remains contested.

Physical limitations impose hard constraints on all current and near-future quantum computers. Decoherence — the loss of quantum information to the environment — is characterized by the transverse relaxation time $T_2$ (phase coherence time) and the longitudinal relaxation time $T_1$ (energy relaxation time). For superconducting qubits (Google, IBM, Rigetti), $T_1 \sim T_2 \sim 100\, \mu$s; for trapped ions (IonQ, Quantinuum), $T_1 \sim T_2 \sim$ seconds to minutes. Gate times are $\sim 10$–$50$ ns for superconducting, $\sim 10\, \mu$s–$1$ ms for trapped ions. This gives circuit depth budgets of $\sim 1{,}000$ gates for superconducting versus $\sim 10{,}000$ for trapped-ion systems before decoherence dominates.
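The depth budgets quoted above are just the ratio of coherence time to gate time. A quick sketch using the order-of-magnitude figures from this paragraph (a crude model that ignores gate-dependent noise and readout error):

```python
def depth_budget(T2_seconds, gate_time_seconds):
    """Rough circuit-depth budget before T2 dephasing dominates."""
    return T2_seconds / gate_time_seconds

# Illustrative platform numbers from the text (order of magnitude only).
print(depth_budget(100e-6, 50e-9))   # superconducting: 2000.0 gates
print(depth_budget(10, 1e-3))        # trapped ion: 10000.0 gates
```

The trapped-ion budget is larger despite gates being orders of magnitude slower, because coherence times are longer by an even bigger factor.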

Gate fidelity — the probability that a gate operation produces the intended output — is the key quality metric. Two-qubit gate fidelities of $99.5\%$ on superconducting systems and $99.9\%$ on trapped-ion systems are state-of-the-art as of 2025. The fault-tolerance threshold theorem states that if the per-gate error rate $p$ is below a threshold $p_{th} \approx 10^{-3}$ to $10^{-2}$ (depending on the error correction code), arbitrarily long computations can be performed with constant overhead. Current error rates are at or near threshold, but the overhead in physical qubits per logical qubit (thousands to millions) makes fault-tolerant quantum computing a 10–20 year prospect.

The resource overhead for fault-tolerant quantum computing is staggering. The surface code — the leading candidate for practical error correction — requires approximately $d^2$ physical qubits per logical qubit for a distance-$d$ code and can correct up to $\lfloor(d-1)/2\rfloor$ errors. For breaking RSA-2048 with Shor's algorithm, Gidney and Ekerå (2021) estimate roughly 20 million physical qubits running for about 8 hours at a $1\,\mu$s surface-code cycle time; even a bare-qubit count of $\sim 4{,}000$ logical qubits at distance $d \approx 27$ gives $\sim 4{,}000 \times 27^2 \approx 3$ million physical qubits before routing and magic-state-distillation overhead. No current system is within three orders of magnitude of this capability.
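The surface-code arithmetic above can be packaged as a rule-of-thumb estimator. The helper name is illustrative, and routing and magic-state-distillation overhead are deliberately ignored, which is why published full estimates run higher:

```python
def surface_code_overhead(d, n_logical):
    """Physical-qubit count using the ~d^2-per-logical-qubit rule of
    thumb from the text; routing/distillation overhead is ignored."""
    physical = n_logical * d * d
    correctable = (d - 1) // 2
    return physical, correctable

phys, corr = surface_code_overhead(d=27, n_logical=4000)
print(phys, corr)   # 2916000 physical qubits; corrects up to 13 errors
```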

Complexity Classes

P: Problems solvable in deterministic polynomial time $O(n^k)$.

BPP: Solvable by randomized algorithm with error $\leq 1/3$ in polynomial time. Believed equal to P.

BQP: Solvable by quantum algorithm with error $\leq 1/3$ in polynomial time. Contains: factoring, discrete log, quantum simulation.

NP: Problems with polynomial-time verifiable proofs. Includes SAT, 3-coloring, TSP. Contains P; whether it contains BQP is unknown.

QMA: Problems with quantum proofs verifiable by quantum polynomial-time verifiers. Contains NP. Local Hamiltonian problem is QMA-complete.

PSPACE: Problems solvable in polynomial space. Contains both BQP and NP.

Believed hierarchy: $P = BPP \subsetneq BQP \subsetneq PSPACE$, with NP incomparable to BQP.

Proven Quantum Speedups

Exponential (oracular): Simon's problem: quantum $O(n)$ queries, classical $\Omega(2^{n/2})$. Bernstein-Vazirani: quantum 1 query, classical $\Omega(n)$. Recursive Fourier sampling: exponential separation.

Exponential (non-oracular, conjectured): Factoring via Shor's: quantum $O((\log N)^3)$, classical best known $\exp(O((\log N)^{1/3}(\log\log N)^{2/3}))$. Discrete log: similar.

Quadratic (proven optimal): Unstructured search via Grover: quantum $O(\sqrt{N})$, classical $\Omega(N)$. BBBV theorem proves quantum lower bound $\Omega(\sqrt{N})$.

Limited speedup (proven): Parity of $n$ bits requires $\lceil n/2 \rceil$ quantum queries versus $n$ classical — only a constant-factor gain. Computing the OR of $N$ bits requires $\Omega(\sqrt{N})$ quantum queries, so Grover's quadratic speedup is optimal; no super-quadratic speedup exists for unstructured search-type problems.

Fault-Tolerance Threshold Theorem

If the error rate per gate $p < p_{th}$ (threshold), then by using concatenated or topological error-correcting codes, one can perform a computation of length $L$ with error $\leq \epsilon$ using $O(L \cdot \text{poly}(\log(L/\epsilon)))$ physical gates. The surface code threshold $p_{th} \approx 10^{-2}$; for realistic noise models $p_{th} \approx 10^{-3}$. Current best two-qubit gate fidelities: $\sim 99.9\%$, so $p \approx 10^{-3}$, right at threshold.
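The threshold scaling can be sketched numerically. The prefactor $A = 0.1$ below is an assumed code-dependent constant, not a measured value; the point is how fast the logical error rate falls with code distance once $p < p_{th}$:

```python
def logical_error_rate(p, p_th=1e-2, d=7, prefactor=0.1):
    """Heuristic surface-code scaling p_L ~ A * (p/p_th)^((d+1)/2).
    The prefactor A is an assumed constant, not a measured value."""
    return prefactor * (p / p_th) ** ((d + 1) // 2)

# With p = 1e-3 (today's best two-qubit gates), each +4 in distance
# buys two orders of magnitude of logical error suppression.
for d in (3, 7, 11, 15):
    print(d, logical_error_rate(1e-3, d=d))
```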

Example 1

Compare the number of operations for classical brute-force ($O(N)$), classical FFT ($O(N\log N)$), Grover's ($O(\sqrt{N})$), and Shor's ($O((\log N)^3)$) for $N = 10^6$.

With $N = 10^6$ and $\log_2 N \approx 20$:

Classical brute-force $O(N)$: $10^6$ operations.

Classical FFT $O(N\log_2 N)$: $10^6 \times 20 = 2 \times 10^7$ operations.

Grover's $O(\sqrt{N})$: $\sqrt{10^6} = 10^3 = 1{,}000$ operations.

Shor's $O((\log_2 N)^3)$: $20^3 = 8{,}000$ operations.

Speedups vs brute force: Grover: $10^6/10^3 = 1{,}000\times$. Shor: $10^6/8{,}000 = 125\times$ (at this $N$; speedup grows as $N$ grows).

Note: the FFT count exceeds the brute-force search count because the FFT performs $O(N\log N)$ operations over all $N$ elements of a signal, whereas search examines candidates one at a time. The four algorithms solve fundamentally different problems, so the counts are illustrative rather than a head-to-head race.
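The counts in this example can be reproduced in a few lines; exact values differ slightly from the rounded $\log_2 N \approx 20$ used above:

```python
import math

N = 10 ** 6
logN = math.log2(N)                 # ≈ 19.93; the text rounds to 20
print(round(N))                     # brute force: 1,000,000
print(round(N * logN))              # FFT: ≈ 2.0e7
print(round(math.sqrt(N)))          # Grover: 1,000
print(round(logN ** 3))             # Shor: ≈ 7,918 (≈ 20^3 = 8,000 rounded)
```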

Example 2

A superconducting qubit has $T_2 = 100\,\mu$s and two-qubit gate time $50$ ns. What is the maximum circuit depth before decoherence dominates? What is the error per gate if gate fidelity is $99.5\%$?

Max circuit depth: $T_2 / t_{\text{gate}} = 100\,\mu\text{s} / 50\,\text{ns} = 100 \times 10^{-6} / 50 \times 10^{-9} = 2{,}000$ gates before $T_2$ decay eliminates coherence.

Practical rule: keep circuit depth $\ll T_2/t_{\text{gate}}$; at depth $2{,}000$ coherence has decayed to $e^{-1} \approx 37\%$.

Error per gate: Fidelity $99.5\%$ means error $p = 1 - 0.995 = 0.005 = 5 \times 10^{-3}$.

Probability of zero errors after $k$ gates: $(1-p)^k$. At $k=100$: $(0.995)^{100} \approx 0.606$. At $k=1000$: $(0.995)^{1000} \approx 0.0067$. A 1000-gate circuit on a $99.5\%$-fidelity machine has only a $0.67\%$ chance of running error-free — without error correction, its output is essentially noise.
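A sketch of the zero-error probability calculation, assuming independent, identically distributed gate errors (a simplification — real noise is correlated):

```python
def zero_error_prob(fidelity, depth):
    """Probability that no gate errs in a circuit of the given depth,
    assuming independent errors at rate 1 - fidelity per gate."""
    return fidelity ** depth

print(round(zero_error_prob(0.995, 100), 3))    # ≈ 0.606
print(round(zero_error_prob(0.995, 1000), 4))   # ≈ 0.0067
```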

Example 3

Explain why NP-complete problems are not known to be in BQP, and what this implies for quantum cryptography threats beyond RSA.

NP-complete problems (SAT, 3-coloring, Clique, TSP) have no known polynomial-time quantum algorithms. Grover's gives only a quadratic speedup for brute-force search of $2^n$ solutions: $O(2^{n/2})$ vs $O(2^n)$ classically — both superpolynomial. Quantum computers do not appear to solve NP-complete problems in polynomial time.

Implications for cryptography:

• RSA and ECC: threatened by Shor's algorithm (factoring and discrete log in BQP). These should be replaced.

• AES symmetric encryption: only Grover threat (key search in $O(2^{n/2})$ vs $O(2^n)$). AES-256 provides $2^{128}$ effective security — still computationally infeasible.

• Lattice problems (LWE, NTRU): basis for post-quantum cryptography. Believed outside BQP; best quantum attacks give only polynomial improvement over classical. NIST standardized CRYSTALS-Kyber, CRYSTALS-Dilithium in 2024.

• Hash functions: finding SHA-256 collisions takes classical $O(2^{128})$ operations; the quantum Brassard–Høyer–Tapp collision algorithm needs $O(2^{85})$ — still infeasible, and doubling the output length restores the full security margin.
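The bullet points above reduce to one rule: Grover halves the effective key length of a symmetric primitive. A one-line helper (illustrative name) makes the comparison explicit:

```python
def grover_effective_bits(key_bits):
    """Effective security of a key against Grover key search:
    O(2^(k/2)) quantum operations instead of O(2^k) classical."""
    return key_bits // 2

for name, bits in [("AES-128", 128), ("AES-256", 256)]:
    print(name, grover_effective_bits(bits))   # 64 and 128 effective bits
```

This is why AES-256, not AES-128, is the conservative choice for long-term post-quantum security.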

Example 4

The Google Sycamore experiment (2019) claimed quantum supremacy for random circuit sampling. Describe the computational task and the controversy around the classical simulation improvements.

Task: Sample bitstrings from the output distribution of a random 53-qubit, 20-cycle quantum circuit. Google claimed this took $200$ seconds on Sycamore vs $10{,}000$ years on Summit (the world's fastest classical supercomputer at the time).

Why this task: Random circuit sampling has no known efficient classical algorithm; estimating output probabilities of deep random circuits is believed $\#P$-hard. The quantum device naturally samples from this distribution in $O(1)$ circuit runs.

Classical simulation improvements: Within months, IBM argued that a disk-based full state-vector simulation on Summit could complete in roughly 2.5 days. Tensor-network contraction methods (by Alibaba and academic groups, 2020–2021) then cut the cost to hours on GPU clusters. The "years" estimate assumed no memory-optimized algorithms; with different memory-time tradeoffs, classical costs dropped dramatically.

Current consensus: Supremacy claims depend sensitively on classical simulation assumptions. For verification (computing actual probabilities to check the quantum output), classical computers remain superior. For sampling at scale ($n \geq 60$ qubits, depth $\geq 30$), genuine quantum advantage is likely but hard to certify definitively.


Practice Problems

1. Place each in its complexity class: (a) Sorting $n$ numbers. (b) Finding a Hamiltonian cycle. (c) Integer factoring. (d) Simulating $n$ qubits for $t$ steps. (e) Verifying a graph coloring.
2. Why is it believed that $NP \not\subseteq BQP$? What evidence supports this?
3. For a surface code with distance $d=17$, how many physical qubits encode 1 logical qubit? How many errors can it correct?
4. A trapped-ion qubit has $T_2 = 10$ s and gate time $1$ ms. What maximum circuit depth is achievable before coherence decays to $e^{-1}$?
5. Why does Grover's algorithm not solve NP-complete problems in polynomial time, even though it provides a quadratic speedup for search?
6. What is the local Hamiltonian problem? Why is it QMA-complete? Give a physical interpretation.
7. A two-qubit gate has fidelity $F = 99.9\%$. Compute the error rate $p$ and determine the number of fault-tolerant gates achievable per logical operation using a distance-$d=7$ surface code (error suppression factor $\sim (p/p_{th})^{(d+1)/2}$, $p_{th}=10^{-2}$).
8. Explain the difference between query complexity (oracle model) and computational complexity (Turing machine model). Why can an exponential query separation not automatically imply an exponential computational separation?
9. AES-128 has a $2^{128}$ keyspace. With Grover's algorithm, how many quantum operations are needed to find the key? Is AES-128 considered quantum-secure?
10. What is post-quantum cryptography? Name three NIST-standardized post-quantum algorithms and the mathematical hardness problems they are based on.
11. The quantum volume (QV) metric measures effective qubit quality. If a device has $n=5$ qubits with fidelity $F=0.995$ per two-qubit gate and maximum circuit depth $d=5$, estimate whether QV $\geq 2^5 = 32$.
12. Prove that $BQP \subseteq PSPACE$ by showing that a quantum circuit of polynomial size can be simulated in polynomial space.
Answer Key

1. (a) Sorting: P (merge sort $O(n\log n)$). (b) Hamiltonian cycle: NP-complete. (c) Factoring: BQP (and in NP $\cap$ co-NP, but not known in P). (d) Quantum simulation: BQP (efficiently done on quantum hardware). (e) Verifying a coloring: P (just check each edge).

2. Evidence: (1) Grover gives only quadratic speedup for search, not exponential. (2) Relativized evidence: there exist oracles $A$ such that $NP^A \not\subseteq BQP^A$. (3) Structural: NP-complete problems lack the algebraic/periodic structure exploited by known quantum algorithms. (4) Expert consensus: no polynomial quantum algorithm known for SAT, 3-coloring, etc., despite decades of effort.

3. Distance-$d=17$ surface code: $d^2 = 289$ physical qubits per logical qubit. Corrects up to $\lfloor(17-1)/2\rfloor = 8$ errors.

4. Max depth: $T_2/t_{\text{gate}} = 10\text{ s}/10^{-3}\text{ s} = 10{,}000$ gates. Coherence decays to $e^{-1}$ at depth $10{,}000$.

5. NP-complete problems have solution spaces of size $2^n$ where $n$ is the input length. Grover reduces search from $2^n$ to $2^{n/2}$ — still exponential in $n$. "Polynomial time" means $O(n^k)$ for some fixed $k$, and $2^{n/2}$ is not polynomial. The quadratic speedup halves the exponent but cannot turn an exponential-time search into a polynomial-time one.

6. Local Hamiltonian: given $H = \sum_i H_i$ where each $H_i$ acts on $\leq k$ qubits, determine if the ground state energy $E_0 \leq a$ or $E_0 \geq b$ (with $b-a \geq 1/\text{poly}(n)$). QMA-complete: a quantum witness (the ground state $|\psi_0\rangle$) lets a quantum verifier estimate $\langle H \rangle$ efficiently; hardness follows from Kitaev's circuit-to-Hamiltonian construction, which encodes the history of an arbitrary QMA verification circuit into a local Hamiltonian. Physical interpretation: finding the ground state of a quantum many-body system (e.g., a spin lattice) is hard even for quantum computers.

7. $p = 1-0.999 = 10^{-3}$. $p/p_{th} = 10^{-3}/10^{-2} = 0.1$. Suppression per $(d+1)/2 = 4$ levels: $(0.1)^4 = 10^{-4}$. Effective logical error rate $\approx p \times 10^{-4} = 10^{-7}$ per logical operation — excellent fault tolerance.

8. Query complexity counts calls to a black-box oracle; computational complexity counts all elementary operations. An oracle separation proves that no algorithm treating the input purely as a black box can be fast, but concrete problems have internal structure. A query separation therefore does not rule out a fast classical algorithm for a specific instantiation of the oracle: replacing the black box with an explicit, structured function can collapse the separation.

9. Grover needs $O(\sqrt{2^{128}}) = O(2^{64}) \approx 1.8 \times 10^{19}$ quantum operations. At $1$ ns per Grover iteration this is $\sim 1.8 \times 10^{10}$ seconds $\sim 600$ years of sequential computation, and Grover parallelizes poorly ($k$ machines give only a $\sqrt{k}$ speedup). AES-128 is thus not quantum-secure in principle, but $2^{64}$ serial quantum operations remain infeasible for foreseeable technology. NIST recommends AES-256 for long-term post-quantum security.

10. Post-quantum cryptography: classical algorithms believed secure against quantum attack. NIST (2024) standards: (1) CRYSTALS-Kyber (now ML-KEM): key encapsulation based on Module Learning With Errors (MLWE). (2) CRYSTALS-Dilithium (now ML-DSA): digital signatures based on MLWE. (3) SPHINCS+ (now SLH-DSA): hash-based signatures resting on the hardness of finding hash preimages. FALCON (FN-DSA, NTRU lattices) was also selected, with its standard still in draft.

11. With 5 qubits, depth 5, and gate fidelity $0.995$, a QV model circuit has roughly $n \cdot d/2 \approx 12$ two-qubit gates. Total circuit fidelity $\approx 0.995^{12} \approx 0.94$. The heavy-output probability is then roughly $F \times 0.85 + (1-F) \times 0.5 \approx 0.83$ (the ideal heavy-output fraction is $\approx 0.85$; fully depolarized output gives $0.5$), comfortably above the $2/3$ threshold. So QV $\geq 32$ is plausible. (The actual QV protocol averages over many random circuits.)

12. A quantum circuit of $L = \text{poly}(n)$ gates on $q = \text{poly}(n)$ qubits has final amplitudes $\langle y|U_L \cdots U_1|x\rangle = \sum_{z_1,\dots,z_{L-1}} \prod_{j=1}^{L} \langle z_j|U_j|z_{j-1}\rangle$ with $z_0 = x$, $z_L = y$ — a Feynman path sum over intermediate computational-basis states. Each path contributes a product of $L$ matrix elements, computable to polynomial precision in polynomial space. Iterate over all $2^{q(L-1)}$ paths, storing only the current path and a running total: space $O(qL) = \text{poly}(n)$, at the cost of exponential time. The acceptance probability is a sum of squared magnitudes of such amplitudes, so a PSPACE machine can compute it and decide the language. Hence $BQP \subseteq PSPACE$. ✓
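The path-sum argument can be demonstrated concretely on a 2-qubit Bell-state circuit. The function below stores only one path and a running total at a time — the space-saving idea behind $BQP \subseteq PSPACE$ (gate matrices and helper names are illustrative; real amplitudes suffice for these gates):

```python
import itertools
import math

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def h_on_qubit0():
    """H on qubit 0 (most significant bit) of 2 qubits, as a 4x4 matrix."""
    m = [[0.0] * 4 for _ in range(4)]
    for a in range(2):            # output value of qubit 0
        for b in range(2):        # input value of qubit 0
            for t in range(2):    # qubit 1 passes through unchanged
                m[(a << 1) | t][(b << 1) | t] = H[a][b]
    return m

# CNOT with control = qubit 0 (MSB), target = qubit 1.
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

def path_sum_amplitude(gates, x, y):
    """<y|U_L...U_1|x> as a Feynman path sum: iterate over all
    intermediate basis states, keeping only one path at a time."""
    dim = len(gates[0])
    total = 0.0
    for path in itertools.product(range(dim), repeat=len(gates) - 1):
        states = (x,) + path + (y,)
        prod = 1.0
        for g, (i, o) in zip(gates, zip(states, states[1:])):
            prod *= g[o][i]       # matrix element <o|g|i>
        total += prod
    return total

gates = [h_on_qubit0(), CNOT]     # Bell-state circuit applied to |00>
for y in range(4):
    print(format(y, "02b"), round(path_sum_amplitude(gates, 0, y), 4))
# |00> and |11> get amplitude ≈ 0.7071; |01> and |10> get 0
```

Memory here is independent of the number of paths — only exponentially many *iterations* are needed, exactly as in the PSPACE simulation.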