Noisy intermediate-scale quantum computing
Noisy intermediate-scale quantum (NISQ) computing is characterized by quantum processors containing up to 1,000 qubits that are not yet advanced enough for fault tolerance or large enough to achieve quantum advantage. These processors, which are sensitive to their environment and prone to quantum decoherence, are not yet capable of continuous quantum error correction. This intermediate scale is described by the quantum volume, which is based on a moderate number of qubits and gate fidelity. The NISQ era is the current state of quantum computer technology; the term was coined by John Preskill in 2018.
According to Microsoft Azure Quantum's scheme, NISQ computation is considered level 1, the lowest of the quantum computing implementation levels.
In October 2023, the 1,000-qubit mark was passed for the first time by Atom Computing's 1,180-qubit quantum processor. However, as of 2024, only two quantum processors have over 1,000 qubits, with sub-1,000-qubit quantum processors remaining the norm.
Algorithms
NISQ algorithms are quantum algorithms designed for quantum processors in the NISQ era. Common examples are the variational quantum eigensolver (VQE) and the quantum approximate optimization algorithm (QAOA), which use NISQ devices but offload some calculations to classical processors. These algorithms have been successful in quantum chemistry and have potential applications in various fields including physics, materials science, data science, cryptography, biology, and finance. However, due to noise during circuit execution, they often require error mitigation techniques. These methods reduce the effect of noise by running a set of circuits and applying post-processing to the measured data. In contrast to quantum error correction, where errors are continuously detected and corrected during the run of the circuit, error mitigation can only use the outcomes of the noisy circuits.
The quantum hardware landscape
Current NISQ devices typically contain between 50 and 1,000 physical qubits, with leading systems from IBM, Google, and other companies pushing these boundaries. However, these qubits are inherently "noisy": they suffer from decoherence, gate errors, and measurement errors that accumulate during computation. Gate fidelities hover around 99–99.5% for single-qubit operations and 95–99% for two-qubit gates, which, while impressive, still introduce significant errors in circuits with thousands of operations.

The fundamental challenge lies in the exponential scaling of quantum noise. With error rates above 0.1% per gate, quantum circuits can execute approximately 1,000 gates before noise overwhelms the signal. This constraint severely limits the depth and complexity of algorithms that can be successfully implemented on current hardware, necessitating the development of specialized NISQ algorithms that work within these constraints.
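The noise-scaling constraint follows from simple probability arithmetic. The short sketch below, which assumes a uniform and independent 0.1% error rate per gate (a simplification of real hardware noise), shows how quickly the chance of an error-free run decays with circuit size.

```python
# Back-of-the-envelope estimate, not a hardware model: assuming each gate fails
# independently with the same probability, the chance that an entire circuit runs
# without any error decays exponentially with the number of gates.

def circuit_success_probability(gate_error_rate: float, num_gates: int) -> float:
    """Probability that none of num_gates gates fails, assuming independent errors."""
    return (1.0 - gate_error_rate) ** num_gates

for n in (100, 1_000, 10_000):
    p = circuit_success_probability(gate_error_rate=0.001, num_gates=n)
    print(f"{n:>6} gates at 0.1% error per gate -> ~{p:.3%} error-free runs")

# Roughly 90%, 37%, and 0.005% respectively: beyond about a thousand gates,
# most shots contain at least one error, matching the depth limit quoted above.
```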
Variational quantum eigensolver
The variational quantum eigensolver represents one of the most successful NISQ algorithms, specifically designed for quantum chemistry applications. VQE tackles the fundamental problem of finding the ground state energy of molecular systems, a computation that scales exponentially with system size on classical computers but can potentially be solved in polynomial time on quantum devices.
Mathematical foundation and implementation
VQE operates on the variational principle of quantum mechanics, which states that the expectation value of the Hamiltonian in any trial wavefunction provides an upper bound on the true ground state energy. The algorithm constructs a parameterized quantum circuit, called an ansatz ∣ψ(θ)⟩, to approximate the ground state of a molecular Hamiltonian H. The quantum processor prepares the ansatz state and measures the Hamiltonian expectation value ⟨ψ(θ)∣H∣ψ(θ)⟩, while a classical optimizer iteratively adjusts the parameters θ to minimize the energy. This hybrid approach leverages quantum superposition to explore exponentially large molecular configuration spaces while relying on well-established classical optimization techniques.
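The hybrid quantum-classical loop can be sketched in a few lines. The example below simulates VQE classically with NumPy and SciPy on a hypothetical single-qubit toy Hamiltonian H = Z + 0.5·X with a one-parameter Ry ansatz; the Hamiltonian, ansatz, and optimizer choice are illustrative assumptions, and a real device would estimate the expectation value from repeated measurements rather than exact linear algebra.

```python
# Minimal VQE sketch: classical simulation of the quantum expectation-value step
# plus a classical optimizer adjusting the ansatz parameter.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = Z + 0.5 * X                              # toy stand-in for a molecular Hamiltonian

def ansatz(theta: float) -> np.ndarray:
    """|psi(theta)> = Ry(theta)|0>, a one-parameter trial state."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(params: np.ndarray) -> float:
    """Expectation value <psi(theta)|H|psi(theta)>, the quantity VQE minimizes."""
    psi = ansatz(params[0])
    return float(np.real(np.conj(psi) @ H @ psi))

result = minimize(energy, x0=np.array([0.1]), method="COBYLA")
exact = np.min(np.linalg.eigvalsh(H))        # true ground-state energy for comparison
print(f"VQE estimate: {result.fun:.6f}, exact ground state: {exact:.6f}")
```

For this toy problem the optimizer recovers the exact ground-state energy; on hardware, shot noise and device errors make the same loop an approximation whose quality depends on the ansatz and on error mitigation.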
Real-world applications and achievements
VQE has been successfully demonstrated on various molecular systems, from simple diatomic molecules like H₂ and LiH to more complex systems including water molecules and small organic compounds. Google's collaboration with Columbia University demonstrated VQE calculations on 16 qubits to study carbon atoms in diamond crystal structures, representing the largest quantum chemistry computation at that time.

The algorithm has proven particularly valuable for studying chemical reactions, transition states, and excited state properties. Recent implementations have achieved chemical accuracy for small molecules, demonstrating the potential for quantum advantage in materials discovery and drug development applications.
Scaling challenges and solutions
Despite its successes, VQE faces significant scaling challenges. The number of measurements required grows polynomially with the number of qubits, while the optimization landscape becomes increasingly complex for larger systems. The fragment molecular orbital approach combined with VQE has shown promise for addressing scalability, allowing efficient simulation of larger molecular systems by breaking them into manageable fragments.
Quantum approximate optimization algorithm
QAOA represents a paradigmatic NISQ algorithm for solving combinatorial optimization problems that plague industries from finance to logistics. Developed by Farhi and colleagues, QAOA encodes optimization problems as Ising Hamiltonians and uses alternating quantum evolution operators to explore solution spaces.
Algorithm structure and methodology
QAOA constructs a quantum circuit consisting of p alternating layers, each containing a cost Hamiltonian evolution exp(−iγₖH_C) followed by a mixer Hamiltonian evolution exp(−iβₖH_M). Classical optimization adjusts the angles (γ, β) to maximize the probability of measuring good solutions.
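A small simulation makes the alternating layer structure concrete. The sketch below runs depth-1 QAOA for Max Cut on a hypothetical 3-node triangle graph using an exact NumPy statevector; the graph, the p = 1 depth, and the coarse grid search over (γ, β) are illustrative choices, and hardware runs would estimate the expected cut value from measurement samples instead.

```python
# Minimal depth-1 QAOA sketch for Max Cut on a triangle graph (exact statevector).
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]   # triangle graph; its maximum cut has value 2
n = 3
dim = 2 ** n

def cut_value(z: int) -> int:
    """Number of edges cut by the bitstring encoded in the integer z."""
    bits = [(z >> q) & 1 for q in range(n)]
    return sum(bits[i] != bits[j] for i, j in edges)

costs = np.array([cut_value(z) for z in range(dim)], dtype=float)

def qaoa_state(gamma: float, beta: float) -> np.ndarray:
    """Prepare exp(-i beta H_M) exp(-i gamma H_C) |+...+> for a single QAOA layer."""
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)   # uniform superposition
    psi = np.exp(-1j * gamma * costs) * psi               # diagonal cost evolution
    for q in range(n):                                    # mixer: exp(-i beta X) per qubit
        psi = psi.reshape([2] * n)
        a = psi.take(0, axis=n - 1 - q)                   # amplitudes with qubit q = 0
        b = psi.take(1, axis=n - 1 - q)                   # amplitudes with qubit q = 1
        new_a = np.cos(beta) * a - 1j * np.sin(beta) * b
        new_b = np.cos(beta) * b - 1j * np.sin(beta) * a
        psi = np.stack([new_a, new_b], axis=n - 1 - q).reshape(dim)
    return psi

def expected_cut(gamma: float, beta: float) -> float:
    probs = np.abs(qaoa_state(gamma, beta)) ** 2
    return float(probs @ costs)

# coarse classical grid search over the two angles
best = max((expected_cut(g, b), g, b)
           for g in np.linspace(0, np.pi, 40)
           for b in np.linspace(0, np.pi, 40))
print(f"best expected cut value at p = 1: {best[0]:.3f} (optimum is 2)")
```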
Performance benchmarks and quantum advantage
Recent theoretical and experimental work has demonstrated QAOA's potential for quantum advantage on specific problem classes. For the Max Cut problem on random graphs, QAOA at depth p = 11 has been shown to outperform standard semidefinite programming algorithms. Even more remarkably, QAOA can exploit non-adiabatic quantum effects that classical algorithms cannot access, potentially circumventing fundamental limitations that constrain classical optimization methods.

Experimental implementations on quantum hardware have shown promising results for problems with up to 20–30 variables, though current hardware limitations restrict practical applications to relatively small problem sizes. The algorithm's performance improves with circuit depth p, but NISQ constraints limit the achievable depth, creating a fundamental trade-off between solution quality and hardware requirements.
Error mitigation: Making noisy quantum computing practical
Since NISQ devices lack full quantum error correction, error mitigation techniques become essential for extracting meaningful results from noisy quantum computations. These techniques operate through post-processing of measured data rather than actively correcting errors during computation, making them suitable for near-term hardware implementations.
Zero-noise extrapolation
Zero-noise extrapolation (ZNE) represents one of the most widely used error mitigation techniques, artificially amplifying circuit noise and extrapolating results to the zero-noise limit. The method assumes that errors scale predictably with noise levels, allowing researchers to fit polynomial or exponential functions to noisy data and infer noise-free results.

Recent implementations of purity-assisted ZNE have shown improved performance by incorporating additional information about quantum state degradation. This approach can extend ZNE's effectiveness to higher error regimes where conventional extrapolation methods fail, though it requires additional measurement overhead.
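In its simplest form, the extrapolation step is just a curve fit. The sketch below uses placeholder scale factors and expectation values standing in for data from noise-amplified circuit runs (for example via gate folding); only the fit-and-extrapolate step represents the technique itself.

```python
# Minimal zero-noise extrapolation: fit the measured expectation value as a function
# of the noise scale factor and evaluate the fit at zero noise.
import numpy as np

# noise scale factors at which the circuit was (hypothetically) executed
scale_factors = np.array([1.0, 2.0, 3.0])

# corresponding measured expectation values (placeholder numbers, degrading with noise)
noisy_values = np.array([0.742, 0.621, 0.517])

# fit a low-degree polynomial in the scale factor and evaluate it at scale factor 0
coeffs = np.polyfit(scale_factors, noisy_values, deg=2)
zero_noise_estimate = np.polyval(coeffs, 0.0)

print(f"extrapolated zero-noise expectation value: {zero_noise_estimate:.3f}")
```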
Symmetry verification and probabilistic error cancellation
Symmetry verification exploits conservation laws inherent in quantum systems to detect and correct errors. For quantum chemistry calculations, symmetries such as particle number conservation or spin conservation provide powerful error detection mechanisms. When measurement results violate these symmetries, they can be discarded or corrected through post-selection.

Probabilistic error cancellation reconstructs ideal quantum operations as linear combinations of noisy operations that can be implemented on hardware. While this approach can achieve zero bias in principle, the sampling overhead typically scales exponentially with error rates, limiting practical applications to relatively low-noise scenarios.
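A post-selection step for symmetry verification can be sketched in a few lines. In the example below, the 4-qubit register, the conserved particle number of 2, and the measurement counts are illustrative assumptions; the point is only that shots violating a known conserved quantity are discarded before expectation values are computed.

```python
# Minimal symmetry-verification post-selection on raw measurement counts.
from collections import Counter

expected_particle_number = 2   # e.g. two electrons mapped onto four spin-orbitals

# hypothetical measurement counts: bitstring -> number of shots
raw_counts = Counter({"0011": 480, "0101": 410, "0001": 55, "0111": 40, "1010": 15})

# keep only shots whose Hamming weight matches the conserved particle number
kept = {bits: shots for bits, shots in raw_counts.items()
        if bits.count("1") == expected_particle_number}
discarded = sum(raw_counts.values()) - sum(kept.values())

print(f"kept {sum(kept.values())} shots, discarded {discarded} symmetry-violating shots")
```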
Performance overhead and trade-offs
Error mitigation techniques inevitably increase measurement requirements, with overheads ranging from 2x to 10x or more depending on error rates and the specific method employed. This creates a fundamental trade-off between accuracy and experimental resources, requiring careful optimization for each application.

Recent benchmarking studies comparing different mitigation strategies have shown that symmetry verification often provides the best performance for chemistry applications, while ZNE excels for optimization problems with fewer inherent symmetries. The choice of mitigation strategy significantly impacts the overall algorithm performance and should be tailored to specific problem types and hardware characteristics.