# Quantum Glossary

Explore and learn common terminology in the quantum computing world

Algorithmic benchmarking is IonQ’s preferred benchmarking method. It uses algorithms or subroutines that correlate to real-world applications to assess quantum computer performance, much like LINPACK does for classical supercomputers.

It’s impossible to evaluate the computational power of a quantum system purely by its physical qubit count. Noise, connectivity limitations, and other sources of error limit the number of useful operations one can perform, and below a certain threshold, not all qubits can be said to be useful or usable for computation at the same time. We use the Algorithmic Qubit metric as a way to describe the number of “useful” qubits in a system.

Barium is a silvery alkaline-earth metal with atomic number 56. IonQ has recently started exploring barium as an alternative qubit species because its slightly more complex structure offers higher fundamental gate and readout fidelities when controlled correctly, and because it primarily interacts with light in the visible spectrum, allowing additional opportunities for us to use standard fiber optic technologies in parts of the system.

The number of entangling gates performed in a given quantum circuit, i.e. a quantum circuit that uses thirty entangling gates would be said to have a circuit depth of thirty, regardless of how many qubits it uses to do so.

The number of entangled qubits in a single quantum circuit, i.e. a quantum circuit that entangles six qubits would be said to have a circuit width of six, regardless of how many gates it uses to do so.
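
Both metrics can be computed with a quick sketch; the gate-list representation below is hypothetical, for illustration only, and not any particular SDK’s format:

```python
# A toy circuit represented as a list of gates, each gate listing
# the qubits it acts on. (Hypothetical format, for illustration only.)
circuit = [
    ("h", [0]),        # single-qubit gate
    ("cnot", [0, 1]),  # entangling gate
    ("cnot", [1, 2]),  # entangling gate
    ("x", [2]),        # single-qubit gate
    ("cnot", [0, 2]),  # entangling gate
]

# Circuit depth, as defined above: the number of entangling gates.
depth = sum(1 for _name, qubits in circuit if len(qubits) == 2)

# Circuit width: the number of distinct qubits the circuit entangles.
width = len({q for _name, qubits in circuit if len(qubits) == 2 for q in qubits})

print(depth, width)  # prints "3 3": three entangling gates across qubits 0, 1, 2
```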

This is what most people would just call “a computer.” We call it a classical computer because its approach to storing and calculating information uses *classical* mechanics: information is stored as a 0 or a 1, and all operations are simple combinations of these basic building blocks. In most classical computers, a binary bit is represented by the presence or absence of electrical current in a semiconductor device called a transistor.

Sometimes described in technical specs as “T2 time,” coherence time is how long a qubit can maintain coherent phase — that is, how long it can successfully maintain the critical quantum properties, like superposition and entanglement, necessary for computation. Without these, you could use the qubits like classical bits, but there wouldn’t be much utility in that. Ions have a major advantage over many other qubit technologies here, with coherence times measured in minutes, potentially thousands of times longer than other platforms.

The complexity of an algorithm or problem in computer science can be defined as the quantifiable amount of computing resources required to run it, usually described in terms of time and memory requirements. Certain types of problems require an exponentially greater amount of classical resources (either time, memory, or both) in order to solve them as more variables are added, eventually becoming impossible at large scales. Some (but not all) of these classically intractable problems can be unlocked with the power of quantum computing.
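
One concrete illustration of this exponential growth: simulating an n-qubit quantum state on a classical computer requires storing 2^n complex amplitudes. A back-of-the-envelope sketch:

```python
# Memory needed to hold the full state vector of an n-qubit system,
# assuming one 16-byte complex number (complex128) per amplitude.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    print(n, state_vector_bytes(n))
# 10 qubits fit in 16 KB; 30 qubits need ~17 GB; 50 qubits need ~18 PB.
```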

Sometimes described as *topology*, “connectivity” describes which qubits can perform gates with which other qubits within a quantum computer. Trapped ions have the benefit of *all-to-all* connectivity, where each qubit can be directly entangled with any other qubit. Many other platforms are limited in their connectivity, which creates additional overhead and potentially introduces error.

There are many ways to make a quantum computer — trapped ions, neutral atoms, superconducting circuits, photonics, nitrogen vacancies in diamond, and more. In 2000, physicist David DiVincenzo proposed five conditions that are necessary for a physical system to be considered a viable quantum computer: it has to be **scalable** (can plausibly expand to tens or hundreds of qubits), you have to be able to **initialize** the qubits to the same state, perform a **universal** set of quantum gates (i.e. not just annealing), and allow for individual **measurement**. The qubits also have to have a **long coherence time**, long enough for initialization, gates, and readout to actually be performed.

A property of quantum mechanics where two particles, even when physically separated, behave in ways conditionally dependent on each other. This phenomenon can be harnessed for certain types of quantum logic gates in quantum information science, and is critical to expressing a quantum computer’s full power.

Sometimes also called a logical qubit, an error-corrected qubit is a group of physical qubits that are logically combined, using techniques called *error correction* encoding, to act as one much higher-quality qubit for computational purposes.

EGT is a proprietary IonQ ion trap technology that enables multi-core operation. It achieves this by allowing tighter ion confinement and reduced heating, which in turn allow more precise qubit control.

Fault tolerance refers to a system’s ability to accommodate errors in its operation without losing the information it is processing and/or storing. To achieve fault tolerance in quantum computing, we need three things: more qubits, higher-quality qubits, and error correction, ultimately allowing for much larger, longer, and more complex computation. This is considered the endgame for quantum computing, as a scalable fault tolerant quantum computer has the potential to unlock the ability to solve problems in physics, mathematics, computer science, and physical sciences that are impossible to solve today.

Gate fidelity is a way to describe how much noise (or error) is introduced in each operation during a quantum algorithm. Fidelity is a common way of describing this, defined as 100% minus the error rate; i.e. a *fidelity* of 99% is the same as an *error rate* of 1%.
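
As a back-of-the-envelope illustration of why per-gate fidelity matters, errors compound multiplicatively over a circuit (a simplified model assuming independent, identical errors per gate):

```python
# Rough estimate of whole-circuit success probability, assuming each
# gate fails independently with the same error rate (a simplification).
def circuit_fidelity(gate_fidelity: float, n_gates: int) -> float:
    return gate_fidelity ** n_gates

print(circuit_fidelity(0.99, 100))   # ~0.366: 100 gates at 99% fidelity
print(circuit_fidelity(0.999, 100))  # ~0.905: the same circuit at 99.9%
```

Even a tenfold improvement in error rate (1% to 0.1%) changes a 100-gate circuit from mostly failing to mostly succeeding.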

How long it takes to perform a quantum gate. While raw gate speed could become a factor for time-to-solution in a fault-tolerant computer, the most important consideration for gate speed in NISQ systems is that it is fast enough for the computation to complete before the qubits lose *coherence*. That is, the combined duration of all the gates in the algorithm needs to be shorter than the qubit *coherence time*.
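
This constraint amounts to a simple time-budget check; the numbers below are illustrative, not measured specs of any system:

```python
# Does a circuit finish before the qubits decohere? Illustrative numbers only.
def fits_in_coherence(n_gates: int, gate_duration_s: float, t2_s: float) -> bool:
    return n_gates * gate_duration_s < t2_s

# e.g. 1,000 gates at 100 microseconds each, against a 1-second T2 time:
print(fits_in_coherence(1_000, 100e-6, 1.0))   # True  (0.1 s < 1 s)
print(fits_in_coherence(50_000, 100e-6, 1.0))  # False (5 s > 1 s)
```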

An *Ion Trap* or *Ion Trap Chip* is the heart of a trapped-ion quantum computer. It contains many microfabricated electrodes that together create a field that holds (“traps”) the ions in place, ready for computation. Imagine a maglev train, where the train is a microscopic line of trapped ions — the trap is the apparatus responsible for levitation. While ion traps might seem exotic, they can actually be produced with commercial fabrication technology.

At the end of a quantum computation, the answer is measured. The exponentially large computational space available during computation collapses down to a binary string, with each qubit going from a superposition state to a 1 or a 0. The state of each qubit just before measurement determines the probability of it collapsing into each of those two states. Measurement can be a confusing part of quantum computation: because it forces superpositions to collapse probabilistically, people sometimes assume that all quantum computation is probabilistic in nature, but this is not true. Ignoring noise, every step in a quantum computation up to measurement is completely deterministic.
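
For a single qubit this can be sketched as follows: the (deterministic) pre-measurement amplitudes fix the probability of each outcome, and only the final sampling step is random.

```python
import random

# Pre-measurement state of one qubit: amplitudes for |0> and |1>.
# An equal superposition has amplitude 1/sqrt(2) for each.
amp0, amp1 = 2 ** -0.5, 2 ** -0.5

# The Born rule: each outcome's probability is the squared magnitude
# of its amplitude.
p0 = abs(amp0) ** 2  # 0.5
p1 = abs(amp1) ** 2  # 0.5

# Measurement collapses the superposition to a single classical bit.
outcome = 0 if random.random() < p0 else 1
print(outcome)  # 0 or 1, each with probability 0.5
```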

One component of IonQ’s technical roadmap, a multi-core QPU describes a single quantum processor that has multiple quantum compute zones — much like a multi-core processor in a classical computer — that can compute in parallel and be entangled via moving and recombining ion chains.

Coined by John Preskill, the Noisy Intermediate Scale Quantum (NISQ) Era is considered to be the first era of quantum computation, where modestly-sized devices with tens to hundreds of noisy qubits may be able to provide early quantum advantage, but will still be limited by noise and size. We are beginning to enter this era now, and will leave it when we achieve fault tolerance at scale.

For quantum computers to compute correctly, they must be isolated from the environment around them. Any interaction with the environment, or imperfection in the control systems that perform gates, introduces *noise*. As noise accumulates, the overall likelihood that an algorithm will produce a successful answer goes down. With too much noise, a quantum computer is no longer useful at all.

The hardware implementation of a qubit in a quantum computer. There are many physical qubit platforms, but ions have proven themselves as the ideal platform for quantum computation, as they offer unique benefits over other platforms such as superior stability and higher fundamental performance.

An algorithmically-focused benchmarking suite that determines a quantum computer’s ability to execute a variety of quantum circuits related to real-world problems such as chemistry, optimization, finance, and cryptography.

Quantum advantage is the name for the improvement in compute time or resource needs that a quantum computer can provide over classical computers. Sometimes this improvement is effectively infinite—a quantum solution to a problem that a classical computer could never solve—but often the improvement is expected to be merely practical: a classical computer *could* solve the same problem, but less accurately, more slowly, or with greater resource needs. All demonstrations of quantum advantage to date have been on “academic” problems that are exciting proof of progress in quantum computing, but do not offer much by way of *practical* advantage for commercially-relevant problems.

A quantum algorithm or quantum program is a series of quantum logic gates that together solve a specific problem. An algorithm might be a single quantum circuit or a collection of several. Much like classical algorithms, these can be strung together to make larger and more complex algorithms.

The most fundamental part of a quantum computer, qubits are the quantum equivalent of bits in classical computing. Unlike a bit, which can only exist as a 0 or a 1, a qubit can exist in a superposition of these states and can be entangled with other qubits, unlocking much more computational ability.

A collection of quantum logic gates to be run in a specific order on a given set of qubits.

Benchmarking is the process of running quantum programs on a computer to characterize its performance. Benchmarking can be thought of as a way to combine lower-level metrics like gate error, connectivity, and more into a single metric that describes the performance of the *whole system* in an accessible way—much like LINPACK does for classical supercomputers.

The quantum equivalent of classical computer science, QIS is an interdisciplinary field sitting at the intersection of computer science, mathematics, and physics, that seeks to understand the analysis, processing, and transmission of information using quantum systems. This field encompasses quantum computing as well as adjacent technologies like quantum sensing and quantum networking.

To perform a computation with a quantum computer, we use what we call *gates* to manipulate the state of qubits, including putting them in superposition states and creating entanglement. Gates can act on one or many qubits.
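
As a minimal sketch of a gate acting on a qubit, here is the Hadamard gate (a standard single-qubit gate) applied to a qubit starting in the |0⟩ state, using plain Python lists as amplitude vectors:

```python
# A single-qubit gate is a 2x2 unitary matrix acting on the qubit's
# two-amplitude state vector. The Hadamard gate puts |0> into an
# equal superposition of |0> and |1>.
s = 2 ** -0.5  # 1/sqrt(2)
H = [[s, s],
     [s, -s]]

def apply_gate(gate, state):
    """Multiply a 2x2 gate matrix into a 2-amplitude state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

ket0 = [1.0, 0.0]              # the qubit starts in |0>
superpos = apply_gate(H, ket0)
print(superpos)                # [0.707..., 0.707...]: equal superposition
```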

Quantum mechanics is the branch of physics that describes the physical properties of the smallest and most fundamental building blocks of nature: atoms and subatomic particles. The unique rules of quantum mechanics can be harnessed for the purpose of information processing.

A QPU is a common industry name for a complete system made up of physical qubits and the apparatus for controlling them — for IonQ, this consists of an ion trap chip, the qubits themselves, the lasers that control the gates, and other supporting electronics and apparatus.

Quantum Volume is a metric that uses a randomized benchmarking technique to determine a quantum computer’s computational ability by finding the largest square circuit of a specific format that the machine can successfully run. Quantum Volume also allows us to compare the performance of quantum computers across different architectures and methods.
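
In the commonly used formulation introduced by IBM, if N is the width (and depth) of the largest square circuit a machine can reliably pass, its Quantum Volume is 2^N:

```python
# Quantum Volume from the largest passing square-circuit size N: QV = 2^N.
def quantum_volume(largest_passing_n: int) -> int:
    return 2 ** largest_passing_n

print(quantum_volume(5))  # 32: a machine passing 5-qubit square circuits
```

The exponential form reflects that each additional qubit doubles the computational space the benchmark exercises.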

Counting the number of physical qubits in a quantum computer is the simplest but also least useful yardstick for understanding the power of a given system. Because qubit *quality* matters just as much as qubit count when it comes to actual performance, count must always be qualified with how good the qubits are, both by themselves and when working together.

Sometimes described in technical specs as “T1 time,” qubit lifetime is how long a qubit can be used as a tool for computation. This metric is critical for synthetic qubit technologies like superconductors, which only stay qubits for microseconds or milliseconds, but is not of much concern for trapped ion systems. Because our qubits are naturally quantum—they’re atoms—our qubit lifetime is effectively infinite, and only limited by our ability to trap and control them for that long.

Randomized benchmarking refers to any benchmarking technique that uses randomly-generated quantum circuits to characterize a system’s performance. While such techniques have great value in creating rigorous, cross-applicable tools for comparing quantum computing performance, their random nature means they can only directionally predict performance on real-world problems, as opposed to algorithmic benchmarks, which can more directly provide such predictions. Quantum Volume, Gate Set Tomography, and Cross-entropy Benchmarking (XEB) are examples of randomized benchmarks.

Sometimes shortened to “1Q Fidelity,” this metric describes the error rate of a single-qubit gate within a quantum computer. When evaluating single-qubit fidelity, it’s important to note whether the measurement described is the *best* fidelity measured, the *average* fidelity (across many qubits or on a single qubit), or something else.

A quantum circuit that has a width of N qubits and a depth of N² gates. Successfully running a “square” circuit is often considered the minimum requirement for a quantum computer to be able to say that it can use all of its qubits effectively.

SPAM (state preparation and measurement) error is the error accumulated at the beginning and end of a quantum algorithm, when setting the qubits to their initial state and then reading them out at the end. Unlike 1Q or 2Q error, SPAM error only compounds within an algorithm if it includes measurement or reset operations in the middle of the computation, which few quantum computers currently support.
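
A simplified error-budget sketch of the difference: SPAM error is paid once per run, while gate error compounds with circuit length (assuming independent errors and no mid-circuit measurement):

```python
# Estimated success probability of one circuit run, with SPAM applied
# once and gate fidelity compounding per gate (independent-error model).
def run_fidelity(spam_fidelity: float, gate_fidelity: float, n_gates: int) -> float:
    return spam_fidelity * (gate_fidelity ** n_gates)

print(run_fidelity(0.99, 0.999, 10))    # SPAM dominates a short circuit
print(run_fidelity(0.99, 0.999, 1000))  # gate error dominates a long one
```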

Superconducting qubits are a qubit implementation that uses specialized silicon-fabricated chips that act as “artificial atoms” when cooled to ultracold temperatures. The most commonly claimed advantages of superconducting qubits include the speed of executing logical operations, as well as the fact that they can be built using existing fabrication methods. The downsides are that they are subject to fabrication error, have short coherence times and limited gate connectivity, and require cooling to near absolute zero (difficult to scale outside of a single cryogenic setup).

A key element of quantum computing’s advantage over classical computation, superposition is when a system exists in a combination of multiple states at the same time. It’s often described as the qubit being able to be a “zero and a one simultaneously,” but the reality is even more powerful—imagine being able to point anywhere on the globe, versus only being able to point to the north or south pole.
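
The globe analogy is literal: any single-qubit pure state can be written with two angles, like latitude and longitude on a sphere (the Bloch sphere). A small sketch using the standard parametrization:

```python
import math

# Any single-qubit pure state can be described by two angles (theta, phi):
#   |psi> = cos(theta/2)|0> + e^(i*phi) * sin(theta/2)|1>
# theta plays the role of latitude, phi of longitude, on the Bloch sphere.
def bloch_state(theta: float, phi: float):
    return (math.cos(theta / 2),
            complex(math.cos(phi), math.sin(phi)) * math.sin(theta / 2))

print(bloch_state(0, 0))            # the "north pole": exactly |0>
print(bloch_state(math.pi, 0))      # the "south pole": |1> (up to rounding)
print(bloch_state(math.pi / 2, 0))  # the "equator": an equal superposition
```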

Trapped ions are a qubit implementation of quantum computing that harnesses charged atomic particles (ions) by confining and suspending them in vacuum with an ion trap chip, and manipulating them with lasers. IonQ’s systems are built with trapped ions. The primary advantages of trapped ions include long coherence times, all-to-all connectivity, and high gate fidelity.

Sometimes shortened to “2Q Fidelity,” this metric describes the error rate of a two-qubit (entangling) gate within a quantum computer. When evaluating two-qubit fidelity, it’s important to note whether the measurement described is the *best* fidelity measured, the *average* fidelity (across many qubit pairs or on a single pair), or something else.

Ytterbium is a silvery rare-earth metal in the lanthanide family with the atomic number 70. All of IonQ’s current commercial systems use ytterbium ions as qubits. Ytterbium is well-suited for qubits because of its electronic structure, which can be effectively manipulated with only a few different wavelengths of laser light.