# Quantum Benchmarking

## Understanding Algorithmic Qubits (#AQ)

A benchmark that measures what matters most: a system’s ability to successfully run your target quantum workloads

## Quantum Computers Are Complex, Predicting Their Value Doesn't Have To Be

#AQ is an application-based benchmark that aggregates performance across 6 widely known quantum algorithms relevant to the most promising near-term quantum use cases: Optimization, Quantum Simulation, and Quantum Machine Learning.

• Optimization — problems involving complex routing, sequencing, and more
  • Amplitude Estimation
  • Monte Carlo Simulation
• Quantum Simulation — understand the nature of the very small
  • Hamiltonian Simulation
  • Variational Quantum Eigensolver
• Quantum Machine Learning — draw inferences from patterns in data, at scale
  • Quantum Fourier Transform
  • Phase Estimation

These Near-Term Quantum Use Cases Are Widely Applicable to Multiple Industry Verticals*

[Chart: relevance of Optimization, Quantum Simulation, and Quantum Machine Learning across industry verticals, rated from 1 (lowest relevance) to 5 (highest relevance)]

*Based on algorithmic derivatives most commonly used for IonQ industry use cases

## Putting #AQ Into Practice

### A Single Metric, A Wealth Of Information

A computer's #AQ can reveal how the system will perform on the workloads that are most valuable to you, because it summarizes and analyzes results from multiple quantum algorithms. Here is what IonQ Aria's #AQ means from a practical lens.

• 6 instances of the most valuable quantum algorithms were run on IonQ Aria
• #AQ algorithms with up to ~600 entangling gates were run successfully
• #AQ algorithms were successfully run on up to 25 qubits
• Algorithm results were deemed successful if they achieved over 37% worst-case result fidelity

All of the information behind the #AQ benchmark can be summarized in a single chart that provides insight into how a system performs for a particular class of algorithms. By identifying the algorithmic classes you intend to use the system for, you can make a direct prediction about the performance of an algorithm with a specific gate width and gate depth.

## Translating #AQ to Real World Impact

Water was simulated on IonQ Harmony, running at #AQ 4, in 2020. The algorithm used 3 qubits across 3 parameters in the problem set and produced accurate results.
Lithium oxide, a chemical of interest in battery development, was simulated on IonQ Aria, running at #AQ 20, in 2022. The algorithm used 12 qubits across 72 parameters in the problem set and produced accurate results.

## Exponential Growth: Put it into Perspective

A quantum computer’s computational space, represented by the possible qubit states outlined below, doubles every time a single qubit is added to the system. Because #AQ measures a system’s useful qubits, an increase of #AQ 1 represents a doubling of that system’s computational space.
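The doubling is simply the exponential growth of an n-qubit state space. A quick sketch of the arithmetic (illustrative only, not part of the benchmark definition):

```python
# An n-qubit system spans 2**n basis states, so each additional
# algorithmic qubit doubles the computational space.
def computational_space(aq: int) -> int:
    """Number of possible basis states for a system with #AQ = aq."""
    return 2 ** aq

print(computational_space(1))    # 2 possible encoded states
print(computational_space(20))   # 1048576 states at #AQ 20
print(computational_space(21) // computational_space(20))  # 2: one more #AQ doubles the space
```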

As #AQ increases, the scale becomes hard to wrap your head around. Comparing two #AQ metrics shows the difference in their computational space through the difference in scale between two familiar objects.

### Compare The Scale Of The Computational Space

[Interactive comparison: if the computational space of #AQ 1 (2 possible encoded states) corresponds to the width of a paper clip, the computational space of #AQ 51 corresponds to a scale just smaller than the width of the solar system.]
## Qubits vs. Algorithmic Qubits

### Every #AQ Is Built With Qubits, But Not All Qubits Result In An #AQ

A system’s qubit count reveals information about the physical structure of the system, but it does not indicate the quality of the system, which is the largest indicator of utility. For a qubit to contribute to an algorithmic qubit, it must be able to run enough gates to successfully return useful results across the 6 algorithms in the #AQ definition. This is a high bar to clear, and it is the reason many systems’ #AQ values are significantly lower than their physical qubit counts.

## IonQ Benchmark Beliefs

At IonQ, We Believe Benchmarks Should:

### Measure Real World Utility

For most quantum computing users, a benchmark is only as useful as its ability to predict how a quantum computer will perform on a task that has value for them. A focus on real-world utility is at the heart of IonQ's approach.

### Be Easily Understood

Benchmarks should be a communication tool that can clearly and easily convey information about a complex system. IonQ aims to provide simple benchmarks that aggregate information across a variety of practical measurements.

### Test Critical Aspects Of Performance

Any benchmark used at IonQ is designed to measure the full quantum system, including the classical hardware stack, optimization tools, error mitigation techniques, and of course, the quantum gate and measurement operations. We believe this is the way to most accurately represent the performance our customers expect.

### Be Easily Verifiable

Benchmarks are only valuable if the cost, time, and classical compute resources required for validation are practical. We believe that a precisely defined benchmark that anyone can run provides more utility than a resource-intensive, theoretical proof of quantum advantage over classical compute.

## Measuring Algorithmic Qubits (#AQ)

Step 1

### Define and Run the Algorithms

In defining the #AQ metric, we draw significant inspiration from the recent benchmarking study from the QED-C. Like that study, we start by defining benchmarks based on instances of several currently popular quantum algorithms.

Step 2

### Organize And Aggregate The Results

Building upon previous work on volumetric benchmarking, we then represent the success probability of the circuits corresponding to these algorithms as colored circles placed on a 2D plot whose axes are the 'depth' and the 'width' of the circuit corresponding to the algorithm instance.

Step 3

### Release Updated Versions Of #AQ

New benchmarking suites should be released regularly and identified with an #AQ version number. The #AQ for a particular quantum computer should reference the version number under which it was evaluated. Ideally, new versions should lead to #AQ values that are consistent with the existing set of benchmarks and do not deviate drastically, but new benchmarks will cause differences, and that is the intention: representing the changing needs of customers.

## #AQ Version 1.0 Definition:

1. This repository defines circuits corresponding to instances of several quantum algorithms. The AQ.md document in the repository outlines the algorithms that must be run to calculate #AQ.
2. The circuits are compiled to a basis of CX, Rx, Ry, Rz to count the number of CX gates. For version 1.0, the transpiler in Qiskit version 0.34.2 must be used with these basis gates, with the seed_transpiler option set to 0, and no other options set.
   1. A circuit can be submitted before or after the above compilation to a quantum computer. By quantum computer, we refer here to the entire quantum computing stack, including software that turns high-level gates into native gates and software that implements error mitigation or detection.
   2. If the same algorithm instance can be implemented in more than one way using different numbers of ancilla qubits, those must be considered as separate circuits for the purposes of benchmarking. A given version of the repository will specify the implementation and number of qubits for the algorithm instance.
3. If the oracle uses qubits that return to the same state as at the beginning of the computation (such as ancilla qubits), these qubits must be traced out before computing the success metric.
4. Any further optimization is allowed as long as (a) the circuit to be executed on the quantum computer implements the same unitary operator as submitted, and (b) the optimizer does not target any specific benchmark circuit.
   1. These optimizations may reduce the depth of the circuit that is actually executed. Since benchmarking is ultimately for the whole quantum computing system, this is acceptable. However, the final depth of the executed circuit (the number and description of entangling gate operations) must be explicitly provided.
   2. Provision (b) prevents the optimizer from turning on a special-purpose module solely for a particular benchmark, thus preventing gaming of the benchmark to a certain extent.
5. Error mitigation techniques like randomized compilation and post-processing must be reported if they are used. Post-processing techniques may not use knowledge of the output distribution over computational basis states.
6. The success of each circuit run on the quantum computer is measured by the classical fidelity $F_c$, defined against the ideal output probability distribution:
   $F_{c}\left(P_{ideal}, P_{output}\right)=\left(\sum_{x} \sqrt{P_{output}(x)\, P_{ideal}(x)}\right)^{2}$
   where $P_{ideal}$ is the ideal output probability distribution expected from the circuit without any errors, $P_{output}$ is the measured output probability from the quantum computer, and $x$ ranges over the output results.
7. The definition of #AQ is as follows:
   • Let the set of circuits in the benchmark suite be denoted by $C$.
   • Locate each circuit $c \in C$ as a point on the 2D plot by its
     • width, $w_{c}$ = number of qubits, and
     • depth, $d_{c}$ = number of CX gates.
   • Let $F_{c}$ denote the success probability (classical fidelity) for circuit $c$.
   • A circuit passes if $F_{c}-\epsilon_{c}>t$, where $\epsilon_{c}=\sqrt{\frac{F_{c}\left(1-F_{c}\right)}{s_{c}}}$ is the statistical error based on the number of shots $s_{c}$, and $t=1/e \approx 0.37$ is the threshold.
   • Then, #AQ $=N$, where
   $N=\max \left\{n : \left(F_{c}-\epsilon_{c}>t\right) \ \forall\, c \in C \text { with } w_{c} \leq n \text { and } d_{c} \leq n^{2}\right\}$
8. The data should be presented as a volumetric plot. The code to plot this is provided in our repository.
9. An additional accompanying table should list for each circuit in the benchmark suite the number of qubits, the number of each class of native gates used to execute the circuit, the number of repetitions $s_c$ of each circuit in order to calculate $F_c$, and $F_c$ for each circuit.
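The scoring logic in steps 6 and 7 can be sketched in plain Python. This is a minimal illustration of the definition only, not the official implementation from the repository, and the `results` data below is hypothetical:

```python
import math

def classical_fidelity(p_ideal, p_output):
    """F_c = (sum over x of sqrt(P_output(x) * P_ideal(x)))**2, for
    probability distributions given as dicts mapping bitstring -> probability."""
    keys = set(p_ideal) | set(p_output)
    return sum(math.sqrt(p_output.get(x, 0.0) * p_ideal.get(x, 0.0)) for x in keys) ** 2

def circuit_passes(f_c, shots, t=1 / math.e):
    """A circuit passes if F_c minus its shot-noise error exceeds t = 1/e."""
    eps = math.sqrt(f_c * (1.0 - f_c) / shots)
    return f_c - eps > t

def algorithmic_qubits(circuits, t=1 / math.e):
    """#AQ = max n such that every circuit with width <= n and depth <= n**2 passes.
    Each circuit is a dict with keys: width, depth (CX count), fidelity, shots."""
    best = 0
    max_width = max((c["width"] for c in circuits), default=0)
    for n in range(1, max_width + 1):
        # The "box" of circuits that must pass for #AQ = n
        box = [c for c in circuits if c["width"] <= n and c["depth"] <= n * n]
        if all(circuit_passes(c["fidelity"], c["shots"], t) for c in box):
            best = n
    return best

# Hypothetical benchmark results, for illustration only
results = [
    {"width": 2, "depth": 3,  "fidelity": 0.95, "shots": 1000},
    {"width": 3, "depth": 8,  "fidelity": 0.80, "shots": 1000},
    {"width": 4, "depth": 15, "fidelity": 0.30, "shots": 1000},  # below the 1/e threshold
]
print(algorithmic_qubits(results))  # 3: the width-4 circuit fails, capping #AQ at 3
```

Note that the pass/fail condition is monotone: once a failing circuit enters the $n \times n^2$ box, it stays in every larger box, so the loop finds the largest passing $n$.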