Last week, a team from the Quantum Economic Development Consortium (QED-C) released *Application-Oriented Performance Benchmarks for Quantum Computing* ^{1}, introducing the first version of an open-source suite of benchmarks for quantum computing hardware. The paper used the still-evolving QED-C benchmarks to evaluate systems from many of the leading quantum computing hardware developers, including **IonQ’s latest trapped ion system, which outperformed all other devices tested**.

Before we dig into these impressive results, let’s cover a bit of backstory on these new “application-oriented” benchmarks, and the reasons why we think they provide the best view into quantum computing power.

### Benchmarking a Quantum Computer

Having good tools to understand the performance of a given quantum computer is critical to the rapidly evolving quantum computing field. Effective benchmarks let us compare different hardware apples-to-apples and understand a given computer’s ability to solve meaningful problems.

In the early days, the industry relied heavily on qubit count as the standard benchmark for progress and power. When the best systems in the world had only a few qubits, this was a reasonable metric, because no one expected these systems to compute much of anything — they were still lab experiments.

But as qubit count grows, this metric becomes increasingly meaningless, because it doesn’t tell the whole story of a system’s abilities or its usefulness in solving real-world applications. Qubit *quality* matters just as much as qubit *count*, and fewer high-quality qubits can often do more than many low-quality ones. As Dominik Andrzejczuk of Atmos Ventures puts it in *The Quantum Quantity Hype*, “The lower the error rates, the fewer physical qubits you actually need. [...] a device with 100 physical qubits and a 0.1% error rate can solve more problems than a device with one million physical qubits and a 1% error rate.”

As hardware and understanding matured, “volumetric” metrics like quantum volume came into vogue, attempting to address the shortcomings of raw qubit count by tempering it with error rate. Under volumetric benchmarks, additional qubits only count toward the total if they’re considered useful, where “useful” means being able to run a specific random circuit of equal width and depth, with *width* referring to the number of qubits in the circuit and *depth* referring to the number of sequential layers of entangling gates applied across them.
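To make that definition concrete, here is a minimal Python sketch of the idea. It is a toy model, not the official heavy-output protocol: it simply estimates, from a device’s two-qubit error rate, the largest square (width equals depth) circuit whose overall fidelity stays above a pass threshold, and reports the conventional quantum volume of 2^n. The threshold and gate-count model are illustrative assumptions.

```python
def largest_square_circuit(two_qubit_error: float, threshold: float = 2 / 3) -> int:
    """Toy estimate (not the official heavy-output protocol): the largest n
    for which a width-n, depth-n random circuit (~n*n/2 two-qubit gates)
    keeps enough fidelity to clear an illustrative pass threshold."""
    n = 1
    while (1 - two_qubit_error) ** ((n + 1) ** 2 / 2) >= threshold:
        n += 1
    return n

def quantum_volume(n: int) -> int:
    # Quantum volume is conventionally reported as 2^n for the largest
    # square circuit (width == depth == n) a device runs successfully.
    return 2 ** n

# Fewer, better qubits can beat many noisy ones:
n_good = largest_square_circuit(0.001)  # 0.1% two-qubit error rate
n_bad = largest_square_circuit(0.01)    # 1% two-qubit error rate
print(n_good, quantum_volume(n_good))
print(n_bad, quantum_volume(n_bad))
```

Even in this crude model, a tenfold drop in error rate lets dramatically larger square circuits pass, echoing the quote above about quality beating quantity.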

While volumetric benchmarking is a large step in the right direction (it takes error rate into account, provides a specific benchmarking protocol, and enables an apples-to-apples comparison), its applicability to practically useful algorithms still leaves a lot to be desired.

Notably, quantum volume and similar benchmarks have no application *outside* of benchmarking, and as such, their structure can only directionally predict real-world algorithmic performance. Few real-world quantum algorithms, especially the ones we expect to be able to run earliest, actually have a “volumetric” structure of circuits with equal width and depth. Some are shallower, and most are much deeper.

More importantly, quantum volume stops being useful precisely when benchmarking becomes most critical: because it requires classical simulation of quantum circuits to validate, it becomes increasingly difficult to use as compute power scales, and it *cannot* be measured once the performance of a quantum computer exceeds what classical computers can simulate. This moment, which is likely to correspond with the time when quantum computers begin making a real-world impact, is exactly when we need effective benchmarks the most.
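That simulation wall is easy to quantify: a full state-vector simulation must store 2^n complex amplitudes, so the memory required doubles with every added qubit. A rough sketch (assuming double-precision amplitudes at 16 bytes each; real simulators vary):

```python
def statevector_bytes(n_qubits: int) -> int:
    # A full state vector holds 2^n complex amplitudes; at double
    # precision each amplitude takes 16 bytes, so memory doubles
    # with every qubit added.
    return (2 ** n_qubits) * 16

for n in (30, 40, 50):
    print(f"{n} qubits: {statevector_bytes(n) / 1e9:,.0f} GB")
```

Around 30 qubits the state fits in a workstation’s RAM; by 50 qubits it is in the tens of petabytes, beyond any single classical machine.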

A better approach is to create an *application-oriented* suite of benchmarks, much like the SPEC or LINPACK suites for classical computers: a set of quantum algorithms with known real-world applications. In fact, IonQ has been advocating for and using such an approach since the earliest days of the company, including when we benchmarked our 11-qubit system ^{2} and when evaluating the UMD-based system ^{3} that would become the blueprint for the earliest IonQ computers.

With that in mind, IonQ and several other QED-C members initiated a project to make an application-oriented suite as a part of the QED-C Standards and Performance Metrics Technical Advisory Committee (Standards TAC). Once the working group determined the benchmarks that were to be used, we ran this first version of the suite on both our 11-qubit cloud system and our latest-generation system, which is currently available in private beta to select partners.

The QED-C Standards TAC has now released *Application-Oriented Performance Benchmarks for Quantum Computing*, introducing the first version of this suite and using it to evaluate the hardware performance of quantum computers from many of the leaders in the space: IonQ, IBM, Rigetti, and Honeywell.

### The Benchmarks, and How We Stack Up

Two important benchmark algorithms in the suite are quantum phase estimation (QPE) and quantum amplitude estimation (QAE). QPE is a core component of molecular simulation and finance applications; on a classical computer, the underlying problem can only be solved by “brute force,” with exponentially exploding resource requirements, whereas a quantum computer offers an exponential speedup. QAE can offer a quadratic speedup in Monte Carlo simulations, such as those used by IonQ, Goldman Sachs, and QC Ware in a recent research collaboration, where it provides substantial speedup over its classical analogue for financial portfolio optimization. To date, IonQ’s latest quantum computer is the only system that has successfully run these algorithms.
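The quadratic advantage is about how estimation error shrinks with work: classical Monte Carlo needs on the order of 1/ε² samples to reach precision ε, while amplitude estimation needs on the order of 1/ε oracle queries. A back-of-the-envelope sketch, with constant factors deliberately omitted:

```python
import math

def classical_samples(eps: float) -> int:
    # Monte Carlo error shrinks like 1/sqrt(N), so N ~ 1/eps^2 samples.
    return math.ceil(1 / eps ** 2)

def qae_queries(eps: float) -> int:
    # Amplitude estimation error shrinks like 1/N, so N ~ 1/eps queries.
    return math.ceil(1 / eps)

for eps in (1e-2, 1e-3, 1e-4):
    print(f"eps={eps}: classical ~{classical_samples(eps):,}, QAE ~{qae_queries(eps):,}")
```

At a target precision of 10⁻⁴, that is roughly a hundred million classical samples versus ten thousand quantum queries, which is why risk-analysis workloads are a natural early target.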

The benchmark suite also includes canonical “textbook” algorithms such as the Bernstein-Vazirani (BV) algorithm, which finds a hidden number provably more efficiently than any classical computer can, and may have applications in quantum cryptanalysis. We ran the benchmark on up to 21 qubits, and for the largest BV test, IonQ’s quantum computer identified the correct answer in a single attempt 70% of the time. For comparison, a classical computer would get the correct answer in a single attempt only 0.0001% of the time. IonQ’s current-generation, 11-qubit quantum computer can execute the BV algorithm on all 11 qubits with a similar success rate (78%), but because the number of possible hidden numbers (bit strings) scales exponentially with qubit count, the next-generation system’s result represents an improvement of several orders of magnitude.
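To see where those odds come from, here is a small classical sketch of the BV problem, assuming a hypothetical 20-bit hidden string (a 21-qubit BV circuit, counting the ancilla). Classically, recovering the string requires one oracle query per bit, and a single blind guess succeeds with probability 1/2²⁰; the quantum circuit recovers it in a single oracle query.

```python
import random

def oracle(s: int, x: int) -> int:
    # Bernstein-Vazirani oracle: f(x) = s . x mod 2 (bitwise inner product).
    return bin(s & x).count("1") % 2

def classical_recover(s: int, n: int) -> int:
    # Classically, recovering an n-bit s takes n queries: probe each bit.
    recovered = 0
    for i in range(n):
        recovered |= oracle(s, 1 << i) << i
    return recovered

n = 20                          # 20-bit string -> 21-qubit BV circuit
s = random.getrandbits(n)       # the hidden number
assert classical_recover(s, n) == s  # n queries, deterministic success
# A single blind classical guess succeeds with probability 1 / 2^20,
# roughly 0.0001%, matching the comparison above.
print(f"{1 / 2 ** n:.7%}")
```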

What’s more, while these benchmark results are the best of any system evaluated in the paper, we’re not done tuning and optimizing our latest system for performance. While it’s already running algorithms via the cloud for select partners like QC Ware, Goldman Sachs, Fidelity Center for Applied Technology and more, we expect that we can make it even more performant in the coming weeks and months.

Just like our latest system, the QED-C benchmarking suite is still in its early days, and it’s only getting better. As both hardware and algorithms mature, we look forward to collaborating with our partners at the QED-C to help create newer, better, and more applicable ways to evaluate these computers and encourage their use to solve impactful, real-world problems.

*Special thanks to Sonika Johri, Jason Nguyen, Neal Pisenti, Ksenia Sosnova, Ken Wright, Luning Zhao, and all of the IonQ engineers and physicists who supported the effort to develop these benchmarks and run them on our hardware.*

*IonQ is a proud founding member of the Quantum Economic Development Consortium (QED-C), an industry-driven consortium which seeks to enable and grow the quantum industry and associated supply chain, with over 120 members from industry and over 40 from academia.*

*If you have applications that you think would benefit from our latest-generation system, please reach out to [email protected] or fill out this form to express your interest. If you would like to run the benchmarking suite for yourself on IonQ hardware or any other publicly-available quantum hardware, the source code can be found at https://github.com/SRI-International/QC-App-Oriented-Benchmarks.*

### References

**1.** T. Lubinski, S. Johri, P. Varosy et al., “Application-Oriented Performance Benchmarks for Quantum Computing”, arXiv:2110.03137 (2021).

**2.** K. Wright, K. M. Beck, S. Debnath et al., “Benchmarking an 11-qubit quantum computer”, Nature Communications **10**, 5464 (2019).

**3.** S. Debnath, N. Linke, C. Figgatt et al., “Demonstration of a small programmable quantum computer with atomic qubits”, Nature **536**, 63–66 (2016).