Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them
There are many issues in quantum computing today – among the more pressing are benchmarking, networking, and the development of hybrid classical-quantum approaches. For example, will quantum networking be necessary to practically scale up quantum computers? Perspectives differ, but most currently think networking will be needed to achieve scale. Likewise, well-drawn benchmarking can help both quantum technology developers and users compare systems and identify strengths and weaknesses. But what does well-drawn mean?
In this most recent HPCwire/QCwire survey, senior researchers from D-Wave Systems, Oak Ridge National Laboratory, and PsiQuantum tackle benchmarking, networking, and hybrid classical-quantum computing approaches, and you may be surprised by some of their answers. For example, Peter Shadbolt of PsiQuantum offers a nuanced view on hybrid classical-quantum computing that's well worth reading. (D-Wave didn't weigh in on networking, as that is not Murray Thom's area of expertise.)
Thanks to all of the respondents; their answers are thoughtful. The real-time view into important issues that these regular HPCwire/QCwire surveys aim to provide wouldn't be possible without their efforts. We expect perspectives to evolve along with the technology, and we're hopeful our regular survey will continue to reflect the current views of leaders in the quantum community.
1 Hybrid Classical-Quantum or Pure-play Quantum. There’s a lot of discussion around using quantum computing as mostly another accelerator in the advanced computing landscape and discussion around being able to parse problems into pieces with some portions best run on quantum computers and other portions best run on classical resources.
a) What’s your take on the hybrid classical-quantum computing approach? Is it worthwhile? How significant a portion of quantum computing will the hybrid approach become? Do you see distinct roles for hybrid classical-quantum computing and for pure-play quantum computing?
ORNL’s Peters: Unless you are building an algorithm-specific quantum computer, much like how one might use an analog classical computer, I’d expect a hybrid classical-quantum system will be the primary way to leverage the power of quantum computers as they mature. Algorithm-optimized quantum-only machines could be used to simulate parts of problems that are hard on classical machines before we have a good way to integrate with larger classical infrastructures. Further, algorithm-optimized quantum computers may even make up core co-processing units used in more general hybrid-quantum classical systems.
D-Wave’s Thom: We believe hybrid computing is central to achieving our quantum future. Combining the best quantum computing methods with the best classical approaches will be the optimal way to solve problems. As powerful as modern classical computing technologies may be, there is an emerging set of applications that require new resources – quantum resources – to meet the demands of businesses in today’s increasingly competitive markets.
Pure-play quantum computing will likely be the realm of specialists and hybrid processing workflow designers. There will be uses for remote processing with direct calls to quantum processors – for example, in physics studies of spin glasses or sub-routines of a real Shor’s algorithm implementation. But from a commercial applications point of view, industry users will need whole-problem hybrid solvers with self-contained quantum subroutines.
As we look ahead, performant, high-value hybrid solvers across multiple problem types will continue to expand and deliver the benefits of both quantum and classical resources for both annealing quantum computers and gate-model systems for emerging quantum use cases. What we have seen, and believe others will find as well, is that for problems you can solve most effectively with a quantum computer, you can reach an even larger size once you hybridize with classical systems.
PsiQuantum’s Shadbolt: We anticipate that most end-to-end applications enabled by quantum computing will depend on a mixture of both classical and quantum computation to produce valuable answers. However, there are two widely held misconceptions. The first is that this mixed responsibility “lowers the bar” for the performance of the quantum computer and creates opportunities for real utility using very small or weak quantum computers. This is not the case. As far as we understand, you need a powerful, error-corrected quantum computer before you can start talking seriously about quantum advantage – no matter how great your integration with conventional hardware might be.
Secondly, it is often thought that the quantum computer must be very tightly integrated with the supporting conventional hardware – high-bandwidth networking, colocation, and so on. Consider that a “world-changing,” million-physical-qubit quantum computer supports only hundreds of logical qubits and billions of gates, and has a single-shot run-time much (much!) longer than a second. The bandwidth of user-facing data coming out of this system is minuscule – on the order of kilobytes per second. Assuming the program to be run can be expressed in less than a few gigabytes (an extremely conservative estimate), the entire machine can be operated remotely over a regular consumer internet connection. Latency and bandwidth are not prohibitive at all, and colocation is not required.
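Shadbolt's argument is a back-of-envelope bandwidth estimate, and it can be checked with illustrative numbers. The figures below are assumptions chosen for the sketch (a 3 GB program, a 50 Mbit/s uplink, a 10-second shot), not PsiQuantum specifications:

```python
# Rough sketch of the "remote operation over consumer internet" argument.
# All numbers are illustrative assumptions, not vendor figures.

program_size_bytes = 3e9   # assume the compiled program fits in ~3 GB
upload_rate_bps = 50e6     # a modest 50 Mbit/s consumer uplink
output_rate_Bps = 1e3      # ~kilobytes/sec of user-facing results
shot_time_s = 10.0         # single-shot run-time "much longer than a second"

# One-time cost of shipping the program to the machine:
upload_time_s = program_size_bytes * 8 / upload_rate_bps
print(f"Program upload: ~{upload_time_s / 60:.0f} minutes, once")

# Result data generated during one shot -- trivial next to any downlink:
data_per_shot_B = output_rate_Bps * shot_time_s
print(f"Output per shot: ~{data_per_shot_B / 1e3:.0f} kB")
```

Under these assumptions the upload is a one-time cost of minutes, and the steady-state result traffic is tens of kilobytes per shot – hence the claim that neither bandwidth nor latency forces colocation.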
b) Do you think quantum computing capability will become embedded in existing HPC application suites? For example, in a suite such as ANSYS, will quantum computing become incorporated as an accelerator option for users to target?
ORNL’s Peters: Eventually, it seems likely that quantum computers will be a part of future HPC. I don’t think it is clear yet if we will be able to automate breaking up the code into calls optimized for the different types of accelerators or leave that to the programmers, though automation would be a desirable outcome.
D-Wave’s Thom: Yes, at this point this seems like a natural outcome of the co-evolution of quantum and classical processors. We think it will result in a continuum of quantum-accelerated computations, each varying in the degree to which it depends on quantum computation.
PsiQuantum’s Shadbolt: At some point in the far future, I think this is a reasonable expectation, in the same way that features for exploiting SIMD, GPUs and TPUs have crept into other scientific software libraries. However, in the short term, we expect the use of quantum computers to be more bespoke, more hands-on, and less widely available than is suggested by the question.
2 Quantum Networking. Quantum networking is an active area of research on at least two fronts. 1) Many believe it will be necessary to network quantum processors together to achieve scale, whether at the chip level or system clustering. 2) Quantum networks (LAN/MAN/WAN, etc.) might offer many attractive attributes spanning secure communications to distributed quantum processing environments; DOE even has a Quantum Internet Blueprint.
a) How necessary do you think quantum networking will be for scaling up quantum computers? Will clustering smaller systems together be required to deliver adequate scale to tackle practical problems? When do you expect to see networked quantum chips/systems start to appear, even if only in R&D? What key challenges remain?
ORNL’s Peters: One could argue that a quantum network will be needed to scale quantum computers. The value proposition is that, even if not strictly required, a quantum network of two quantum computers is potentially much more than a factor of two more powerful than two independent quantum computers. A quantum network might not be optimized the same way for different types of qubits, though; once a particular qubit technology is selected, it drives a lot of architectural considerations for supporting technology development. Another potential advantage of networked quantum computing resources is the potential to reduce crosstalk when addressing qubits living in different parts of a multi-core quantum-processor machine. Finally, one could use different quantum computing technologies to do different parts of a computation, not unlike how we use GPUs and CPUs in HPC today.
D-Wave’s Thom: N/A
PsiQuantum’s Shadbolt: At least a million physical qubits are necessary for all known useful applications of quantum computers. For most qubit implementations, the qubits are and will forever remain too large to fit a million qubits onto a single chip (die/reticle), and therefore high-performance quantum networking will be critical to achieve any utility. Probably the most compelling exception to this generalization is quantum dots, where it is reasonable to expect that a million qubits can be fabricated into a single reticle field, albeit with challenges associated with control electronics. Outside of special cases such as quantum dots, where very high density can be achieved, we see chip-to-chip quantum networking as an essential prerequisite for commercial viability of quantum computers.
b) What’s your sense of progress to date in developing quantum networking and a quantum internet? What kinds of applications will be enabled, and how soon do you expect nascent quantum networks and prototype quantum internets to appear? What are the key technical hurdles remaining?
ORNL’s Peters: Progress in the US has been rapidly accelerating with recent investments. However, we may have small fault-tolerant quantum computers before we have fault-tolerant quantum networks, since the historic focus has been on the computers themselves. We can already enable some limited quantum-based cybersecurity functions, but they need further study to ensure methods of accreditation are developed and implemented. In addition to quantum computing, networking quantum sensors promises to greatly improve our ability to measure events of interest, including, potentially, the discovery of new physical phenomena such as dark matter, which we cannot directly detect today. The key technical hurdles are correcting for loss and other operation errors when transmitting quantum information.
D-Wave’s Thom: N/A
PsiQuantum’s Shadbolt: The most compelling use-case that we are aware of for the proposed “quantum internet” is device-independent quantum key distribution, which enables secure communication with very specific and differentiated guarantees on security. PsiQuantum does develop components that are relevant to the challenges posed by a hypothetical quantum internet. For instance, we invest in low-loss photonic devices, high-efficiency manufacturable single photon detectors, high-performance optical phase-shifters, etc. However, PsiQuantum is focused on building a quantum computer, and does not pursue the quantum internet as a goal.
3 Benchmarks. We seem to love benchmarks and top performer lists (think Top500 and MLPerf). These metrics can be useful or not so useful. Currently, there’s a lot of activity around developing benchmarks for quantum computing, ranging from IBM’s Quantum Volume and IonQ’s Algorithmic Qubits (which is based on QED-C efforts) to diverse efforts underway at DOE. The idea, of course, is to provide reasonable ways to compare quantum systems based on criteria ranging from hardware performance characteristics to application performance across differing systems and qubit technologies.
a) What’s your sense of the need for benchmarks in quantum computing? Which of the existing still-young offerings, if any, do you prefer and why? Are you involved in any benchmark development collaborations? To what extent do you use existing benchmarks to compare systems now?
ORNL’s Peters: Generally speaking, benchmarks are needed, though in conventional computing infrastructures careful consideration is given to practical issues like cost and energy consumption along with performance. How exactly one should quantify the performance of a quantum computer is still an active area of research, so it is also not yet clear how to relate the performance one gets from a hybrid system to what’s possible with equal resources spent on an entirely classical infrastructure. The technology is probably too immature to make a meaningful comparison at this point, and I am not currently involved in any quantum computing benchmark development efforts, though I am interested in understanding whether they might be applied to quantum repeater systems.
D-Wave’s Thom: Benchmarks are vital in quantum computing, serving two distinct purposes: communicating technological progress by measuring performance against an ideal (noise-free) quantum computation, and informing customers about which products are most suitable for their computational needs.
For D-Wave’s quantum annealing computers, we prefer the second purpose – comparing quantum hybrid application performance against existing commercial methods – because we believe customers need real-world comparisons to demonstrate business value.
D-Wave researchers are members of a few committees (IEEE, QED-C) working to develop benchmark tests for both gate-model and annealing quantum computers, and we have also published papers that illustrate our approach. We also have a large repertoire of internal benchmarks that measure the performance of bare hardware components, of the full quantum processing unit, and of our online hybrid solvers. We normally publish benchmark results when new products go live, viewed, as often as possible, through the lens of commercial applications.
PsiQuantum’s Shadbolt: We welcome the concerted and sensitive effort by the community to define good benchmarks.
b) What elements do you think good quantum benchmarks should include? Should the benchmark be a single number, such as in Top500, or offer a suite of results such as is done in MLPerf? Who should develop the benchmarks? Do you think we will end up with an analog of the Top500 List for quantum computers?
ORNL’s Peters: Good quantum benchmarks should be able to capture and quantify the challenging aspects that currently make it difficult to build a scalable quantum computing platform. Perhaps they will be able to abstract to existing metrics, but that might be too lofty a goal considering the types of problems quantum computers will likely be good at solving. The broader computing community, including academia, industry, and government, should develop benchmarks. One could have a top500 list for quantum computers, however, I think it would be more desirable to find benchmarks that quantify the capability of hybrid systems.
D-Wave’s Thom: Good user benchmarks should include performance measurements at whole-problem solving, as opposed to the performance of individual circuits or components (or else better information about how individual component performance is relevant to whole-problem performance). In addition, test designs should reflect the user experience in accounting for the full computation, using realistic inputs, and not unrealistically over-tuned for narrow test scenarios. Measurements also should incorporate both computation time and solution quality. Basically, they should follow standards and expectations that have been set out for classical computational benchmarking, with some necessary modifications for the quantum scenario.
In terms of whether the benchmark should be a single number: given the unusual properties of quantum computers, a single number can be misleading because single-number rankings over-generalize performance across too many applications and metrics. No quantum computer can be best at every task it is given, and a suite of numbers is needed to characterize the kinds of scenarios in which a given one can outperform classical and other quantum alternatives.
The benchmarks need to be developed from dialog between quantum producers and quantum users. Producers want to be able to highlight the kinds of scenarios on which their computer performs best, and users want to know about test results that are relevant to their application/industry.
A single list for quantum computers is unlikely because of the current variety of incomparable technologies. Perhaps it will be possible a long time from now, after the technologies shake themselves out and settle on a small handful of best designs.
PsiQuantum’s Shadbolt: One way to use benchmarks is to help determine whether a particular machine is better or worse than another. However, in general what we would really like to quantify is the distance (essentially, the amount of time and money) between a particular machine, and the scale and performance that is required to achieve genuine utility – i.e. large-scale, fault-tolerant quantum computing. Current benchmarks are very good for the former, but in general are not as useful for the latter, primarily because nobody has yet built a device that is meaningfully large or performant. In other words, benchmarks allow us to rank-order current hardware, but since we also know that none of this hardware is remotely close to a genuinely useful quantum computer, the usefulness of the rank-ordering exercise is limited. This is not to dismiss current benchmarking efforts, but is merely a note of caution.
4 Your work. Please describe in a paragraph or two your current top project(s) and priorities.
ORNL’s Peters: My current top priority is the development of tools and techniques needed to build a national-scale quantum network. This will likely require the development of new concepts and quantum technologies to build a network of quantum repeaters. Such a network will probably look similar to a special purpose distributed quantum computer and will probably require us to encode our quantum information in photons of many different frequencies, or at the very least use these frequencies to improve the number of entangled photons that are probabilistically carried over an optical fiber. One of the major difficulties compared to quantum computing is that in networking we lose most of our quantum information carriers (the photons on which qubits are encoded) as they are transmitted. As a result, we need to fix large loss errors as well as other operation errors.
D-Wave’s Thom: Supporting our track record of relentless product delivery, we’re continuing to focus on our Clarity roadmap to bring new innovations to market. In June 2022, we released an experimental prototype of our next-generation Advantage2 quantum system, which shows great promise with a new Zephyr topology and 20-way inter-qubit connectivity. This new prototype represents an early version of the upcoming full-scale product, and early benchmarks show increased energy scale and improved solution quality. New and existing customers can try out the experimental Advantage2 prototype by signing into Leap, our quantum cloud service.
PsiQuantum’s Shadbolt: Photonic quantum computers have not yet demonstrated very large entangled states of dual-rail-encoded photonic qubits. The reason is that multiplexing (essentially, trial-until-success) is required to overcome nondeterminism in single-photon sources and linear-optical entangling gates. Multiplexing is technically challenging for multiple reasons, but the most fundamental issue is the need for a very high-performance optical switch. PsiQuantum is investing heavily in a novel, high-performance, mass-manufacturable optical switch to overcome this issue. Beyond this, we are investing across the entire stack, from semiconductor process development, device design, packaging, test, reliability, systems integration and architecture, to control electronics and software, networking, cryogenic infrastructure, quantum architecture, error-correcting codes, implementations of fault-tolerant logic and algorithms, and application development.
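The trial-until-success idea behind multiplexing can be sketched with elementary probability: run n nondeterministic attempts in parallel and switch out the one that succeeds. The per-attempt success probability below is a made-up example value, not a measured figure for any real photon source or gate:

```python
# Illustrative sketch of multiplexing (trial-until-success) for
# nondeterministic photonic operations. The value of p is an assumed
# example, not a real device parameter.

def multiplexed_success(p: float, n: int) -> float:
    """Probability that at least one of n independent parallel attempts
    succeeds, given each attempt succeeds with probability p."""
    return 1.0 - (1.0 - p) ** n

p = 0.25  # assumed per-attempt success probability
for n in (1, 4, 16, 64):
    print(f"{n:3d} parallel attempts -> P(success) = {multiplexed_success(p, n):.4f}")
```

The success probability approaches one as n grows, but every extra attempt must be routed through a switch that picks out the successful outcome with very low loss – which is why the optical switch, rather than the probability math, is the hard part.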
(Interested in participating in HPCwire/QCwire’s periodic sampling of current thinking? Contact [email protected] for more details.)
© 2022 HPCwire. All Rights Reserved. A Tabor Communications Publication