Superconducting computing


Superconducting logic refers to a class of logic circuits or logic gates that use the unique properties of superconductors. Superconducting electronic circuits require cryogenic temperatures for operation, typically below 10 kelvin. Often superconducting computing is applied to quantum computing, with an important application known as superconducting quantum computing.

Superconducting digital logic circuits use single flux quanta (SFQ), also known as magnetic flux quanta, to encode, process, and transport data. SFQ circuits are made up of active Josephson junctions and passive elements such as inductors, resistors, transformers, and transmission lines. Whereas voltages and capacitors are important in semiconductor logic circuits such as CMOS, currents and inductors are most important in SFQ logic circuits. Power can be supplied by either direct current or alternating current, depending on the SFQ logic family.
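The quantization underlying SFQ encoding can be illustrated with a short calculation. A logical bit is carried by a voltage pulse whose time integral equals one magnetic flux quantum, Φ0 = h/2e ≈ 2.07 mV·ps. The following Python sketch computes Φ0 from fundamental constants and the pulse width implied by an assumed 1 mV pulse height; the pulse height is an illustrative value, not a figure from this article.

    # Magnitude of a single flux quantum (SFQ) voltage pulse.
    h = 6.62607015e-34   # Planck constant, J*s (exact in SI)
    e = 1.602176634e-19  # elementary charge, C (exact in SI)

    phi_0 = h / (2 * e)  # magnetic flux quantum, Wb (= V*s)
    print(f"flux quantum: {phi_0:.3e} Wb = {phi_0 * 1e15:.2f} mV*ps")

    # An SFQ pulse satisfies integral(V dt) = phi_0, so a pulse of
    # height V lasts roughly phi_0 / V.
    v_pulse = 1e-3             # assumed pulse height: 1 mV
    width = phi_0 / v_pulse    # implied pulse width in seconds
    print(f"~{width * 1e12:.1f} ps pulse at {v_pulse * 1e3:.0f} mV")

The picosecond-scale pulse width that falls out of this arithmetic is consistent with the switching speeds quoted below.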

Fundamental concepts

The primary advantage of superconducting computing is improved power efficiency over conventional CMOS technology. Much of the power consumed, and heat dissipated, by conventional processors comes from moving information between logic elements rather than from the logic operations themselves; because superconductors have zero electrical resistance, little energy is required to move bits within the processor. This is expected to result in power consumption savings of a factor of 500 for an exascale computer.[1] For comparison, in 2014 it was estimated that a 1 exaFLOPS computer built in CMOS logic would consume some 500 megawatts of electrical power.[2] Superconducting logic can be an attractive option for ultrafast CPUs, where switching times are measured in picoseconds and operating frequencies approach 770 GHz.[3][4] However, since transferring information between the processor and the outside world still dissipates energy, superconducting computing was seen as well suited for computation-intensive tasks where the data largely stays in the cryogenic environment, rather than big data applications in which large amounts of information are streamed from outside the processor.[1]
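The figures above can be tied together with simple arithmetic; this is a sketch only, and the underlying estimates come from the cited sources, not from the calculation.

    # Rough arithmetic behind the figures quoted above.
    cmos_exaflops_power_mw = 500   # estimated CMOS exascale power, MW [2]
    saving_factor = 500            # projected superconducting saving [1]
    print(cmos_exaflops_power_mw / saving_factor, "MW")   # ~1 MW

    clock_hz = 770e9                  # operating frequency approached [3][4]
    print(1 / clock_hz * 1e12, "ps")  # clock period, ~1.3 ps

On these projections a superconducting exascale machine would draw on the order of one megawatt, and a 770 GHz clock implies a cycle time of about 1.3 picoseconds, which is why switching times are quoted in picoseconds.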

As superconducting logic supports standard digital machine architectures and algorithms, the existing knowledge base for CMOS computing will still be useful in constructing superconducting computers. Given the reduced heat dissipation, superconducting logic may enable innovations such as three-dimensional stacking of components; however, because the circuits require inductors, it is harder to reduce their size. As of 2014, devices using niobium as the superconducting material and operating at 4 K were considered state of the art. Important challenges for the field were reliable cryogenic memory, as well as moving from research on individual components to large-scale integration.[1]

Josephson junction count is a measure of superconducting circuit or device complexity, similar to the transistor count used for semiconductor integrated circuits.

History

Superconducting computing research has been pursued by the U.S. National Security Agency since the mid-1950s. However, progress could not keep up with the increasing performance of standard CMOS technology. As of 2016 there are no commercial superconducting computers, although research and development continues.[5]

Research in the mid-1950s to early 1960s focused on the cryotron invented by Dudley Allen Buck, but the liquid-helium temperatures and the slow switching between superconducting and resistive states caused this research to be abandoned. In 1962 Brian Josephson established the theory behind the Josephson effect, and within a few years IBM fabricated the first Josephson junction. IBM invested heavily in this technology from the mid-1960s to 1983.[6] By the mid-1970s IBM had constructed a superconducting quantum interference device using these junctions, mainly working with lead-based junctions and later switching to lead/niobium junctions. In 1980 IBM announced the Josephson computer revolution through the cover page of the May issue of Scientific American. One of the reasons justifying such a large-scale investment was that Moore's law, enunciated in 1965, was expected to slow down and reach a plateau 'soon'. However, Moore's law kept its validity, while the costs of improving superconducting devices were borne essentially by IBM alone, and IBM, however big, could not compete with the whole world of semiconductors, which provided nearly limitless resources.[7] Thus, the program was shut down in 1983 because the technology was not considered competitive with standard semiconductor technology. The Japanese Ministry of International Trade and Industry funded a superconducting research effort from 1981 to 1989 that produced the ETL-JC1, a 4-bit machine with 1,000 bits of RAM.[5]

In 1983, Bell Labs created niobium/aluminum oxide Josephson junctions that were more reliable and easier to fabricate. In 1985, the rapid single flux quantum logic scheme, which offered improved speed and energy efficiency, was developed by researchers at Moscow State University. These advances led to the United States' Hybrid Technology Multi-Threaded project, started in 1997, which sought to beat conventional semiconductors to the petaflop computing scale. The project was abandoned in 2000, however, and the first conventional petaflop computer was constructed in 2008. After 2000, attention turned to superconducting quantum computing. The 2011 introduction of reciprocal quantum logic by Quentin Herr of Northrop Grumman, as well as energy-efficient rapid single flux quantum by Hypres, were seen as major advances.[5]

The push for exascale computing beginning in the mid-2010s, as codified in the National Strategic Computing Initiative, was seen as an opening for superconducting computing research, since exascale computers based on CMOS technology would be expected to require impractical amounts of electrical power. The Intelligence Advanced Research Projects Activity, formed in 2006, currently coordinates the U.S. Intelligence Community's research and development efforts in superconducting computing.[5]

Conventional computing techniques

Although the names of many of these techniques contain the word "quantum", they are not necessarily platforms for quantum computing.[citation needed]

Rapid single flux quantum (RSFQ)