AI accelerator

Source: Wikipedia, the free encyclopedia.

An AI accelerator, deep learning processor, or neural processing unit (NPU) is a class of specialized hardware accelerator[1] or computer system[2][3] designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and computer vision. Typical applications include algorithms for robotics, the Internet of Things, and other data-intensive or sensor-driven tasks.[4] They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures, or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFET transistors.[5]

AI accelerators are used in mobile devices, such as the neural processing units (NPUs) in Apple iPhones[6] and Huawei cellphones,[7] and in cloud computing servers, such as the tensor processing units (TPU) in the Google Cloud Platform.[8] A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.

Graphics processing units designed by companies such as Nvidia and AMD often include AI-specific hardware, and are commonly used as AI accelerators, both for training and inference.[9]

History

Computer systems have frequently complemented the CPU with special-purpose accelerators for specialized tasks, known as coprocessors. Notable application-specific hardware units include video cards for graphics, sound cards, graphics processing units, and digital signal processors. As deep learning and artificial intelligence workloads rose in prominence in the 2010s, specialized hardware units were developed or adapted from existing products to accelerate these tasks.

Early attempts

Early attempts such as Intel's ETANN 80170NX incorporated analog circuits to compute neural functions.[10]

Later all-digital chips like the Nestor/Intel Ni1000 followed. As early as 1993, digital signal processors were used as neural network accelerators to accelerate optical character recognition software.[11]

By 1988, Wei Zhang et al. had discussed fast optical implementations of convolutional neural networks for alphabet recognition.[12][13]

In the 1990s, there were also attempts to create parallel high-throughput systems for workstations aimed at various applications, including neural network simulations.[14][15]

One retrospective presentation on these past neural-network accelerators notes their similarity to the modern SLI GPGPU setup and argues that general-purpose vector accelerators (in the vein of the RISC-V Hwacha project) are the way forward, on the grounds that neural networks amount to dense and sparse matrix operations, one of several recurring algorithm classes.[16]

FPGA-based accelerators were also first explored in the 1990s for both inference and training.[17][18]

In 2014, Chen et al. proposed DianNao (Chinese for "electric brain")[19] to accelerate deep neural networks. DianNao provides a peak performance of 452 Gop/s (on key deep-neural-network operations) in a footprint of only 3.02 mm² and 485 mW. Its successors (DaDianNao,[20] ShiDianNao,[21] PuDianNao[22]) were later proposed by the same group, forming the DianNao family.[23]

The Qualcomm Snapdragon 820, announced in 2015, brought on-device machine learning to smartphones through Qualcomm's Zeroth platform and an accompanying Snapdragon machine learning software development kit.[24][25]

Heterogeneous computing

Heterogeneous computing incorporates many specialized processors in a single system, or a single chip, each optimized for a specific type of task. Architectures such as the Cell microprocessor[26] have features that significantly overlap with AI accelerators, including support for packed low-precision arithmetic, dataflow architecture, and prioritizing throughput over latency. The Cell microprocessor has been applied to a number of tasks,[27][28][29] including AI.[30][31][32]

In the 2000s, CPUs also gained increasingly wide SIMD units, driven by video and gaming workloads, as well as support for packed low-precision data types.[33] Due to their increasing performance, CPUs are also used for running AI workloads, and remain well suited to DNNs with small or medium-scale parallelism, to sparse DNNs, and to low-batch-size scenarios.

Use of GPU

Graphics processing units or GPUs are specialized hardware for the manipulation of images and calculation of local image properties. Neural networks and image manipulation share a similar mathematical basis: both are embarrassingly parallel tasks involving matrices. GPUs have therefore become increasingly used for machine learning tasks.[34][35]
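To illustrate why the two workloads map onto the same hardware, here is a minimal NumPy sketch (the shapes and the im2col helper are illustrative, not taken from the cited sources) showing that both a dense neural-network layer and a 2-D convolution reduce to one large matrix multiplication:

```python
import numpy as np

rng = np.random.default_rng(0)

# A dense neural-network layer is one matrix multiply: y = W @ x.
W = rng.standard_normal((256, 784))        # layer weights
x = rng.standard_normal((784, 64))         # a batch of 64 input vectors
y = W @ x                                  # shape (256, 64)

# A 2-D convolution can be lowered to the same primitive ("im2col"):
# stack every k-by-k image patch as a column, then apply all filters
# with a single matrix multiply, exactly the operation GPUs excel at.
def im2col(img, k):
    h, w = img.shape
    cols = [img[i:i + k, j:j + k].ravel()
            for i in range(h - k + 1) for j in range(w - k + 1)]
    return np.stack(cols, axis=1)          # shape (k*k, number_of_patches)

img = rng.standard_normal((32, 32))        # a single-channel "image"
filters = rng.standard_normal((8, 9))      # eight 3x3 filters, flattened
feature_maps = filters @ im2col(img, 3)    # convolution expressed as a matmul
```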

In 2012, Alex Krizhevsky adopted two GPUs to train a deep learning network, AlexNet,[36] which won the ILSVRC-2012 competition. During the 2010s, GPU manufacturers such as Nvidia added deep learning related features in both hardware (e.g., INT8 operators) and software (e.g., the cuDNN library).

Over the 2010s, GPUs continued to evolve in a direction that facilitates deep learning, both for training and for inference in devices such as self-driving cars.[37][38][39] As GPUs have been increasingly applied to AI acceleration, manufacturers have incorporated neural-network-specific hardware to further accelerate these tasks.[40] Tensor cores are intended to speed up the training of neural networks.[40]

GPUs continue to be used in large-scale AI applications. For example, Summit, a supercomputer from IBM for Oak Ridge National Laboratory,[41] contains 27,648 Nvidia Tesla V100 cards, which can be used to accelerate deep learning algorithms.

Use of FPGAs

Deep learning frameworks are still evolving, making it hard to design custom hardware. Reconfigurable devices such as field-programmable gate arrays (FPGA) make it easier to evolve hardware, frameworks, and software alongside each other.[42][17][18][43]

Microsoft has used FPGA chips to accelerate inference for real-time deep learning services.[44]

Emergence of dedicated AI accelerator ASICs

While GPUs and FPGAs perform far better than CPUs for AI-related tasks, a factor of up to 10 in efficiency may be gained with a more specific design, via an application-specific integrated circuit (ASIC).[45][46] These accelerators employ strategies such as optimized memory use and the use of lower-precision arithmetic to accelerate calculation and increase throughput of computation.[47][48] Some low-precision floating-point formats used for AI acceleration are half-precision and the bfloat16 floating-point format.[49][50][51][52][53][54][55] Companies such as Google, Qualcomm, Amazon, Apple, Facebook, AMD and Samsung are all designing their own AI ASICs.[56][57][58][59][60][61] Cerebras Systems has built a dedicated AI accelerator based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2), to support deep learning workloads.[62][63]
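As a rough illustration of the bfloat16 format mentioned above (a sketch, not taken from the cited sources): bfloat16 keeps float32's 8-bit exponent but only 7 mantissa bits, so a simple truncating conversion drops the low 16 bits of the float32 bit pattern while preserving its dynamic range.

```python
import numpy as np

def to_bfloat16(x):
    """Truncating float32 -> bfloat16 conversion: keep the top 16 bits
    (1 sign + 8 exponent + 7 mantissa), then view the result as float32
    again so the rounded values are easy to inspect."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

x = np.array([3.14159265, 1e-30, 65504.0], dtype=np.float32)
print(to_bfloat16(x))   # dynamic range of float32 is preserved; precision
                        # drops to roughly 2-3 significant decimal digits
```

Hardware implementations typically round to nearest rather than truncate, but the storage format is the same 1-8-7 bit layout.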

Ongoing research

In-memory computing architectures

In June 2017, IBM researchers announced an architecture, in contrast to the von Neumann architecture, based on in-memory computing and phase-change memory arrays applied to temporal correlation detection, intending to generalize the approach to heterogeneous computing and massively parallel systems.[64] In October 2018, IBM researchers announced an architecture based on in-memory processing and modeled on the human brain's synaptic network to accelerate deep neural networks.[65] The system is based on phase-change memory arrays.[66]

In-memory computing with analog resistive memories

In 2019, researchers from Politecnico di Milano found a way to solve systems of linear equations in a few tens of nanoseconds via a single operation. Their algorithm is based on in-memory computing with analog resistive memories, which achieves high time and energy efficiency by performing matrix–vector multiplication in one step using Ohm's law and Kirchhoff's law. The researchers showed that a feedback circuit with cross-point resistive memories can solve algebraic problems such as systems of linear equations, matrix eigenvectors, and differential equations in just one step. Such an approach improves computational times drastically in comparison with digital algorithms.[67]
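The principle can be illustrated with an idealized numerical model (a sketch of the general crossbar idea under assumed conductance and voltage values, not the specific Politecnico di Milano feedback circuit): each cell contributes a current I = G·V by Ohm's law, and Kirchhoff's current law sums those contributions along each column, so reading out the column currents yields a full matrix–vector product in a single step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Cross-point array: G[i, j] is the conductance (in siemens) of the cell
# connecting input row i to output column j.
G = rng.uniform(1e-6, 1e-4, size=(4, 3))
V = np.array([0.20, 0.10, 0.30, 0.05])   # voltages applied to the rows (volts)

# Ohm's law per cell (I = G * V) plus Kirchhoff's current law per column
# (currents add) give the column read-out currents in "one step":
I = G.T @ V                               # shape (3,), amperes
print(I)

# A digital processor would need O(rows * columns) multiply-accumulates
# to compute the same product.
assert np.allclose(I, [sum(G[i, j] * V[i] for i in range(4)) for j in range(3)])
```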

Atomically thin semiconductors

In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs).[68] Such atomically thin semiconductors are considered promising for energy-efficient machine learning applications, since the same basic device structure is used for both logic operations and data storage. The authors used two-dimensional materials such as semiconducting molybdenum disulphide to precisely tune FGFETs as building blocks in which logic operations can be performed with the memory elements.[68]

Integrated photonic tensor core

In 1988, Wei Zhang et al. discussed fast optical implementations of convolutional neural networks for alphabet recognition.[12][13]

In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing.[69] The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds.[69] Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications.[69] Optical processors that can also perform backpropagation for artificial neural networks have been experimentally developed.[70]
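A toy numerical model of the general principle (a sketch under assumed values, not the device of Feldmann et al.): each wavelength channel of a frequency comb carries one input value as optical power, each photonic memory cell attenuates its channel in proportion to a stored weight, and a photodetector sums all channels, so one detector read-out yields a complete multiply-accumulate.

```python
import numpy as np

# One output of a photonic tensor core, modeled at the level of power budgets.
inputs  = np.array([0.8, 0.3, 0.5, 0.9])   # optical power on wavelengths 1..4
weights = np.array([0.2, 0.7, 0.1, 0.6])   # transmission of the weighting cells (0..1)

# Each channel is attenuated independently (element-wise product); the
# photodetector integrates the total incident power (the sum), so the
# electrical output is the dot product of inputs and weights.
detector_output = np.sum(inputs * weights)
print(detector_output)                      # equals inputs @ weights
```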

Nomenclature

As of 2016, the field is still in flux and vendors are pushing their own marketing term for what amounts to an "AI accelerator", in the hope that their designs and APIs will become the dominant design. There is no consensus on the boundary between these devices, nor on the exact form they will take; however, several examples clearly aim to fill this new space, with a fair amount of overlap in capabilities.

In the past, when consumer graphics accelerators emerged, the industry eventually adopted Nvidia's self-assigned term, "the GPU",[71] as the collective noun for "graphics accelerators", which had taken many forms before settling on an overall pipeline implementing a model presented by Direct3D.

All models of Intel Meteor Lake processors have a built-in Versatile Processor Unit (VPU) for accelerating inference in computer vision and deep learning workloads.[72]

Deep Learning Processors (DLP)

Inspired by the pioneering work of the DianNao family, many DLPs have been proposed in both academia and industry, with designs optimized to leverage the features of deep neural networks for high efficiency. At ISCA 2016 alone, three sessions (15% of the accepted papers) were devoted to architecture designs for deep learning. Such efforts include Eyeriss (MIT),[73] EIE (Stanford),[74] Minerva (Harvard),[75] and Stripes (University of Toronto)[76] in academia, and the TPU (Google)[77] and MLU (Cambricon)[78] in industry. Several representative works are listed in Table 1.

Table 1. Typical DLPs
Year | DLPs | Institution | Type | Computation | Memory Hierarchy | Control | Peak Performance
2014 | DianNao[19] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 452 Gops (16-bit)
2014 | DaDianNao[20] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 5.58 Tops (16-bit)
2015 | ShiDianNao[21] | ICT, CAS | digital | scalar MACs | scratchpad | VLIW | 194 Gops (16-bit)
2015 | PuDianNao[22] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 1,056 Gops (16-bit)
2016 | DnnWeaver | Georgia Tech | digital | vector MACs | scratchpad | - | -
2016 | EIE[74] | Stanford | digital | scalar MACs | scratchpad | - | 102 Gops (16-bit)
2016 | Eyeriss[73] | MIT | digital | scalar MACs | scratchpad | - | 67.2 Gops (16-bit)
2016 | Prime[79] | UCSB | hybrid | Process-in-Memory | ReRAM | - | -
2017 | TPU[77] | Google | digital | scalar MACs | scratchpad | CISC | 92 Tops (8-bit)
2017 | PipeLayer[80] | U of Pittsburgh | hybrid | Process-in-Memory | ReRAM | - | -
2017 | FlexFlow | ICT, CAS | digital | scalar MACs | scratchpad | - | 420 Gops
2017 | DNPU[81] | KAIST | digital | scalar MACs | scratchpad | - | 300 Gops (16-bit), 1,200 Gops (4-bit)
2018 | MAERI | Georgia Tech | digital | scalar MACs | scratchpad | - | -
2018 | PermDNN | City University of New York | digital | vector MACs | scratchpad | - | 614.4 Gops (16-bit)
2018 | UNPU[82] | KAIST | digital | scalar MACs | scratchpad | - | 345.6 Gops (16-bit), 691.2 Gops (8-bit), 1,382 Gops (4-bit), 7,372 Gops (1-bit)
2019 | FPSA | Tsinghua | hybrid | Process-in-Memory | ReRAM | - | -
2019 | Cambricon-F | ICT, CAS | digital | vector MACs | scratchpad | FISA | 14.9 Tops (F1, 16-bit), 956 Tops (F100, 16-bit)

Digital DLPs

The major components of a DLP architecture usually include a computation component, an on-chip memory hierarchy, and control logic that manages data communication and computing flows.

Regarding the computation component: since most operations in deep learning can be aggregated into vector operations, the most common way to build computation components in digital DLPs is a MAC-based (multiply-accumulate) organization, either with vector MACs[19][20][22] or scalar MACs.[77][21][73] Rather than the SIMD or SIMT of general-purpose processors, deep-learning domain-specific parallelism is better exploited on these MAC-based organizations.

Regarding the memory hierarchy: because deep learning algorithms require high bandwidth to supply the computation component with sufficient data, DLPs usually employ a relatively large on-chip buffer (tens of kilobytes to several megabytes) together with dedicated on-chip data-reuse and data-exchange strategies to alleviate the burden on memory bandwidth. For example, DianNao, with 16 16-input vector MACs, requires 16 × 16 × 2 = 512 16-bit values per cycle, i.e., almost 1024 GB/s of bandwidth between the computation components and the buffers; with on-chip reuse, such bandwidth requirements are reduced drastically.[19] Instead of the caches widely used in general-purpose processors, DLPs typically use scratchpad memory, which provides higher data-reuse opportunities by leveraging the relatively regular data-access patterns of deep learning algorithms.

Regarding the control logic: as deep learning algorithms keep evolving at a dramatic pace, DLPs have begun to leverage dedicated ISAs (instruction set architectures) to support the deep learning domain flexibly. At first, DianNao used a VLIW-style instruction set in which each instruction could finish a layer of a DNN. Cambricon[83] introduced the first deep-learning domain-specific ISA, which can support more than ten different deep learning algorithms. The TPU likewise exposes five key instructions in its CISC-style ISA.
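The bandwidth arithmetic above can be reproduced with a short back-of-the-envelope calculation (a sketch; the ~1 GHz clock is an assumed round figure, while the MAC configuration follows the 16 × 16-input, 16-bit description in the text):

```python
# Operand traffic of a DianNao-style array of 16 vector MACs, each taking
# 16 inputs, with two 16-bit operands (an activation and a weight) per multiply.
num_vector_macs   = 16
inputs_per_mac    = 16
operands_per_mult = 2
bits_per_operand  = 16

values_per_cycle = num_vector_macs * inputs_per_mac * operands_per_mult  # 512
bytes_per_cycle  = values_per_cycle * bits_per_operand // 8              # 1024 B

clock_hz = 1.0e9   # assumed ~1 GHz clock
print(bytes_per_cycle * clock_hz / 1e9, "GB/s of buffer bandwidth without reuse")
# -> 1024.0 GB/s, matching the "almost 1024 GB/s" figure quoted above;
# on-chip reuse in the scratchpads cuts this requirement dramatically.
```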

Hybrid DLPs

Hybrid DLPs have emerged for DNN inference and training acceleration because of their high efficiency. Processing-in-memory (PIM) architectures are one of the most important types of hybrid DLP. The key design concept of PIM is to bridge the gap between computing and memory in the following ways: 1) moving computation components into memory cells, controllers, or memory chips to alleviate the memory-wall issue,[80][84][85] which significantly shortens data paths and leverages much higher internal bandwidth, resulting in attractive performance improvements; 2) building highly efficient DNN engines by adopting computational memory devices. In 2013, HP Labs demonstrated the capability of the ReRAM crossbar structure for computing.[86] Inspired by this work, a large body of work has explored new architectures and system designs based on ReRAM,[79][87][88][80] phase-change memory,[84][89][90] and related technologies.
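As a concrete, generic illustration of how a ReRAM crossbar can serve as a DNN engine (a sketch assuming an idealized, noise-free crossbar, not a description of any specific chip cited above), signed weights are commonly encoded as the difference of two non-negative conductance arrays:

```python
import numpy as np

# A signed weight matrix W is split across two crossbars of non-negative
# conductances, G_plus and G_minus, and the analog outputs are subtracted
# so that (G_plus - G_minus) @ x = W @ x.
rng = np.random.default_rng(1)
W = rng.standard_normal((4, 6))           # signed layer weights
x = rng.standard_normal(6)                # input activations (applied as voltages)

G_plus  = np.clip(W,  0, None)            # positive parts -> one crossbar
G_minus = np.clip(-W, 0, None)            # negative parts -> the other crossbar

y = G_plus @ x - G_minus @ x              # differential column currents
assert np.allclose(y, W @ x)              # the analog result equals the matmul
```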

Benchmarks

Benchmarks such as MLPerf and others may be used to evaluate the performance of AI accelerators.[91] Table 2 lists several typical benchmarks for AI accelerators.

Table 2. Benchmarks.
Year | NN Benchmark | Affiliations | # of microbenchmarks | # of component benchmarks | # of application benchmarks
2012 | BenchNN | ICT, CAS | N/A | 12 | N/A
2016 | Fathom | Harvard | N/A | 8 | N/A
2017 | BenchIP | ICT, CAS | 12 | 11 | N/A
2017 | DAWNBench | Stanford | 8 | N/A | N/A
2017 | DeepBench | Baidu | 4 | N/A | N/A
2018 | AI Benchmark | ETH Zurich | N/A | 26 | N/A
2018 | MLPerf | Harvard, Intel, and Google, etc. | N/A | 7 | N/A
2019 | AIBench | ICT, CAS and Alibaba, etc. | 12 | 16 | 2
2019 | NNBench-X | UCSB | N/A | 10 | N/A

Potential applications

See also

References

  1. ^ "Intel unveils Movidius Compute Stick USB AI Accelerator". July 21, 2017. Archived from the original on August 11, 2017. Retrieved August 11, 2017.
  2. ^ "Inspurs unveils GX4 AI Accelerator". June 21, 2017.
  3. ^ Wiggers, Kyle (November 6, 2019) [2019], Neural Magic raises $15 million to boost AI inferencing speed on off-the-shelf processors, archived from the original on March 6, 2020, retrieved March 14, 2020
  4. ^ "Google Designing AI Processors". Google using its own AI accelerators.
  5. ^ Moss, Sebastian (March 23, 2022). "Nvidia reveals new Hopper H100 GPU, with 80 billion transistors". Data Center Dynamics. Retrieved January 30, 2024.
  6. ^ "Deploying Transformers on the Apple Neural Engine". Apple Machine Learning Research. Retrieved August 24, 2023.
  7. ^ "HUAWEI Reveals the Future of Mobile AI at IFA".
  8. .
  9. ^ Patel, Dylan; Nishball, Daniel; Xie, Myron (November 9, 2023). "Nvidia's New China AI Chips Circumvent US Restrictions". SemiAnalysis. Retrieved February 7, 2024.
  10. ^ Dvorak, J.C. (May 29, 1990). "Inside Track". PC Magazine. Retrieved December 26, 2023.
  11. ^ "convolutional neural network demo from 1993 featuring DSP32 accelerator". YouTube.
  12. ^ a b Zhang, Wei (1988). "Shift-invariant pattern recognition neural network and its optical architecture". Proceedings of Annual Conference of the Japan Society of Applied Physics.
  13. ^ PMID 20577468.
  14. . Retrieved December 26, 2023.
  15. ^ "The end of general purpose computers (not)". YouTube.
  16. S2CID 16364797.
  17. ^ a b Gschwind, M.; Salapura, V.; Maischberger, O. (February 1995). "Space Efficient Neural Net Implementation". Retrieved December 26, 2023.
  18. ^ S2CID 17630664.
  19. ^ .
  20. ^ .
  21. ^ .
  22. ^ .
  23. .
  24. ^ "Qualcomm Helps Make Your Mobile Devices Smarter With New Snapdragon Machine Learning Software Development Kit". Qualcomm.
  25. ^ Rubin, Ben Fox. "Qualcomm's Zeroth platform could make your smartphone much smarter". CNET. Retrieved September 28, 2021.
  26. S2CID 17834015.
  27. .
  28. .
  29. .
  30. ^ "Development of an artificial neural network on a heterogeneous multicore architecture to predict a successful weight loss in obese individuals" (PDF). Archived from the original (PDF) on August 30, 2017. Retrieved November 14, 2017.
  31. S2CID 14429828.
  32. .
  33. ^ "Improving the performance of video with AVX". February 8, 2012.
  34. ^ Chellapilla, K.; Sidd Puri; Simard, P. (October 23, 2006). "High Performance Convolutional Neural Networks for Document Processing". 10th International Workshop on Frontiers in Handwriting Recognition. Retrieved December 23, 2023.
  35. .
  36. .
  37. ^ Roe, R. (May 17, 2023). "Nvidia in the Driver's Seat for Deep Learning". insideHPC. Retrieved December 23, 2023.
  38. ^ Bohn, D. (January 5, 2016). "Nvidia announces 'supercomputer' for self-driving cars at CES 2016". Vox Media. Retrieved December 23, 2023.
  39. ^ "A Survey on Optimized Implementation of Deep Learning Models on the NVIDIA Jetson Platform", 2019
  40. ^ a b Harris, Mark (May 11, 2017). "CUDA 9 Features Revealed: Volta, Cooperative Groups and More". Retrieved August 12, 2017.
  41. ^ "Summit: Oak Ridge National Laboratory's 200 petaflop supercomputer". United States Department of Energy. 2024. Retrieved January 8, 2024.
  42. S2CID 203656070.
  43. ^ "FPGA Based Deep Learning Accelerators Take on ASICs". The Next Platform. August 23, 2016. Retrieved September 7, 2016.
  44. ^ "Microsoft unveils Project Brainwave for real-time AI". Microsoft. August 22, 2017.
  45. ^ "Google boosts machine learning with its Tensor Processing Unit". May 19, 2016. Retrieved September 13, 2016.
  46. ^ "Chip could bring deep learning to mobile devices". www.sciencedaily.com. February 3, 2016. Retrieved September 13, 2016.
  47. ^ "Deep Learning with Limited Numerical Precision" (PDF).
  48. ].
  49. ^ Khari Johnson (May 23, 2018). "Intel unveils Nervana Neural Net L-1000 for accelerated AI training". VentureBeat. Retrieved May 23, 2018. ...Intel will be extending bfloat16 support across our AI product lines, including Intel Xeon processors and Intel FPGAs.
  50. ^ Michael Feldman (May 23, 2018). "Intel Lays Out New Roadmap for AI Portfolio". TOP500 Supercomputer Sites. Retrieved May 23, 2018. Intel plans to support this format across all their AI products, including the Xeon and FPGA lines
  51. ^ Lucian Armasu (May 23, 2018). "Intel To Launch Spring Crest, Its First Neural Network Processor, In 2019". Tom's Hardware. Retrieved May 23, 2018. Intel said that the NNP-L1000 would also support bfloat16, a numerical format that's being adopted by all the ML industry players for neural networks. The company will also support bfloat16 in its FPGAs, Xeons, and other ML products. The Nervana NNP-L1000 is scheduled for release in 2019.
  52. ^ "Available TensorFlow Ops | Cloud TPU | Google Cloud". Google Cloud. Retrieved May 23, 2018. This page lists the TensorFlow Python APIs and graph operators available on Cloud TPU.
  53. ^ Elmar Haußmann (April 26, 2018). "Comparing Google's TPUv2 against Nvidia's V100 on ResNet-50". RiseML Blog. Archived from the original on April 26, 2018. Retrieved May 23, 2018. For the Cloud TPU, Google recommended we use the bfloat16 implementation from the official TPU repository with TensorFlow 1.7.0. Both the TPU and GPU implementations make use of mixed-precision computation on the respective architecture and store most tensors with half-precision.
  54. ^ Tensorflow Authors (February 28, 2018). "ResNet-50 using BFloat16 on TPU". Google. Retrieved May 23, 2018.[permanent dead link]
  55. . Accessed May 23, 2018. All operations in TensorFlow Distributions are numerically stable across half, single, and double floating-point precisions (as TensorFlow dtypes: tf.bfloat16 (truncated floating point), tf.float16, tf.float32, tf.float64). Class constructors have a validate_args flag for numerical asserts
  56. ^ "Google Reveals a Powerful New AI Chip and Supercomputer". MIT Technology Review. Retrieved July 27, 2021.
  57. ^ "What to Expect From Apple's Neural Engine in the A11 Bionic SoC – ExtremeTech". www.extremetech.com. Retrieved July 27, 2021.
  58. ^ "Facebook has a new job posting calling for chip designers". April 19, 2018.[permanent dead link]
  59. ^ "Facebook joins Amazon and Google in AI chip race". Financial Times. February 18, 2019.
  60. ^ Amadeo, Ron (May 11, 2021). "Samsung and AMD will reportedly take on Apple's M1 SoC later this year". Ars Technica. Retrieved July 28, 2021.
  61. ^ Smith, Ryan. "The AI Race Expands: Qualcomm Reveals "Cloud AI 100" Family of Datacenter AI Inference Accelerators for 2020". www.anandtech.com. Retrieved September 28, 2021.
  62. ^ Woodie, Alex (November 1, 2021). "Cerebras Hits the Accelerator for Deep Learning Workloads". Datanami. Retrieved August 3, 2022.
  63. ^ "Cerebras launches new AI supercomputing processor with 2.6 trillion transistors". VentureBeat. April 20, 2021. Retrieved August 3, 2022.
  64. PMID 29062022.
  65. ^ "A new brain-inspired architecture could improve how computers handle data and advance AI". American Institute of Physics. October 3, 2018. Retrieved October 5, 2018.
  66. S2CID 7637801.
  67. .
  68. ^ .
  69. ^ .
  70. ^ "Photonic Chips Curb AI Training's Energy Appetite - IEEE Spectrum".
  71. ^ "NVIDIA launches the World's First Graphics Processing Unit, the GeForce 256". Archived from the original on February 27, 2016.
  72. ^ "Intel to Bring a 'VPU' Processor Unit to 14th Gen Meteor Lake Chips". PCMAG.
  73. ^ ISSN 0272-1732.
  74. ^ .
  75. .
  76. .
  77. ^ .
  78. ^ "MLU 100 intelligence accelerator card" (in Japanese). Cambricon. 2024. Retrieved January 8, 2024.
  79. ^ .
  80. ^ .
  81. . Retrieved August 24, 2023.
  82. . Retrieved November 30, 2023.
  83. .
  84. ^ .
  85. .
  86. .
  87. .
  88. OCLC 1106329050.
  89. .
  90. .
  91. ^ "Nvidia claims 'record performance' for Hopper MLPerf debut".
  92. CiteSeerX 10.1.1.7.342. Archived from the original (PDF) on June 23, 2010.
  93. ^ "Self-Driving Cars Technology & Solutions from NVIDIA Automotive". NVIDIA.
  94. ^ "movidius powers worlds most intelligent drone". March 16, 2016.
  95. ^ "Qualcomm Research brings server class machine learning to everyday devices–making them smarter [VIDEO]". October 2015.

External links