AI accelerator

From Wikipedia, the free encyclopedia

An AI accelerator is a class of specialized hardware accelerator[1] or computer system[2][3] designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, internet of things, and other data-intensive or sensor-driven tasks.[4] They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2018, a typical AI integrated circuit chip contains billions of MOSFET transistors.[5] A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.

History

Computer systems have frequently complemented the CPU with special-purpose accelerators for specialized tasks, known as coprocessors. Notable application-specific hardware units include video cards for graphics, sound cards, graphics processing units and digital signal processors. As deep learning and artificial intelligence workloads rose in prominence in the 2010s, specialized hardware units were developed or adapted from existing products to accelerate these tasks.

Early attempts

First attempts such as Intel's ETANN 80170NX[6] incorporated analog circuits to compute neural functions. Another example of a chip in this category is ANNA, a neural-net CMOS accelerator developed by Yann LeCun.[7] Later, all-digital chips such as the Nestor/Intel Ni1000 followed. As early as 1993, digital signal processors were used as neural network accelerators, e.g. to accelerate optical character recognition software.[8] In the 1990s, there were also attempts to create parallel high-throughput systems for workstations aimed at various applications, including neural network simulations.[9][10][11] FPGA-based accelerators were also first explored in the 1990s for both inference[12] and training.[13]

Heterogeneous computing

Heterogeneous computing refers to incorporating a number of specialized processors in a single system, or even a single chip, each optimized for a specific type of task. Architectures such as the Cell microprocessor[14] have features significantly overlapping with AI accelerators, including support for packed low-precision arithmetic, a dataflow architecture, and prioritizing throughput over latency. The Cell microprocessor was subsequently applied to a number of tasks[15][16][17] including AI.[18][19][20]

In the 2000s, CPUs also gained increasingly wide SIMD units, driven by video and gaming workloads, as well as support for packed low-precision data types.[21] Due to the increasing performance of CPUs, they are also used for running AI workloads. CPUs are superior for DNNs with small or medium-scale parallelism, for sparse DNNs, and in low-batch-size scenarios.
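
A minimal sketch of the packed low-precision idea, assuming NumPy (the array names and the int8/int32 layout are illustrative; real CPU inference paths use vendor SIMD intrinsics such as AVX-512 VNNI rather than plain NumPy):

    # Packed low-precision arithmetic: int8 values pack four times as many
    # elements into a SIMD register as float32, at the cost of some precision.
    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.standard_normal((256, 256)).astype(np.float32)
    activations = rng.standard_normal(256).astype(np.float32)

    # Quantize both operands to int8 with simple per-tensor scales.
    w_scale = np.abs(weights).max() / 127.0
    a_scale = np.abs(activations).max() / 127.0
    weights_i8 = np.round(weights / w_scale).astype(np.int8)
    activations_i8 = np.round(activations / a_scale).astype(np.int8)

    # Accumulate in int32 to avoid overflow, then rescale back to float.
    acc = weights_i8.astype(np.int32) @ activations_i8.astype(np.int32)
    approx = acc.astype(np.float32) * (w_scale * a_scale)

    # The quantized result closely tracks the full float32 product.
    print(np.max(np.abs(approx - weights @ activations)))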

Use of GPU

Graphics processing units or GPUs are specialized hardware for the manipulation of images and calculation of local image properties. The mathematical basis of neural networks and image manipulation is similar: both are embarrassingly parallel tasks involving matrices, leading GPUs to become increasingly used for machine learning tasks.[22][23][24] As of 2016, GPUs are popular for AI work, and they continue to evolve in a direction to facilitate deep learning, both for training[25] and for inference in devices such as self-driving cars.[26] GPU developers such as Nvidia are developing additional connective capability, such as NVLink, for the kind of dataflow workloads AI benefits from.[27] As GPUs have been increasingly applied to AI acceleration, GPU manufacturers have incorporated neural network-specific hardware to further accelerate these tasks.[28][29] Tensor cores are intended to speed up the training of neural networks.[29]
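
As a brief illustration of that similarity (a sketch using NumPy; the sizes and the ReLU/brightness operations are arbitrary examples, not drawn from the cited sources), both a neural-network layer and a per-pixel image adjustment reduce to large, independent matrix and element-wise operations of the kind GPUs parallelize:

    import numpy as np

    rng = np.random.default_rng(0)

    # A fully connected layer: outputs = activation(inputs @ weights).
    inputs = rng.standard_normal((64, 1024))       # batch of 64 input vectors
    weights = rng.standard_normal((1024, 512))
    layer_out = np.maximum(inputs @ weights, 0.0)  # ReLU non-linearity

    # A brightness/contrast adjustment: the same arithmetic applied
    # independently to every pixel (embarrassingly parallel).
    image = rng.random((1080, 1920))
    adjusted = np.clip(1.2 * image + 0.05, 0.0, 1.0)

    print(layer_out.shape, adjusted.shape)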

Use of FPGAs

Deep learning frameworks are still evolving, making it hard to design custom hardware. Reconfigurable devices such as field-programmable gate arrays (FPGAs) make it easier to evolve hardware, frameworks, and software alongside each other.[30][12][13][31]

Microsoft has used FPGA chips to accelerate inference.[32]

Emergence of dedicated AI accelerator ASICs

While GPUs and FPGAs perform far better[quantify] than CPUs for AI-related tasks, a factor of up to 10 in efficiency[33][34] may be gained with a more specific design, via an application-specific integrated circuit (ASIC).[citation needed] These accelerators employ strategies such as optimized memory use[citation needed] and the use of lower-precision arithmetic to accelerate calculation and increase throughput of computation.[35][36] Low-precision floating-point formats adopted for AI acceleration include half-precision and the bfloat16 floating-point format.[37][38][39][40][41][42][43] Companies such as Google, Amazon, Apple, Facebook, AMD and Samsung are all designing their own AI ASICs.[44][45][46][47][48]
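
The appeal of bfloat16 can be seen from its bit layout: it keeps float32's sign bit and 8-bit exponent but only 7 mantissa bits, i.e. the upper 16 bits of the float32 pattern, preserving dynamic range while halving storage and memory traffic. A minimal sketch in NumPy (illustrative only; hardware and frameworks provide native bfloat16 types and proper rounding):

    import numpy as np

    def to_bfloat16(x: np.ndarray) -> np.ndarray:
        """Truncate float32 values to bfloat16 precision (round toward zero)."""
        bits = x.astype(np.float32).view(np.uint32)
        truncated = bits & np.uint32(0xFFFF0000)  # zero the low 16 mantissa bits
        return truncated.view(np.float32)

    x = np.array([3.14159265, 1e-20, 6.5e4], dtype=np.float32)
    print(to_bfloat16(x))
    # The float32 exponent range survives, but only about 3 decimal digits of
    # precision remain. IEEE half precision keeps 10 mantissa bits instead,
    # at the cost of a much smaller exponent range (overflow above ~65504).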

In-memory computing architectures

In June 2017, IBM researchers announced an architecture, in contrast to the von Neumann architecture, based on in-memory computing with phase-change memory arrays applied to temporal correlation detection, with the intention of generalizing the approach to heterogeneous computing and massively parallel systems.[49] In October 2018, IBM researchers announced an architecture based on in-memory processing and modeled on the human brain's synaptic network to accelerate deep neural networks.[50] The system is based on phase-change memory arrays.[51]

In-memory computing with analog resistive memories

In 2019, researchers from Politecnico di Milano found a way to solve systems of linear equations in a few tens of nanoseconds via a single operation. Their approach is based on in-memory computing with analog resistive memories, which achieves high time and energy efficiency by performing matrix-vector multiplication in one step using Ohm's law and Kirchhoff's law. The researchers showed that a feedback circuit with cross-point resistive memories can solve algebraic problems such as systems of linear equations, matrix eigenvectors, and differential equations in a single step. Such an approach improves computational times drastically in comparison with digital algorithms.[52]
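
In circuit terms (a standard formulation given here as background, not text taken from the paper), applying a voltage V_i to each row of a cross-point array with cell conductances G_{ij} produces, by Ohm's law in each cell and Kirchhoff's current law at each column, the column currents

    I_j = \sum_i G_{ij} V_i ,

so a single read of the array evaluates a full matrix-vector product. Placing the same array in the feedback path of operational amplifiers that force each column current to match a programmed input b_j makes the row voltages settle, up to the sign and scaling conventions of the amplifier configuration, to the solution V of \sum_i G_{ij} V_i = b_j, i.e. a linear system is solved in one analog settling step.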

Atomically thin semiconductors

In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs).[53] Such atomically thin semiconductors are considered promising for energy-efficient machine learning applications, where the same basic device structure is used for both logic operations and data storage. The authors used two-dimensional materials such as semiconducting molybdenum disulfide.[53]

Integrated photonic tensor core

In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing.[54] The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds.[54] Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications.[54]
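
For context (an illustrative sketch in NumPy, not the authors' implementation), the convolutions in question can be lowered to many independent multiply-accumulate jobs over image patches, which is the kind of workload a photonic tensor core parallelizes, for example by assigning different patches or kernels to different wavelengths of a frequency comb:

    import numpy as np

    def conv2d_as_matmul(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
        """Valid 2D correlation computed via an im2col-style patch matrix."""
        kh, kw = kernel.shape
        h, w = image.shape
        oh, ow = h - kh + 1, w - kw + 1
        # Each row is one flattened patch; each row-times-kernel product is an
        # independent multiply-accumulate job suitable for parallel hardware.
        patches = np.array([image[r:r + kh, c:c + kw].ravel()
                            for r in range(oh) for c in range(ow)])
        return (patches @ kernel.ravel()).reshape(oh, ow)

    img = np.arange(36, dtype=float).reshape(6, 6)
    edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)
    print(conv2d_as_matmul(img, edge_kernel))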

Nomenclature

As of 2016, the field is still in flux and vendors are pushing their own marketing term for what amounts to an "AI accelerator", in the hope that their designs and APIs will become the dominant design. There is no consensus on the boundary between these devices, nor the exact form they will take; however several examples clearly aim to fill this new space, with a fair amount of overlap in capabilities.

When consumer graphics accelerators emerged, the industry eventually adopted Nvidia's self-assigned term, "the GPU",[55] as the collective noun for "graphics accelerators", which had taken many forms before settling on an overall pipeline implementing a model presented by Direct3D.

Potential applications

See also

References

  1. ^ "Intel unveils Movidius Compute Stick USB AI Accelerator". July 21, 2017. Archived from the original on August 11, 2017. Retrieved August 11, 2017.
  2. ^ "Inspurs unveils GX4 AI Accelerator". June 21, 2017.
  3. ^ Wiggers, Kyle (November 6, 2019) [2019], Neural Magic raises $15 million to boost AI inferencing speed on off-the-shelf processors, archived from the original on March 6, 2020, retrieved March 14, 2020
  4. ^ "Google Designing AI Processors". Google using its own AI accelerators.
  5. ^ "13 Sextillion & Counting: The Long & Winding Road to the Most Frequently Manufactured Human Artifact in History". Computer History Museum. April 2, 2018. Retrieved July 28, 2019.
  6. ^ John C. Dvorak: Intel’s 80170 chip has the theoretical intelligence of a cockroach in PC Magazine Volume 9 Number 10 (May 1990), p. 77, [1], retrieved May 16, 2021
  7. ^ "Application of the ANNA Neural Network Chip to High-Speed Character Recognition" (PDF).
  8. ^ "convolutional neural network demo from 1993 featuring DSP32 accelerator".
  9. ^ "design of a connectionist network supercomputer".
  10. ^ "The end of general purpose computers (not)".This presentation covers a past attempt at neural net accelerators, notes the similarity to the modern SLI GPGPU processor setup, and argues that general purpose vector accelerators are the way forward (in relation to RISC-V hwacha project. Argues that NN's are just dense and sparse matrices, one of several recurring algorithms)
  11. ^ Ramacher, U.; Raab, W.; Hachmann, J.A.U.; Beichter, J.; Bruls, N.; Wesseling, M.; Sicheneder, E.; Glass, J.; Wurz, A.; Manner, R. (1995). Proceedings of 9th International Parallel Processing Symposium. pp. 774–781. CiteSeerX 10.1.1.27.6410. doi:10.1109/IPPS.1995.395862. ISBN 978-0-8186-7074-9. S2CID 16364797.
  12. ^ a b "Space Efficient Neural Net Implementation".
  13. ^ a b Gschwind, M.; Salapura, V.; Maischberger, O. (1996). "A Generic Building Block for Hopfield Neural Networks with On-Chip Learning". 1996 IEEE International Symposium on Circuits and Systems. Circuits and Systems Connecting the World. ISCAS 96. pp. 49–52. doi:10.1109/ISCAS.1996.598474. ISBN 0-7803-3073-0. S2CID 17630664.
  14. ^ Gschwind, Michael; Hofstee, H. Peter; Flachs, Brian; Hopkins, Martin; Watanabe, Yukio; Yamazaki, Takeshi (2006). "Synergistic Processing in Cell's Multicore Architecture". IEEE Micro. 26 (2): 10–24. doi:10.1109/MM.2006.41. S2CID 17834015.
  15. ^ De Fabritiis, G. (2007). "Performance of Cell processor for biomolecular simulations". Computer Physics Communications. 176 (11–12): 660–664. arXiv:physics/0611201. doi:10.1016/j.cpc.2007.02.107. S2CID 13871063.
  16. ^ Video Processing and Retrieval on Cell architecture. CiteSeerX 10.1.1.138.5133.
  17. ^ Benthin, Carsten; Wald, Ingo; Scherbaum, Michael; Friedrich, Heiko (2006). 2006 IEEE Symposium on Interactive Ray Tracing. pp. 15–23. CiteSeerX 10.1.1.67.8982. doi:10.1109/RT.2006.280210. ISBN 978-1-4244-0693-7. S2CID 1198101.
  18. ^ "Development of an artificial neural network on a heterogeneous multicore architecture to predict a successful weight loss in obese individuals" (PDF).
  19. ^ Kwon, Bomjun; Choi, Taiho; Chung, Heejin; Kim, Geonho (2008). 2008 5th IEEE Consumer Communications and Networking Conference. pp. 1030–1034. doi:10.1109/ccnc08.2007.235. ISBN 978-1-4244-1457-4. S2CID 14429828.
  20. ^ Duan, Rubing; Strey, Alfred (2008). Euro-Par 2008 – Parallel Processing. Lecture Notes in Computer Science. 5168. pp. 665–675. doi:10.1007/978-3-540-85451-7_71. ISBN 978-3-540-85450-0.
  21. ^ "Improving the performance of video with AVX". February 8, 2012.
  22. ^ "microsoft research/pixel shaders/MNIST".
  23. ^ "How GPU came to be used for general computation".
  24. ^ "ImageNet Classification with Deep Convolutional Neural Networks" (PDF).
  25. ^ "nvidia driving the development of deep learning". May 17, 2016.
  26. ^ "Nvidia introduces supercomputer for self driving cars". January 6, 2016.
  27. ^ "how nvlink will enable faster easier multi GPU computing". November 14, 2014.
  28. ^ "A Survey on Optimized Implementation of Deep Learning Models on the NVIDIA Jetson Platform", 2019
  29. ^ a b Harris, Mark (May 11, 2017). "CUDA 9 Features Revealed: Volta, Cooperative Groups and More". Retrieved August 12, 2017.
  30. ^ Sefat, Md Syadus; Aslan, Semih; Kellington, Jeffrey W; Qasem, Apan (August 2019). "Accelerating HotSpots in Deep Neural Networks on a CAPI-Based FPGA". 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS): 248–256. doi:10.1109/HPCC/SmartCity/DSS.2019.00048. ISBN 978-1-7281-2058-4. S2CID 203656070.
  31. ^ "FPGA Based Deep Learning Accelerators Take on ASICs". The Next Platform. August 23, 2016. Retrieved September 7, 2016.
  32. ^ "Project Brainwave". Microsoft Research. Retrieved June 16, 2020.
  33. ^ "Google boosts machine learning with its Tensor Processing Unit". May 19, 2016. Retrieved September 13, 2016.
  34. ^ "Chip could bring deep learning to mobile devices". www.sciencedaily.com. February 3, 2016. Retrieved September 13, 2016.
  35. ^ "Deep Learning with Limited Numerical Precision" (PDF).
  36. ^ Rastegari, Mohammad; Ordonez, Vicente; Redmon, Joseph; Farhadi, Ali (2016). "XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks". arXiv:1603.05279 [cs.CV].
  37. ^ Khari Johnson (May 23, 2018). "Intel unveils Nervana Neural Net L-1000 for accelerated AI training". VentureBeat. Retrieved May 23, 2018. ...Intel will be extending bfloat16 support across our AI product lines, including Intel Xeon processors and Intel FPGAs.
  38. ^ Michael Feldman (May 23, 2018). "Intel Lays Out New Roadmap for AI Portfolio". TOP500 Supercomputer Sites. Retrieved May 23, 2018. Intel plans to support this format across all their AI products, including the Xeon and FPGA lines
  39. ^ Lucian Armasu (May 23, 2018). "Intel To Launch Spring Crest, Its First Neural Network Processor, In 2019". Tom's Hardware. Retrieved May 23, 2018. Intel said that the NNP-L1000 would also support bfloat16, a numerical format that’s being adopted by all the ML industry players for neural networks. The company will also support bfloat16 in its FPGAs, Xeons, and other ML products. The Nervana NNP-L1000 is scheduled for release in 2019.
  40. ^ "Available TensorFlow Ops | Cloud TPU | Google Cloud". Google Cloud. Retrieved May 23, 2018. This page lists the TensorFlow Python APIs and graph operators available on Cloud TPU.
  41. ^ Elmar Haußmann (April 26, 2018). "Comparing Google's TPUv2 against Nvidia's V100 on ResNet-50". RiseML Blog. Archived from the original on April 26, 2018. Retrieved May 23, 2018. For the Cloud TPU, Google recommended we use the bfloat16 implementation from the official TPU repository with TensorFlow 1.7.0. Both the TPU and GPU implementations make use of mixed-precision computation on the respective architecture and store most tensors with half-precision.
  42. ^ Tensorflow Authors (February 28, 2018). "ResNet-50 using BFloat16 on TPU". Google. Retrieved May 23, 2018.[permanent dead link]
  43. ^ Joshua V. Dillon; Ian Langmore; Dustin Tran; Eugene Brevdo; Srinivas Vasudevan; Dave Moore; Brian Patton; Alex Alemi; Matt Hoffman; Rif A. Saurous (November 28, 2017). TensorFlow Distributions (Report). arXiv:1711.10604. Bibcode:2017arXiv171110604D. Retrieved May 23, 2018. All operations in TensorFlow Distributions are numerically stable across half, single, and double floating-point precisions (as TensorFlow dtypes: tf.bfloat16 (truncated floating point), tf.float16, tf.float32, tf.float64). Class constructors have a validate_args flag for numerical asserts
  44. ^ "Google Reveals a Powerful New AI Chip and Supercomputer". MIT Technology Review. Retrieved July 27, 2021.
  45. ^ "What to Expect From Apple's Neural Engine in the A11 Bionic SoC - ExtremeTech". www.extremetech.com. Retrieved July 27, 2021.
  46. ^ "Facebook has a new job posting calling for chip designers".
  47. ^ "Facebook joins Amazon and Google in AI chip race". www.ft.com.
  48. ^ Amadeo, Ron (May 11, 2021). "Samsung and AMD will reportedly take on Apple's M1 SoC later this year". Ars Technica. Retrieved July 28, 2021.
  49. ^ Abu Sebastian; Tomas Tuma; Nikolaos Papandreou; Manuel Le Gallo; Lukas Kull; Thomas Parnell; Evangelos Eleftheriou (2017). "Temporal correlation detection using computational phase-change memory". Nature Communications. 8 (1): 1115. arXiv:1706.00511. doi:10.1038/s41467-017-01481-9. PMC 5653661. PMID 29062022.
  50. ^ "A new brain-inspired architecture could improve how computers handle data and advance AI". American Institute of Physics. October 3, 2018. Retrieved October 5, 2018.
  51. ^ Carlos Ríos; Nathan Youngblood; Zengguang Cheng; Manuel Le Gallo; Wolfram H.P. Pernice; C. David Wright; Abu Sebastian; Harish Bhaskaran (2018). "In-memory computing on a photonic platform". arXiv:1801.06228 [cs.ET].
  52. ^ Zhong Sun; Giacomo Pedretti; Elia Ambrosi; Alessandro Bricalli; Wei Wang; Daniele Ielmini (2019). "Solving matrix equations in one step with cross-point resistive arrays". Proceedings of the National Academy of Sciences. 116 (10): 4123–4128. doi:10.1073/pnas.1815682116. PMC 6410822. PMID 30782810.
  53. ^ a b Marega, Guilherme Migliato; Zhao, Yanfei; Avsar, Ahmet; Wang, Zhenyu; Tripathi, Mukesh; Radenovic, Aleksandra; Kis, Andras (2020). "Logic-in-memory based on an atomically thin semiconductor". Nature. 587 (2): 72–77. doi:10.1038/s41586-020-2861-0. PMC 7116757. PMID 33149289.
  54. ^ a b c Feldmann, J.; Youngblood, N.; Karpov, M.; et al. (2021). "Parallel convolutional processing using an integrated photonic tensor core". Nature. 589 (2): 52–58. arXiv:2002.00281. doi:10.1038/s41586-020-03070-1. PMID 33408373. S2CID 211010976.
  55. ^ "NVIDIA launches the World's First Graphics Processing Unit, the GeForce 256".
  56. ^ "Design of a machine vision system for weed control". CiteSeerX 10.1.1.7.342. Archived (PDF) from the original on June 23, 2010. Retrieved July 29, 2021. Cite journal requires |journal= (help)
  57. ^ "Self-Driving Cars Technology & Solutions from NVIDIA Automotive". NVIDIA.
  58. ^ "movidius powers worlds most intelligent drone". March 16, 2016.
  59. ^ "Qualcomm Research brings server class machine learning to everyday devices–making them smarter [VIDEO]". October 2015.

External links
