Mohammad B. In-Memory Computing Hardware Accelerators...Data-Intensive App 2023
Category:
Uploaded: 2023-09-28 10:51:02 GMT
Size: 6.83 MiB (7159055 Bytes)
Files: 1
Seeders: 0
Leechers: 0
Hash: 3A89F0FB05D640D08029B4CDBBF7CEDE9075D769

Textbook in PDF format

This book describes the state of the art of technology and research on in-memory computing hardware accelerators for data-intensive applications. The authors discuss how processing-centric computing has become insufficient to meet target requirements and how memory-centric computing may be better suited to the needs of current applications, showing readers how current and emerging memory technologies are driving a shift in the computing paradigm. The authors provide deep-dive discussions of volatile and non-volatile memory technologies, covering their basic memory cell structures and operations, different computational memory designs, and the challenges associated with them. Specific case studies and potential applications are provided, along with their current status and commercial availability.

Memory plays a crucial role in digital system design and computing, primarily by storing data and making it available to execution units as quickly as possible. Memory architecture has evolved significantly with advances in technology. In the past, computers had limited on-chip memory, and data had to be moved from main memory to the CPU. With Moore's law, the exponential growth in transistor count and the falling cost per transistor enabled the addition of on-chip memory, mostly in the form of SRAM, referred to as embedded memory. The size of embedded memory has increased with each technology generation, to the point where more than 50% of the area of most systems-on-chip is dedicated to memory. Memory architecture has centered on the memory hierarchy: smaller, faster memory sits closer to the CPU, while larger-capacity, slower memory is added further down the hierarchy.
However, bus speed and main-memory access times could not keep up with the growth in the number and speed of execution units, creating a gap between memory access and the data needs of the execution units known as the "memory wall." The role of memory architecture remains to deliver data to the execution units with the lowest possible latency, power, and cost, the highest capacity, and long retention time. No single memory technology provides all of these features. For instance, SRAM is the fastest memory but occupies a large area with six transistors per cell; DRAM is denser with one transistor and one capacitor per cell but requires refreshing and has slower access and higher power; and Flash uses a single transistor per cell but requires extra processing steps, has much slower access than SRAM and DRAM, and consumes high power, especially during write operations. Emerging technologies based on resistive rather than charge-based memory have the potential to solve some traditional memory issues but introduce new challenges such as endurance, write time, and write power.

Furthermore, emerging applications such as Artificial Intelligence (AI), genomics, and Big Data are data- and compute-intensive, requiring high-capacity memory and a large number of operations on the data. This adds to the already limited bandwidth and the high energy cost of moving data from memory to the execution unit and storing intermediate results. To address this challenge, there is a growing trend in both industry and academia toward domain-specific architectures, moving away from the traditional use of general-purpose hardware. GPUs, originally designed for graphics workloads, have proven well suited to the highly parallel computation required by AI workloads, but the power cost of data movement remains.
As a result, there is an increasing focus on developing hardware accelerators that perform specific functions efficiently, such as the multiply-and-accumulate operation used widely in AI and DSP applications, thereby promoting new computing paradigms. Some of these accelerators employ in-memory or near-memory computing (IMC or NMC) with the goal of reducing the power consumed by data movement. The aim of this book is to provide insight into IMC and NMC using both traditional (SRAM, DRAM, Flash) and emerging resistive memory technologies (memristor, PCM, and MRAM). It is worth noting that, as with memory storage, there is no universal memory that can satisfy all requirements for IMC; hence, the solution will be domain- or application-specific.

Contents:
Data-Centric Computing Paradigm Shift, and Domain-Specific Architecture and Hardware
SRAM-Based In-Memory Computing: Circuits, Functions, and Applications
In and Near-Memory Computing Using DRAM
MRAM-Based In-Memory Computing
In-Memory Computing Using Phase Change Memory
Memristor-Based In-Memory Computing
In-Memory Computing Using FLASH Memory
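The multiply-and-accumulate (MAC) operation mentioned above is the kernel that most IMC designs target. A minimal sketch in plain Python (the function name and values are illustrative, not taken from the book) shows why: every element requires one multiply and one accumulate, and on a conventional machine every operand must first be moved from memory to the execution unit, whereas IMC performs the same reduction inside the memory array.

```python
def mac(weights, activations):
    """Multiply-and-accumulate: the dot-product kernel at the heart of
    many AI and DSP workloads. Each element costs one multiply and one
    add; the dominant cost on conventional hardware is moving the
    operands from memory, which IMC/NMC designs aim to eliminate."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a  # one multiply and one accumulate per element
    return acc

# Example: a 4-element dot product
result = mac([1, 2, 3, 4], [5, 6, 7, 8])
print(result)  # 1*5 + 2*6 + 3*7 + 4*8 = 70
```

In a crossbar-style IMC array, the multiplies are realized by the cell conductances and the accumulation by current summation on the bitline, so the loop above collapses into a single analog read.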

The data comes from Pirate Bay.