High Bandwidth Memory (HBM) is a computer memory interface for 3D-stacked synchronous dynamic random-access memory (SDRAM), initially developed by Samsung, AMD and SK Hynix. It is used in CPUs, FPGAs and some supercomputers (such as the NEC SX-Aurora TSUBASA and Fujitsu A64FX). HBM achieves higher bandwidth than DDR4 or GDDR5 while using less power and in a substantially smaller form factor. This is achieved by stacking up to eight DRAM dies and an optional base die that can include buffer circuitry and test logic. The stack is usually connected to the memory controller on a GPU or CPU through a substrate, such as a silicon interposer. Alternatively, the memory die can be stacked directly on the CPU or GPU chip. Within the stack, the dies are vertically interconnected by through-silicon vias (TSVs) and microbumps. HBM technology is similar in principle to, but incompatible with, the Hybrid Memory Cube (HMC) interface developed by Micron Technology. The HBM memory bus is very wide compared with other DRAM memories such as DDR4 or GDDR5.

An HBM stack of four DRAM dies (4-Hi) has two 128-bit channels per die, for a total of eight channels and an overall width of 1024 bits. A graphics card or GPU with four 4-Hi HBM stacks would therefore have a memory bus 4096 bits wide. By comparison, the bus width of GDDR memories is 32 bits, with sixteen channels for a graphics card with a 512-bit memory interface. HBM supports up to 4 GB per package. The larger number of connections to the memory, relative to DDR4 or GDDR5, required a new method of connecting the HBM memory to the GPU (or other processor). AMD and Nvidia have both used purpose-built silicon chips, called interposers, to connect the memory and the GPU. The interposer has the added benefit of requiring the memory and processor to be physically close, shortening memory paths. However, because semiconductor device fabrication is significantly more expensive than printed circuit board manufacture, this adds cost to the final product.
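As a quick sanity check on the width figures above, the stack and card bus widths can be reproduced with a few lines of arithmetic (a minimal sketch; all constants are the ones stated in the text):

```python
# Bus-width arithmetic for HBM, as described above.
CHANNEL_WIDTH_BITS = 128   # each HBM channel is 128 bits wide
CHANNELS_PER_DIE = 2       # two 128-bit channels per die

dies_per_stack = 4                                  # a "4-Hi" stack
channels = dies_per_stack * CHANNELS_PER_DIE        # 8 channels
stack_width = channels * CHANNEL_WIDTH_BITS         # 1024 bits

stacks_per_card = 4                                 # four 4-Hi stacks
card_bus_width = stacks_per_card * stack_width      # 4096 bits

print(channels, stack_width, card_bus_width)
```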

The HBM DRAM is tightly coupled to the host compute die with a distributed interface. The interface is divided into independent channels. The channels are completely independent of one another and are not necessarily synchronous to each other. The HBM DRAM uses a wide-interface architecture to achieve high-speed, low-power operation. Each channel interface maintains a 128-bit data bus operating at double data rate (DDR). HBM supports transfer rates of 1 GT/s per pin (transferring 1 bit), yielding an overall package bandwidth of 128 GB/s. The second generation of High Bandwidth Memory, HBM2, also specifies up to eight dies per stack and doubles pin transfer rates up to 2 GT/s. Retaining 1024-bit-wide access, HBM2 is able to reach 256 GB/s of memory bandwidth per package. The HBM2 specification allows up to 8 GB per package. HBM2 is expected to be especially useful for performance-sensitive consumer applications such as virtual reality. On January 19, 2016, Samsung announced early mass production of HBM2, at up to 8 GB per stack.
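The per-package bandwidth figures above follow directly from bus width and per-pin transfer rate. A minimal sketch of that calculation (the formula, peak bandwidth = width in bits ÷ 8 × GT/s, is the standard one implied by the numbers in the text):

```python
def package_bandwidth_gbs(bus_width_bits: int, transfer_rate_gtps: float) -> float:
    """Peak package bandwidth in GB/s: each transfer moves bus_width_bits bits."""
    return bus_width_bits / 8 * transfer_rate_gtps

# 1024-bit interface at 1 GT/s per pin (first-generation HBM)
print(package_bandwidth_gbs(1024, 1.0))  # 128.0 GB/s
# Same width at 2 GT/s per pin (HBM2)
print(package_bandwidth_gbs(1024, 2.0))  # 256.0 GB/s
```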

In late 2018, JEDEC announced an update to the HBM2 specification, providing for increased bandwidth and capacities. Up to 307 GB/s per stack (2.5 Tbit/s effective data rate) is now supported in the official specification, though products operating at this speed had already been available. Additionally, the update added support for 12-Hi stacks (12 dies), making capacities of up to 24 GB per stack possible. On March 20, 2019, Samsung announced their Flashbolt HBM2E, featuring eight dies per stack and a transfer rate of 3.2 GT/s, providing a total of 16 GB and 410 GB/s per stack. On August 12, 2019, SK Hynix announced their HBM2E, featuring eight dies per stack and a transfer rate of 3.6 GT/s, providing a total of 16 GB and 460 GB/s per stack. On July 2, 2020, SK Hynix announced that mass production had begun. In October 2019, Samsung announced their 12-layer HBM2E. In late 2020, Micron revealed that the HBM2E standard would be updated, and alongside that they unveiled the next standard, known as HBMnext (later renamed HBM3).
