HBM3E: Samsung introduces high-capacity memory to accelerate AI training and inference

Samsung recently unveiled its latest innovation, HBM3E 12H DRAM, showcasing advanced TC NCF technology. For enthusiasts of acronyms, this news surely sparks excitement, but for the uninitiated, here’s a breakdown of what this entails.

Firstly, let’s decode the acronyms. HBM stands for “high bandwidth memory,” and it delivers exactly what its name suggests – high bandwidth. In October, Samsung introduced HBM3E Shinebolt, an upgraded version of the third-generation HBM, boasting impressive speeds of 9.8 Gbps per pin, which translates to a remarkable 1.2 terabytes per second for the entire package.

Moving on to “12H,” this denotes the number of chips vertically stacked within each module – 12 in this instance. Stacking more chips allows for greater memory capacity within a single module, and Samsung has achieved a remarkable 36GB with its 12H configuration, a 50% increase over an 8H design. Despite the added capacity, the bandwidth remains the same 1.2 terabytes per second.
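To sanity-check those figures, here is a small back-of-the-envelope calculation. It assumes the standard 1,024-bit HBM interface per stack and 24Gb (3GB) DRAM dies – both assumptions, since the announcement doesn’t spell them out:

```python
# Rough sanity check of the HBM3E Shinebolt figures quoted above.

pin_speed_gbps = 9.8      # per-pin transfer rate, in Gbit/s
pins_per_stack = 1024     # assumption: standard 1,024-bit HBM interface

# Convert per-pin Gbit/s to whole-stack GB/s (divide by 8 bits per byte).
stack_bandwidth_gbs = pin_speed_gbps * pins_per_stack / 8
print(f"Per-stack bandwidth: ~{stack_bandwidth_gbs:.0f} GB/s (~1.2 TB/s)")

# Capacity: 12 stacked dies vs. 8, assuming 24Gb (3GB) per die.
die_capacity_gb = 3
print(f"12H: {12 * die_capacity_gb} GB, 8H: {8 * die_capacity_gb} GB "
      f"({100 * (12 - 8) / 8:.0f}% more)")
```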

Lastly, TC NCF refers to Thermal Compression Non-Conductive Film, the material layer between the stacked chips. Samsung has made significant advancements in thinning this material, reducing it to a mere 7µm. As a result, the 12H stack is roughly the same height as an 8H stack, so existing HBM packaging can still be used. The TC NCF also improves the stack’s thermal properties, aiding cooling and ultimately improving yields.

This new HBM3E 12H DRAM from Samsung holds promise for various applications, with a particular spotlight on AI. Demand for AI computing continues to surge, and it requires substantial RAM capacity. Samsung’s partnership with Nvidia, for instance, has resulted in cutting-edge designs like the Nvidia H200 Tensor Core GPU, which packs an impressive 141GB of HBM3E memory running at a staggering 4.8 terabytes per second – a significant leap beyond traditional consumer GPUs equipped with GDDR memory.
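For a sense of scale, here is a rough comparison against a typical GDDR-based consumer card; the consumer-GPU figures (a 384-bit bus running 21 Gbps GDDR6X, roughly an RTX 4090) are an illustrative assumption, not something stated in the announcement:

```python
# Illustrative comparison: H200's HBM3E bandwidth vs. a GDDR6X consumer GPU.

h200_bandwidth_tbs = 4.8          # quoted above for the H200

# Assumed consumer-card figures: 384-bit bus at 21 Gbit/s per pin (GDDR6X).
gddr_bus_bits = 384
gddr_speed_gbps = 21
gddr_bandwidth_tbs = gddr_bus_bits * gddr_speed_gbps / 8 / 1000   # ~1.0 TB/s

print(f"HBM3E advantage: ~{h200_bandwidth_tbs / gddr_bandwidth_tbs:.1f}x the bandwidth")
```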

Reports indicate that the H200 uses six 24GB HBM3E 8H modules from Micron, for a total capacity of 144GB, of which 141GB is usable. The same capacity could be reached with just four 12H modules, or 216GB with six 12H modules.

Samsung estimates that the increased capacity of its new 12H design will accelerate AI training by 34% and allow inference services to support “more than 11.5 times” the number of users. With the AI market booming, demand for accelerators like the H200 remains strong, making this a lucrative market for memory suppliers such as Micron, Samsung and SK Hynix, all vying for a share of it.
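The module math behind that comparison is simple enough to sketch out, using the capacities quoted above (24GB per 8H stack, 36GB per 12H stack):

```python
# Capacity options for a six-stack layout like the H200's.

module_8h_gb = 24    # 8H stack capacity
module_12h_gb = 36   # 12H stack capacity

print(6 * module_8h_gb)    # 144 GB raw (141 GB usable on the H200)
print(4 * module_12h_gb)   # 144 GB from only four 12H stacks
print(6 * module_12h_gb)   # 216 GB from six 12H stacks
```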

[Image Source: BNN Breaking]
