Ultra-wide HPC Memory to Reach 8 GT/s

SK Hynix was among the key developers of the original HBM memory back in 2014, and the company clearly intends to stay ahead of the market in this premium type of DRAM. On Tuesday, buried in a note about the qualification of the company's 1b nm fab process, the manufacturer mentioned for the first time that it is working on next-generation HBM3E memory, which will enable speeds of up to 8 Gbps/pin and will be available in 2024.

Current HBM3 memory from SK Hynix and other vendors supports data transfer rates of up to 6.4 Gbps/pin, so HBM3E's 8 Gbps/pin transfer rate will offer a moderate 25% bandwidth advantage over existing memory devices.

To put this in context: with a single HBM stack using a 1024-bit wide memory bus, this would give a known good stack die (KGSD) of HBM3E around 1 TB/s of bandwidth, up from 819.2 GB/s for HBM3 today. And with modern HPC-class processors using half a dozen stacks (or more), that would work out to several TB/s of bandwidth for those high-end processors.
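As a quick sanity check on the figures above, here is a short illustrative Python sketch (the helper function is our own, not anything from SK Hynix) that derives per-stack bandwidth from the per-pin data rate and the 1024-bit bus width:

```python
def stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Per-stack bandwidth in GB/s for an HBM stack with the given pin rate."""
    return pin_rate_gbps * bus_width_bits / 8  # 8 bits per byte

hbm3 = stack_bandwidth_gbs(6.4)   # 819.2 GB/s per HBM3 stack
hbm3e = stack_bandwidth_gbs(8.0)  # 1024.0 GB/s, i.e. ~1 TB/s per HBM3E stack

print(f"HBM3:  {hbm3} GB/s")
print(f"HBM3E: {hbm3e} GB/s")
print(f"Uplift: {hbm3e / hbm3 - 1:.0%}")       # the 25% advantage noted above
print(f"Six stacks: {6 * hbm3e / 1000} TB/s")  # several TB/s in aggregate
```

The same arithmetic reproduces the per-stack numbers quoted for earlier HBM generations as well (e.g. 3.6 Gbps/pin × 1024 bits ÷ 8 = 460.8 GB/s for HBM2E).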

According to the company's note, SK Hynix intends to begin sampling its HBM3E memory in the coming months, with volume production starting in 2024. The memory maker did not reveal many details about HBM3E (in fact, this is the first public mention of the product at all), so we do not know whether these devices will be drop-in compatible with existing HBM3 controllers and physical interfaces.

HBM Memory Comparison

                                 HBM3E       HBM3         HBM2E       HBM2
Max Capacity                     ?           24 GB        16 GB       8 GB
Max Bandwidth Per Pin            8 Gb/s      6.4 Gb/s     3.6 Gb/s    2.0 Gb/s
Number of DRAM ICs per Stack     ?           12           8           8
Effective Bus Width              1024-bit    1024-bit     1024-bit    1024-bit
Voltage                          ?           1.1 V        1.2 V       1.2 V
Bandwidth per Stack              ~1 TB/s     819.2 GB/s   460.8 GB/s  256 GB/s
Assuming SK Hynix's HBM3E development goes according to plan, the company should have little trouble lining up customers for the even faster memory. Particularly with demand for GPUs soaring for use in building AI training and inference systems, NVIDIA and other processor vendors are more than willing to pay a premium for the advanced memory they need to produce ever faster processors during this boom period in the industry.

SK Hynix will be producing HBM3E memory using its 1b nanometer fabrication technology (5th-generation 10nm-class node), which is currently being used to make DDR5-6400 memory chips that are set to be validated for Intel's next-generation Xeon Scalable platform. In addition, the production technology will be used to make LPDDR5T memory chips that will combine high performance with low power consumption.