MIT Develops 3D Chip That Integrates CPU, Memory
One of the most fundamental problems in modern silicon is known as the processor-memory performance gap. The term, which has been in use for decades, refers to the long-standing tendency of CPU performance to improve faster than memory performance. Speculative prefetching, multiple levels of cache (L1, L2, L3), and a variety of software techniques all aim at the same goal: blunting the impact of this gap by creating small pools of extremely fast RAM, backed by sophisticated prediction and fetching logic that anticipates what data will be needed next.
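To make the gap concrete, here is a minimal pointer-chasing sketch in C (not from the MIT/Stanford work; the array sizes and iteration count are arbitrary illustrative choices). It times a chain of dependent loads as the working set grows from cache-sized to DRAM-sized; on a typical desktop part the cost per load climbs from a few nanoseconds to on the order of a hundred, which is precisely the penalty that caches and prefetchers exist to hide.

    /* Pointer-chasing microbenchmark: a minimal sketch of how the
     * processor-memory gap shows up in practice. Array sizes and the
     * iteration count are illustrative choices, not values from the article. */
    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Walk a randomly-permuted cyclic linked list so the hardware
     * prefetcher cannot predict the next address. */
    static double chase(size_t n_elems, size_t steps)
    {
        size_t *next = malloc(n_elems * sizeof *next);
        if (!next) { perror("malloc"); exit(1); }

        /* Build the identity permutation, then shuffle it into one big
         * random cycle (Sattolo's algorithm). rand() is crude but fine
         * for a sketch. */
        for (size_t i = 0; i < n_elems; i++) next[i] = i;
        for (size_t i = n_elems - 1; i > 0; i--) {
            size_t j = rand() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        size_t p = 0;
        for (size_t s = 0; s < steps; s++) p = next[p];  /* dependent loads */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        free(next);
        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        /* Printing p keeps the load chain live so the compiler cannot
         * discard the loop. */
        fprintf(stderr, "(checksum %zu)\n", p);
        return ns / steps;   /* average nanoseconds per dependent load */
    }

    int main(void)
    {
        /* Working-set sizes chosen to land roughly in L1, L2, L3 and DRAM
         * on a typical desktop CPU; exact cache sizes vary by part. */
        size_t kib_sizes[] = {16, 256, 4096, 262144};
        for (int i = 0; i < 4; i++) {
            size_t bytes = kib_sizes[i] * 1024;
            double ns = chase(bytes / sizeof(size_t), 10 * 1000 * 1000);
            printf("%8zu KiB working set: %6.1f ns per load\n", kib_sizes[i], ns);
        }
        return 0;
    }

Because each load depends on the previous one and the permutation is random, the prefetcher cannot run ahead, so the numbers expose raw memory latency rather than bandwidth.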
But there’s another way to attack the problem: build the CPU directly into a 3D memory structure, connect the two without any motherboard traces in between, and compute from within the RAM itself.
Now, new work from MIT claims that carbon nanotubes and resistive RAM (RRAM) could be used in concert to create 3D chips that would integrate RAM directly into the CPU structure. Nor is this a purely theoretical proof of concept; the design team at Stanford and MIT managed to build one million RRAM cells and two million carbon nanotubes into a single design. Ultra-dense vertical wires connect the various layers, at densities no motherboard could possibly reach. That kind of layering is impossible with conventional silicon, because the temperatures needed to fabricate each new layer of circuitry are high enough to damage the layers already laid down beneath it; carbon nanotube transistors and RRAM, by contrast, can be fabricated at temperatures low enough to be built directly on top of existing logic.
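To see why dense, in-stack wiring matters, consider a back-of-the-envelope model of a memory-bound kernel; all of the bandwidth and compute figures below are illustrative assumptions, not measurements from this research.

    /* Back-of-envelope model of why moving compute into the memory stack
     * helps: the runtime of a memory-bound kernel is set by how fast data
     * can cross the processor/memory boundary. All parameters below are
     * illustrative assumptions, not figures from the MIT/Stanford work. */
    #include <stdio.h>

    static double kernel_time_s(double bytes, double bw_gb_s,
                                double flops, double flop_rate_gflops)
    {
        double transfer = bytes / (bw_gb_s * 1e9);           /* seconds moving data */
        double compute  = flops / (flop_rate_gflops * 1e9);  /* seconds computing   */
        /* Assume perfect overlap: whichever path is slower sets the pace. */
        return transfer > compute ? transfer : compute;
    }

    int main(void)
    {
        double bytes = 8e9;     /* 8 GB streamed through the kernel      */
        double flops = 2e9;     /* 2 GFLOP of work: clearly memory-bound */
        double rate  = 100.0;   /* assumed 100 GFLOP/s of compute        */

        /* Assumed bandwidths: a conventional off-chip DRAM bus versus a
         * (hypothetical) dense vertical interconnect inside a 3D stack. */
        double off_chip_bw = 25.0;    /* GB/s, roughly one DDR4 channel */
        double in_stack_bw = 1000.0;  /* GB/s, illustrative only        */

        printf("off-chip memory : %.3f s\n",
               kernel_time_s(bytes, off_chip_bw, flops, rate));
        printf("in-stack memory : %.3f s\n",
               kernel_time_s(bytes, in_stack_bw, flops, rate));
        return 0;
    }

With a conventional off-chip bus, the toy kernel spends most of its time waiting on data movement; give it an interconnect an order of magnitude or two faster, as a monolithic 3D stack promises, and it becomes limited by arithmetic instead, which is exactly the bottleneck shift the researchers describe.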
Krishna Saraswat, a team member from Stanford, told MIT News that this new approach could solve multiple problems at the same time.
“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat told MIT News.
“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” says Stanford professor Subhasish Mitra. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”
Point of clarification: We’ve been talking about 3D chips for quite some time now, but there are several distinct types of 3D structure in play. First, there are 3D chips that stack multiple discrete pieces of silicon on top of each other, like a memory chip sitting directly on top of a CPU but connected to it via edge wiring. Second, there’s 3D NAND, which tips conventional 2D NAND on its side and stacks layers of flash vertically. Third, there are FinFET designs, in which a fin rises out of the transistor structure and gives it a 3D shape. Finally, you’ve got 3D chips like this one, in which RAM and CPU are combined in a single monolithic structure. All of these are “3D” to one extent or another, so it’s important to be clear about which kind of 3D is being discussed whenever the term comes up.
The problem, as always, is in the details. Neither RRAM nor carbon nanotubes are ready for widespread commercial production, and fully integrating a new 3D chip structure is a tremendous challenge in its own right. While the fundamental premise of integrating memory and logic could allow for the huge performance boosts enthusiasts have been craving for years, this isn’t a breakthrough that’s going to happen anytime soon. CMOS compatibility isn’t enough on its own: carbon nanotubes would have to hit very tight manufacturing tolerances, and RRAM would need to scale to production volumes orders of magnitude beyond what is built today (as would CNTs, for that matter, though that’s a separate problem).
Still, research like this is how we move the ball forward. It could be that in 15-20 years, we’ll be computing on 3D stacked devices built with methods similar to this one.
07/07/17