Elpida boosts DRAM to SRAM speeds

Fast random data access

Elpida has introduced a pair of performance-enhancing techniques designed to reduce the time it takes DRAM chips to deliver data requested by the host desktop, notebook, server or other device.

Technique one essentially extends the bus headroom in Elpida's 1Gb chips and allows them to operate as both DDR 1 and DDR 2 parts, clocked at up to 400MHz and 800MHz, respectively. According to the company, the technology, which is implemented in the chip's circuit design and layout - it uses a common output buffer and a clever input latch circuit - has already been incorporated into its mass-production chips.
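For a rough sense of what dual-mode operation means in bandwidth terms, here's a back-of-envelope sketch. It assumes the 400MHz and 800MHz figures are effective per-pin transfer rates and that the chip has a 16-bit I/O interface - both our assumptions, not figures from Elpida's announcement.

```python
# Peak per-chip bandwidth of a dual-mode DDR 1/DDR 2 part.
# Assumptions (ours, not Elpida's): the quoted 400MHz and 800MHz are
# effective per-pin transfer rates, and the chip has a x16 interface.

IO_WIDTH_BITS = 16  # assumed I/O width of the part

def peak_bandwidth_gbps(transfers_per_sec: float, io_width_bits: int) -> float:
    """Peak bandwidth in GB/s: transfer rate times interface width in bytes."""
    return transfers_per_sec * (io_width_bits / 8) / 1e9

print(f"DDR 1 mode: {peak_bandwidth_gbps(400e6, IO_WIDTH_BITS):.1f} GB/s")  # ~0.8 GB/s
print(f"DDR 2 mode: {peak_bandwidth_gbps(800e6, IO_WIDTH_BITS):.1f} GB/s")  # ~1.6 GB/s
```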

The first technique is aimed at memory destined for high-end PCs' and servers' main memory banks. The second is geared more towards cache memory, typically of the kind found in routers and other network devices. Elpida claims it delivers random access times between a third and a tenth of those offered by conventional DRAM - comparable to those offered by fast SRAM devices.
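For context, conventional DRAM's random (full row-cycle) access time is typically somewhere in the region of 60ns - a ballpark figure of ours, not Elpida's - so the claim works out roughly as follows:

```python
# Back-of-envelope check of the "third to a tenth" claim.
# Assumption (ours, not Elpida's): conventional DRAM random access time
# of roughly 60ns.
conventional_ns = 60.0

fastest = conventional_ns / 10   # a tenth  -> 6ns
slowest = conventional_ns / 3    # a third  -> 20ns
print(f"Implied range: {fastest:.0f}ns to {slowest:.0f}ns")
```

The 6ns end of that range squares neatly with the prototype access time Elpida quotes below.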

The technology, jointly developed with Hitachi, incorporates high-speed memory arrays that use two memory cells per bit along with a high-speed data amplification method called "three-stage sensing".

The "twin-cell memory" doubles the size of the read signal and, combined with the complementary operation of the paired cells, eliminates imbalances and noise, yielding high-speed random access.
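To see why pairing the cells doubles the read signal, consider that in a conventional single-cell read only one bitline moves away from its precharge level while the reference bitline stays put, whereas complementary twin cells drive the two bitlines in opposite directions. The numbers below are purely illustrative:

```python
# Illustrative only: the actual bitline swing depends on cell and bitline
# capacitances, neither of which Elpida has disclosed.
delta_v = 0.1  # assumed bitline swing contributed by one cell, in volts

# Conventional one-cell read: accessed bitline moves by delta_v,
# reference bitline stays at its precharge level.
single_cell_differential = delta_v - 0.0

# Twin-cell read: the complementary cells push the bitline pair in
# opposite directions, doubling the differential seen by the amplifier.
twin_cell_differential = delta_v - (-delta_v)

print(single_cell_differential, twin_cell_differential)  # 0.1 V vs 0.2 V
```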

In general-purpose DRAM, signals read from memory cells go through a two-stage amplification process - first through a 'sense amplifier' and then through a 'main amplifier' - before they are output. The new Elpida method uses an ultra-high-sensitivity main amplifier configured in three stages. This, plus the increased signal strength from the twin-cell memory, enabled Elpida to ditch the sense amplifier and output data more quickly.

Elpida has already fabbed a 110nm 144Mb DRAM prototype using these techniques, achieving a random access time of under 6ns. The company reckons that, with fine tuning, it can get that figure down to 4.8ns. Moreover, an architecture that makes the chip's I/O ports independent, so they can be accessed simultaneously, yielded a data rate of 6GBps, Elpida said. ®
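Elpida didn't say how that 6GBps aggregate is divided among the ports. As a rough illustration - the port count, width and per-port transfer rate below are hypothetical, only the 6GBps total comes from Elpida - any combination that multiplies out to 6GBps would do:

```python
# Aggregate bandwidth from independent, simultaneously accessible I/O ports.
# Port count, width and transfer rate here are hypothetical; Elpida quoted
# only the 6GBps aggregate figure.

def aggregate_gbps(ports: int, width_bits: int, transfers_per_sec: float) -> float:
    """Aggregate bandwidth in GB/s when every port is driven at once."""
    return ports * (width_bits / 8) * transfers_per_sec / 1e9

# For example, four independent 18-bit ports at roughly 667 MT/s each:
print(f"{aggregate_gbps(4, 18, 667e6):.1f} GB/s")  # ~6.0 GB/s
```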

Related stories

Elpida, Micron ask Japan to take Hynix to task
Rambus offers DDR controller cores
Hynix leads Q1 DRAM sales charge
Judge throws out FTC case against Rambus
Infineon preps 120m R&D fab expansion plan
