The first pass of this article on MacWEEK had a few mistakes and ambiguities in it, so this is an edited and revised version of that article.
Many people are wondering about RDRAM. What is it? What does it mean? Will Mac users need it? What will it do?
A few years back, a company (Rambus) created a new way to interface with memory. It was fairly fast, and tried to address some of the complex electrical and physical problems of dealing with memory. They created a proprietary solution that solves some of those problems, and they decided they would license the solution, charging a "use tax" in order to pay for their R&D.
RDRAM stands for Rambus Dynamic Random-Access Memory.
Not that this is an unreasonable business model or anything, but it is a bit more closed than some alternatives. Intel historically likes the idea of proprietary solutions and controlling the industry -- and especially likes the idea of a usage tax (licensing fee) on all memory. So Intel bought into Rambus (with some control and kickbacks likely negotiated, though the full deal has not been disclosed) -- and pushed for some protocol changes. In return, Intel started pushing RDRAM as the future of Intel-based processors and memory systems, and hopefully the future for all computers.
If RDRAM wins, and becomes popular, then Intel's investment in Rambus pays off. Either way, Intel has a vested interest in changing things on a regular basis, since this makes it hard for clone chip makers (of processors or memory controllers) to keep up. The change doesn't have to be for the better for this to work -- but it helps.
Pentiums (and Intel's future IA64 chips like the Itanium -- the processor formerly known as Merced) will be using RDRAM in their memory controllers -- so RDRAM is going to be popular. The only question is how popular. Can Intel drive the market this far? They have failed at efforts like this in the past -- so they may fail this time as well.
Needless to say, the memory industry at large isn't very enthused about quietly paying its tribute to Rambus. And there are some changes and costs in doing so -- though these will diminish over time. But the industry is not stupid -- it is willing to support RDRAM as an "also" solution, in order to cover its bases (just in case). Most of the industry, however, has teamed up to support other memory standards that are at least as good (or better, at least short term). The common alternative to RDRAM is DDR.
How these all work
RDRAM is a way to go from a slower but wider bus to a narrower, higher-speed, packetized memory network. RDRAM is a very fast bus -- up to 800 MHz. This sounds blindingly fast to tech-heads, since current memory (like SDRAM) runs at only 100 MHz. But as we should know from processors, MHz is not pure speed -- just potential.
Intel's and Rambus's literature loves to stress the speed (MHz) of RDRAM chips versus SDRAM -- as if they were both the same width. But that isn't the case. RDRAM is only 16 bits wide (or 18 if you have error correction) -- traditional buses are 64 bits wide (for now), which is 4 times wider. Intel makes it sound like RDRAM is much faster (MHz), but the truth is that because normal buses are 4 times wider to begin with, RDRAM has to be about 4 times faster in MHz just to break even. So while RDRAM sounds much faster -- it is really not much faster than other memory solutions, and may in fact be slower.
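The break-even arithmetic above can be sketched in a few lines of code (the bus widths and clock rates are the ones quoted in this article; the function name is mine, purely for illustration):

```python
def break_even_mhz(wide_bits, wide_mhz, narrow_bits):
    """Clock rate a narrow bus needs to match a wider bus's peak throughput.

    Peak throughput is just bus width (bits) times transfer rate (MHz),
    so a narrower bus must clock proportionally higher to tie.
    """
    return wide_bits * wide_mhz / narrow_bits

# A 16-bit RDRAM channel versus a 64-bit, 100 MHz SDRAM bus:
print(break_even_mhz(64, 100, 16))  # 400.0 -- 4x the SDRAM clock, just to tie
```

This is why a 4x clock advantage for RDRAM is only breaking even, not winning.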
RDRAM is starting at 4 times faster in MHz (800) -- and there are plans to go to 1600 and maybe faster. But SDRAM isn't staying the same. Regular memory is going to 133 MHz soon, probably 166 MHz in another year (and 200 MHz after that). Even more dramatic, SDRAM is going to a new technology as well -- known as DDR-SDRAM. DDR stands for Double Data Rate. DDR-SDRAM will be roughly twice as fast as regular SDRAM (double the data rate) at the same clock speed.
DDR doubles the data rate by sending data when the clock edge rises, and sending again when the clock edge falls. Normal memory sends only once per clock cycle -- half that rate. So 133 MHz DDR effectively runs at 266 MHz, which makes DDR much faster than its clock rate suggests. This is also how RDRAM works -- it isn't really 800 MHz, but 400 MHz double-clocked.
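The double-pumping arithmetic is simple enough to sketch (again, the function name is mine, not any real API):

```python
def effective_mhz(clock_mhz, transfers_per_cycle):
    """Effective transfer rate: DDR moves data on both the rising and
    falling clock edges, so it gets two transfers per cycle."""
    return clock_mhz * transfers_per_cycle

print(effective_mhz(133, 2))  # 266 -- "DDR-266" is a 133 MHz clock
print(effective_mhz(400, 2))  # 800 -- "800 MHz" RDRAM is 400 MHz double-clocked
```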
Why send only 64 bits at a time? IBM already uses up to a 256-bit bus on some of its high-end versions of the PowerPC (Power3 and Power4), and the G4 has a 128-bit internal bus, with some designs going 128 bits external. Motorola and IBM are already talking about future versions of the PowerPC going even wider (256 bits internal, and possibly external). If you double or quadruple the data width, you increase the data speed (throughput) by the same amount.
So how do they stack up? Using peak throughput (bus width times effective clock rate), roughly as follows:

    SDRAM (100 MHz):         64 bits x 100 MHz           = ~800 MB/sec
    SDRAM (133 MHz):         64 bits x 133 MHz           = ~1 GB/sec
    DDR-SDRAM (133 MHz x 2): 64 bits x 266 MHz effective = ~2.1 GB/sec
    RDRAM (400 MHz x 2):     16 bits x 800 MHz effective = ~1.6 GB/sec
This all uses pretty rough approximations -- exact performance is far more complex. But the point is that DDR-SDRAM has the potential to reach speeds faster than RDRAM for now, and it can do so more easily (with fewer design changes).
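The rough comparison above can be computed directly from each contender's bus width and effective clock, as a sanity check on the numbers quoted in this article (the function and labels are mine):

```python
def peak_mb_per_s(width_bits, clock_mhz, pumps=1):
    """Rough peak throughput in MB/s: bus width times effective clock,
    divided by 8 bits per byte. 'pumps' is transfers per clock cycle
    (2 for double-data-rate designs like DDR and RDRAM)."""
    return width_bits * clock_mhz * pumps / 8

contenders = {
    "SDRAM (100 MHz)":           peak_mb_per_s(64, 100),     #  800 MB/s
    "SDRAM (133 MHz)":           peak_mb_per_s(64, 133),     # 1064 MB/s
    "DDR-SDRAM (133 MHz x 2)":   peak_mb_per_s(64, 133, 2),  # 2128 MB/s
    "RDRAM (400 MHz x 2)":       peak_mb_per_s(16, 400, 2),  # 1600 MB/s
}
for name, mb in contenders.items():
    print(f"{name}: {mb:.0f} MB/s")
```

Note these are theoretical peaks only -- latency, protocol overhead, and real access patterns all change the picture.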
RDRAM requires a complete change in the design and manufacture of motherboards and buses. Because of RDRAM's high speed (400 MHz), chips and sockets have to be very close together, and all-new connectors had to be created. Instead of DIMMs (Dual Inline Memory Modules), which SDRAM (and DDR) uses, RDRAM uses RIMMs (Rambus Inline Memory Modules). RIMMs sound far more similar to traditional designs than they really are. Current implementations have only three RIMM sockets, and every socket not in use has to be filled with a special pass-through (empty) module. The RIMM form factor won't work well in portables, and there are other issues like heat, power, and shielding, so you are unlikely to see them in portables or low-end designs. Because RDRAM only allows three modules, and there are some speed and reliability issues, it is unlikely that RDRAM will be widely used in servers any time soon. So RDRAM can't compete in the high end, the low end, or portables -- which means that for a while we are going to have competing standards even if RDRAM does become popular in desktops.
Market versus Technology
Just because something is better doesn't mean it will win in the market. Heck, if that were true, we'd all be using Macs. So don't count RDRAM in or out just yet. It certainly looks like it will make a dent in the PC market -- and the hype and Intel-generated momentum matter. But a consortium of companies is supporting DDR for the next generation or two. Right now it is basically a battle between Intel and everyone else. Memory companies are smart enough to be making some RDRAM memories -- but they are all just hedging their bets, and probably keeping their fingers crossed against RDRAM.
Rambus (RDRAM) has a few nice concepts, but nothing in it is magic -- and there are issues in the implementation (and design). Enough "issues" have popped up that Intel has repeatedly slipped schedules in getting its memory controllers shipped (*). Intel also backed down from an RDRAM-only solution to a chipset that supports both RDRAM and SDRAM -- so even Intel is hedging its bets and is not completely convinced of RDRAM.
(*) Intel was supposed to be using RDRAM with its Camino chipset, scheduled to be released with the Pentium IIIs (January of 1999) -- it missed. In fact, it missed a few times. After many delays, the chipset (and RDRAM support) finally crept out the door in November '99.
The decisions and opinions over which choice is better are complex.
RDRAM's one strength is that it reduces the pin count on memory chips (and support chips). This may reduce packaging costs -- but it comes with the tradeoffs I mentioned (higher speeds required, licensing fees, and higher costs in cooling, power, connectors, and motherboards, which require more layers, and so on). RDRAM has advantages -- but I'm not convinced of the tradeoffs (short term). Long term, many of the concepts being used in RDRAM are not bad -- there are just more gotchas and issues to be worked out first.
SDRAM is not the only one that can go wider -- there are ways to go wider with RDRAM too. A high-end memory controller can use multiple "channels" (banks) at once. This isn't quite the same as widening the bus in a traditional design (in some ways better, in others worse), and it can add cost, complexity, and unreliability -- but it can be done, and will be.
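The multi-channel idea is just bandwidth aggregation, and can be sketched the same way as the earlier comparisons (channel count and defaults are the figures quoted in this article; the function is mine):

```python
def multi_channel_mb_per_s(channels, width_bits=16, clock_mhz=400, pumps=2):
    """Aggregate peak throughput of several RDRAM channels run in parallel.

    Defaults model a PC800 channel: 16 bits wide, 400 MHz double-clocked.
    """
    return channels * width_bits * clock_mhz * pumps / 8

print(multi_channel_mb_per_s(1))  # 1600 MB/s -- a single channel
print(multi_channel_mb_per_s(2))  # 3200 MB/s -- dual channel, ahead of DDR-266
```

Of course, the controller has to interleave and schedule across channels, which is where the added complexity (and potential unreliability) comes from.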
My opinion on RDRAM is that when it matures, it will be nice. When multiple-channel memory controllers work well, and when the premium for cutting-edge technologies evaporates, it has a lot of potential. But for now, and for the near term, there are far more conservative and cost-effective ways for mainstream machines to get the performance they need. So for the next generation or two, the short-term solution doesn't appear to be RDRAM.
Rambus is a drastic change (and should have some more dramatic returns to justify it). For now, I see nothing compelling about the technology that requires the change for mainstream machines. Certainly, the next generation or two of Macs (and PCs) could operate just fine without this forced change. However, long term these issues may change. When RDRAM controllers start allowing many independent channels, if they work around those complexities, and when the economies of scale kick in, then it might pay off to switch.
Engineering is about tradeoffs -- and what doesn't make sense in one generation of technology may be a brilliant move in another. I look forward to the next generation or two of Macs running DDR-SDRAM, maybe with a wider bus, and letting PC users be the beta testers for an unproven technology (RDRAM). If RDRAM works out and the industry shifts, then later generations of Macs can easily move to the more proprietary RDRAM-type solution (after all the gotchas have been worked out). Until then, they should probably be using the more standard solutions: more open, easier to implement, more cost effective, and maybe even faster types of memory.