Which type of RAM is faster?
Which type of RAM is faster for GIMPS, RDRAM or DDR RAM?
EDIT: Added question mark. |
Currently, from slowest to fastest: DDR266, DDR333, PC800 (dual-channel RDRAM), DDR400, and PC1066 (dual-channel RDRAM). When DDR goes dual-channel I believe it will be faster than RDRAM, provided you have at least DDR333.
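For reference, that ordering follows directly from peak theoretical bandwidth (bus width times transfer rate times channels). A quick sketch using the standard published figures; real-world throughput is lower:

```python
# Peak theoretical bandwidth = bus width (bytes) * transfers/s * channels.
# Figures are the standard published peaks, not measured throughput.
modules = {
    "DDR266 (PC2100)":     (64, 266e6, 1),
    "DDR333 (PC2700)":     (64, 333e6, 1),
    "PC800 RDRAM (dual)":  (16, 800e6, 2),
    "DDR400 (PC3200)":     (64, 400e6, 1),
    "PC1066 RDRAM (dual)": (16, 1066e6, 2),
}
bandwidth = {name: bits / 8 * rate * channels / 1e9
             for name, (bits, rate, channels) in modules.items()}
for name, gbps in bandwidth.items():
    print(f"{name:21s} {gbps:5.2f} GB/s")
```

Note that dual-channel PC800 and DDR400 both peak at 3.2 GB/s, so their relative ranking comes down to latency and access patterns rather than raw bandwidth.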
|
I do not think the picture is so clear, given the array of different memory timing choices, BIOS settings, and most of all chipsets.
For example, consider the [url=http://www.supermicro.com/PRODUCT/MotherBoards/GC_HE/P4QH6.htm]SuperMicro P4QH6 Motherboard[/url]. Although it only uses DDR200 memory, the ServerWorks GC-HE chipset provides 4-way _bank_ interleaving, i.e. memory must be populated 4 sticks at a time, to provide what I expect would be extraordinary memory bandwidth. But then again this is a Quad Xeon board with 32 DIMM sockets and a maximum of 32GB of RAM. I do not have such a motherboard, but I would expect it to have more memory bandwidth than DDR266 :) Then again it also has 4 CPUs, so maybe not, depending on the configuration and how you measure it. |
[quote]When DDR goes dual-channel I believe it will be faster than RDRAM, provided you have at least DDR333.[/quote]
I was purely stating what is currently available on the desktop. When you add more channels to a DRAM-based memory system, your bandwidth increases significantly, without the latency penalty you incur with RDRAM. That is why you do not see RDRAM in large-format memory systems (other than the dual Xeon boards from Intel, and those are becoming scarce as it is, with the ServerWorks DDR chipsets becoming more widespread). |
[quote]Although it only uses DDR200 memory, the ServerWorks GC-HE chipset provides 4-way _bank_ interleaving, i.e. memory must be populated 4 sticks at a time[/quote]
4-way [b]BANK[/b] interleave doesn't mean you have to install memory in blocks of 4 sticks, nor does it mean the memory is running in 4 channels. In fact it doesn't even mean anything close to that. The BANK in a 'bank interleave' is not a bank of memory sticks; it is a BANK within the memory chip itself. You can enable 4-bank interleave on any motherboard (provided it has the BIOS option), with memory that supports it and any number of sticks installed. Nearly all KT266/266A/333 motherboards have this option, and you will see that only 1 stick is needed for the 4-bank interleave to actually take place and for performance to be gained. |
[quote]Four-way memory bank interleaving provides outstanding memory performance. (Memory modules must be equally populated on four banks at a time shown as above.)
[/quote] 4-way memory [b]BANK[/b] interleaving means precisely that. 4-way memory interleaving is what you are thinking of. |
I am sorry, but it doesn't.
BANK here doesn't mean a memory bank; it means a bank within the memory chips themselves. You can run 4-way BANK interleave on a SINGLE memory module. Try it yourself and you will see a performance gain with it ON. |
If what you are are referring to were correct, a motherboard would have to have a 256-bit memory interface to the chipset, which doesn't exist for now.
All DDR memory interfaces are currently 64-bit, with 1 or 2 exceptions that have 128 bits (nForce being one). You have just got confused by the meaning of the word BANK, as do 99% of people. When people say BANK they think of the banks on the memory module, because that is the only use of BANK they have heard. But I can ASSURE you that BANK INTERLEAVE means the banks INSIDE EACH of the memory chips, and NOT a whole memory bank on a memory stick. |
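To illustrate the distinction, here is a toy model (purely illustrative, not any vendor's actual address mapping) of how chip-internal bank interleaving spreads consecutive cache lines across the banks inside the DRAM chips of a single module:

```python
# Toy model of 4-way bank interleave *inside* the DRAM chips: consecutive
# cache lines map to different internal banks, so while one bank is busy
# with its activate/precharge cycle the next access can already start.
CACHE_LINE = 64   # bytes per cache line
NUM_BANKS = 4     # internal banks per chip (4-way interleave)

def internal_bank(addr: int) -> int:
    """Low-order interleave: bank = cache-line index modulo bank count."""
    return (addr // CACHE_LINE) % NUM_BANKS

# A sequential read stream rotates through all four banks on ONE module:
banks = [internal_bank(a) for a in range(0, 8 * CACHE_LINE, CACHE_LINE)]
print(banks)  # [0, 1, 2, 3, 0, 1, 2, 3]
```

This works with a single stick installed, which is the point being made here: the interleave happens inside each chip, not across modules.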
[quote="xtreme2k"][quote]Although it only uses DDR200 memory, the ServerWorks GC-HE chipset provides 4-way _bank_ interleaving, i.e. memory must be populated 4 sticks at a time[/quote]
4-way [b]BANK[/b] interleave doesn't mean you have to install memory in blocks of 4 sticks, nor does it mean the memory is running in 4 channels. In fact it doesn't even mean anything close to that.[/quote] The word "bank" is often used in multiple contexts. I truly mean 4-way interleaving between DIMMs. See SuperMicro's description of the memory riser board for the SuperMicro motherboard I mentioned above: [url]http://www.supermicro.com/product/superserver/mec2.htm[/url] Let me be clear: this motherboard does four-way DIMM interleaving, as it has FOUR DDR200 memory channels. [quote="xtreme2k"]The BANK in a 'bank interleave' is not a bank of memory sticks; it is a BANK within the memory chip itself. You can enable 4-bank interleave on any motherboard (provided it has the BIOS option), with memory that supports it and any number of sticks installed. Nearly all KT266/266A/333 motherboards have this option, and you will see that only 1 stick is needed for the 4-bank interleave to actually take place and for performance to be gained.[/quote] I know precisely what you are discussing; nevertheless, this motherboard uses the ServerWorks GC-HE chipset and is designed for [b]QUAD Xeons[/b]. I would be careful when attempting to draw generalizations based on VIA desktop chipsets ;) |
Have you ever worked with a ServerWorks chipset?
The Intel 840, 850, 860, nForce, and most 'newer' ServerWorks chipsets use a banked memory technology (the physical topology uses more than one physical bank of memory, meaning a separate set of control, address, and data lines). This is similar to the old memory banks of 72-pin SIMM modules, which were paired in order to increase the usable bandwidth without the added latency incurred with larger-footprint modules. Current DRAM technologies do have an internal bank structure, but in this case we are not talking about the internal structure of the RAMs. The physical DIMMs are located in separate banks in order to increase the amount of available memory without the massive latency increase that would be incurred by enlarging the addressable space of a physical module. By searching 4 separate memory sets simultaneously, you can reduce the latency of the system by the number of physical banks (the interleave). This also allows the processors to use more than their physical address space, which is 4096MB (the limit of 32 bits), and instead use the virtual address space, controlled by the chipset and not the CPU cache. [quote]Bank: Referring to memory slots, a bank is the smallest amount of memory that can be addressed by the processor at one time. When installing or upgrading memory, the instructions or documentation may refer to a bank.[/quote] |
[quote="xtreme2k"]I am sorry, but it doesn't.
BANK here doesn't mean a memory bank; it means a bank within the memory chips themselves. You can run 4-way BANK interleave on a SINGLE memory module. Try it yourself and you will see a performance gain with it ON.[/quote] I am aware of this. In fact, all of my systems whose BIOSes offer this feature have it turned on. Sorry for the terminology problem. Read the links I posted above; what I said is correct. |
[quote="xtreme2k"]If what you are referring to were correct, a motherboard would have to have a 256-bit memory interface to the chipset, which doesn't exist for now.[/quote]
If you do not believe [url=http://www.supermicro.com]SuperMicro[/url], then read ServerWorks's own web page: [url]http://www.rccorp.com/products/matrix.html[/url]. Refer to the GC-HE column. It does have a 256-bit memory interface. |
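The arithmetic behind that claim is straightforward: four independent 64-bit DDR200 channels make a 256-bit aggregate interface, and four times the bandwidth of a single channel. A quick check, using the standard PC1600 figures (peak, not measured):

```python
# Four-way DIMM interleave on the GC-HE: four independent 64-bit DDR200
# channels accessed in parallel (standard DDR200/PC1600 spec figures).
CHANNELS = 4
BUS_BITS = 64        # bus width per channel
TRANSFERS = 200e6    # DDR200: 200 million transfers per second

total_bus_bits = CHANNELS * BUS_BITS
peak_gbps = CHANNELS * BUS_BITS / 8 * TRANSFERS / 1e9

print(total_bus_bits)  # 256-bit aggregate memory interface
print(peak_gbps)       # 6.4 GB/s peak, four times a single DDR200 channel
```

That 6.4 GB/s peak is why a board on "slow" DDR200 can out-bandwidth a desktop board running much faster individual modules.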
[QUOTE=jeff8765;650]Which type of RAM is faster for GIMPS, RDRAM or DDR RAM?
EDIT: Added question mark.[/QUOTE] The answer is not so simple, because it depends on the job, and you also need to take cache usage into account. It is all one big bandwidth problem, though there are special tricks that complicate it. If the job is just fetching a couple of bytes from a random spot, the answer might surprise you: DDR3, for example, can quickly deliver 32 bytes and then abort the rest of the 64-byte cacheline.

In general, though, more memory channels is better. However, there is a limit, and some weird behavior. Take for example the quad-channel DDR4 Intel Xeon CPU I have here, a 2699 v4 (sure, I have ES versions; they were much cheaper some time ago). Many cheap motherboards for the local Chinese market, from around 50 dollars a piece, have 2 channels on paper, yet they use a trick whereby 2 DIMMs cooperate somehow, so real-world performance is close to 4 channels. 4x the bandwidth of a single DIMM is of course nice to have. One would then guess that a higher-end chip with 4 channels plus 2 cooperating DIMMs, effectively close to 8 channels (though officially 4), would be magnificently better. The problem is that the maximum bandwidth the CPU can handle is just 10% above 4 channels, so the additional 4 DIMMs, delivering in theory a factor of 2 more bandwidth, do not help. So what you seek is: memory bandwidth x number_of_channels x dimms_cooperating >= bandwidth the CPU can handle.

Then there is of course registered ECC memory; we gladly pay the price of the extra clock that registered memory eats compared to unbuffered. For GPUs there was another problem some years ago: GDDR5 itself already has a CRC to check the validity of the data, yet for sysadmins it is not such an easy feature to use. ECC, on the other hand, is a huge problem for hardware designers to add to a chip. We would all like to have it, yet it is a big burden on the performance of the system.
So in short, non-ECC unbuffered memory is always going to be faster. Yet I would never accept anything other than ECC for an HPC system that a government might build. There is a paper reality of how research should be done, and there is practice. In practice, 99% of researchers are not as clever as me and not as critical of the results they spit out; showing a result outweighs anything else. So for any calculation they do, hardware safeguards such as ECC should be there to prevent that sort of error from happening. Yet it is a big performance burden and a large extra price. |
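The rule of thumb at the end of that post can be sketched as follows; the DDR4-2400 per-DIMM figure and the controller-limit estimate are illustrative assumptions taken from the post, not vendor specifications:

```python
# Effective bandwidth is capped by what the CPU's memory controller can
# absorb, no matter how much the DIMMs can supply in aggregate.
def delivered_gbps(dimm_gbps, channels, dimms_cooperating, cpu_limit_gbps):
    """Supplied bandwidth scales with channels and cooperating DIMMs,
    but is clamped to the CPU-side limit."""
    supplied = dimm_gbps * channels * dimms_cooperating
    return min(supplied, cpu_limit_gbps)

DIMM = 19.2                  # DDR4-2400: ~19.2 GB/s peak per DIMM
CPU_LIMIT = 19.2 * 4 * 1.10  # assume ~10% above four channels, per the post

four_ch = delivered_gbps(DIMM, 4, 1, CPU_LIMIT)   # 76.8 GB/s, under the cap
eight_eq = delivered_gbps(DIMM, 4, 2, CPU_LIMIT)  # capped near 84.5, not 153.6
print(four_ch, eight_eq)
```

Doubling the cooperating DIMMs doubles the supplied bandwidth on paper, but the delivered figure barely moves once the CPU-side cap is reached, which is the post's point.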
Ouch - this was a reply to a post from 21 years ago :)
|
Better late than never, I always say. :smile:
|
The answer was given in post #3. Responses after that can be safely ignored.
|
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2023, Jelsoft Enterprises Ltd.