3D XPoint / Optane
Is this stuff going to have any relevance here? It's an order of magnitude slower than RAM and an order of magnitude faster than SSD. Where it will really shine is capacity.
Just curious. |
We're compute-limited here, so I don't think it'll have any direct impact. We mostly want fast CPUs balanced with fast RAM, and it is neither of those things.
Indirectly, the recently released consumer-level Optane SSDs have the potential to speed up general load times thanks to their much higher low-queue-depth random read rate. I estimate that for one game I play a lot, it might reduce loading times by about 30% compared to a high-end flash-based SSD. I actually tested this using a ramdisk, which after overheads I don't think will behave much differently from Optane. The other use case, as a bulk RAM substitute, is rather niche: unless you measure RAM requirements by the terabyte, it isn't going to have any relevance. |
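For what it's worth, a 30% figure like the one above can be sanity-checked with a back-of-envelope model. The sketch below is purely illustrative: the read count, QD1 IOPS figures, and fixed compute time are assumptions chosen for demonstration, not measurements of any real game or drive.

```python
# Back-of-envelope game load-time model (all numbers are illustrative
# assumptions). Loading is modelled as a fixed CPU/decompression portion
# plus N serialized random 4 KiB reads at queue depth 1.

def load_time(n_reads, qd1_iops, compute_s):
    """Total load time in seconds: compute plus serialized random reads."""
    return compute_s + n_reads / qd1_iops

N_READS = 100_000        # assumed random 4 KiB reads per load
COMPUTE = 20.0           # assumed CPU-bound portion, seconds
FLASH_IOPS = 10_000      # ~100 us QD1 latency, typical NAND SSD (assumed)
OPTANE_IOPS = 100_000    # ~10 us QD1 latency, 3D XPoint class (assumed)

flash = load_time(N_READS, FLASH_IOPS, COMPUTE)    # 30.0 s
optane = load_time(N_READS, OPTANE_IOPS, COMPUTE)  # 21.0 s
print(f"reduction: {1 - optane / flash:.0%}")      # prints "reduction: 30%"
```

The point of the model is that once the CPU-bound portion dominates, even a 10x improvement in random read rate yields only a modest overall reduction.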
oh, i'll definitely have uses for it elsewhere, for sure.
|
Any workload that is currently bottlenecked by SSD speed could be sped up; does such a workload exist for prime searching? The workload would have to use much more than a terabyte of memory (otherwise you'd use a beefy server with a terabyte of RAM), and it would need to be essentially random access (otherwise you could optimise by loading chunks of the data into RAM at a time). Outside that scenario (which I think is niche if it exists at all), I don't think Optane is going to be useful in its current form.
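The "load chunks at a time" point can be made concrete with a toy model. All the figures below are hypothetical, but they show why a chunkable access pattern removes the need for fast random storage: streaming the whole dataset sequentially into RAM, even several times over, beats pointer-chasing on an SSD.

```python
# Toy comparison: random QD1 SSD lookups vs streaming chunks into RAM.
# All performance figures are assumptions chosen for illustration.

def random_access_time(n_accesses, qd1_iops):
    """Seconds to service n dependent random reads on an SSD."""
    return n_accesses / qd1_iops

def streaming_time(data_bytes, seq_bw_bytes, n_passes):
    """Seconds to stream the whole dataset sequentially n_passes times."""
    return n_passes * data_bytes / seq_bw_bytes

DATA = 4 * 2**40        # assumed 4 TiB working set
N_ACCESSES = 10**9      # assumed one billion random lookups
FLASH_IOPS = 10_000     # assumed QD1 random read rate
SEQ_BW = 2 * 2**30      # assumed 2 GiB/s sequential bandwidth

rand_s = random_access_time(N_ACCESSES, FLASH_IOPS)   # 100,000 s
stream_s = streaming_time(DATA, SEQ_BW, n_passes=10)  # 20,480 s
print(rand_s, stream_s)
```

Under these assumptions, ten full sequential passes are still ~5x faster than the random-access approach, which is why only truly unchunkable access patterns qualify.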
|
[QUOTE=M344587487;470714]Any workload that is memory bound by SSD could be sped up, does such a workload exist for prime searching? The workload would have to use much more than a terabyte of memory (otherwise you'd use a beefy server with a terabyte of RAM), and it would need to be essentially random access (otherwise you could optimise by loading chunks of the data at a time into RAM). Other than that scenario (which I think is niche if it exists at all), I don't think optane is going to be useful in its current form.[/QUOTE]I can think of a number of scenarios where it could be useful, in some of which I have practical experience. For confidentiality reasons I can't go into any detail about my experience with [URL="https://en.wikipedia.org/wiki/Rainbow_table"]rainbow tables[/URL] and their relatives but, suffice to say, a petabyte of fast memory can be extremely useful.
Large databases (large by current RAM standards, of course) are also of great commercial significance, whether classical relational SQL systems or noSQL counterparts which have become more interesting of late. Again, I have experience with ~1TB databases which are confidently expected to grow to 10-100TB as data accumulates over the next few years. Even I, an unassuming amateur, have multiple terabytes to back up with [URL="https://en.wikipedia.org/wiki/BackupPC"]BackupPC[/URL], which uses compression and a home-grown noSQL database. It would run markedly faster with fast, random-access, nonvolatile storage. |
[QUOTE=xilman;470715]I can think of a number of scenarios where it could be useful, in some of which I have practical experience. For confidentiality reasons I can't go into any detail about my experience with [URL="https://en.wikipedia.org/wiki/Rainbow_table"]rainbow tables[/URL] and their relatives but, suffice to say, a petabyte of fast memory can be extremely useful.
Large databases (large by current RAM standards, of course) are also of great commercial significance, whether classical relational SQL systems or noSQL counterparts which have become more interesting of late. Again, I have experience with ~1TB databases which are confidently expected to grow to 10-100TB as data accumulates over the next few years. Even I, an unassuming amateur, have multiple terabytes to backup with [URL="https://en.wikipedia.org/wiki/BackupPC"]BackupPC[/URL] which uses compression and a home-grown noSQL database. It would run markedly faster with fast, random access, nonvolatile storage.[/QUOTE] Sure, but we're talking in the context of prime searching.[LIST][*]Do cases within prime searching exist now that could be sped up (likely SSD-bound searches), if so what are they?[*]Are there algorithms which exist but are not currently feasible* to compute, which would be feasible if there were a step between RAM and SSD?[*]For any such algorithm, what features would this tier have to have?[/LIST] *For whatever definition of feasible you find feasible ;) |
[QUOTE=xilman;470715]Large databases (large by current RAM standards, of course) are also of great commercial significance, whether classical relational SQL systems or noSQL counterparts which have become more interesting of late.[/QUOTE]
This seems like the obvious use-case to me as well. There are lots of databases that are either in memory at great cost, or out of memory but would love to be in memory; both would be great candidates for Optane. |
[QUOTE=M344587487;470714]Any workload that is memory bound by SSD could be sped up, does such a workload exist for prime searching? The workload would have to use much more than a terabyte of memory (otherwise you'd use a beefy server with a terabyte of RAM), and it would need to be essentially random access (otherwise you could optimise by loading chunks of the data at a time into RAM). Other than that scenario (which I think is niche if it exists at all), I don't think optane is going to be useful in its current form.[/QUOTE]Linear algebra in sieving algorithms?
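To put a rough number on that: the linear algebra step of sieving-based factorization (NFS/QS) works with a huge sparse matrix over GF(2), typically stored as the column indices of the nonzero entries. A crude footprint estimate follows; the dimension and row weight here are hypothetical guesses merely in the ballpark of large runs, not figures from any actual factorization.

```python
# Rough memory footprint of the sparse GF(2) matrix in the NFS linear
# algebra step, stored as 32-bit column indices per nonzero entry.
# Matrix dimension and average row weight are illustrative guesses.

def sparse_matrix_bytes(n_rows, avg_row_weight, index_bytes=4):
    """Bytes needed to hold the nonzero column indices of the matrix."""
    return n_rows * avg_row_weight * index_bytes

n = 200_000_000   # hypothetical matrix dimension
w = 150           # hypothetical average nonzeros per row

gib = sparse_matrix_bytes(n, w) / 2**30
print(f"{gib:.0f} GiB")   # prints "112 GiB"
```

A footprint in that range already spills past commodity RAM, though whether the access pattern of block Lanczos/Wiedemann iterations would tolerate Optane-class latency is a separate question.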
|
[QUOTE=M344587487;470716]Sure, but we're talking in the context of prime searching.[LIST][*]Do cases within prime searching exist now that could be sped up (likely SSD-bound searches), if so what are they?[*]Are there algorithms which exist but are not currently feasible* to compute, which would be feasible if there were a step between RAM and SSD?[*]For any such algorithm, what features would this tier have to have?[/LIST][/QUOTE]
I can't think of one off the top of my head, but writing code for my own searches I've definitely been in the position of having to cut a search off for lack of RAM, when I could have searched further with more memory, even relatively slow memory. Any time you're storing little bits of information in random places along large tables, you'll be in roughly this situation. |
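A familiar example of "little bits of information in random places along large tables" is a plain sieve, where the size of the table directly caps the search range. A minimal sketch, with a tiny limit for demonstration (this simple version spends one byte per number; a serious search would pack bits, skip evens, or segment the sieve, and could hypothetically back the table with Optane-class storage):

```python
# Minimal Sieve of Eratosthenes. The bytearray is the "large table" whose
# size caps the search range: at one byte per number, 1 GiB of RAM only
# reaches about 1e9; bit-packing buys 8x, after which RAM is the wall.

def sieve_primes(limit):
    """Return the list of primes <= limit."""
    is_prime = bytearray([1]) * (limit + 1)
    is_prime[0:2] = b"\x00\x00"                    # 0 and 1 are not prime
    for p in range(2, int(limit**0.5) + 1):
        if is_prime[p]:
            # Clear every multiple of p starting at p*p.
            is_prime[p*p::p] = bytearray(len(is_prime[p*p::p]))
    return [i for i, flag in enumerate(is_prime) if flag]

print(len(sieve_primes(100)))  # prints 25
```

The writes land at effectively random offsets across the whole table (for large p), which is exactly the access pattern that punishes flash SSDs and might suit a faster nonvolatile tier.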
[QUOTE=xilman;470715] For confidentiality reasons I can't go into any detail about my experience with [URL="https://en.wikipedia.org/wiki/Rainbow_table"]rainbow tables[/URL] and their relatives but, suffice to say, a petabyte of fast memory can be extremely useful.
[/QUOTE] I can already imagine what you're using it for. The NSA datacenter is <10 miles from where I am. :-p Intel is coming out with a 3D XPoint 1 PB per 2U (!!!!!!) device sometime in the near future. I can't remember how I came up with it, but I made an educated guess of about half a million dollars for that, so that's approximately a 40:1 physical-space saving and maybe a 2-5:1 cost saving over traditional storage. |
Will accessing this have to go through memory? It will be high latency, but what will the bandwidth be?
|