[quote=henryzz;212595]I would suspect earlier, as we have just had a breakthrough in technology (PS3s).
The difference between 2004 and 2009 isn't that great in comparison.[/quote] With respect, the PS3 is [b]not[/b] a breakthrough in technology. It is rather outdated technology. What the EPFL people have done is build a large cluster of PS3s and program the special-purpose hardware (the Cell processors) inside them. They've undoubtedly been clever, and all kudos to them, but the technology is not that impressive. Just wait until massively parallel GPU computation hits the mainstream. It seems entirely plausible that thousands of ECM curves can be run in parallel on 32-bit, GHz-class GPUs hosted in hundreds of CPUs, each of which can run several curves concurrently. Some of us are working towards that end. Paul |
[QUOTE=xilman;212614]With respect, the PS3 is [b]not[/b] a breakthrough in technology. It is rather outdated technology.
What the EPFL people have done is build a large cluster of PS3s and program the special-purpose hardware (the Cell processors) inside them. They've undoubtedly been clever, and all kudos to them, but the technology is not that impressive. Just wait until massively parallel GPU computation hits the mainstream. It seems entirely plausible that thousands of ECM curves can be run in parallel on 32-bit, GHz-class GPUs hosted in hundreds of CPUs, each of which can run several curves concurrently. Some of us are working towards that end. Paul[/QUOTE] They are also CHEAP. |
[quote=R.D. Silverman;212615]They are also CHEAP.[/quote]As are graphics cards.
Paul |
[QUOTE=xilman;212619]As are graphics cards.
Paul[/QUOTE] Sure. But graphics cards need a parent platform. |
[quote=R.D. Silverman;212624]Sure. But graphics cards need a parent platform.[/quote]Which, these days, are pretty near ubiquitous. Phones and games consoles are about the only mass-market platforms which do not (at present) have a programmable GPU.
Example: at the end of last year I bought a cheap and cheerful laptop. It came with an nVIDIA GT240M with a dedicated 1GB of RAM. It may not be up there with the Tesla systems, but it still has 48 32-bit cores running at 1.2GHz. Paul |
For problems which run well enough in parallel, GPUs are also by far the best proposition for individual resource-constrained contributors. The GPU on my test system cost $109 and is ~50x faster at NFS polynomial selection than the machine it's plugged into. Even if I could afford 50 machines to replace it, where would I put them? My basement?
I could also narrow the gap between CPU and GPU, perhaps to the point where a Core 2 system was only 5x slower, but where would I put 5 more machines? My basement? |
[QUOTE=jasonp;212642]I could also narrow the gap between CPU and GPU, perhaps to the point where a core2 system was only 5x slower, but where would I put 5 more machines? My basement?[/QUOTE]
You could put them in my basement. :smile: |
:smile::big grin:[QUOTE=jasonp;212642]For problems which run well enough in parallel, GPUs are also by far the best proposition for individual resource-constrained contributors. The GPU on my test system cost $109 and is ~50x faster at NFS polynomial selection than the machine it's plugged into.[/QUOTE]
Nice! NFS sieving runs very well in parallel. How well does your GPU perform when sieving? |
[quote=R.D. Silverman;212742]:smile::big grin:
Nice! NFS sieving runs very well in parallel. How well does your GPU perform when sieving?[/quote]My guess, and it is only a guess, is that it doesn't perform at all. Porting general-purpose code to a GPU is rarely a simple matter of recompilation. GPUs are also (by and large) superb at arithmetic and not very good at memory access. That said, porting a siever to a GPU would be very much a worthwhile exercise. Do you fancy taking on the challenge? A bunch of people here could try to give assistance. Paul |
[QUOTE=Jeff Gilchrist;212676]You could put them in my basement. :smile:[/QUOTE]
My basement has a 466MHz Alpha, a 400MHz G4 and a 1.4GHz P4, all of which work fine and are not even worth turning on. And that's just the computers that work :) [QUOTE=xilman] GPUs are also (by and large) superb at arithmetic and not very good at memory access. That said, porting a siever to a GPU would be very much a worthwhile exercise. Do you fancy taking on the challenge? A bunch of people here could try to give assistance. [/QUOTE] Algorithms that are dominated by memory latency and random byte addressing would be very hard to code on a GPU. Several have already tried to port a lattice sieve, with disappointing results. |
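The access pattern being described shows up even in a toy line sieve. This Python fragment (my own illustrative names and bounds, not taken from any real siever) accumulates log p at every position hit by a factor-base prime power; the inner loop's strided, scattered writes are precisely what maps badly onto GPU memory systems:

```python
import math

def sieve_interval(start, length, factor_base):
    """Flag the integers in [start, start+length) that factor
    completely over factor_base, by sieving log approximations."""
    acc = [0.0] * length
    for p in factor_base:
        q = p
        while q < start + length:  # sieve prime powers too
            i = (-start) % q       # first multiple of q in the interval
            while i < length:
                acc[i] += math.log(p)  # scattered write, stride q
                i += q
            q *= p
    # Positions whose accumulated logs reach the full size are smooth.
    return [start + i for i, v in enumerate(acc)
            if v >= math.log(start + i) - 1e-9]
```

`sieve_interval(1000, 1000, [2, 3, 5, 7, 11, 13])` picks out the 13-smooth numbers in [1000, 2000). A real NFS siever sieves polynomial values over a lattice, uses a byte per position, and tolerates rounding with a fudge factor, but the unpredictable strided writes are the same, and they thrash the small, coalescing-oriented memory hierarchy of a GPU.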
Is anyone working on ECM on GPUs? That's the second biggest CPU sink in factoring.
Chris K |