#1

2²·3³·19 Posts
Could an LL test be split across multiple local computers, with the goal of speeding up a single LL test for a large exponent? Is current home or small-server tech, designed to minimise latency between machines, good enough to make this feasible?

I don't know how an LL test is split across multiple cores; I guess the multiplication must be split somehow (?). Can the work be split into an arbitrary number of pieces? Is there an optimum number or size of pieces (dependent on p and/or CPU architecture, perhaps), and would the optimum piece count for large exponents be high enough to even suggest that a multi-computer LL test might be worthwhile? I realise there may be many problems with splitting the workload across cores that aren't tightly in sync, particularly for something as highly tuned and latency-sensitive as an LL test. But as I don't know anything for sure and can only guess, I thought asking might be a good idea :)
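For context, the Lucas–Lehmer test itself is a strictly sequential recurrence: each iteration squares the previous residue, so the iterations themselves cannot run in parallel and only the big squaring inside one iteration can be split. A toy version for tiny exponents (real tests square multi-million-digit numbers with FFTs, but the data dependency is the same):

```python
def lucas_lehmer(p):
    """Return True if the Mersenne number 2^p - 1 is prime (p an odd prime).

    Each iteration needs the previous residue s, so the p - 2 iterations
    cannot run in parallel; only the squaring inside an iteration can.
    """
    m = (1 << p) - 1          # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m   # the sequential dependency: s_{k+1} = s_k^2 - 2 (mod m)
    return s == 0

print(lucas_lehmer(7))   # 2^7 - 1 = 127 is prime -> True
print(lucas_lehmer(11))  # 2^11 - 1 = 2047 = 23 * 89 -> False
```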
#2

Oct 2011
7×97 Posts
#3

Feb 2012
Athens, Greece
47 Posts
You can, however, pause an LL test and move it from one computer to another, so the same test can be continued when you upgrade hardware or change computers.
#4

Apr 2012
2×5 Posts
As I understand it, an LL test can currently be run multi-core because the FFT used in the multiplication can be run multi-core, and I know iterations cannot be performed out of order or without the result of the previous iteration. In cases where an iteration can be done entirely in cache, is it right to think that any external communication (from the CPU to RAM, let alone anywhere else) would only make it slower? For iterations that cannot be done wholly in cache (do such cases exist?), could a multi-CPU setup potentially be a benefit?

(I am the OP; please excuse my ignorance.)
#5

Oct 2011
1247₈ Posts
1024K FFT on 1 core = 22.390 ms
1024K FFT on 2 cores = 13.738 ms
1024K FFT on 3 cores = 9.706 ms
1024K FFT on 4 cores = 8.489 ms

Last fiddled with by bcp19 on 2012-04-07 at 16:42
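Working those timings into speedup and per-core efficiency (my arithmetic, not from the original post) shows the scaling falling off as cores are added:

```python
# Per-iteration timings for a 1024K FFT, in milliseconds, as posted above.
times_ms = {1: 22.390, 2: 13.738, 3: 9.706, 4: 8.489}

for cores, t in times_ms.items():
    speedup = times_ms[1] / t      # relative to the 1-core run
    efficiency = speedup / cores   # fraction of ideal linear scaling
    print(f"{cores} cores: speedup {speedup:.2f}x, efficiency {efficiency:.0%}")
```

So 4 cores buy roughly a 2.6x speedup, about two-thirds of ideal scaling.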
#6

Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
7221₁₀ Posts
#7

"Åke Tilander"
Apr 2011
Sandviken, Sweden
2·283 Posts
If core #1 is set to 100%, then using AVX:
the 2nd core adds 86% of the first core's capacity
3rd: 83%
4th: 83%
5th: 80%
6th: 24%

It seems as if memory speed is a very crucial factor in how much capacity you lose when adding another core.
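Summing those marginal contributions gives the machine's total throughput in "single-core equivalents" (my arithmetic, not from the post):

```python
# Marginal capacity each added core contributes, from the figures above
# (core #1 = 1.00, then 0.86, 0.83, 0.83, 0.80, 0.24 under AVX).
marginal = [1.00, 0.86, 0.83, 0.83, 0.80, 0.24]

total = 0.0
for n, m in enumerate(marginal, start=1):
    total += m
    print(f"{n} cores: {total:.2f} core-equivalents ({total / n:.0%} of ideal)")
```

All six cores together deliver about 4.56 core-equivalents, with the sixth core adding almost nothing.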
#8

Feb 2012
3⁴×5 Posts
#9

Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
1110000110101₂ Posts
Indeed, AVX is so fast that Prime95 is now severely memory-limited. The extra cores appear relatively efficient because running fewer simultaneous tests across the system reduces total memory traffic. I suspect that if we had infinitely fast memory, the marginal efficiency of each extra core would be far lower.
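A rough idea of why a 1024K FFT is memory-bound (my back-of-the-envelope numbers, assuming one 8-byte double per FFT element, which is how a real-valued double-precision transform stores its data):

```python
fft_length = 1024 * 1024      # a "1024K" FFT has this many elements
bytes_per_element = 8         # one double-precision float each
working_set = fft_length * bytes_per_element

print(f"Working set: {working_set // 2**20} MiB")

# An 8 MiB residue is far larger than the per-core caches of 2012-era CPUs,
# so each iteration streams the whole residue through main memory one or
# more times -- which is why extra cores quickly saturate memory bandwidth.
```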
#10

"Åke Tilander"
Apr 2011
Sandviken, Sweden
1000110110₂ Posts
#11

"Åke Tilander"
Apr 2011
Sandviken, Sweden
2×283 Posts
If core #1 is set to 100% (= 112% of the first core's capacity with slower memory), then using AVX:
the 2nd core adds 92% of the first core's capacity
3rd: 90%
4th: 75%
5th: 50%
6th: 17%
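Comparing the two runs in common units (taking the slower-memory core #1 as 1.00, so this run's core #1 counts as 1.12; my arithmetic, and it assumes the earlier per-core figures in this thread were the slower-memory configuration):

```python
# Marginal per-core contributions, each relative to its own core #1.
slower = [1.00, 0.86, 0.83, 0.83, 0.80, 0.24]   # earlier figures in this thread
faster = [1.00, 0.92, 0.90, 0.75, 0.50, 0.17]   # this run; its core #1 = 1.12x the slower one

total_slower = sum(slower)          # core-equivalents, in slower-memory units
total_faster = 1.12 * sum(faster)   # rescaled into the same units

print(f"slower memory, 6 cores: {total_slower:.2f} core-equivalents")
print(f"faster memory, 6 cores: {total_faster:.2f} core-equivalents")
```

Under that assumption the faster memory yields roughly 4.75 versus 4.56 core-equivalents, a modest gain that is consistent with the test being bandwidth-bound.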
Similar Threads

Thread | Thread Starter | Forum | Replies | Last Post
does half-precision have any use for GIMPS? | ixfd64 | GPU Computing | 9 | 2017-08-05 22:12
Single vs Dual channel memory | TObject | Hardware | 5 | 2014-12-24 05:58
How to have all 4 cores working on a single number? | tech96 | Information & Answers | 5 | 2014-07-04 09:53
Why factoring is single-core designed? | otutusaus | Software | 33 | 2010-11-20 21:05
4 checkins in a single calendar month from a single computer | Gary Edstrom | Lounge | 7 | 2003-01-13 22:35