#1
Mar 2010
43 Posts
Let's get the ball rolling on this one.
Processor: Pentium 4 3.4 GHz
tpsieve for the variable n-range: 5M p/sec
tpsieve for a single n: 71.5M p/sec
NewPGen for a single n: 86M p/sec
NewPGen for "Operation Megabit Twin": estimated to be 80 hours for 1T
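For a rough sense of what those rates mean in wall-clock time, here is a minimal sketch (purely illustrative; it assumes the p = 100e9 sieve depth mentioned later in the thread):

```python
# Rough wall-clock estimates for the rates quoted above, assuming a
# sieve depth of p = 100e9 (the depth mentioned later in the thread).

SIEVE_DEPTH_P = 100e9  # illustrative target depth

rates_p_per_sec = {
    "tpsieve, variable n-range": 5.0e6,
    "tpsieve, single n": 71.5e6,
    "NewPGen, single n": 86.0e6,
}

for name, rate in rates_p_per_sec.items():
    hours = SIEVE_DEPTH_P / rate / 3600
    print(f"{name}: ~{hours:.1f} h to reach p = 100e9")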
#2
Mar 2005
Internet; Ukraine, Kiev
11·37 Posts
CPU: Intel i5-750 (all 4 cores loaded).
tpsieve on x86_64 Linux for n=480000-485000: 108M p/sec.
#3
A Sunny Moo
Aug 2007
USA
14232₈ Posts
#4
Mar 2010
101011₂ Posts
From what I've seen, the Megabit Twin project goes through a range of k, not a range of p. So that's ~3.5M k/sec, not 3.5M p/sec.
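A quick check of that figure from the "80 hours for 1T" estimate in post #1 (a minimal sketch of the arithmetic):

```python
# 1T of k in an estimated 80 hours works out to roughly 3.5M k/sec.
k_range = 1e12
hours = 80
rate = k_range / (hours * 3600)
print(f"~{rate / 1e6:.1f}M k/sec")  # prints ~3.5M k/sec
```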
#5
A Sunny Moo
Aug 2007
USA
2×47×67 Posts
Ah, right, I see now... most of the prime search efforts I've worked with deal with relatively small ranges of k, so I'm used to an unqualified "T" referring to p, not k. Since in this project both values are of magnitudes that can reasonably be quoted in T, I would suggest that qualifiers be used in the future: for example "k=1T" instead of just 1T, leaving the latter (or better yet, "p=1T") strictly for p references.
#6
"Dave"
Sep 2005
UK
23·347 Posts
You can also calculate a rate in p/sec. We are currently sieving to p=100e9 and therefore 80 hours translates to 347k p/sec. Not very fast, but NewPGen has to break a 1T k range into almost 250 pieces until it gets to p=1e9.
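The same arithmetic over p reproduces the quoted figure (a minimal sketch):

```python
# Reaching p = 100e9 in 80 hours gives the p/sec rate quoted above.
p_depth = 100e9
hours = 80
rate = p_depth / (hours * 3600)
print(f"~{rate / 1e3:.0f}k p/sec")  # prints ~347k p/sec
```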
#7
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
6141₁₀ Posts
Quote:
I will do a test now to see roughly when.

edit: ~p=4e4 would do the trick nicely.
#8
I quite division it
"Chris"
Feb 2005
England
100000011101₂ Posts
Quote:
I was about to do some tests. So, you are suggesting sieving to just 40,000, then again to 100G, and it will fit into 485Mb? Just making sure I've got it right.
#9
May 2010
499 Posts
The default option for NewPGen is to sieve to 1G, then to 100G. I don't know whether it's possible to change it to what you were suggesting.
#10
I quite division it
"Chris"
Feb 2005
England
31×67 Posts
I meant run it once to 40,000 then manually load it again to 100G.
#11
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
3×23×89 Posts
That should work. Once each bit is sieved up to the limit set (in Options | Sieve Until on Windows), they will be combined into one file, which should in theory be small enough to fit into 485Mb. I haven't tested this, although I have done something similar before to combine early (not really this early), so I know that part works. It's the 485Mb part I am not so certain about; it depends on whether the memory usage is driven just by the number of candidates or whether it is also affected by the distance between candidates, etc.
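Whether 485Mb is enough really comes down to how the sieve state is held in memory. The sketch below is purely illustrative (it is not NewPGen's actual data layout, and the numbers are made up): it contrasts a bitmap over the whole k range, whose size depends only on the range width, with an explicit candidate list, whose size depends only on how many candidates survive.

```python
# Illustrative only -- NOT NewPGen's actual internals.
# Contrast the two memory models discussed above: a bitmap over the
# k range (size set by range width) vs. an explicit list of surviving
# candidates (size set by survivor count).

def bitmap_megabytes(k_range_width: int) -> float:
    """One bit per k in the range, regardless of how many survive."""
    return k_range_width / 8 / 1e6

def list_megabytes(survivors: int, bytes_per_entry: int = 8) -> float:
    """One fixed-size entry per surviving candidate (8 bytes assumed)."""
    return survivors * bytes_per_entry / 1e6

# Made-up example: a 4e9-wide slice of k with 1% of candidates left.
width = 4_000_000_000
survivors = width // 100

print(f"bitmap: ~{bitmap_megabytes(width):.0f} MB (independent of survivor count)")
print(f"list:   ~{list_megabytes(survivors):.0f} MB (independent of range width)")
```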