#1
May 2003
58 Posts
It seems that for a while now the ratio of first-time LL tests to DCs has been about 2. I was wondering whether there is any concern about the DCs falling further behind the first-time tests. Perhaps there could be a DC push like the one TPR did with their DC gauntlet, with the board members and such.

It also seems as though completion rates are steadily increasing despite the increase in exponent values. I was also wondering if there will be any formal push to promote GIMPS, perhaps when the new server is online or sooner. I think a great resource could be CS and Math departments at universities, as well as primary schools, which could use the hunt to promote math in general. Any ideas to promote the search?
#2
"Richard B. Woods"
Aug 2002
Wisconsin USA
2²×3×641 Posts
But DC completions will speed up slightly a few months from now, anyway. Why? Because then DC assignments will reach the range (~10M) where first-time L-L assignments were when Prime95 introduced automatic pre-LL P-1 factoring. Right now, many DC assignments are for exponents that haven't had any P-1 performed, so the DCers perform P-1 in those cases before starting the DC L-L test. Once DC assignments reach the range where almost all exponents have already had P-1 performed, DCers won't be doing the P-1s before their L-Ls.
#3
Mar 2003
New Zealand
2205₈ Posts
#4
Aug 2002
11001010₂ Posts
#5
"Richard B. Woods"
Aug 2002
Wisconsin USA
2²×3×641 Posts
In the DC assignment line

DoubleCheck=eeeeeeee,bb,w

the w field is "0" if no P-1 has yet been done, or "1" if P-1 has been performed (no matter what the B1/B2 limits were).
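For illustration, here is a minimal Python sketch of reading that flag from such a line. The dictionary field names are my own descriptive labels (not official Prime95 terms), and the exponent shown is hypothetical; real worktodo files contain more line types than this.

```python
def parse_doublecheck(line: str) -> dict:
    """Parse a 'DoubleCheck=eeeeeeee,bb,w' assignment line.

    Per the post above, the third field (w) is 0 if no P-1 has been
    done on the exponent yet, and 1 if P-1 was performed at any
    B1/B2 bounds.  Field names here are descriptive, not official.
    """
    key, _, rest = line.partition("=")
    if key != "DoubleCheck":
        raise ValueError("not a DoubleCheck line: %r" % line)
    exponent, tf_bits, p1_done = (int(f) for f in rest.split(","))
    return {"exponent": exponent, "tf_bits": tf_bits,
            "p1_done": bool(p1_done)}

# Hypothetical assignment: w = 0, so the double-checker would run
# P-1 before starting the L-L test.
work = parse_doublecheck("DoubleCheck=10000019,64,0")
```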
#6
Aug 2002
Dawn of the Dead
5×47 Posts
That gauntlet had nothing to do with accelerating DC production. I organized it, and the reason was based solely on completion times: we needed to run stats, which were much easier to calculate by exponent than by tracking progress within a test.

The contest pool held 1,000 exponents in total, and we didn't finish them all in that period. Some of the corporate farms ran on autopilot and as such didn't draw from the pool. An interesting side observation was that the mass factoring we did to prepare the pool led to the discovery of a large number of factors. I did a run of 100 P-1 tests (160 MB of RAM allocated) and found 5 factors. Of course, some team interested in boosting DC production could always challenge us to a gauntlet - but they have to visit our home to do that.
#7
Apr 2003
516 Posts
Wouldn't GIMPS be better off if double-checkers took into account the highest B2 that was ever used on a given exponent? That way, if more memory could be devoted to the P-1 test, perhaps a factor could be found that wouldn't be found otherwise.

For instance, if the original tester only used 64 MB of RAM for P-1 testing, and, 2 years later, the double-checker has 512 MB of RAM, shouldn't that RAM be used for P-1 testing *despite* the fact that P-1 has already been done? In other words, not all P-1 tests are equal...

Perhaps P-1 testing should be distributed as completely separate assignments to those computers able to use large amounts of memory. I'm sure a lot of low-memory P-1 tests get performed, and if this system were implemented, a lot of Lucas-Lehmer tests could be avoided. This would undoubtedly speed up the GIMPS project to some extent.
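The P-1 method the posts keep referring to can be sketched briefly: stage 1 finds a factor q of N whenever q−1 is a product of prime powers no larger than B1, and larger bounds (hence more memory, especially in stage 2) widen the net - which is the point being made above. Below is a toy Python version run on a small composite, not the Mersenne-optimized code Prime95 actually uses.

```python
from math import gcd

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def pminus1_stage1(n, b1):
    """Stage 1 of Pollard's p-1: return a factor of n, or None.

    Succeeds for a prime factor q of n when q - 1 is b1-smooth,
    i.e. every prime power dividing q - 1 is <= b1.
    """
    a = 3
    for q in primes_up_to(b1):
        e = 1
        while q ** (e + 1) <= b1:   # largest q**e not exceeding b1
            e += 1
        a = pow(a, q ** e, n)       # modular exponentiation
    g = gcd(a - 1, n)
    return g if 1 < g < n else None

# 1829 = 31 * 59; 31 - 1 = 2*3*5 is 5-smooth, 59 - 1 = 2*29 is not,
# so B1 = 5 pulls out the factor 31.
factor = pminus1_stage1(1829, 5)
```

Note the failure mode when bounds cover *every* factor: for 2047 = 23·89, both 22 and 88 are 11-smooth, so with B1 = 11 the gcd collapses to n itself and nothing useful is learned.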
#8
Aug 2002
2×101 Posts
Given that P-1 testing increases GIMPS throughput by only about 1% even with the most efficient P-1 bounds, I don't think there's much more to be squeezed out of it, especially considering that P-1 at the DC stage saves only one LL test.
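That 1% figure reflects a simple trade-off, which can be made concrete: spending time on P-1 pays off only while the expected LL time it saves exceeds its own run time. A back-of-the-envelope sketch - all numbers below are illustrative assumptions, not GIMPS's actual tuning:

```python
def p1_pays_off(factor_prob, tests_saved, p1_cost):
    """True if P-1 saves more expected LL time than it costs.

    factor_prob -- chance these P-1 bounds find a factor
    tests_saved -- LL tests avoided per factor found: 2 before the
                   first-time test, only 1 at the DC stage
    p1_cost     -- P-1 run time as a fraction of one LL test
    """
    return factor_prob * tests_saved > p1_cost

# Illustrative numbers: a 3% factor chance and P-1 costing 4% of an
# LL test.  The same bounds clear the bar before the first test but
# not at the DC stage, since only one test is saved there.
pre_ll = p1_pays_off(0.03, 2, 0.04)   # 0.06 expected tests saved
pre_dc = p1_pays_off(0.03, 1, 0.04)   # only 0.03 expected saved
```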
#9
Apr 2003
5 Posts
No, I'm suggesting that P-1 be done once, after trial factoring but before the first LL test, and that the P-1 factoring be assigned only to computers with large amounts of memory. If a factor is found, then two LL tests are saved. Perhaps with this implementation, more than 1% of LL tests would be avoided. By the way, where did you get that statistic? The statistic for my team is that 3.2% of all LL assignments ended in a P-1 factor before the LL testing began - and not many of the computers on my team have much memory dedicated to Prime95.

I'd imagine that if all P-1 factoring were done on computers with over a gigabyte of RAM, many more such factors could be found, and though it might not make a *huge* difference, it would help, wouldn't it? Out of curiosity, are most P-1 factors found in stage one or stage two?
#10
"Mike"
Aug 2002
201F₁₆ Posts
In my experience, stage one...
#11
P90 years forever!
Aug 2002
Yeehaw, FL
2·5³·71 Posts
Similar Threads
| Thread | Thread Starter | Forum | Replies | Last Post |
| Ok so now where can I test a huge prime besides gimps | ONeil | Information & Answers | 33 | 2018-04-21 13:55 |
| How do I test if it is a mersenne prime on GIMPS? | spkarra | Math | 21 | 2015-01-23 18:13 |
| error rates and P-1 test | drakkar67 | Prime Sierpinski Project | 9 | 2008-05-26 14:29 |
| Old Athlon 64 Promotion | E_tron | Hardware | 0 | 2007-04-19 00:44 |
| New Prime Test allows reuse exps (eg GIMPS)? | bearnol | Miscellaneous Math | 7 | 2005-10-20 13:21 |