mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Data (https://www.mersenneforum.org/forumdisplay.php?f=21)

 GP2 2003-09-27 10:10

Exponents that haven't had a P-1 test done

First column is Meg range (for instance, 6 = 6,000,000 - 6,999,999).

Second column is the number of exponents in that range for which 2 matching LL tests were done with no P-1 factoring ever having been done for that exponent.

[code]
0 0
1 0
2 15152
3 24580
4 18831
5 9243
6 4170
7 1916
8 1454
9 2754
10 140
11 23
12 8
13 6
14 2
15 8
16 3
17 2
18 3
19 1
[/code]
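For reference, counts like these can be reproduced by cross-referencing the status files (e.g. the list of double-checked exponents against PMINUS1.TXT). The sketch below is a minimal illustration with made-up exponents and pre-parsed inputs; the actual GIMPS file formats and the parsing step are left out.

```python
from collections import Counter

def meg_range_counts(double_checked, pminus1_done):
    """Count exponents per Meg range that have two matching LL
    tests on record but no P-1 attempt."""
    missing = set(double_checked) - set(pminus1_done)
    return Counter(e // 1_000_000 for e in missing)

# Tiny illustrative data (not real GIMPS exponents):
dc = [2_100_001, 2_500_003, 3_000_017, 9_200_011]
p1 = [2_500_003]
counts = meg_range_counts(dc, p1)
for meg in sorted(counts):
    print(meg, counts[meg])
```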

How do we interpret these results?

At low ranges (2M - 4M), there are a lot. That's because P-1 wasn't added to Prime95 until fairly recent versions, so old exponents got two matching LL tests and that was all.

At very low ranges (0M - 1M) however, the number drops to zero, because someone is systematically P-1 trial-factoring all those old small exponents and they've gotten up to about 2.4M.

At higher ranges (5M - 8M) the numbers drop steadily because P-1 factoring got added to Prime95 and the chances are reasonable that at least one of the two computers involved had enough memory to do a P-1 test. Still, thousands of exponents never got a P-1 test done.

Finally at the highest ranges (10M +) the numbers are low because most exponents simply haven't been double-checked yet. The leading edge of double-checking is currently sweeping past 10.2M. If every single exponent got a P-1 test before a second LL test was performed, those numbers would stay permanently low and a few dozen new factors would be found in each Meg range, assuming a 3% or so chance of finding a P-1 factor.
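The "few dozen factors" figure is simple expected-value arithmetic; the ~3% success rate is the assumption stated above, not a measured value:

```python
def expected_factors(num_exponents, success_rate=0.03):
    """Expected number of P-1 factors if every exponent in a range
    got a P-1 test, at an assumed ~3% per-exponent success rate."""
    return num_exponents * success_rate

# e.g. a range with 1,500 exponents still awaiting double-checks:
print(round(expected_factors(1500)))  # 45, i.e. "a few dozen"
```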

I'm not sure why the count picks up sharply in the 9M range after steadily declining. Any ideas?

 Xyzzy 2003-09-27 10:15

[QUOTE][i]Originally posted by GP2 [/i]
[B]At very low ranges (0M - 1M) however, the number drops to zero, because someone is systematically P-1 trial-factoring all those old small exponents and they've gotten up to about 2.4M. [/B][/QUOTE]Over the last month I did several thousand of these (I think up to around 1.5 million or so)... I chose ones that had never had P-1 stage 2 testing done... But I'm not sure if the work I did is reflected in the status files yet...

 GP2 2003-09-27 10:25

First column is Meg range (for instance, 6 = 6,000,000 - 6,999,999).

Second column is the number of exponents
in that range for which at least one LL test was done, but not 2 matching LL tests, and with no P-1 factoring ever having been done for that exponent.

[code]
0 0
1 0
2 0
3 0
4 0
5 0
6 0
7 2
8 79
9 1507
10 9118
11 5972
12 4122
13 1880
14 1187
15 1062
16 1044
17 1054
18 1053
19 1069
[/code]

How do we interpret these results?

At low ranges (0M - 7M), just about every exponent has been double-checked, so the numbers are zero.

The numbers then rise sharply, peaking at 10M (not sure why). Of course, many of the machines that perform the double-checks will have enough memory to do P-1 trial-factoring before going ahead with the LL double-check. But judging by past history some won't, and thousands of exponents will never get a P-1 test done.

From 15M-19M the numbers decline to a plateau. I'm not sure why. Maybe it's because only modern machines are fast enough to test exponents in that range, and such machines are more likely to have plenty of memory (required for P-1 testing) and also more likely to have a recent version of Prime95 installed (since P-1 trial-factoring was only introduced in fairly recent versions of Prime95).

If P-1 testing could be organized to get through the hump between 10M-13M, then after that it would be fairly easy to ensure that P-1 trial-factoring always kept ahead of the leading edge of double-checking (in the "plateau" region).

 GP2 2003-09-27 10:36

Re: Re: Exponents that haven't had a P-1 test done

[QUOTE][i]Originally posted by Xyzzy [/i]
[B]Over the last month I did several thousand of these (I think up to around 1.5 million or so)... I chose ones that had never had P-1 stage 2 testing done... But I'm not sure if the work I did is reflected in the status files yet... [/B][/QUOTE]

Well, I did a few P-1 tests and submitted them through the manual form at [url]http://www.mersenne.org/ips/manualtests.html[/url] and the data was always reflected in the next weekly-or-so version of PMINUS1.TXT (or FACTORS.CMP in the cases where a P-1 factor was found). But I only did P-1 testing of exponents that had never had any P-1 test done, however small the bounds. I never redid an old P-1 test with larger bounds. Still, that shouldn't be a problem.

Instead of working through the old exponents, though, it would benefit GIMPS more to do P-1 testing just ahead of the leading edge of double-checking, because this can save redundant LL double-checks by low-memory machines. If we can keep ahead of the leading edge of double-checking, then no exponent will ever again have 2 LL tests done with no P-1 test having been done.

As mentioned in my previous message, there's a smooth plateau at 14M+ where it will be very easy to ensure that P-1 trial-factoring keeps ahead of the leading edge of double-checking. But there's a fairly big hump at 10-11M, which it would be useful to tackle. Once past that, there's plenty of opportunity to go back and systematically work through the 2M range again.

 Xyzzy 2003-09-27 10:39

[QUOTE][i]Originally posted by GP2 [/i]
[B]Instead of working through the old exponents, though, it would benefit GIMPS more to do P-1 testing just ahead of the leading edge of double-checking, because this can save redundant LL double-checks by low-memory machines. If we can keep ahead of the leading edge of double-checking, then no exponent will ever again have 2 LL tests done with no P-1 test having been done.[/B][/QUOTE]I understand that now and I will stop doing the small ones... (They are so much fun because they are like 3 minutes each!)

:smile:

 smh 2003-09-27 16:50

How can I get a list of exponents that haven't had any P-1 testing at all?

As long as it doesn't interfere with PrimeNet, it could be a side project of the LMH.

BTW, a lot of the exponents that have had P-1 have very low bounds, which give a very low chance of finding a factor.
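The bounds matter because P-1 stage 1 can only find a prime factor q of N when every prime power dividing q-1 is at most B1. A minimal sketch of stage 1 (a toy illustration using made-up helper names, not Prime95's implementation):

```python
from math import gcd

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = b"\x00" * len(sieve[i*i::i])
    return [i for i in range(limit + 1) if sieve[i]]

def pminus1_stage1(n, B1, a=3):
    """Pollard P-1 stage 1: returns a nontrivial factor of n, or
    None. It can only find prime factors q whose q-1 is made up
    entirely of prime powers <= B1."""
    for q in primes_up_to(B1):
        qk = q
        while qk * q <= B1:     # largest power of q not exceeding B1
            qk *= q
        a = pow(a, qk, n)       # a <- a^(q^k) mod n
    g = gcd(a - 1, n)
    return g if 1 < g < n else None

# 2521 is prime with 2521 - 1 = 2^3 * 3^2 * 5 * 7, so B1 = 10 suffices;
# 1223 is prime with 1223 - 1 = 2 * 13 * 47, out of reach at B1 = 10.
print(pminus1_stage1(2521 * 1223, 10))  # 2521
```

Raising B1 (and adding a stage 2 bound B2, which allows one extra prime factor of q-1 up to B2) widens the set of factors reachable, which is why a P-1 run with very low bounds says little about whether a factor exists.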

 NickGlover 2003-09-27 17:44

[QUOTE][i]Originally posted by GP2 [/i]

How do we interpret these results?

At low ranges (0M - 7M), just about every exponent has been double-checked, so the numbers are zero.

The numbers then rise sharply, peaking at 10M (not sure why). [/QUOTE]

The reason that so many exponents have not had P-1 done in the 10M range is that GIMPS was doing first-time tests in that range around the time the first P-1-capable version of Prime95 was released. A majority of GIMPS users do not upgrade immediately to new versions, so it took a while before enough people had P-1-capable clients to do P-1 on almost all of the first-time exponents.

If the 9M range had not been mostly double-checked already, it would have even more exponents without P-1 than 10M because most of the 9M exponents were already handed out before the P-1 capable client was available.

 NickGlover 2003-09-27 18:10

Re: Exponents that haven't had a P-1 test done

[QUOTE][i]Originally posted by GP2 [/i]

[B]I'm not sure why the count [of double-checked exponents with no P-1 done] picks up sharply in the 9M range after steadily declining. Any ideas?[/B][/QUOTE]

I figured this out. I think it is because of TempleU-CAS (combined with TempleU-DI in George's files). Earlier this year, TempleU-CAS ramped up his production by putting Prime95 on lots of P4s in computer labs on the Temple University campus. He puts these P4s (which all have the same computer name: FL-SLE) all on double-checks. This occurred right about when GIMPS started handing out 9M double-checks.

Being currently the third highest LL (double-checks and first-time tests) producer (see [url]http://www.teamprimerib.com/rr1/topover.htm[/url]), with almost all of his computing power focused on double-checks, he ends up doing a sizable percentage of the double-checks that are completed. I've noticed that his computers don't seem to do P-1 very often, which probably means he has intentionally turned it off because it doesn't give credit proportional to the amount of work done. So, I think the larger number of exponents in the 9M's without P-1 is due to TempleU-CAS not doing P-1 while completing a sizable percentage of the double-checks. He will probably have a similar effect on the 10M range (though it won't be as noticeable as in the 9M range).

 NickGlover 2003-09-27 19:28

Re: Re: Exponents that haven't had a P-1 test done

[QUOTE][i]Originally posted by NickGlover [/i]
[B]I've noticed that his computers don't seem to do P-1 very often, which probably means he has intentionally turned it off because it doesn't give credit proportional to the amount of work done.[/B][/QUOTE]

This was just my initial guess about the motives of the person running the TempleU-CAS account. He also might not be running P-1 because he fears the high memory usage would interfere with the University students' use of the computers.

 GP2 2003-09-27 20:07

[QUOTE][i]Originally posted by smh [/i]
[B]How can i get a list of exponents that haven't had any p-1 testing at all?

As long as it doesn't interfere with primenet it could be a side project of the LMH

BTW, a lot of the exponents that have had P-1 have verry low bounds which have a very low chance of finding a factor. [/B][/QUOTE]

I can supply you with the list, but the complete list is very long. As you can see from the earlier posts, there are tens of thousands.

If you want to try to do P-1 trial-factoring just ahead of the leading edge of double-checking, see the [url=http://www.mersenneforum.org/forumdisplay.php?s=&forumid=30]Marin's Mersenne-aries forum[/url]. If you want some other range, let me know.

I could also come up with a list of P-1 tests with very low bounds.

 Complex33 2003-09-27 20:32

Another suggestion for those interested in P-1 testing, and who have the means to do it, would be to use PrimeNet to request a block of DCs and turn the SequentialWorkToDo switch on in prime.ini. It will go through and P-1 the exponents that need it and return the results to the server. When the batch is done, just unreserve the lot, wait a while for them to be assigned to others, and repeat. This has worked for me in the past when I wanted to get some coveted double-check factors, plus it clears the way for the generally older DC machines to concentrate on LL iterations.
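For anyone trying this, a minimal prime.ini fragment might look like the following. The option name SequentialWorkToDo is taken from Prime95's undoc.txt; the value shown (0, meaning the client does not work strictly in worktodo order, so the P-1/factoring steps of queued assignments can run first) is an assumption to verify against the documentation for your client version.

```ini
SequentialWorkToDo=0
```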

Gratuitous dancing :banana:
