P-1 Question
1) chalsall, how hard would it be for you to find those exponents which have only had Stage 1 of P-1, and reassign them to the 'No P-1' Group?
2) What do people think about making this recategorization, as in, do people not care, actively dislike it, or want to actively get rid of those exponents, i.e. get them some stage 2?
[QUOTE=Dubslow;284980]1) chalsall, how hard would it be for you to find those exponents which have only had Stage 1 of P-1, and reassign them to the 'No P-1' Group?
2) What do people think about making this recategorization, as in, do people not care, actively dislike it, or want to actively get rid of those exponents, i.e. get them some stage 2?[/QUOTE] I take it you mean exp's where B1=B2? I would like to get them 'back' in order to do a 'proper' P-1 myself. The time to run a P-1 VS 1-2 LL's would be well worth it I think. |
[QUOTE=bcp19;284981]I take it you mean exp's where B1=B2?
I would like to get them 'back' in order to do a 'proper' P-1 myself. The time to run a P-1 VS 1-2 LL's would be well worth it I think.[/QUOTE] I do believe you've read my mind. |
[QUOTE=Dubslow;284980]1) chalsall, how hard would it be for you to find those exponents which have only had Stage 1 of P-1, and reassign them to the 'No P-1' Group?[/QUOTE]
Trivial.

[CODE]mysql> select count(*) from GPU where B1=B2 and P1>0;
+----------+
| count(*) |
+----------+
|    22137 |
+----------+
1 row in set (0.03 sec)[/CODE]

[QUOTE=Dubslow;284980]2) What do people think about making this recategorization, as in, do people not care, actively dislike it, or want to actively get rid of those exponents, i.e. get them some stage 2?[/QUOTE]
That is the fundamental question. I suggested this before, and many argued it was not worth the cycles compared to virgin P-1 candidates. I tend to agree with them.
[QUOTE=chalsall;284984]Trivial.
[CODE]mysql> select count(*) from GPU where B1=B2 and P1>0;
+----------+
| count(*) |
+----------+
|    22137 |
+----------+
1 row in set (0.03 sec)[/CODE]
That is the fundamental question. I suggested this before, and many argued it was not worth the cycles compared to virgin P-1 candidates. I tend to agree with them.[/QUOTE]
I've run 1251 P-1s in the 7-7.1M range that were poorly done (less than 3%), and while I know not all of these were B1=B2 cases, I have found 39 factors, which is better than 3% of the P-1s done. That seems like it may be worthwhile if the same odds hold. We could always make a 'new' selection for those who want to do the B1=B2 P-1s.
[QUOTE=bcp19;284988]I've run 1251 P-1's in the 7-7.1M range that were poorly done (less than 3%) and while I know not all of these were in the B1=B2 range, I have found 39 factors, which is better than 3% of the P-1's done. That seems like it may be worthwhile to me if the same odds work out. Could always just make a 'new' selection for those who want to do the B1=B2 P-1's.[/QUOTE]
Could do. But would you not agree that, while there is still "virgin" P-1 work to do, that should be done (properly) first?
[QUOTE=bcp19;284988]I've run 1251 P-1's, I have found 39 factors, which is better than 3% of the P-1's done.[/QUOTE]
Sounds like more P-1 is on the borderline of being valuable. A 3+% success rate would "pay off" if an LL test costs more than 15 times as much as the P-1 test (at ~3.3% you find one factor per 100%/3.3% ≈ 30 attempts, and each factor saves 2 LL tests, giving the 30/2 = 15-to-1 break-even ratio).
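The break-even arithmetic above can be sketched as follows (a minimal illustration; the function name and the cost figures passed in are assumptions for the example, not GIMPS-official numbers):

```python
# Break-even check for re-running P-1 on an exponent (illustrative sketch).
# A factor found by P-1 eliminates both the first-time LL test and its
# double-check, so each success saves roughly two LL tests' worth of work.
def pays_off(success_rate, ll_cost, p1_cost, ll_tests_saved=2):
    """True if the expected LL work saved exceeds the P-1 work spent."""
    return success_rate * ll_tests_saved * ll_cost > p1_cost

# At a ~3.3% success rate (one factor per ~30 attempts), P-1 pays off
# exactly when an LL test costs more than 15x a P-1 run.
print(pays_off(1 / 30, ll_cost=15.01, p1_cost=1.0))  # just past break-even: True
print(pays_off(1 / 30, ll_cost=14.99, p1_cost=1.0))  # just under: False
```

The same inequality, rearranged, is where the 15-to-1 figure in the post comes from: with `ll_tests_saved=2`, break-even sits at `ll_cost / p1_cost = 1 / (2 * success_rate)`.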
There is this site too:
[url]http://mersenne-aries.sili.net/p1small.php[/url]
which lists exponents that were poorly (or not at all) P-1'd.
[QUOTE=petrw1;284999][url]http://mersenne-aries.sili.net/p1small.php[/url]
Which lists exponents poorly (or NOT) P-1'd[/QUOTE]
Yes, but it's not very good at finding such candidates before they have been LL'd. They are assigned for LL before anybody who is willing gets a chance to do better P-1. That's where chalsall's system comes in, because we get access to them after they've been reserved.

As for that 15-to-1 ratio: a generic [URL="http://mersenne-aries.sili.net/exponent.php?exponentdetails=48265319"]48M test[/URL] takes 80-85 GHz-days, so first test plus double-check is about 160 GD. A P-1 with B1=B2=750K is 1.9 GD of work, while a proper P-1 takes about 3.1 GD from scratch. 160/3.1 ≈ 52, which is greater than the 30-to-1 ratio specified (alternately, 80/3.1 ≈ 26 > 15). Seems to me that it would pay off.

Also: 22,000 is [i]a lot[/i] of exponents that have been short-changed factoring-wise. Assume that with normal P-1 bounds we get a 2% success rate (bcp19 suggests a >3% rate), which is a lot less than the 6.1% that James' site suggests. Then P-1 on 100 exponents costs about 310 GD, while the 2 expected factors save 2*160 = 320 GD. So roughly 2% is the break-even rate (with estimates conservative against these re-do runs), and we have evidence of a rate above that, so it seems worthwhile.

Of course, chalsall's reasoning is harder to defeat. Perhaps instead of assigning lower TF bounds when the pool runs dry, assign exponents with TF to 72 but B1=B2? Or make it a user option.
[QUOTE=Dubslow;285001]Of course, chalsall's reasoning is harder to defeat. Perhaps instead of assigning lower TF bounds when the pool runs dry, assign expos with TF to 72 with B1=B2? Or make it user option.[/QUOTE]
It would literally take me an hour (but not tonight) to implement such a work type, based on the code, database tables, and spiders I already have. But I question whether it makes (overall) sense for GIMPS to do so. OTOH, as always, I'm just a facilitator. If people want to do this kind of work, I can assist. And maybe we can try it and observe the empirical results....
It all depends on the success rate. If it's at least 3%, then it's good. If it's between 2% and 2.5%, more analysis is needed. If it's under 2%, I think we can pretty much kill it. What sort of sample size should we be looking at? At least 500, I would think.
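One way to sanity-check the "at least 500" guess is a binomial confidence interval (a rough normal-approximation sketch; the 2% and 3% thresholds come from the discussion, the function and the 95% level are my assumptions):

```python
# Rough sample-size check: how many P-1 runs are needed before the observed
# success rate can distinguish the ~2% "kill it" and ~3% "worth it" thresholds?
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# At a true rate near 2.5% with 500 trials, the 95% CI is about +/-1.4
# percentage points -- too wide to separate 2% from 3%.
print(round(margin_of_error(0.025, 500), 3))

# Roughly 3,750+ trials are needed to shrink the margin below 0.5 points.
for n in (500, 2000, 3800):
    print(n, margin_of_error(0.025, n) < 0.005)
```

So 500 samples would only give a coarse first read; a confident kill/keep decision at these closely spaced thresholds would take a few thousand runs.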