mersenneforum.org (https://www.mersenneforum.org/index.php)
-   PrimeNet (https://www.mersenneforum.org/forumdisplay.php?f=11)
-   -   P-1 factoring anyone? (https://www.mersenneforum.org/showthread.php?t=11101)

jrk 2009-11-04 05:22

Assuming that B1 was at least 373291 (the largest factor of either p-1 apart from 47193221, which is presumably the Mersenne exponent itself and therefore included in the stage-1 exponent automatically, since factors of M[sub]q[/sub] have the form 2kq+1), then these two factors were found during stage 1.

[code]? factor(4655372538754486733614467124913690121322890046151)
%1 =
[29359077499843468232567 1]

[158566717185828044566850353 1]

? factor(29359077499843468232567-1)
%2 =
[2 1]

[14087 1]

[69073 1]

[319673 1]

[47193221 1]

? factor(158566717185828044566850353-1)
%3 =
[2 4]

[3 3]

[197 1]

[2857 1]

[37019 1]

[373291 1]

[47193221 1]

?
[/code]
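The reasoning above can be checked with a short script (Python here, standing in for the PARI session; the assumption that the shared factor 47193221 is the Mersenne exponent q is mine, inferred from the 2kq+1 form of Mersenne factors):

```python
# Rebuild both prime factors from the p-1 factorizations in the PARI
# output above, and find the smallest stage-1 bound B1 that covers every
# prime power of p-1 other than q (which Prime95 handles automatically).

q = 47193221  # assumed Mersenne exponent (the factor shared by both p-1)

# p - 1 factorizations, copied from the PARI output
p1_fac = {2: 1, 14087: 1, 69073: 1, 319673: 1, q: 1}
p2_fac = {2: 4, 3: 3, 197: 1, 2857: 1, 37019: 1, 373291: 1, q: 1}

def rebuild(fac):
    """Multiply the prime powers of p-1 back together and return p."""
    n = 1
    for p, e in fac.items():
        n *= p ** e
    return n + 1

p1 = rebuild(p1_fac)
p2 = rebuild(p2_fac)
assert p1 * p2 == 4655372538754486733614467124913690121322890046151

# Smallest B1 that puts every prime power except q within stage 1:
b1_needed = max(p ** e for fac in (p1_fac, p2_fac)
                for p, e in fac.items() if p != q)
print(b1_needed)  # -> 373291
```

With both primes sharing q, the composite gcd appears as soon as B1 reaches 373291, the largest remaining factor.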

petrw1 2009-11-09 17:40

How are we doing???
 
Are we getting close to adequate P-1'ers lately?
Are we keeping ahead of the LL'ers?

Not that one individual can make a big difference, but I could help by allocating a few more decent-sized cores to P-1 in the new year (working on a few personal goals first).

S485122 2009-11-09 22:04

[QUOTE=petrw1;195295]Are we getting close to adequate P-1'ers lately?
Are we keeping ahead of the LL'ers?[/QUOTE]The situation is getting better: over the last 31 days, an average of 303 P-1 factoring attempts have been completed each day, compared to 209 first-time LL tests. To mitigate this, one must note that quite a few of the P-1 factoring attempts concern low exponents, and others have been done the old way (as part of an LL test).

So my feeling is that some more P-1 effort is still necessary.

Jacob

lycorn 2009-11-10 13:49

[quote=S485122;195317]So my feeling is that some more P-1 effort is still necessary.[/quote]

Indeed. Also because the P-1 performed by dedicated participants is done with higher memory allowances than some of the P-1 done together with the first-time tests, which is, I suspect, done with the default memory allocation, hence less efficiently and with a much higher probability of missing factors.

petrw1 2009-11-28 22:36

[QUOTE=Prime95;181129]I'd say that P-1 and double-checking are both short-handed.[/QUOTE]

I'm redirecting some cores to these two.

petrw1 2009-12-23 16:40

[QUOTE=S485122;195317]The situation is getting better: over the last 31 days, an average of 303 P-1 factoring attempts have been completed each day, compared to 209 first-time LL tests. To mitigate this, one must note that quite a few of the P-1 factoring attempts concern low exponents, and others have been done the old way (as part of an LL test).

So my feeling is that some more P-1 effort is still necessary.

Jacob[/QUOTE]

Barring official results from other sources: if/when PrimeNet gets to the point that the Hourly Results report contains an entire hour of data (rather than a few minutes, i.e. when the low-level factoring work is all done), one could inspect the actual ranges completed by LL and P-1 to determine whether P-1 is keeping up.

petrw1 2009-12-31 17:57

365 Day P1 vs LL analysis
 
I totaled all the LL and P-1 Attempts from the Last 365 Days reports.

LL 81,959
P1 164,981 (twice LL!!!)

NOT SO FAST ...

The P1 total needs work.
I took an educated guess at eliminating all the P1-Small: any report line with a points-per-attempt below 1 and "many" attempts.

New total
P1 92,315 (still a little above LL)

HOWEVER, I suggest we also need to exclude P1 tests that were done on exponents below (or well above - i.e. 100M Digits Project) the current LL line. These would NOT be helping the quest to keep ahead of the immediate LL needs. I do NOT know how to even guess at this number.

I also have no way to determine how many of the P1 were done independently and how many were done in conjunction with the corresponding LL test, but I believe that in either case they are contributing to the current LL needs and should stay included for the sake of this analysis.
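The filtering step described above can be sketched as follows. The report rows here are invented for illustration (the real numbers come from the PrimeNet "Last 365 Days" reports), and the "many attempts" threshold is an assumption:

```python
# Hypothetical sketch of petrw1's filter: drop "P1-Small" report rows
# (many attempts at less than 1 point each) before comparing the P-1
# total to the LL total. Sample rows are made up for illustration.

rows = [
    # (work type, attempts, total points)
    ("P-1", 60000, 180000.0),   # wavefront P-1: ~3 points per attempt
    ("P-1", 32315, 95000.0),    # wavefront P-1: ~2.9 points per attempt
    ("P-1", 72666, 21000.0),    # "P1-Small": many attempts, <1 point each
]

MANY_ATTEMPTS = 10000  # assumed threshold for "many" attempts

def is_p1_small(attempts, points):
    """Flag rows that look like small-exponent P-1 work."""
    return attempts > MANY_ATTEMPTS and points / attempts < 1.0

kept = sum(a for (_, a, p) in rows if not is_p1_small(a, p))
print(kept)  # -> 92315 with this sample data
```

With these sample rows the filtered total lands on 92,315, mirroring the adjusted P1 count above.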

Uncwilly 2009-12-31 18:28

[QUOTE=petrw1;200463]HOWEVER, I suggest we also need to exclude P1 tests that were done on exponents below (or well above - i.e. [B][COLOR="DarkOrange"]100M Digits Project[/COLOR][/B])[/QUOTE] That would be 48 as of late Dec 30. I haven't checked that stat today.

S485122 2009-12-31 20:51

[QUOTE=cheesehead;200476]... exclude those P-1 tests from what?[/QUOTE]Exclude them from the count of P-1 factorings preparing for the first time LL tests. (See the beginning of that same post.)

Jacob

cheesehead 2009-12-31 21:10

[quote=petrw1;200463]HOWEVER, I suggest we also need to exclude P1 tests that were done on exponents below (or well above - i.e. 100M Digits Project) the current LL line. These would NOT be helping the quest to keep ahead of the immediate LL needs. I do NOT know how to even guess at this number.[/quote]I think all you [I]can[/I] do is guess ... until we have some quantitative definition of what it means to be "helping the quest to keep ahead of the immediate LL needs".

Any calculation you perform before having that definition is just numericizing your guesses.

If PrimeNet assigns me an LL test of 49xxxxxx and a P-1 test of 55xxxxxx in the same session, I think we'd agree that the latter is "helping the quest to keep ahead of the immediate LL needs".

Indeed, I think any P-1 assignment PrimeNet makes to any user who has not specified a particular exponent range should be considered to be "helping the quest to keep ahead of the immediate LL needs". In other words, any unfettered (no range restriction specified) PrimeNet assignment (P-1 or otherwise) should be considered to be "helping the quest".

But what if, while PrimeNet is handing out LL assignments of 49xxxxxx and P-1 assignments of 55xxxxxx, I send in a manual report of P-1 tests in the 57xxxxxx range? Weren't the latter also helping the quest to keep ahead of the immediate LL needs -- just a bit farther ahead than a 55xxxxxx?

But what if the latter range were 61xxxxxx? 67xxxxxx? Where do you draw the line?

[quote]I also have no way to determine how many of the P1 were done independantly and how many P1 were done in conjunction with the corresponding LL test but I believe that in either case they are contributing to the current LL needs and should stay included for the sake of this analysis.[/quote]I suggest coming up with a quantitative definition of what it means for a P-1 test to be "contributing to the current LL needs" before trying to form any opinion about whether it is "helping the quest to keep ahead of the immediate LL needs".

petrw1 2009-12-31 21:47

[QUOTE=cheesehead;200479]I suggest coming up with a quantitative definition of what it means for a P-1 test to be "contributing to the current LL needs" before trying to form any opinion about whether it is "helping the quest to keep ahead of the immediate LL needs".[/QUOTE]

[QUOTE=cheesehead]If PrimeNet assigns me an LL test of 49xxxxxx and a P-1 test of 55xxxxxx in the same session, I think we'd agree that the latter is "helping the quest to keep ahead of the immediate LL needs".[/QUOTE]

Agreed.

What I am trying to determine (I don't know how quantitative a measure it will be) is whether the project is completing at least as many required P-1 tests as LL tests at what is commonly(?) known as the LL leading edge, currently the mid-to-high 40M / low 50M range.

I understand that George has described the new V5 process as:
1. TF to 1 bit below the prescribed level
2. P-1
3. Remaining bit of TF
4. LL
5. DC

with the hopes that enough people will do the P-1 required in Step 2 to keep those doing LL in Step 4 from running out or from being required to complete Steps 2 and 3 first.
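The five-step sequence above can be modelled as a small lookup; the stage names follow the post, and the ordering logic (not PrimeNet's actual implementation) is the point:

```python
# Toy model of the V5 work sequence: each exponent moves through TF to
# one bit below target, then P-1, then the last TF bit, then LL, then DC.

V5_STAGES = [
    "TF to 1 bit below target",
    "P-1",
    "final TF bit",
    "LL",
    "DC",
]

def next_stage(completed):
    """Return the next stage an exponent needs, or None if fully done."""
    return V5_STAGES[completed] if completed < len(V5_STAGES) else None

# An exponent with TF and P-1 done still needs the last TF bit before LL:
print(next_stage(2))  # -> "final TF bit"
```

This makes the bottleneck concrete: if nobody does step 2, an LL'er picking up the exponent inherits steps 2 and 3 before the LL test can start.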

I further understand from George's posts that any exponents in this range already at the max TF level are less of a (not a?) concern in this process and can/will be assigned as part of the LL test as long as the PC has adequate RAM assigned.

Not sure this is as coherent as I would like it to be but I have to go home to meet the kids for supper ... if I get a brainwave later I will add to this.

