mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   PrimeNet (https://www.mersenneforum.org/forumdisplay.php?f=11)
-   -   P-1 factoring anyone? (https://www.mersenneforum.org/showthread.php?t=11101)

Dubslow 2012-01-24 20:19

I think those are better-than-average numbers, if I recall correctly. Someone somewhere once got two workers on four cores running at around 9 ms per iteration each, but with all four cores on one worker got only 7 ms, which is well below 90% of the throughput efficiency of two cores per worker, and further still below one core per worker. Maybe on your computer memory bandwidth is more of a bottleneck than the effect I mention?

James Heinrich 2012-01-24 20:38

Core2-generation chips (e.g. Q6xxx, Q8xxx, Q9xxx) do poorly on multithreaded assignments.
Core-i generation chips (e.g. Core i3/i5/i7) do significantly better.

Dubslow 2012-01-25 00:19

[url]http://mersenne-aries.sili.net/exponent.php?exponentdetails=52567117[/url]
What's the rule on composite factors?
whoops mod please move to [url]http://www.mersenneforum.org/showthread.php?t=13977&page=11[/url]

kladner 2012-01-25 20:44

Thread Continuation: RAM and P-1
 
I am carrying over a digressive discussion from the "[B]Assignment discrepancy" thread[/B]-
[URL]http://www.mersenneforum.org/showthread.php?t=16352&page=3[/URL]

The general drift over there is the relationship between RAM allocation and the number of relative primes, bounds, etc. in P-1 factoring.

Jwb52z 2012-01-25 20:50

How much do I need to worry about the validity of my P-1 results when I noticed over the last week or so that my laptop's clock loses about a minute a day unless I resync it manually?

chalsall 2012-01-25 21:28

[QUOTE=kladner;287222]The general drift over there is the relationship between RAM allocation and the number of relative primes, bounds, etc. in P-1 factoring.[/QUOTE]

I have successfully created a new thread: [URL="http://www.mersenneforum.org/showthread.php?t=16473"]P-1 Discussion[/URL].

Sorry for the hesitation, but the last time I tried to do this I lost several messages.

(The supermods laughed in my general direction, and told me that this was to be expected of new moderators... :smile:)

chalsall 2012-01-25 21:42

[QUOTE=Jwb52z;287224]How much do I need to worry about the validity of my P-1 results when I noticed over the last week or so that my laptop's clock loses about a minute a day unless I resync it manually?[/QUOTE]

Do you run NTP?

Jwb52z 2012-01-27 01:50

[QUOTE=chalsall;287229]Do you run NTP?[/QUOTE]I am guessing I don't, as I don't know what that means. I think I fixed my problem by vacuuming out the vent slits on the bottom of my laptop and the keyboard. It now appears to run as if it were new.

James Heinrich 2012-01-27 02:57

[QUOTE=Jwb52z;287377]I am guessing I don't as I don't know what that means.[/QUOTE]NTP = [url=http://en.wikipedia.org/wiki/Network_Time_Protocol]Network Time Protocol[/url]. If you run Windows, you're familiar with the concept where Windows can sync time over the internet.

Dubslow 2012-01-27 03:54

[QUOTE=James Heinrich;287383]NTP = [url=http://en.wikipedia.org/wiki/Network_Time_Protocol]Network Time Protocol[/url]. If you run Windows, you're familiar with the concept where Windows can sync time over the internet.[/QUOTE]
Edit: Damn not-noticing-there's-another-page.

What I had written:
[quote]NTP is an internet time synchronization protocol. If you right click on the clock, you can adjust it; one of the options on those various pages should be something to the effect of "Synchronize with internet servers" or the like. (I'm not using Windows right now so I can't attest to exact directions, but Google should clear up what I can't.)[/quote]

Mini-Geek 2012-01-27 12:57

[QUOTE=James Heinrich;287117]I don't think there's any particular limit to the size of factors that could be found with any particular bounds (? someone correct me if I'm wrong, please).[/QUOTE]

According to [url]http://www.mersennewiki.org/index.php/P-1_Factorization_Method[/url], the exponent of each prime in E is chosen so that the resulting prime power is roughly the size of B1, so I believe there is a maximum. E.g. if it chose 1024 as 2's exponent, I believe it wouldn't find a prime with P-1=2^1026*... This would make the maximum from stage 1 approximately p*B1[SUP]pi(B1)[/SUP]. Stage 2 adds approximately a factor of B2 (relatively insignificant), and I don't know what sort or number of factors Brent-Suyama adds.
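
The construction described here can be sketched in a few lines of Python (an illustration of the wiki description, not Prime95's actual code; `stage1_exponent` is a hypothetical helper name). The B1=13 case reproduces the 360360 exponent cited later in the thread, and the digit estimate uses the x/ln(x) approximation for pi(B1):

```python
import math

def stage1_exponent(B1):
    # Product over primes p <= B1 of the largest power of p still <= B1
    # (the "power-smooth to B1" exponent used in P-1 stage 1).
    E = 1
    for p in range(2, B1 + 1):
        if all(p % d for d in range(2, int(p**0.5) + 1)):  # p is prime
            pp = p
            while pp * p <= B1:
                pp *= p
            E *= pp
    return E

print(stage1_exponent(13))   # 360360 = 2^3 * 3^2 * 5 * 7 * 11 * 13

# Rough digit count of the stage 1 cap B1^pi(B1) for B1 = 620000; the
# x/ln(x) estimate slightly undercounts pi(B1), and the exact count
# gives the "nearly 300,000 digits" figure quoted below in the thread.
B1 = 620000
digits = (B1 / math.log(B1)) * math.log10(B1)
print(round(digits))         # roughly 270,000
```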

Jwb52z 2012-01-27 15:18

[QUOTE=James Heinrich;287383]NTP = [url=http://en.wikipedia.org/wiki/Network_Time_Protocol]Network Time Protocol[/url]. If you run Windows, you're familiar with the concept where Windows can sync time over the internet.[/QUOTE]Yes, I understand now; I'd just never seen it named specifically before. It's set to sync once a week. I'm also now worried that my fan is dying: even though vacuuming the keyboard and vents made my laptop apparently run as fast as it used to, the fan will start making it buzz and vibrate after a while of gaming or video.

Mini-Geek 2012-01-27 18:25

[QUOTE=Mini-Geek;287412]p*B1[SUP]pi(B1)[/SUP][/QUOTE]

With B1=620000, this has nearly 300,000 digits. Needless to say, the probability of actually finding a factor near that size is nearly 0.

M0CZY 2012-01-30 10:50

P-1 done deep enough?
 
For my current P-1 assignment I have allocated 1200 MB of memory, so I am getting this output:
[CODE][Jan 30 10:04] Worker starting
[Jan 30 10:04] Setting affinity to run worker on any logical CPU.
[Jan 30 10:04] Optimal P-1 factoring of M56****** using up to 1200MB of memory.
[Jan 30 10:04] Assuming no factors below 2^71 and 2 primality tests saved if a factor is found.
[Jan 30 10:04] Optimal bounds are B1=590000, B2=12685000
[Jan 30 10:04] Chance of finding a factor is an estimated 4.73%
[Jan 30 10:04] Using Core2 type-3 FFT length 3M, Pass1=3K, Pass2=1K, 2 threads
[Jan 30 10:04] Setting affinity to run helper thread 1 on any logical CPU.
[Jan 30 10:05] Using 1196MB of memory. Processing 44 relative primes (396 of 480 already processed).
[Jan 30 10:05] M56****** stage 2 is 85.16% complete.
[Jan 30 10:21] M56****** stage 2 is 86.57% complete. Time: 950.475 sec.
[Jan 30 10:37] M56****** stage 2 is 87.98% complete. Time: 939.702 sec.[/CODE]
As the chance of finding a factor is only an estimated 4.73%, I worry that my work is not very useful to the project.
Should I continue doing P-1 assignments using this amount of memory, or should I leave it to the 'big guns', who can use much more memory than me?

lorgix 2012-01-30 10:55

[QUOTE=M0CZY;287744]For my current P-1 assignment I have allocated 1200 MB of memory, so I am getting this output:
[CODE][Jan 30 10:04] Worker starting
[Jan 30 10:04] Setting affinity to run worker on any logical CPU.
[Jan 30 10:04] Optimal P-1 factoring of M56****** using up to 1200MB of memory.
[Jan 30 10:04] Assuming no factors below 2^71 and 2 primality tests saved if a factor is found.
[Jan 30 10:04] Optimal bounds are B1=590000, B2=12685000
[Jan 30 10:04] Chance of finding a factor is an estimated 4.73%
[Jan 30 10:04] Using Core2 type-3 FFT length 3M, Pass1=3K, Pass2=1K, 2 threads
[Jan 30 10:04] Setting affinity to run helper thread 1 on any logical CPU.
[Jan 30 10:05] Using 1196MB of memory. Processing 44 relative primes (396 of 480 already processed).
[Jan 30 10:05] M56****** stage 2 is 85.16% complete.
[Jan 30 10:21] M56****** stage 2 is 86.57% complete. Time: 950.475 sec.
[Jan 30 10:37] M56****** stage 2 is 87.98% complete. Time: 939.702 sec.[/CODE]As the chance of finding a factor is only an estimated 4.73%, I worry that my work is not very useful to the project.
Should I continue doing P-1 assignments using this amount of memory, or should I leave it to the 'big guns', who can use much more memory than me?[/QUOTE]
You're making a difference. The chance of finding a factor may look small, but you save a lot of time if you do find a factor.

(For that particular assignment you may want to adjust the memory settings so that 40 or 48 relative primes can be processed at once.)

M0CZY 2012-01-30 12:00

[QUOTE](For that particular assignment you may want to adjust the memory settings so that 40 or 48 relative primes can be processed at once.)[/QUOTE]

OK, for my next assignment I'll increase the memory allocation to 1400 MB and see what happens.
But that particular machine I was using is at the public library, and only has 2 GB of RAM, so I can't use too much more before it starts thrashing and becoming unresponsive during Stage 2!
My own computer has 3 GB of RAM, but is very much slower (3.00 GHz Pentium 4, Prescott core), which is why I use the fast dual-core 'whizz box' at the library for as long as I can every day.

James Heinrich 2012-01-30 12:13

[QUOTE=M0CZY;287744]As the chance of finding a factor is only an estimated 4.73%, I worry that my work is not very useful to the project.
Should I continue doing P-1 assignments using this amount of memory, or should I leave it to the 'big guns', who can use much more memory than me?[/QUOTE]Absolutely keep going. 1200MB may not seem like much compared to the "big guns" around here, but the extra RAM generally provides only a marginal increase in factor probability. For example, my own big system with 10GB per worker:[code]Optimal P-1 factoring of M58407191 using up to 10000MB of memory.
Assuming no factors below 2^71 and 2 primality tests saved if a factor is found.
Optimal bounds are B1=625000, B2=14687500
Chance of finding a factor is an estimated 4.92%[/code]So I've given it more than 8x as much RAM as you did, and the factor probability is 0.19% higher...

Conversely, 1200MB is [i]much[/i] better than 200MB, and hugely better than what can be expected from a random GIMPS user doing P-1 as the initial part of an L-L test.

bcp19 2012-01-30 14:38

I was just noticing that my P-1 machine has had some weird bounds with no change to the memory; can anyone explain if this is something other than how far the exponent was TF'd?

[code][Sat Jan 28 09:29:51 2012]
P-1 found a factor in stage #2, B1=430000, B2=8707500.
UID: bcp19/HP-NEW, M45048023 has a factor: 360751991413212824008821379007
[Sat Jan 28 17:23:52 2012]
UID: bcp19/HP-NEW, M45122951 completed P-1, B1=340000, B2=5780000, E=6, We4: 8F2BB36D
[Sun Jan 29 04:22:56 2012]
UID: bcp19/HP-NEW, M45158209 completed P-1, B1=430000, B2=8707500, E=12, We4: 90ECBFD6
[Sun Jan 29 15:21:47 2012]
UID: bcp19/HP-NEW, M45159679 completed P-1, B1=430000, B2=8707500, E=12, We4: 90C4BFFD
[Mon Jan 30 02:20:51 2012]
UID: bcp19/HP-NEW, M45163583 completed P-1, B1=430000, B2=8707500, E=12, We4: 90E7BF9E
[/code]

Dubslow 2012-01-30 16:27

If the memory allocation (and TF level) was the same, then I don't know. Are you sure it isn't the TF level? I got something like the last three at TF=72, but with TF=76 (thanks to roswald) I got something like the first one you have (even lower bounds, I think, but I could easily see 73 giving those bounds).

James Heinrich 2012-01-30 16:29

[QUOTE=bcp19;287761]I was just noticing that my P-1 machine has had a weird bounds with no change to the memory[/QUOTE]Bounds-selection is a complex process. My guess would be that you passed over some threshold where it could no longer get a "nice" number of relative primes into a single pass, so since it would have to do more passes anyway the balance of efficiency said that higher bounds and better Brent-Suyama extension usage (E=12 instead of E=6) were worth it.

Mini-Geek 2012-01-31 00:19

[QUOTE=bcp19;287761]I was just noticing that my P-1 machine has had a weird bounds with no change to the memory, can anyone explain if this is something other than how far the exp was TF'd?[/QUOTE]

Here's a possibility that affected me recently: if Prime95's RollingAverage (a measure of how fast you really are vs how fast it expects you to be) is off, it will choose bounds differently. I saw Prime95 choose much higher bounds when I manually bumped up the rolling average to be more accurate (it was about half what it should've been).

chalsall 2012-01-31 00:46

[QUOTE=Mini-Geek;287830]Here's a possibility that affected me recently: if Prime95's RollingAverage (a measure of how fast you really are vs how fast it expects you to be) is off, it will choose bounds differently. I saw Prime95 choose much higher bounds when I manually bumped up the rolling average to be more accurate (it was about half what it should've been).[/QUOTE]

I think you might be onto something here.

I have sometimes tried to lie to mprime about what the system can do, and it "fixes" it within 24 hours.

And some have left mprime/Prime95 alone, and it sometimes goes insane and thinks a single instance of a supercomputer is the speed of a 6502.

George?

kladner 2012-01-31 00:55

[QUOTE=Mini-Geek;287830]Here's a possibility that affected me recently: if Prime95's RollingAverage (a measure of how fast you really are vs how fast it expects you to be) is off, it will choose bounds differently. I saw Prime95 choose much higher bounds when I manually bumped up the rolling average to be more accurate (it was about half what it should've been).[/QUOTE]

Hmm. This makes me wonder. A while back, I corrected the ~10GHz which P95 was reporting for a PhenomII running at 3510MHz. Now this is what local.txt shows:
[CODE]CpuSpeed=3512
OldCpuSpeed=3512
NewCpuSpeedCount=0
NewCpuSpeed=0
RollingAverage=953
[/CODE]

Is RollingAverage supposed to be showing something remotely related to the actual core speed?

Dubslow 2012-01-31 01:00

Yes, I think your example proves that. Some sort of override would be nice.

chalsall 2012-01-31 02:06

[QUOTE=kladner;287834]Is RollingAverage supposed to be showing something remotely related to the actual core speed?[/QUOTE]

My inference is that this is a value which is supposed to represent the percentage of the CPU actually available, times 10.

It seems that Prime95/mprime sometimes makes mistakes on this (sometimes for very short periods of time) and then makes important decisions based on this value.

Prime95 2012-01-31 03:36

[QUOTE=kladner;287834]Is RollingAverage supposed to be showing something remotely related to the actual core speed?[/QUOTE]

No. It is a measure of actual throughput versus expected throughput. Your rolling average of 953 means prime95 is getting 95.3% of the expected throughput.

The rolling average is used to compute expected completion dates (and thus deciding whether to unreserve assignments). The server uses the value to decide what kind of work assignment "makes the most sense".

Rolling average is not used to compute P-1 bounds.

kladner 2012-01-31 03:42

Thanks for the clarification.

Dubslow 2012-01-31 04:27

I guess that if the core speed is way off, then the RollingAverage will be way off indirectly. (If your core speed was reported as 10GHz, then I'm sure your RA was somewhere around 300.)

kladner 2012-01-31 04:32

[QUOTE=Dubslow;287847]I guess that if the core speed is way off, then the RollingAverage will be way off indirectly. (If your core speed was 10GHz, then I'm sure your RA was somewhere around ~300.)[/QUOTE]

True a month ago. But 95.3% is probably pretty accurate when one considers restarts and demanding software (Photoshop, Camera Raw, etc) taking a chunk out of ideal performance.

cheesehead 2012-01-31 07:00

[QUOTE=bcp19;287761]I was just noticing that my P-1 machine has had a weird bounds with no change to the memory, can anyone explain if this is something other than how far the exp was TF'd?

[code][Sat Jan 28 09:29:51 2012]
P-1 found a factor in stage #2, B1=430000, B2=8707500.
UID: bcp19/HP-NEW, M45048023 has a factor: 360751991413212824008821379007
[Sat Jan 28 17:23:52 2012]
UID: bcp19/HP-NEW, M45122951 completed P-1, B1=340000, B2=5780000, E=6, We4: 8F2BB36D
[Sun Jan 29 04:22:56 2012]
UID: bcp19/HP-NEW, M45158209 completed P-1, B1=430000, B2=8707500, E=12, We4: 90ECBFD6
[Sun Jan 29 15:21:47 2012]
UID: bcp19/HP-NEW, M45159679 completed P-1, B1=430000, B2=8707500, E=12, We4: 90C4BFFD
[Mon Jan 30 02:20:51 2012]
UID: bcp19/HP-NEW, M45163583 completed P-1, B1=430000, B2=8707500, E=12, We4: 90E7BF9E
[/code][/QUOTE]This looks exactly like a situation in which M45122951 has been TFed to a higher level than the other exponents have been TFed.

Upon checking the status report, I find that M45122951 has been TFed to 2^75, while M45158209, M45159679, and M45163583 have each been TFed to only 2^72. M45048023 had almost certainly also been TFed to only 2^72 before this P-1 run.

Higher TF already done means that the probability of finding a factor with a particular set of P-1 bounds is lower than it would have been if less TF had already been done. Thus, the optimum B1/B2 combination of best balance for the high-TFed exponent occurs at lower bounds than for the lower-TFed exponents.

James Heinrich 2012-01-31 12:50

[QUOTE=cheesehead;287857][QUOTE=bcp19;287761]...can anyone explain if this is something other than how far the exp was TF'd?[/QUOTE]Upon checking the status report, I find that M45122951 has been TFed to 2^75, while M45158209, M45159679, and M45163583 have each been TFed to only 2^72. M45048023 had almost certainly also been TFed to only 2^72 before this P-1 run.[/QUOTE]Thanks for looking that up. I didn't bother checking TF levels since [i]bcp19[/i] seemed to imply that he'd ruled out the obvious explanation.

Prime95 2012-01-31 15:25

[QUOTE=kladner;287848]True a month ago. But 95.3% is probably pretty accurate when one considers restarts and demanding software (Photoshop, Camera Raw, etc) taking a chunk out of ideal performance.[/QUOTE]

The biggest source of deviation from 1000 is that prime95's initial estimate is not very accurate.

bcp19 2012-01-31 15:35

[QUOTE=James Heinrich;287876]Thanks for looking that up. I didn't bother checking TF levels since [I]bcp19[/I] seemed to imply that he'd ruled out the obvious explanation.[/QUOTE]

Sorry, I hadn't ruled that out; it just seemed an awfully big jump in bounds for just a few extra levels.

kladner 2012-01-31 15:40

[QUOTE=bcp19;287886]Sorry, I hadn't ruled that out, it just seemed an awful big jump in bounds for just a few extra levels.[/QUOTE]

On the bright side, you caused some very informative discussion to take place, clearing up some misconceptions.:smile:

Dubslow 2012-01-31 17:57

[QUOTE=bcp19;287886]Sorry, I hadn't ruled that out, it just seemed an awful big jump in bounds for just a few extra levels.[/QUOTE]

Consider that the total number of candidates tested for the higher one was 8x greater than for the other four. Seen that way, it's no wonder the P-1 bounds dropped.

Mini-Geek 2012-02-01 03:38

[QUOTE=Prime95;287885]The biggest source of deviation from 1000 is that prime95's initial estimate is not very accurate.[/QUOTE]

On a four core machine, is it expected to be around 1000 or around 4000? Mine is currently at 3707 (it was lingering around 2000 when I manually bumped it up to ~4000, as I mentioned), which is roughly accurate. The CPU is an i5-750@2.8GHz. If you want more info (e.g. for debugging, to make the estimate more accurate) let me know.

cheesehead 2012-02-01 09:36

[QUOTE=bcp19;287886]Sorry, I hadn't ruled that out, it just seemed an awful big jump in bounds for just a few extra levels.[/QUOTE]Mr. Dickman's function ( [URL]http://en.wikipedia.org/wiki/Dickman_function[/URL] , [URL]http://mathworld.wolfram.com/DickmanFunction.html[/URL] ) leads to some broad maxima in the optimization function.

Mr. P-1 2012-02-02 12:00

[QUOTE=lorgix;287746](For that particular assignment you may want to adjust the memory settings so that 40 or 48 relative primes can be processed at once.)[/QUOTE]

44 relative primes is just fine, finishing stage 2 in eleven passes. 40 would need twelve passes, while 48 would do it in ten. The difference in performance would be miniscule.
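
The pass counts here are just ceiling division of this run's 480 relative primes by the per-pass capacity; a trivial check in Python:

```python
import math

total = 480                   # relative primes for this stage 2 run
for per_pass in (40, 44, 48):
    # ceil(480/40) = 12, ceil(480/44) = 11, ceil(480/48) = 10 passes
    print(per_pass, math.ceil(total / per_pass))
```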

cheesehead 2012-02-02 18:26

"Miniscule" (or, at least, "Detail Oriented") is some of our middle names around here.

LaurV 2012-02-03 03:06

[URL="http://www.youtube.com/watch?feature=endscreen&NR=1&v=Zy77SXP-nxY"]Miniscule[/URL]... that is like factoring, hard and odd.. :D

bcp19 2012-02-03 07:30

I was looking for some easy numbers to get a better understanding of the P-1 process, and in looking at M2011, it lists bounds of 5 and 27 where K=2*2*3*3*3*5. As I was working through this, I got to thinking, couldn't this also be found with bounds of 5 and 9? In using 27, you'd have 2*2*5 from S1 and 27 from S2, but wouldn't 2*2*3*5 from S1 and 9 from S2 also work?

axn 2012-02-03 08:35

[QUOTE=bcp19;288146]I was looking for some easy numbers to get a better understanding of the P-1 process, and in looking at M2011, it lists bounds of 5 and 27 where K=2*2*3*3*3*5. As I was working through this, I got to thinking, couldn't this also be found with bounds of 5 and 9? In using 27, you'd have 2*2*5 from S1 and 27 from S2, but wouldn't 2*2*3*5 from S1 and 9 from S2 also work?[/QUOTE]

Where did you get this information? Neither 5,27 nor 5,9 will find this. A stage 1 bound of 27 is needed to find this.

With B1=27, you'd use an exponent of 2^4*3^3*5^2*7*11*13*17*19*23, which'd find the factor.
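
This can be checked directly in Python (a sketch, not Prime95's code). One assumption worth flagging: Prime95 also multiplies 2*p into the stage 1 exponent, since factors of M[I]p[/I] have the form 2*k*p+1, and that extra 2011 is needed for the powering below to actually reveal the factor:

```python
from math import gcd

def stage1_exponent(B1):
    # largest power of each prime <= B1, multiplied together
    E = 1
    for p in range(2, B1 + 1):
        if all(p % d for d in range(2, int(p**0.5) + 1)):  # p is prime
            pp = p
            while pp * p <= B1:
                pp *= p
            E *= pp
    return E

p = 2011
M = 2**p - 1                     # M2011
E = stage1_exponent(27) * 2 * p  # 2^4 * 3^3 * 5^2 * 7 * ... * 23, times 2*2011
g = gcd(pow(3, E, M) - 1, M)     # stage 1 powering, reduced mod M2011
print(g % 2171881 == 0)          # the factor under discussion divides g
```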

MrHappy 2012-02-03 10:24

1 Attachment(s)
29.8 percent chance of finding a factor!? Is that expectation correct? Until today I have only seen values from 4 to 7 percent...

axn 2012-02-03 10:31

[QUOTE=MrHappy;288162]29.8 percent chance of finding a factor!? Is that expectation correct? Until today I have only seen values from 4 to 7 percent...[/QUOTE]
>>>>> Assuming no factors below 2^-1 <<<<<

There is nothing wrong here. Nothing at all... :whistle:

MrHappy 2012-02-03 10:55

Oh! worktodo got somewhat corrupted... I guess I should have seen that earlier.

James Heinrich 2012-02-03 12:38

[QUOTE=axn;288150]Where did you get this information? Neither 5,27 nor 5,9 will find this. A stage 1 bound of 27 is needed to find this.[/QUOTE]Probably from my exponent page for [url=http://mersenne-aries.sili.net/M2011]M2011[/url]. The smaller factor (2171881) is split up (per my standard k-value practice) into 2171881 - 1 = 2[sup]3[/sup] × 3[sup]3[/sup] × 5 × 2011. Then stripping out the exponent (2011) and a 2, you're left with 2[sup]2[/sup] × 3[sup]3[/sup] × 5.

Now I've been told that B2 should be the largest [forgive probably-wrong math term] powered prime factor, and B1 the second-largest, so choosing between 2[sup]2[/sup]=4, 3[sup]3[/sup]=27 and 5 I list B1=5, B2=27.

Is this approach wrong? If a smaller prime exponent (3<5) gives a larger value than a larger prime exponent when raised to the appropriate power (3[sup]3[/sup] > 5[sup]1[/sup]) should that override the minimum B1?

axn 2012-02-03 13:14

[QUOTE=James Heinrich;288177]The smaller factor (2171881) is split up (per my standard k-value practice) into 2171881 - 1 = 2[sup]3[/sup] × 3[sup]3[/sup] × 5 × 2011. Then stripping out the exponent (2011) and a 2, you're left with 2[sup]2[/sup] × 3[sup]3[/sup] × 5.[/quote]
So far, so good.

[QUOTE=James Heinrich;288177]Now I've been told that B2 should be the largest [forgive probably-wrong math term] powered prime factor, and B1 the second-largest, so chosing between 2[sup]2[/sup]=4, 3[sup]3[/sup]=27 and 5 I list B1=5, B2=27.[/quote]
Close, but not quite. Stage 2 can't really do powers of prime factors. It can cover a single prime factor > B1 and <= B2.

So the 3^3 will not be caught in Stage 2. It has to be caught in Stage 1.

The 5 could conceivably be caught by stage 2, but...

[QUOTE=James Heinrich;288177]If a smaller prime exponent (3<5) gives a larger value than a larger prime exponent when raised to the appropriate power (3[sup]3[/sup] > 5[sup]1[/sup]) should that override the minimum B1?[/QUOTE]
Yep. For each prime <= B1, you compute the largest power of that prime <= B1. The product of these is what constitutes the B1 exponent. So if we use a B1 of 3 and a B2 of 5, it'll cover the 2^2, a 3, and the 5, but the remaining 3^2 will be missed.

So in that case, B1=27 is needed to cover the 3^3, which incidentally takes care of the 5 also (as shown in the previous post) -- thus no need for a stage 2.

EDIT:- Algorithm

Scan the list for the largest and second-largest prime factors (ignoring powers).
If the largest has a power > 1 (very rare), set B1 = B2 = largest^power and stop (only stage 1 can find this one; stage 2 is not needed).
Set B2 = largest.
Set B1 = second-largest.
For all the factors other than the largest, if factor^power > B1, set B1 = factor^power.
If B1 > B2, set B2 = B1 (the stage 1 bound subsumes the stage 2 bound; only stage 1 is needed).
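
In Python that recipe looks roughly like this (`minimal_bounds` is a hypothetical helper name, taking the factorization of k as a dict of prime: power; the M2011 case discussed above gives B1=B2=27):

```python
def minimal_bounds(prime_powers):
    """Smallest (B1, B2) that would let P-1 find a factor whose k
    has the given factorization, following the steps above."""
    primes = sorted(prime_powers)
    largest = primes[-1]
    if prime_powers[largest] > 1:        # very rare: only stage 1 works
        b = largest ** prime_powers[largest]
        return b, b
    B2 = largest
    B1 = primes[-2] if len(primes) > 1 else 1
    for q in primes[:-1]:                # every factor except the largest
        B1 = max(B1, q ** prime_powers[q])
    if B1 > B2:                          # stage 1 bound subsumes stage 2
        B2 = B1
    return B1, B2

# k = 2^2 * 3^3 * 5, the M2011 case discussed above
print(minimal_bounds({2: 2, 3: 3, 5: 1}))   # (27, 27)
```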

fivemack 2012-02-03 13:23

The B2 step (at least the cleverer B2 step in gmp-ecm) does not account for non-trivial powers of a prime, which is why the code below doesn't find the factor until B2 contains p^2:

[code]
driver@tractor:/home/nfsworld/aliquot$ echo "70*65537^2+1" | ecm -pm1 -c 1 1e4 1e9
GMP-ECM 6.2 [powered by GMP 4.3.2] [P-1]
Input number is 70*65537^2+1 (12 digits)
Using B1=10000, B2=1024792678, polynomial x^1, x0=2755809201
Step 1 took 0ms
Step 2 took 80ms
driver@tractor:/home/nfsworld/aliquot$ echo "70*65537^2+1" | ecm -pm1 -c 1 1e4 1e11
GMP-ECM 6.2 [powered by GMP 4.3.2] [P-1]
Input number is 70*65537^2+1 (12 digits)
Using B1=10000, B2=117865359558, polynomial x^1, x0=2752419731
Step 1 took 0ms
********** Factor found in step 2: 300656885831
Found input number N

[/code]

axn 2012-02-03 13:29

[QUOTE=fivemack;288181]The B2 step (at least the cleverer B2 step in gmp-ecm) does not account for non-trivial powers of a prime. Which is why the code below doesn't find the factor until B2 contains p^2
[/QUOTE]

Technically, a "pure" implementation shouldn't find it. Only a lazy implementation (which doesn't sieve out all the composites from the prime list) will find it. It is probably going to be hit and miss as to which prime powers are found (even if the prime power is < B2).

At least in P95, it attempts to only deal with pure primes (but prime pairing logic does let in others too).

EDIT:- Cute trick
[code]
echo 70*65537^^^^2+1 | ecm -pm1 -c 1 7e4 1e6
GMP-ECM 6.3.1 [configured with GMP 5.0.2, --enable-asm-redc] [P-1]
Input number is 70*65537^2+1 (12 digits)
Using B1=70000, B2=1589656, polynomial x^1, x0=1488548802
Step 1 took 16ms
Step 2 took 15ms
********** Factor found in step 2: 300656885831
Found input number N
[/code]
Take care of one factor of 65537 using B1; the other will pop up somewhere in stage 2.

bcp19 2012-02-03 14:57

[QUOTE=axn;288150]Where did you get this information? Neither 5,27 nor 5,9 will find this. A stage 1 bound of 27 is needed to find this.

With B1=27, you'd use an exponent of 2^4*3^3*5^2*7*11*13*17*19*23, which'd find the factor.[/QUOTE]

Ok, I must be missing something. How does the P-1 'break out' 2^2*3^3*5 from what you list above?

axn 2012-02-03 16:37

[QUOTE=bcp19;288189]Ok, I must be missing something. How does the P-1 'break out' 2^2*3^3*5 from what you list above?[/QUOTE]

When you understand that, you'd have understood the P-1 algorithm.:smile:

James Heinrich 2012-02-04 00:47

[QUOTE=axn;288180]Stage 2 can't really do powers of prime factors. It can cover a single prime factor > B1 and <= B2. So the 3^3 will not be caught in Stage 2. It has to be caught in Stage 1.[/QUOTE]Thank you for the explanation, very easy for even me to follow. :smile:

I have updated my code accordingly, and the smaller factor of [url=http://mersenne-aries.sili.net/M2011]M2011[/url] now shows B1=B2=27.

Mr. P-1 2012-02-05 14:16

To do a stage 1 P-1, you start by choosing a composite number E, then compute 3[sup]E[/sup]-1. The method will then find a factor p if E is divisible by p-1. This will work for any E, not just smooth E. So, for example, if we chose E=66=2*3*11, then we would find the factor 23, but not the factor 19, even though 19-1 is smoother than 23-1.

The thing to realize is that the smoothness 'rule' isn't something inherent in the algorithm itself. Rather, it arises out of the optimisation of the algorithm. Every E power-smooth to some B1 is optimal in the sense that all other choices are either less likely to find a factor, more expensive to compute, or both.

Stage 2 computes 3[sup]E*q[/sup] for a range of different prime q and will find a factor p if any E*q is divisible by p-1. Again, the algorithm will work (in the sense that it will find these factors) for an arbitrary set of primes q, but it's only worth doing if it can be implemented efficiently, which it can on large blocks of consecutive primes.

The "standard" P-1, then, chooses two bounds, B1 and B2, computes E power-smooth to B1 for stage 1, and uses all primes q between B1 and B2 for stage 2. Variants on this are worth considering; for example, one might compute E smooth to B1 but power-smooth to B2, or one might run stage 2 from 2 to B2 instead of from B1 to B2. The examples supplied above by axn and fivemack suggest that gmp-ecm does both.
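
Both stages can be demonstrated on a hypothetical toy composite (N = 139*149 with B1 = 7, B2 = 29 are made-up values; 139-1 = 2*3*23, so the prime 23 between B1 and B2 is exactly what stage 2 must supply). Real stage 2 implementations are far cleverer than this naive loop:

```python
from math import gcd

def primes_upto(n):
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p**0.5) + 1))]

N = 139 * 149            # toy composite: 139-1 = 2*3*23, 149-1 = 2^2*37
B1, B2 = 7, 29

# Stage 1: E is power-smooth to B1 (largest power of each prime <= B1).
E = 1
for p in primes_upto(B1):
    pp = p
    while pp * p <= B1:
        pp *= p
    E *= pp              # E = 2^2 * 3 * 5 * 7 = 420
x = pow(3, E, N)
print(gcd(x - 1, N))     # 1 -> stage 1 misses, since 23 > B1

# Stage 2: try 3^(E*q) for each prime B1 < q <= B2.
for q in primes_upto(B2):
    if q <= B1:
        continue
    g = gcd(pow(x, q, N) - 1, N)   # (3^E)^q = 3^(E*q)
    if g > 1:
        print(q, g)      # 23 139 -> q = 23 supplies the missing prime
        break
```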

c10ck3r 2012-02-05 20:26

For the P-1 method, does the number have to be taken mod 2^P-1 each step, or could we compute, say, 3^50000, take that mod 2^P-1, and take it to the P power?

NBtarheel_33 2012-02-05 21:28

[QUOTE=c10ck3r;288399]For the P-1 method, does the number have to be taken mod 2^P-1 each step, or could we compute, say, 3^50000, take that mod 2^P-1, and take it to the P power?[/QUOTE]

Given the size of the exponents [I]p[/I] that we usually work with, 3^50000 ~ 10^23856 (mod 2^[I]p[/I] - 1) is likely to simply be 3^50000 (i.e. modular reduction does nothing to reduce the size of 3^50000).

In practice, however, where 3^[I]E[/I] is huge, modular reduction at each step is likely done for the same purpose for which it is done in an LL test: it keeps the size of the numbers involved from very quickly outgrowing the universe.

NBtarheel_33 2012-02-05 21:59

[QUOTE=Mr. P-1;288369]To do a stage 1 P-1, you start by choosing a composite number E then compute 3[sup]E[/sup]-1. The method will then find a factor p if E is divisible by p-1. This will work for any E, not just smooth E. So for example if we chose E=66=2*3*11, then we would find the factor 23, but not the factor 19, even though 19-1 is smoother than 23-1.

The thing to realize is that the smoothness 'rule' isn't something inherent in the algorithm itself. Rather, it arises out of the optimisation of the algorithm. Every E power-smooth to some B1 is optimal in the sense that all other choices are either less likely to find a factor, more expensive to compute, or both.

Stage 2 computes 3[sup]E*q[/sup] for a range of different prime q and will find a factor p if any E*q is divisible by p-1. Again, the algorithm will work (in the sense that it will find these factors) for an arbitrary set of primes q, but it's only worth doing if it can be implemented efficiently, which it can on large blocks of consecutive primes.

The "standard" P-1 then, chooses two bounds, B1 and B2, computes E power-smooth to B1 for stage 1, and uses all primes q between B1 and B2 for stage 2. Variants on this are worth considering, for example one might compute E smooth to B1 but power-smooth to B2, or one might run stage 2 from 2 to B2 instead of from B1 to B2. The examples supplied above by axn and fivemack suggest that gmp-ecm does both.[/QUOTE]

It would be very helpful to see a small, paper-and-pencil example of the P-1 algorithm, if someone would be willing to work through one.

Mr. P-1 2012-02-05 23:02

[QUOTE=c10ck3r;288399]For the P-1 method, does the number have to be taken mod 2^P-1 each step, or could we compute, say, 3^50000, take that mod 2^P-1, and take it to the P power?[/QUOTE]

Bear in mind that a standard stage 1 with B1=13 would involve computing 3^360360-1. For the values of B1 we use in practice, typically > 100,000, computing the power without modular reduction at each step is completely infeasible.
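
For a sense of scale, a quick sketch: even the toy B1=13 power already has about 172,000 decimal digits when computed without reduction, while the reduced powering keeps every intermediate below the modulus (M2011 is used here purely as an illustrative small modulus):

```python
import math

E = 360360                       # power-smooth exponent for B1 = 13
digits = int(E * math.log10(3)) + 1
print(digits)                    # about 172,000 decimal digits for 3^E
M = 2**2011 - 1                  # an arbitrary small Mersenne modulus
x = pow(3, E, M)                 # with reduction: instant, result < M
print(x < M)                     # True
```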

Mr. P-1 2012-02-05 23:21

[QUOTE=NBtarheel_33;288406]It would be very helpful to see a small, paper-and-pencil example of the P-1 algorithm, if someone would be willing to work through one.[/QUOTE]

I'm not willing to do a literal paper-and-pencil example but perhaps this will help. Take E=66 as in my example above. 3^66-1 = 30903154382632612361920641803528 is divisible by 23. If N is also divisible by 23 then GCD(N,3^66-1) cannot be 1. If we're unlucky then this GCD will be just N and we will have failed to find a factor. More likely it will be 23 or some multiple, in which case we (partially) factorise N.
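
That pencil-and-paper example runs in a couple of lines of Python; here N = 19*23 is a hypothetical toy composite, chosen so that one factor is found and the other missed:

```python
from math import gcd

N = 19 * 23                  # toy N divisible by 23 (and by 19)
E = 66                       # 2 * 3 * 11
g = gcd(pow(3, E, N) - 1, N)
print(g)                     # 23, since 23 - 1 = 22 divides 66
print((3**E - 1) % 19 == 0)  # False: 19 - 1 = 18 does not divide 66
```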

ET_ 2012-02-06 13:55

Question about P-1
 
My laptop is currently running P-1 tests on M54,xxx,xxx on the first thread and ECM tests on small Mersenne numbers on the second one.

I gave Prime95 1.5 GB of memory.

As the laptop is active only 8 hours/day, it may happen that:

1 - Thread 1 starts Stage 1 with 1.5 GB of memory
2 - Thread 2 starts Stage 2 with 760 MB, and thread 1 is lowered to 760 MB
3 - Thread 2 finishes Stage 2
4 - Thread 1 regains 1.5 GB of memory
5 - Go to 2

a) How will Stage 2 of P-1 be affected by this change of memory allotments?
b) Is running the algorithm still worthwhile?
c) At the end of P-1 Stage 2, how will the calculation be credited by PrimeNet? With higher or lower B2 bounds?

Luigi

James Heinrich 2012-02-06 14:11

[QUOTE=ET_;288443]I gave Prime95 1.5 GB of memory.[/QUOTE]Prime95 will determine optimal bounds under the assumption that it will have access to the full amount of the allocated RAM for all of stage2. This logic works well for the standard model where P-1 is a small part at the start of a L-L test. For dedicated P-1 (or ECM) work, it would be best to put specific allocations in [i]local.txt[/i]:[code][Worker #1]
Memory=800

[Worker #2]
Memory=700[/code]Numbers for example only, you can decide if you want equal or unequal distribution, whatever works best for your P-1 + ECM setup.

ET_ 2012-02-06 15:27

[QUOTE=James Heinrich;288444]Prime95 will determine optimal bounds under the assumption that it will have access to the full amount of the allocated RAM for all of stage2. This logic works well for the standard model where P-1 is a small part at the start of a L-L test. For dedicated P-1 (or ECM) work, it would be best to put specific allocations in [i]local.txt[/i]:[code][Worker #1]
Memory=800

[Worker #2]
Memory=700[/code]Numbers for example only, you can decide if you want equal or unequal distribution, whatever works best for your P-1 + ECM setup.[/QUOTE]

Thank you James, this makes sense. :smile:

I was worrying about the change of B2 settings during the calculation, and wondering which one will be chosen by PrimeNet, and how the sudden B2 change affects the calculation.

I will stick to fixed bounds for now, but I'm still curious about the questions...

Luigi

bcp19 2012-02-06 16:08

[QUOTE=ET_;288448]Thank you James, this makes sense. :smile:

I was worrying about the change of B2 settings during the calculation, and wondering which one will be chosen by PrimeNet, and how the sudden B2 change affects the calculation.

I will stick to fixed bounds for now, but I'm still curious about the questions...

Luigi[/QUOTE]

There should be no change in B2; it would just take longer to run S2 with the lower amount of memory available.

James Heinrich 2012-02-06 16:18

[QUOTE=ET_;288443]a) How will Stage 2 of P-1 be affected by this change of memory allotments?
b) Is running the algorithm still worthwhile?
c) At the end of P-1 Stage 2, how will the calculation be credited by PrimeNet? With higher or lower B2 bounds?[/QUOTE]Bounds are calculated for optimal efficiency, under the assumption that stage2 will have access to the full amount of RAM that you've configured for Prime95.

To answer your questions directly:
a) The bounds used will be as calculated; runtime may be a little longer with less-than-full memory available than if all memory were available to the single instance.

b) Prime95 may have chosen different (smaller) bounds if it knew it would have less RAM available to maintain optimal balance between runtime and chance of factor, but the difference should be relatively small. And indeed, some of the time it will likely have the full amount of RAM available.

c) You always get credit for what you ran. Once the bounds are chosen, that's what's used to run the P-1 and that's what you get credit for. The amount of RAM just affects how efficiently stage2 can run, therefore how long it will take to run, therefore how large the bounds should be to obtain optimal balance between runtime and probability.

ET_ 2012-02-06 16:41

OK, now I think I have understood. :smile:

Thank you everybody!

Luigi

chalsall 2012-02-08 16:29

Low candidates available to P-1...
 
Just so you all know, there is a batch (347) of candidates available to P-1 in the 52M range from [URL="http://www.gpu72.com/reports/available/nop-1/"]GPU to 72[/URL] ("Not just for GPUs any more").

And if anyone is not yet working with the sub-project, please consider joining. (Hint, hint Mr. P-1... :smile:)

cheesehead 2012-02-08 20:05

A little pedantry :-)
 
As always in discussions (as above) of the P-1 bounds-choosing algorithm, recall that prime95 invokes it only for "Pfactor=" (or implicitly for "Doublecheck=" or "Test="), not for "Pminus1=" where the bounds are explicitly specified by the user.

Dubslow 2012-02-12 21:16

Question: If a P-1 only did S1 because it found factors, and I want it to go back and do S2 (I kept the save file) what do I put in worktodo? Pminus1 or PFactor?

James Heinrich 2012-02-12 22:18

[QUOTE=Dubslow;289199]Question: If a P-1 only did S1 because it found factors, and I want it to go back and do S2 (I kept the save file) what do I put in worktodo? Pminus1 or PFactor?[/QUOTE]You'll need to use Pminus1:[code]Pminus1=k,b,n,c,B1,B2[,how_far_factored][,B2_start][,"factors"][/code]Notably, you'll need to include the factors that were found in stage1 in the "factors" list, otherwise Prime95 will start running off the existing savefile, find the stage1 factor(s) again and stop the assignment. So something like:[code]Pminus1=1,2,49876543,1,500000,12000000,0,"1234567890123456789"[/code]

Dubslow 2012-02-12 22:33

Wouldn't c need to be -1? Otherwise, thanks. I definitely would not have gotten the known factors part. (Decimal, right?)

Is the zero for 'how far factored' or for B2 start?
Edit: Going off the wiki article, it doesn't mention factored depth, so I'll assume B2 start.


[code]Pminus1=1,2,52567117,-1,515000,15000000,0,"10005731267083223465351","6376255264142420877595961"[/code]

lorgix 2012-02-12 22:42

[QUOTE=Dubslow;289210]Wouldn't c need to be -1? Otherwise, thanks. I definitely would not have gotten the known factors part. (Decimal, right?)

Is the zero for 'how far factored' or for B2 start?
Edit: Going off the wiki article, it doesn't mention factored depth, so I'll assume B2 start.


[code]Pminus1=1,2,52567117,-1,515000,15000000,0,"10005731267083223465351","6376255264142420877595961"[/code][/QUOTE]
From whatsnew.txt:

" Pminus1=k,b,n,c,B1,B2[,how_far_factored][,B2_start][,"factors"] "

Another way would be setting Stage1GCD=0 in prime.txt.

Dubslow 2012-02-12 23:16

[code]Pminus1=N/A,1,2,52567117,-1,515000,15000000,0,"10005731267083223465351","6376255264142420877595961"[/code]
[code][Worker #1 Feb 12 17:09:48] Worker starting
[Worker #1 Feb 12 17:09:48] Setting affinity to run worker on logical CPU #1
[Worker #3 Feb 12 17:09:48] Waiting 10 seconds to stagger worker starts.
[Worker #2 Feb 12 17:09:48] Waiting 5 seconds to stagger worker starts.
[Worker #1 Feb 12 17:09:48] P-1 on M52567117 with B1=515000, B2=15000000
[Worker #1 Feb 12 17:09:48] Using Core2 type-3 FFT length 2800K, Pass1=448, Pass2=6400
[Worker #1 Feb 12 17:09:50] Error parsing comma separated known factor list near: "637625526
[Worker #1 Feb 12 17:09:50] M52567117 stage 1 is 0.0417% complete.[/code]
It's not finding the save file. I just looked, and I have no clue if I have it or not. I do not have any KeepPminus1SaveFile lines in prime.txt or local.txt, so I assume it's keeping them, but there's a lot less files than there are P-1s that I've done in the last week. (Either way though, it still can't parse the factor thingy.)

James Heinrich 2012-02-12 23:35

[QUOTE=lorgix;289212]Another way would be setting Stage1GCD=0 in prime.txt.[/QUOTE]Ah, excellent idea, I missed that option.

[QUOTE=Dubslow;289215]It's not finding the save file. I just looked, and I have no clue if I have it or not. I do not have any KeepPminus1SaveFile lines in prime.txt or local.txt, so I assume it's keeping them, but there's a lot less files than there are P-1s that I've done in the last week.[/QUOTE]Prime95 now (since relatively recently, maybe starting with v25) defaults to keeping Pminus1 savefiles, unless told otherwise. But note, this is P-1 assignments invoked with [i]Pminus1=[/i] worktodo lines, it does not apply to the more-common [i]Pfactor=[/i] assignments. The savefiles are the same, and if you kept one manually (or can recover it via a backup or perhaps an undelete utility like [url=http://www.piriform.com/recuva]Recuva[/url]) then you can use it.

[QUOTE=Dubslow;289210]Wouldn't c need to be -1?[/QUOTE]Oops, yes, I missed the -

[QUOTE=Dubslow;289215]Either way though, it still can't parse the factor thingy.[/QUOTE]It's a single list of factors -- the comma-separation is within the quotes:[code]Pminus1=N/A,1,2,52567117,-1,515000,15000000,72,0,"10005731267083223465351,6376255264142420877595961"[/code]The how-far-factored element (72 in my above example) doesn't really affect anything except the display of factor probability. In Pfactor= assignments it will affect what bounds are chosen; in Pminus1= assignments the bounds are fixed by you, so it's just a cosmetic thing.

Dubslow 2012-02-13 00:39

[QUOTE=James Heinrich;289217]
Prime95 now (since relatively recently, maybe starting with v25) defaults to keeping Pminus1 savefiles, unless told otherwise. [U]But note, this is P-1 assignments invoked with [i]Pminus1=[/i] worktodo lines, it does not apply to the more-common [i]Pfactor=[/i] assignments.[/U] The savefiles are the same, and if you kept one manually (or can recover it via a backup or perhaps an undelete utility like [url=http://www.piriform.com/recuva]Recuva[/url]) then you can use it.
[/QUOTE]

That's the part I missed. Good to know. I guess I don't have the file.

bcp19 2012-02-13 05:35

I've been pondering the last few days: if the B1 bound being used is the same for several exponents, wouldn't there be some way to check multiple exponents at the same time? I could be wrong, but it seems that the P-1 S1 calculates a lot of the same things for each exponent. How much extra effort would there be to take, say, 10+ exponents and calculate them in parallel?

cheesehead 2012-02-13 06:43

[QUOTE=bcp19;289238]I've been pondering the last few days, if the B1 bound being used is the same for several exponents, wouldn't there be some way to check multiple exponents at the same time? I could be wrong, but it seems that the P-1 S1 calculates a lot of the same things for each exponent,[/QUOTE]As soon as the stage 1 exponentiation hits the first mod(Mnumber) operation, they go their separate ways.

- - -

Exercises:

What's the tenth primorial (i.e., product of first ten primes)? What is the base-2 logarithm of 3^(tenth primorial)?

How far does the stage 1 3-exponentiation have to go before it exceeds a typical Mnumber on which GIMPSters are performing P-1? To what B1 does that correspond?

axn 2012-02-13 06:44

[QUOTE=bcp19;289238]I've been pondering the last few days, if the B1 bound being used is the same for several exponents, wouldn't there be some way to check multiple exponents at the same time? I could be wrong, but it seems that the P-1 S1 calculates a lot of the same things for each exponent, how much extra effort would there be to have say 10+ exponents and have it calculate them in parallel?[/QUOTE]

No. To keep the intermediate calculations from getting too big, the calculation is done modulo the Mp. This means that the calculation values cannot be reused between different Mp's.

bcp19 2012-02-13 18:10

[QUOTE=cheesehead;289242]As soon as the stage 1 exponentiation hits the first mod(Mnumber) operation, they go their separate ways.

- - -

Exercises:

What's the tenth primorial (i.e., product of first ten primes)? What is the base-2 logarithm of 3^(tenth primorial)?

How far does the stage 1 3-exponentiation have to go before it exceeds a typical Mnumber on which GIMPSters are performing P-1? To what B1 does that correspond?[/QUOTE]

[I]p[/I]n#=6469693230 when n=10. log2(3^6469693230) is beyond my current ability to calculate.

Seeing as 3^2310 (fifth primorial) is 1.4128e+1102 (roughly M3660) and 3^30030 (sixth primorial) = 8.9388e+14327 (roughly M47590), it's fairly easy to comprehend that 3^(8th or 9th primorial) will exceed a typical Mp.

And calculating B1 is also beyond my current ability.

cheesehead 2012-02-13 21:12

[QUOTE=bcp19;289297][I]p[/I]n#=6469693230 when n=10. log2(3^6469693230) is beyond my current ability to calculate.[/QUOTE] log2(3^6469693230) = 6469693230 * log2(3)

log2(3) is just over 1.5 (2^1.5 = 2 * sqrt(2) = 2.828).

That would give you the exponent of the Mersenne number equal in magnitude to 3 raised to the power of the tenth primorial, approximately 10,000,000,000 -- beyond our current limits in prime95.
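As a quick check of this arithmetic (plain Python, nothing Prime95-specific):

```python
from math import log2

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
P10 = 1
for p in primes:
    P10 *= p
print(P10)            # 6469693230, the tenth primorial

# Exponent of the Mersenne number equal in magnitude to 3^P10:
print(P10 * log2(3))  # ~1.025e10, i.e. roughly 10 billion
```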

[quote]Seeing as 3^2310(fifth primorial) is 1.4128e+1102 (roughly M3660) and 3^30030(sixth primorial) = 8.9388e+14327 (roughly M47590), it's fairly easy to comprehend that 3^ 8th or 9th primorial will exceed a typical Mp.[/quote]... and yet, the tenth primorial is simply the product of the first ten primes = 2*3*5*7*11*13*17*19*23*29

[quote]And calculating B1 is also beyond my current ability.[/quote]Suppose B1 = 30. That's pretty small; in fact, it's the default minimum value that prime95 substitutes for any smaller value.

Stage 1 of P-1 computes 3^(product of prime powers up to B1), which for B1=30 is

3^(2^4*3^3*5^2*7*11*13*17*19*23*29),

which is

3^(tenth primorial)*3^360

So, even the default minimum B1 requires computing a number which is 3^360 times 3^(tenth primorial).

3^(tenth primorial) is itself larger than the current upper limit of Mersenne number that prime95 can handle.

The point is that even the smallest B1 value we use requires use of modular arithmetic, so there's no point in trying to save numbers calculated in stage 1 for use on other exponents. There is no duplication (from one exponent to another) after as few as 8-10 terms.

LaurV 2012-02-14 04:42

[QUOTE=cheesehead;289314]
3^(2^4*3^3*5^2*7*11*13*17*19*23*29),
which is
3^(tenth primorial)*3^360
[/QUOTE]

Gotcha! :razz:
[URL="http://www.mersenneforum.org/showpost.php?p=288883&postcount=20"]I told you so![/URL] Ha ha ha ha. Next time don't jump to byte the poor bcp guy... I bet every mathematician make this mistake at least 10 times in his life! :smile:

KyleAskine 2012-02-14 12:12

What James said above got me thinking. Is it possible to start a P-1 search using your save data and just push B1 or B2 out a little and not have to do most of the work?

Because that would make sense to me as what Never Odd or Even seems to do to get his mammoth P-1 GHz-d:

[URL="http://mersenne-aries.sili.net/exponent.php?exponentdetails=383838383"]http://mersenne-aries.sili.net/exponent.php?exponentdetails=383838383[/URL]

Just curious.

James Heinrich 2012-02-14 13:48

[QUOTE=KyleAskine;289362]What James said above got me thinking. Is it possible to start a P-1 search using your save data and just push B1 or B2 out a little and not have to do most of the work.[/QUOTE]Certainly. A downside of P-1 is that it doesn't know if it's found a factor until it does a GCD, which it only does at the end of stage1 and end of stage2 (assuming it found nothing in stage1). As long as you keep the savefile ([i]KeepPminus1SaveFiles=1[/i] in prime.txt and specify assignments as "Pminus1="), stage1 will continue right where it left off with successively higher B1s. That is what I am currently doing with [url=http://mersenne-aries.sili.net/M595900037]M595,900,037[/url], with worktodo like this:[code]Pminus1=N/A,1,2,595900037,-1,3000000,3000000,82
Pminus1=N/A,1,2,595900037,-1,3500000,3500000,82
Pminus1=N/A,1,2,595900037,-1,4000000,4000000,82
...
Pminus1=N/A,1,2,595900037,-1,6000000,6000000,82[/code]In a few months, when I've done stage1 up to (at least) B1=6,000,000 I'll move on to stage2, for which I can use the B2_start parameter of Pminus1= lines:[quote]Pminus1=k,b,n,c,B1,B2[,how_far_factored][,[b]B2_start[/b]][,"factors"][/quote]to specify progressively higher B2s until I get up to around [url=http://mersenne-aries.sili.net/prob.php?exponent=595900037&work=500&factorbits=82]B2=130,000,000[/url].


On a side note: doing P-1 this big with current hardware is silly. :smile:

cheesehead 2012-02-14 17:12

[QUOTE=LaurV;289337]Gotcha! :razz:
[URL="http://www.mersenneforum.org/showpost.php?p=288883&postcount=20"]I told you so![/URL] Ha ha ha ha.[/QUOTE]

Yes, in my preceding post I should have written (corrections in boldface):

3^(2^4*3^3*5^2*7*11*13*17*19*23*29),

which is

3^[B](tenth primorial*360) = [3^(tenth primorial)]^360[/B]

So, even the default minimum B1 requires computing a number which is [B]the 360th power of 3^(tenth primorial)[/B].
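The corrected identity is easy to verify mechanically (a toy helper with my own naming; trial division is fine at this size):

```python
def stage1_exponent(B1):
    """Product of the largest prime powers not exceeding B1."""
    E = 1
    for p in range(2, B1 + 1):
        if all(p % d for d in range(2, p)):  # p is prime
            pk = p
            while pk * p <= B1:              # largest power of p <= B1
                pk *= p
            E *= pk
    return E

P10 = 2 * 3 * 5 * 7 * 11 * 13 * 17 * 19 * 23 * 29  # tenth primorial
print(stage1_exponent(30) == P10 * 360)             # True
print(stage1_exponent(13))                          # 360360, the B1=13 example earlier in the thread
```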

[quote] Next time don't jump to byte the poor bcp guy...[/quote]Where did I "jump to byte" anyone? :-)

My calculation wasn't correcting any mistake bcp19 had made other than an apparent assumption that part of the stage 1 computation could be usefully saved for reuse on another exponent.

I was supplying a computation that neither bcp19 nor I had (apparently) actually made yet. In fact, before I performed the computation for that posting I had expected that the maximum B1 at which a saving of partial computation could be useful would be higher than 30. So I surprised myself, too.

The mistake I made in my calculation didn't invalidate the point I intended to communicate: that even the smallest B1 used in prime95 requires modular arithmetic, which makes it impractical to save a partial computation large enough to usefully speed up stage 1 for a different exponent.

- - - - - - - - - -

[QUOTE=James Heinrich;289368]On a side note: doing P-1 this big with current hardware is silly. :smile:[/QUOTE]

"Silly", along with "Miniscule", is some of our middle names here on mersenneforum.org

bcp19 2012-02-14 18:55

[QUOTE=cheesehead;289376]"Silly", along with "Miniscule", is some of our middle names here on mersenneforum.org[/QUOTE]

Does that mean I have to be bsp19 now?

oswald 2012-02-14 20:53

[QUOTE=bcp19;289383]Does that mean I have to be bsp19 now?[/QUOTE]
[B]binary space partitioning[/B] ([B]BSP[/B]) is a method for recursively subdividing a space into convex sets by hyperplanes. Isn't that what you are doing? :smile:

LaurV 2012-02-15 03:42

hrrrr... hating anything involving bsp's (board support package for winCE and win mobile, system programmers know what I am talking about...)
edit: and yes, that is what [I][B]I[/B][/I] am doing, wasting my days and getting white hair...

cheesehead 2012-02-15 07:11

[QUOTE=LaurV;289423]hrrrr... hating anything involving bsp's (board support package for winCE and win mobile, system programmers know what I am talking about...)
edit: and yes, that is what [I][B]I[/B][/I] am doing, wasting my days and getting white hair...[/QUOTE]
Just keep in mind all that nicely partitioned space you're producing, making the world a more-organized place. :-)

bcp19 2012-02-27 23:14

I've been doing some reading, and have a few questions regarding P-1 and ECM. From the usage of them on the site, it appears to be a given that P-1 is more efficient than ECM for time spent vs factor found for exponents that were not factored during TF and prior to LL. So, while P-1 will find all factors 2kp+1 where the 2nd largest factor of k < B1 and the largest factor of k < B2, will the ECM also find all factors? (If I am asking this question badly, given that factor F is 22 digits long and can be found with B1=50000 using 300 curves[25 digit ECM], will F also be found using B1=250000 using 700 curves[30 digit ECM] and higher?) At what point is it better to use ECM for factorization and how do you determine what level of ECM to start at?

cheesehead 2012-02-27 23:41

[QUOTE=bcp19;291113]I've been doing some reading, and have a few questions regarding P-1 and ECM. From the usage of them on the site, it appears to be a given that P-1 is more efficient than ECM for time spent vs factor found for exponents that were not factored during TF and prior to LL. So, while P-1 will find all factors 2kp+1 where the 2nd largest factor of k < B1 and the largest factor of k < B2, will the ECM also find all factors? (If I am asking this question badly,[/QUOTE]No, it's clear. :-)

[quote]given that factor F is 22 digits long and can be found with B1=50000 using 300 curves[25 digit ECM], will F also be found using B1=250000 using 700 curves[30 digit ECM] and higher?)[/quote]Not necessarily. P-1 is a deterministic algorithm; it will find all factors within the search area defined by B1 and B2. But ECM is a probabilistic algorithm; it examines a random selection of elliptic curves within the search area defined by its B1/B2. The more curves examined, the greater its chances of finding a factor that is with the search area. But ECM is never [I]guaranteed[/I] to find a particular factor that exists within the search area, as P-1 is.

[quote]At what point is it better to use ECM for factorization and how do you determine what level of ECM to start at?[/quote]As the search area defined by B1/B2 expands, P-1's performance bogs down faster than ECM's. That is, as B1/B2 go higher, P-1's guarantee of finding a factor in the search area is handicapped by its longer and longer execution time to do so.

By contrast, ECM's execution time has a lower proportionality to search area size, so at sufficiently high bounds its probability of finding a factor [I]within a given time of execution[/I] beats P-1's probability by a larger and larger margin.

(By "P-1's probability" here, I'm referring to the probability that a factor actually exists in the search area. P-1's certainty of finding a factor doesn't help if no such factor exists within the search area. OTOH, ECM's probability of finding a factor is [probability that a factor exists in the search area] * [probability that one of ECM's random curves will find a factor that does exist in the search area])

There are general guidelines for the optimum levels at which to switch from P-1 to ECM, but I don't have a link handy.
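That determinism can be made concrete. The sketch below (my own helper, not from any GIMPS tool) decides whether a known prime factor q falls inside the P-1 search area, under the usual convention that q-1 must be B1-powersmooth apart from at most one prime in (B1, B2]:

```python
def found_by_pminus1(q, B1, B2):
    """Would standard P-1 with bounds B1 <= B2 find the prime factor q?"""
    m = q - 1
    large = []                       # prime factors to be handled by stage 2
    d = 2
    while d * d <= m:
        if m % d == 0:
            e = 0
            while m % d == 0:
                m //= d
                e += 1
            if d ** e > B1:          # prime-power part too big for stage 1
                if e == 1 and B1 < d <= B2:
                    large.append(d)  # a single large prime can come from stage 2
                else:
                    return False
        d += 1
    if m > 1 and m > B1:             # leftover prime factor > sqrt of the rest
        if B1 < m <= B2:
            large.append(m)
        else:
            return False
    return len(large) <= 1           # stage 2 covers at most one large prime

# 2089 divides M29; 2089 - 1 = 2^3 * 3^2 * 29
print(found_by_pminus1(2089, 10, 30))  # True: 8 and 9 fit under B1, 29 under B2
print(found_by_pminus1(2089, 10, 20))  # False: 29 escapes both bounds
```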

Mini-Geek 2012-02-27 23:43

[QUOTE=bcp19;291113]I've been doing some reading, and have a few questions regarding P-1 and ECM. From the usage of them on the site, it appears to be a given that P-1 is more efficient than ECM for time spent vs factor found for exponents that were not factored during TF and prior to LL.[/QUOTE]
That's right. ECM, unlike TF and P-1, is not useful as a pre-factoring method (i.e. factoring efforts done to prove a Mersenne number composite without running LL tests); instead it is useful as an effort towards fully factoring a number. This is largely because with P-1 on Mersenne numbers, we know that q-1 (for any factor q of 2^p-1) is divisible by p (the Mersenne exponent), thus growing the potential factor size by several digits at almost no cost.
[QUOTE=bcp19;291113]So, while P-1 will find all factors 2kp+1 where the 2nd largest factor of k < B1 and the largest factor of k < B2, will the ECM also find all factors? (If I am asking this question badly, given that factor F is 22 digits long and can be found with B1=50000 using 300 curves[25 digit ECM], will F also be found using B1=250000 using 700 curves[30 digit ECM] and higher?)[/QUOTE]
ECM is not* deterministic like P-1 is. When you run "25 digit ECM", what you're doing (IIRC) is saying that if there is a 25 digit factor, you can expect to have found it 1 time, which means about a 63% probability (1 - 1/e) that you actually found it. When running to 30 digits, you can still find the 22 digit factor; in fact you're much more likely to find it than when searching at a 25 digit level (at a cost of higher expected time to find the factor than at the 25 digit level).
* In ECM, a random sigma is chosen for each curve/run. With each particular sigma, whether a factor will be found is deterministic, just like P-1: It is dependent on the smoothness of the group order of p given sigma s, like P-1 is dependent on the smoothness of p-1. When looking at a large number of ECM curves, it's best to think of it as being non-deterministic.
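The "expect to have found it once" arithmetic follows from the standard Poisson model for independent random curves (a heuristic, not a guarantee):

```python
from math import exp

def ecm_success_probability(curves_run, expected_curves):
    """Poisson heuristic: probability that at least one of curves_run random
    curves succeeds, where expected_curves is the published 'one expected
    success' curve count for the target factor size."""
    return 1.0 - exp(-curves_run / expected_curves)

print(ecm_success_probability(300, 300))  # ~0.632: one expected find = ~63% chance
print(ecm_success_probability(600, 300))  # ~0.865: doubling the curves
```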
[QUOTE=bcp19;291113]At what point is it better to use ECM for factorization...[/QUOTE]
After you've done a P-1 run, and you have done enough TF that the expected time per factor now favors ECM.
[QUOTE=bcp19;291113]...and how do you determine what level of ECM to start at?[/QUOTE]
You'd probably want to start where TF left off. E.g. 70 bits is ~22 digits, so you would probably want to start at an ECM level optimal for finding 25 or 30 digit factors.

bcp19 2012-02-28 17:26

[QUOTE=Mini-Geek;291120]* In ECM, a random sigma is chosen for each curve/run. With each particular sigma, whether a factor will be found is deterministic, just like P-1: It is dependent on the smoothness of the group order of p given sigma s, like P-1 is dependent on the smoothness of p-1. When looking at a large number of ECM curves, it's best to think of it as being non-deterministic.
[/QUOTE]

So, because of the random sigma, you can have 50 people each working on 100 curves with a very good chance of not duplicating work. I'd been wondering how that worked, thanks.

Dubslow 2012-03-02 04:41

What does QA mean in ecm.c?

[code]/* Global variables */ /*line 38*/

int QA_IN_PROGRESS = FALSE;
int QA_TYPE = 0;
giant QA_FACTOR = NULL;[/code]

cheesehead 2012-03-02 04:51

[QUOTE=Dubslow;291517]What does QA mean in ecm.c?

[code]/* Global variables */ /*line 38*/

int QA_IN_PROGRESS = FALSE;
int QA_TYPE = 0;
giant QA_FACTOR = NULL;[/code][/QUOTE]Quality assurance, or something similar -- i.e., test mode.

flashjh 2012-03-03 00:34

P-1 question
 
James,

Your site gives slightly different values for P-1 than Prime95.

Should I override Prime95 and use yours, combine, or just let Prime95 decide?

Prime95:

[CODE]
Optimal P-1 factoring of M403000007 using up to 117950MB of memory.
Assuming no factors below 2^77 and 2 primality tests saved if a factor is found.
Optimal bounds are B1=4065000, B2=134145000
Chance of finding a factor is an estimated 6.41%
Using Core2 type-3 FFT length 22400K, Pass1=3584, Pass2=6400, 4 threads

[/CODE]

Your site:

[CODE]
M403000007, factored to 77 bits, assuming 2 L-L tests saved, with B1=3,975,000 and B2=99,375,000, using K*B^N+C = 1*2^403000007-1
Probability = 6.095490%
Should take about 253.322407 GHz-days (using FFT size 22,400K)
Recommended RAM allocation: min=3,621MB; good=10,789MB; max=88,204MB; insane=432,268MB;
[/CODE]

Prime95 shows higher chance of finding a factor, but your B1 is lower. Should I use a pminus1 line in Prime95 like

[CODE]
Pminus1=1,2,403000007,-1,3975000,134145000,77
[/CODE]

to override the system?

On your site I put in the exponent and selected 2 LL tests saved to calculate.

Thanks

James Heinrich 2012-03-03 00:46

[QUOTE=flashjh;291650]Should I override Prime95 and use yours, combine, or just let Prime95 decide?[/QUOTE][b]Always[/b] let Prime95 decide. My numbers are reasonable guidelines, but don't take into account the nuances like how many relative primes can be processed per pass given your memory allocation.

For comparison, if you use my site to seek a [url=http://mersenne-aries.sili.net/prob.php?exponent=403000007&b1=4065000&b2=134145000&factorbits=77&K=1&C=-1]matching[/url] [url=http://mersenne-aries.sili.net/prob.php?exponent=403000007&prob=6.411128&factorbits=77]6.411128% chance of factor[/url], it does come up with bounds that are reasonably close to what Prime95 has selected:[code]Seeking 6.411128% chance of factor for exponent M403,000,007
M403,000,007, factored to 77 bits, with B1=4,593,367 and B2=124,020,909
Probability = 6.411128%
Should take about 306.553148 GHz-days (using FFT size 22,400K)
Recommended RAM allocation: min=3,621MB; good=10,789MB; max=88,204MB; insane=432,268MB;[/code]

Dubslow 2012-03-03 00:48

The problem is his site doesn't let you enter the memory to use, which plays a not-minor role in bounds choosing. You're allocating ~ [i]100GiB[/i](!!!), which is why Prime95 chose higher bounds and thus a better factor chance. If in doubt, always go with Prime95's bounds. James' site is meant for exploration, i.e. not having to put a Pfactor line in worktodo.txt and then stopping the test to get bounds calculated.

Edit: Whoops, cross post. Addendum: Why the hell are you doing P-1 that high? :P

flashjh 2012-03-03 00:49

[QUOTE=James Heinrich;291651][B]Always[/B] let Prime95 decide. My numbers are reasonable guidelines, but don't take into account the nuances like how many relative primes can be processed per pass given your memory allocation.

For comparison, if you use my site to seek a [URL="http://mersenne-aries.sili.net/prob.php?exponent=403000007&b1=4065000&b2=134145000&factorbits=77&K=1&C=-1"]matching[/URL] [URL="http://mersenne-aries.sili.net/prob.php?exponent=403000007&prob=6.411128&factorbits=77"]6.411128% chance of factor[/URL], it does come up with bounds that are reasonably close to what Prime95 has selected:[code]Seeking 6.411128% chance of factor for exponent M403,000,007
M403,000,007, factored to 77 bits, with B1=4,593,367 and B2=124,020,909
Probability = 6.411128%
Should take about 306.553148 GHz-days (using FFT size 22,400K)
Recommended RAM allocation: min=3,621MB; good=10,789MB; max=88,204MB; insane=432,268MB;[/code][/QUOTE]

Thanks.

James Heinrich 2012-03-03 01:00

[QUOTE=Dubslow;291652]Why the hell are you doing P-1 that high? :P[/QUOTE]Because he has 128GB of RAM :smile:

My current 32GB now feels strangely inadequate for my even-crazier attempt at [url=http://mersenne-aries.sili.net/M595900037]M595,900,037[/url] (3.5 months and I'm not done stage1 yet, although that's because it's sharing a single core with mfaktc on my i7-920; I'll put 4x 3930K cores on it once it's time for stage2).



Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.