
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Data (https://www.mersenneforum.org/forumdisplay.php?f=21)
-   -   COMPLETE!!!! Thinking out loud about getting under 20M unfactored exponents (https://www.mersenneforum.org/showthread.php?t=22476)

masser 2020-10-31 16:22

[QUOTE=pinhodecarlos;561656]I would like to run TF on the range 0-20M from lowest bit to 69-70 bits but I can’t seem to manage to download a sample file to try it. Also I have an account on GPU72 and can’t get them from them. Help please?[/QUOTE]

Here you go. No one has touched the 14.2M range in the past year. It will certainly need TF attention to get below 2,000 unfactored. I've attached the first subrange below (about 200 candidates). Note: user lycorn is working his way down from 20M to 10M, factoring up to 70 bits. So, if you want to test your GPU with these candidates, do it now. Don't wait a month or two, or there's a good chance they'll already have been completed.

It's pretty simple to use the times you record on these assignments to estimate times on other TF assignments.
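A rule of thumb behind that: TF work for a single bit level is roughly proportional to 2^bits / exponent, and each additional bit doubles the work. A minimal sketch (illustrative only; the 170-second timing is made up):

[CODE]# Illustrative sketch: TF work for one bit level scales as
# 2^bits / exponent, so one timed assignment calibrates the others.
def estimate_tf_seconds(timed_p, timed_bits, timed_secs, target_p, target_bits):
    """Scale a measured one-bit-level TF time to another assignment."""
    ratio = (2.0 ** target_bits / target_p) / (2.0 ** timed_bits / timed_p)
    return timed_secs * ratio

# Suppose a 69-70 assignment at 14.2M took 170 s on your GPU; then a
# 71-72 assignment at 27.05M should take about 2.1x as long:
print(estimate_tf_seconds(14200001, 70, 170, 27050039, 72))  # ~357 s
[/CODE]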

EDIT: Ach, just saw axn's post. Sorry, sir. I'm echoing axn's response.

[CODE]Factor=N/A,14200001,69,70
Factor=N/A,14200003,69,70
Factor=N/A,14200009,69,70
Factor=N/A,14200031,69,70
Factor=N/A,14200037,69,70
Factor=N/A,14200097,69,70
Factor=N/A,14200099,69,70
Factor=N/A,14200103,69,70
Factor=N/A,14200283,69,70
Factor=N/A,14200451,69,70
Factor=N/A,14200513,69,70
Factor=N/A,14200541,69,70
Factor=N/A,14200567,69,70
Factor=N/A,14200603,69,70
Factor=N/A,14200789,69,70
Factor=N/A,14200799,69,70
Factor=N/A,14200811,69,70
Factor=N/A,14200817,69,70
Factor=N/A,14200819,69,70
Factor=N/A,14200829,69,70
Factor=N/A,14200951,69,70
Factor=N/A,14200987,69,70
Factor=N/A,14200993,69,70
Factor=N/A,14201027,69,70
Factor=N/A,14201063,69,70
Factor=N/A,14201183,69,70
Factor=N/A,14201221,69,70
Factor=N/A,14201263,69,70
Factor=N/A,14201287,69,70
Factor=N/A,14201351,69,70
Factor=N/A,14201431,69,70
Factor=N/A,14201449,69,70
Factor=N/A,14201573,69,70
Factor=N/A,14201591,69,70
Factor=N/A,14201611,69,70
Factor=N/A,14201657,69,70
Factor=N/A,14201687,69,70
Factor=N/A,14201699,69,70
Factor=N/A,14201773,69,70
Factor=N/A,14201807,69,70
Factor=N/A,14201809,69,70
Factor=N/A,14201839,69,70
Factor=N/A,14201897,69,70
Factor=N/A,14201899,69,70
Factor=N/A,14201989,69,70
Factor=N/A,14202053,69,70
Factor=N/A,14202073,69,70
Factor=N/A,14202143,69,70
Factor=N/A,14202197,69,70
Factor=N/A,14202211,69,70
Factor=N/A,14202271,69,70
Factor=N/A,14202313,69,70
Factor=N/A,14202323,69,70
Factor=N/A,14202347,69,70
Factor=N/A,14202457,69,70
Factor=N/A,14202467,69,70
Factor=N/A,14202473,69,70
Factor=N/A,14202491,69,70
Factor=N/A,14202509,69,70
Factor=N/A,14202613,69,70
Factor=N/A,14202623,69,70
Factor=N/A,14202641,69,70
Factor=N/A,14202719,69,70
Factor=N/A,14202737,69,70
Factor=N/A,14202739,69,70
Factor=N/A,14202763,69,70
Factor=N/A,14202767,69,70
Factor=N/A,14202863,69,70
Factor=N/A,14202913,69,70
Factor=N/A,14203039,69,70
Factor=N/A,14203181,69,70
Factor=N/A,14203219,69,70
Factor=N/A,14203223,69,70
Factor=N/A,14203289,69,70
Factor=N/A,14203291,69,70
Factor=N/A,14203339,69,70
Factor=N/A,14203349,69,70
Factor=N/A,14203429,69,70
Factor=N/A,14203457,69,70
Factor=N/A,14203477,69,70
Factor=N/A,14203547,69,70
Factor=N/A,14203711,69,70
Factor=N/A,14203727,69,70
Factor=N/A,14203747,69,70
Factor=N/A,14203759,69,70
Factor=N/A,14203769,69,70
Factor=N/A,14203771,69,70
Factor=N/A,14203867,69,70
Factor=N/A,14203877,69,70
Factor=N/A,14203883,69,70
Factor=N/A,14203901,69,70
Factor=N/A,14204059,69,70
Factor=N/A,14204101,69,70
Factor=N/A,14204131,69,70
Factor=N/A,14204149,69,70
Factor=N/A,14204237,69,70
Factor=N/A,14204257,69,70
Factor=N/A,14204353,69,70
Factor=N/A,14204447,69,70
Factor=N/A,14204683,69,70
Factor=N/A,14204711,69,70
Factor=N/A,14204803,69,70
Factor=N/A,14204849,69,70
Factor=N/A,14204881,69,70
Factor=N/A,14204923,69,70
Factor=N/A,14204947,69,70
Factor=N/A,14204963,69,70
Factor=N/A,14205031,69,70
Factor=N/A,14205127,69,70
Factor=N/A,14205137,69,70
Factor=N/A,14205221,69,70
Factor=N/A,14205241,69,70
Factor=N/A,14205479,69,70
Factor=N/A,14205493,69,70
Factor=N/A,14205539,69,70
Factor=N/A,14205593,69,70
Factor=N/A,14205599,69,70
Factor=N/A,14205629,69,70
Factor=N/A,14205671,69,70
Factor=N/A,14205757,69,70
Factor=N/A,14205823,69,70
Factor=N/A,14205869,69,70
Factor=N/A,14205899,69,70
Factor=N/A,14205949,69,70
Factor=N/A,14205979,69,70
Factor=N/A,14205991,69,70
Factor=N/A,14206043,69,70
Factor=N/A,14206051,69,70
Factor=N/A,14206081,69,70
Factor=N/A,14206109,69,70
Factor=N/A,14206117,69,70
Factor=N/A,14206123,69,70
Factor=N/A,14206183,69,70
Factor=N/A,14206219,69,70
Factor=N/A,14206327,69,70
Factor=N/A,14206369,69,70
Factor=N/A,14206421,69,70
Factor=N/A,14206427,69,70
Factor=N/A,14206561,69,70
Factor=N/A,14206579,69,70
Factor=N/A,14206651,69,70
Factor=N/A,14206657,69,70
Factor=N/A,14206837,69,70
Factor=N/A,14206919,69,70
Factor=N/A,14206957,69,70
Factor=N/A,14206991,69,70
Factor=N/A,14207059,69,70
Factor=N/A,14207101,69,70
Factor=N/A,14207143,69,70
Factor=N/A,14207177,69,70
Factor=N/A,14207227,69,70
Factor=N/A,14207293,69,70
Factor=N/A,14207411,69,70
Factor=N/A,14207423,69,70
Factor=N/A,14207483,69,70
Factor=N/A,14207507,69,70
Factor=N/A,14207509,69,70
Factor=N/A,14207527,69,70
Factor=N/A,14207533,69,70
Factor=N/A,14207539,69,70
Factor=N/A,14207549,69,70
Factor=N/A,14207663,69,70
Factor=N/A,14207747,69,70
Factor=N/A,14207749,69,70
Factor=N/A,14207783,69,70
Factor=N/A,14207797,69,70
Factor=N/A,14207873,69,70
Factor=N/A,14207899,69,70
Factor=N/A,14207911,69,70
Factor=N/A,14207939,69,70
Factor=N/A,14207983,69,70
Factor=N/A,14207993,69,70
Factor=N/A,14208001,69,70
Factor=N/A,14208031,69,70
Factor=N/A,14208083,69,70
Factor=N/A,14208113,69,70
Factor=N/A,14208127,69,70
Factor=N/A,14208149,69,70
Factor=N/A,14208253,69,70
Factor=N/A,14208367,69,70
Factor=N/A,14208539,69,70
Factor=N/A,14208697,69,70
Factor=N/A,14208739,69,70
Factor=N/A,14208743,69,70
Factor=N/A,14208749,69,70
Factor=N/A,14208809,69,70
Factor=N/A,14208829,69,70
Factor=N/A,14208833,69,70
Factor=N/A,14208847,69,70
Factor=N/A,14208863,69,70
Factor=N/A,14208907,69,70
Factor=N/A,14208937,69,70
Factor=N/A,14208947,69,70
Factor=N/A,14209051,69,70
Factor=N/A,14209081,69,70
Factor=N/A,14209123,69,70
Factor=N/A,14209163,69,70
Factor=N/A,14209241,69,70
Factor=N/A,14209291,69,70
Factor=N/A,14209313,69,70
Factor=N/A,14209339,69,70
Factor=N/A,14209513,69,70
Factor=N/A,14209537,69,70
Factor=N/A,14209541,69,70
Factor=N/A,14209609,69,70
Factor=N/A,14209633,69,70
Factor=N/A,14209691,69,70
Factor=N/A,14209693,69,70
Factor=N/A,14209703,69,70
Factor=N/A,14209709,69,70
Factor=N/A,14209771,69,70
Factor=N/A,14209801,69,70
Factor=N/A,14209859,69,70
Factor=N/A,14209861,69,70
Factor=N/A,14209927,69,70
Factor=N/A,14209999,69,70[/CODE]

petrw1 2020-10-31 18:07

[QUOTE=pinhodecarlos;561674]Just wanted to trial on my old GPU, thank you guys.[/QUOTE]

Do you use Misfit?
If you go to GPU72.COM you can choose work type "Double check tests" and "What Makes Sense". You might have to notify Chris.

storm5510 2020-10-31 21:22

[QUOTE=masser;561693]..So, if you want to test your GPU with these candidates, do it now. Don't wait a month or two, or there's a good chance they'll already have been completed....[/QUOTE]

Anyone with a good GPU should be able to knock these out in short order. I sampled one on my 2080, < 3 minutes.

About Misfit, it left me in a mess a couple of years back. 900+ assignments, if memory serves. It took close to a month to clean those up.

lycorn 2020-10-31 23:08

@pinhodecarlos: As per S485122's post, it's indeed straightforward to get exponents to TF. Just pick your range of choice and TF level, and mersenne.org will format the worktodo file for you.

Regarding masser's warning about me TFing my way down to 10000000, that won't be a concern, for I have put that activity on hold for now, and when I resume it I will coordinate with people in this thread to avoid toe-stepping. FYI, it will certainly be several weeks until I resume work in those ranges, most probably not until next year.

masser 2020-11-02 00:18

1 Attachment(s)
[QUOTE=VBCurtis;560448]I've started ECM on the suggested range. My first set is 5 curves at B1=250K; I may adjust that later. A curve at this size on the Surface tablet I'm using takes about 25k seconds, so ought to progress at roughly 5 candidates per week. I'll give it a couple weeks to start.[/QUOTE]

Thanks again for the kind offer of help. I've attached a P-1 worktodo.txt for the 14.01M range if the ECM work becomes too boring. I'll ensure that my devices avoid that range for a few months (at least until mid-January).

VBCurtis 2020-11-02 06:50

I accept your work!
I may inflate the bounds a little bit, after I time the first test or two.
"optimal" P-1 is out the window when we're going to need so much ECM to hit the factors-found target...

VBCurtis 2020-11-05 18:10

I went with B1=8M, B2=200M for the first few P-1 jobs.
It's on a really slow laptop with 8GB of RAM, though I expect a desktop core to open up later.

petrw1 2020-11-09 23:45

2020-11-09 Update ... GREAT PROGRESS!!!
 
Since last update (July 24)
17 more ranges cleared: 4.0, 7.9, 8.8, 19.8, 22.5, 27.6, 29.4, 37.0, 37.3, 40.3, 40.5, 40.8, 41.0, 41.1, 45.5, 56.8, 59.4
And 4 bonus ranges: 68.4, 73.1, 73.5, 75.8.

There is only 1 bonus range to go (58.7)... and it has had good progress; maybe, just maybe.
If or when it is cleared I am confident that every range 50M+ will be taken care of by natural GIMPS processes.
That allows this project to focus on the lower ranges only.

TOTALS to date:
195 total ranges cleared or 39.24%
22 Ranges with less than 20 to go.
1,601 more factored (25,598)....46.35% total factored.

Continuing to get lots of great help. THANKS again to everyone contributing.

petrw1 2020-11-10 00:21

Once again where is the most help needed?
 
Note that while there is lots of room for more TF in the remaining ranges, the lower the exponents, the more beneficial P-1 becomes (or ECM for VERY low exponents) and the less beneficial TF becomes.

My general process is:
1. TF a couple extra bits, while they are still relatively fast for GPUs
2. Aggressively P-1 where the current B1=B2. (I choose new B1/B2 that give about a 2.5-3% improvement over the current P-1; see the sketch after this list. You can use this tool: [url]https://www.mersenne.ca/prob.php[/url])
3a. Aggressively P-1 where the current B1/B2 are relatively low. (Any that I can gain at least 2% with reasonable new B1/B2)
3b. TF another bit level.
...(3a and 3b are numbered as such because the order I do these depends on analysis of which I estimate will help with the least effort/time. Often I'll do some of both).
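For step 2, the comparison I make looks something like this (a hypothetical helper, not an exact formula: the probabilities are whatever the prob.php tool reports for the current and proposed bounds, and the cost is per-attempt GhzDays):

[CODE]# Hypothetical helper: compare candidate P-1 bounds for a range using
# success probabilities read off https://www.mersenne.ca/prob.php
def ghzdays_per_extra_factor(prob_old, prob_new, cost_ghzdays):
    """Cost per additional expected factor when re-doing P-1."""
    return cost_ghzdays / (prob_new - prob_old)

n = 860                 # candidates in the range (made-up example)
gain = 0.030 - 0.005    # e.g. 0.5% chance already spent, 3.0% at new bounds
print(n * gain)         # ~21.5 expected extra factors over the range
print(ghzdays_per_extra_factor(0.005, 0.030, 1.35))  # 54 GhzDays per factor
[/CODE]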

That said; if you have TF-grade GPUs the immediate need is:
- TF the 20-29M ranges to at least 72 bits (or until a range is under 2,000 remaining)
===> If you use GPU72, it is already making these assignments available
- IF you have a more powerful GPU (or lots of them) consider helping in the more stubborn 30M and 40M ranges where aggressive P-1 is already done.
===> In the 40's: 44.6, 46.3 and 47.8 (They should be on GPU72 shortly)
===> In the 30's: 30.2, 31.6, 32.5, 33.0, 38.0, 38.6, 38.9
===> I (and others) are finishing P-1 for about 1 or 2 ranges a month to add to the above list.
- If you have CPUs or LL/P1-Grade GPUs consider doing aggressive P-1 as I defined above (or ECM if your RAM is limited). Pick a range and reserve it HERE and/or here: [url]https://www.mersenneforum.org/showthread.php?t=23152&highlight=redoing[/url]

Thanks in advance.

KEP 2020-11-11 14:45

Notice of working unreserved in the 999M-1G range (TF)
 
@Everyone:

Due to the difficulty of predicting Colab availability, I've decided to still help this effort, but to work far, far away from where SRBase is crunching and climb 1 bit at a time. The choice fell on the range 999M-1G (1 million n).

Currently I've loaded all exponents trial-factored to 72 bits into my worktodo.txt file. If I can complete about 100 a day, it will take me about 107 days on Colab to take everything from 72 to 73 bits. Once that completes, I'm going to load all candidates trial-factored to 73 bits and take them to 74 bits.

My hope is that someone gives me a heads-up if they reserve any large amount of work, so I can remove those candidates from my list. Even though Colab resources are free, it is just not meaningful to trial factor the same n twice. I will also keep an eye on the Active Assignments page to see if anyone reserves anything for trial factoring.

Take care.

lycorn 2020-11-12 15:48

[QUOTE=KEP;562909]My hope is that someone gives me a heads-up if they reserve any large amount of work, so I can remove those candidates from my list.[/QUOTE]

Why don't you simply reserve the batches you're willing to test, like the SRBASE folks do?

KEP 2020-11-12 21:17

[QUOTE=lycorn;563006]Why don't you simply reserve the batches you're willing to test, like the SRBASE folks do?[/QUOTE]

A good and fair question. As far as I know, there is no way for me to reserve all candidates tested to 72 bits while leaving those tested to higher bit levels unreserved; I would have to make potentially thousands of individual reservations to get just those tested to 72 bits and none of the others. I tried using the manual GPU reservation page a while back and it gave me 100 n's in the low n range, near the wavefront. If there is a way for me to reserve all candidates in 999M-1G tested to 72 bits, please let me know, either here or in PM :smile:

masser 2020-11-12 21:32

[QUOTE=KEP;563031]A good and fair question. As far as I know, there is no way for me to reserve all candidates tested to 72 bits while leaving those tested to higher bit levels unreserved; I would have to make potentially thousands of individual reservations to get just those tested to 72 bits and none of the others. I tried using the manual GPU reservation page a while back and it gave me 100 n's in the low n range, near the wavefront. If there is a way for me to reserve all candidates in 999M-1G tested to 72 bits, please let me know, either here or in PM :smile:[/QUOTE]

[URL="https://www.mersenne.org/report_factoring_effort/?exp_lo=999000000&exp_hi=999999999&bits_lo=72&bits_hi=&exassigned=1&tfonly=1&worktodo=1&tftobits=73"]Is this what you seek?[/URL]

lycorn 2020-11-12 23:27

[QUOTE=masser;563032][URL="https://www.mersenne.org/report_factoring_effort/?exp_lo=999000000&exp_hi=999999999&bits_lo=72&bits_hi=&exassigned=1&tfonly=1&worktodo=1&tftobits=73"]Is this what you seek?[/URL][/QUOTE]

That link allows you to get exponents to test, but it doesn't actually reserve them on the server.

But you can do it using the Manual GPU Assignments Form:

i) Choose the number of assignments you wish (I don't know whether there is a limit)
ii) Leave the Preferred Work Range at "What makes sense"
iii) Enter 999000000 and 999999999 in the Optional Exponent range fields
iv) Leave Work Preference at "What makes sense"
v) In the "Optional bit level to factor to" field enter 73. Note: I tried it and it gave me assignments from 72 to [B]74[/B] instead. It's not a problem, though, just a matter of editing the worktodo file, changing [B]",74"[/B] to [B]",73"[/B]. Notepad will suffice.
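(Or, if the worktodo file is long, a throwaway script does the same edit as Notepad; a minimal sketch:)

[CODE]# Minimal sketch of step v): change the "factor to" level from 74 to 73
# on every assignment line in worktodo.txt.
with open("worktodo.txt") as f:
    lines = [line.rstrip("\n") for line in f]
fixed = [l[:-3] + ",73" if l.endswith(",74") else l for l in lines]
with open("worktodo.txt", "w") as f:
    f.write("\n".join(fixed) + "\n")
[/CODE]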

Even if there are limits on the number of exponents you may reserve, that's not a problem. After all, you don't really need to reserve [B]all[/B] available exponents in one go, do you? :smile:

HTH

KEP 2020-11-13 09:34

[QUOTE=masser;563032][URL="https://www.mersenne.org/report_factoring_effort/?exp_lo=999000000&exp_hi=999999999&bits_lo=72&bits_hi=&exassigned=1&tfonly=1&worktodo=1&tftobits=73"]Is this what you seek?[/URL][/QUOTE]

Not exactly, but it was a good shot - as lycorn mentions, it did not give me the possibility to reserve anything, but it was very useful for my initial plan, since it was just copy-paste and then pushing "play" in the notebook.

[QUOTE=lycorn;563042]That link allows you to get exponents to test, but it doesn't actually reserve them on the server.

But you can do it using the Manual GPU Assignments Form:

i) Choose the number of assignments you wish (I don't know whether there is a limit)
ii) Leave the Preferred Work Range at "What makes sense"
iii) Enter 999000000 and 999999999 in the Optional Exponent range fields
iv) Leave Work Preference at "What makes sense"
v) In the "Optional bit level to factor to" field enter 73. Note: I tried it and it gave me assignments from 72 to [B]74[/B] instead. It's not a problem, though, just a matter of editing the worktodo file, changing [B]",74"[/B] to [B]",73"[/B]. Notepad will suffice.

Even if there are limits on the number of exponents you may reserve, that's not a problem. After all, you don't really need to reserve [B]all[/B] available exponents in one go, do you? :smile:

HTH[/QUOTE]

It appears that I must have done something wrong on my last try at reserving manual trial-factoring assignments ahead of the wavefront. The described plan worked; however, I have now changed my plan and decided to trial factor the currently reserved 2,000 tasks from 72 to 74 bits.

You are right, I do not need to reserve all work at once :wink:

pinhodecarlos 2020-11-17 18:27

Hi "Wayne", it's me again... lol. Almost done with the P-1 work you sent across last month; can I now have another batch of 100 P-1, but for a range lower than 20M? If not, I'm happy to proceed with the 30M, 40M, or 50M exponent ranges. TIA.

petrw1 2020-11-17 21:57

[QUOTE=pinhodecarlos;563534]Hi "Wayne", it's me again... lol. Almost done with the P-1 work you sent across last month; can I now have another batch of 100 P-1, but for a range lower than 20M? If not, I'm happy to proceed with the 30M, 40M, or 50M exponent ranges. TIA.[/QUOTE]

No one seems to be in the 16M range:
I think there are 121 here. About 2.3 GhzDays each

[CODE]Pminus1=N/A,1,2,16686743,-1,1500000,30000000,70
Pminus1=N/A,1,2,16608569,-1,1500000,30000000,70
Pminus1=N/A,1,2,16611989,-1,1500000,30000000,70
Pminus1=N/A,1,2,16616207,-1,1500000,30000000,70
Pminus1=N/A,1,2,16623379,-1,1500000,30000000,70
Pminus1=N/A,1,2,16637987,-1,1500000,30000000,70
Pminus1=N/A,1,2,16641263,-1,1500000,30000000,70
Pminus1=N/A,1,2,16641413,-1,1500000,30000000,70
Pminus1=N/A,1,2,16644701,-1,1500000,30000000,70
Pminus1=N/A,1,2,16651951,-1,1500000,30000000,70
Pminus1=N/A,1,2,16653269,-1,1500000,30000000,70
Pminus1=N/A,1,2,16658461,-1,1500000,30000000,70
Pminus1=N/A,1,2,16659217,-1,1500000,30000000,70
Pminus1=N/A,1,2,16659917,-1,1500000,30000000,70
Pminus1=N/A,1,2,16667857,-1,1500000,30000000,70
Pminus1=N/A,1,2,16674529,-1,1500000,30000000,70
Pminus1=N/A,1,2,16675639,-1,1500000,30000000,70
Pminus1=N/A,1,2,16679081,-1,1500000,30000000,70
Pminus1=N/A,1,2,16680943,-1,1500000,30000000,70
Pminus1=N/A,1,2,16686947,-1,1500000,30000000,70
Pminus1=N/A,1,2,16687493,-1,1500000,30000000,70
Pminus1=N/A,1,2,16691089,-1,1500000,30000000,70
Pminus1=N/A,1,2,16691099,-1,1500000,30000000,70
Pminus1=N/A,1,2,16691453,-1,1500000,30000000,70
Pminus1=N/A,1,2,16692157,-1,1500000,30000000,70
Pminus1=N/A,1,2,16694263,-1,1500000,30000000,70
Pminus1=N/A,1,2,16695697,-1,1500000,30000000,70
Pminus1=N/A,1,2,16697389,-1,1500000,30000000,70
Pminus1=N/A,1,2,16699589,-1,1500000,30000000,70
Pminus1=N/A,1,2,16643647,-1,1500000,30000000,70
Pminus1=N/A,1,2,16644403,-1,1500000,30000000,70
Pminus1=N/A,1,2,16645283,-1,1500000,30000000,70
Pminus1=N/A,1,2,16647041,-1,1500000,30000000,70
Pminus1=N/A,1,2,16648469,-1,1500000,30000000,70
Pminus1=N/A,1,2,16649063,-1,1500000,30000000,70
Pminus1=N/A,1,2,16649651,-1,1500000,30000000,70
Pminus1=N/A,1,2,16651111,-1,1500000,30000000,70
Pminus1=N/A,1,2,16653787,-1,1500000,30000000,70
Pminus1=N/A,1,2,16654993,-1,1500000,30000000,70
Pminus1=N/A,1,2,16655081,-1,1500000,30000000,70
Pminus1=N/A,1,2,16655879,-1,1500000,30000000,70
Pminus1=N/A,1,2,16656307,-1,1500000,30000000,70
Pminus1=N/A,1,2,16656569,-1,1500000,30000000,70
Pminus1=N/A,1,2,16659919,-1,1500000,30000000,70
Pminus1=N/A,1,2,16661417,-1,1500000,30000000,70
Pminus1=N/A,1,2,16661927,-1,1500000,30000000,70
Pminus1=N/A,1,2,16662179,-1,1500000,30000000,70
Pminus1=N/A,1,2,16662187,-1,1500000,30000000,70
Pminus1=N/A,1,2,16663177,-1,1500000,30000000,70
Pminus1=N/A,1,2,16663529,-1,1500000,30000000,70
Pminus1=N/A,1,2,16664867,-1,1500000,30000000,70
Pminus1=N/A,1,2,16665221,-1,1500000,30000000,70
Pminus1=N/A,1,2,16665469,-1,1500000,30000000,70
Pminus1=N/A,1,2,16665619,-1,1500000,30000000,70
Pminus1=N/A,1,2,16666057,-1,1500000,30000000,70
Pminus1=N/A,1,2,16666367,-1,1500000,30000000,70
Pminus1=N/A,1,2,16666577,-1,1500000,30000000,70
Pminus1=N/A,1,2,16667383,-1,1500000,30000000,70
Pminus1=N/A,1,2,16670987,-1,1500000,30000000,70
Pminus1=N/A,1,2,16671019,-1,1500000,30000000,70
Pminus1=N/A,1,2,16672463,-1,1500000,30000000,70
Pminus1=N/A,1,2,16673519,-1,1500000,30000000,70
Pminus1=N/A,1,2,16676027,-1,1500000,30000000,70
Pminus1=N/A,1,2,16677329,-1,1500000,30000000,70
Pminus1=N/A,1,2,16679527,-1,1500000,30000000,70
Pminus1=N/A,1,2,16680163,-1,1500000,30000000,70
Pminus1=N/A,1,2,16680199,-1,1500000,30000000,70
Pminus1=N/A,1,2,16680731,-1,1500000,30000000,70
Pminus1=N/A,1,2,16681421,-1,1500000,30000000,70
Pminus1=N/A,1,2,16681703,-1,1500000,30000000,70
Pminus1=N/A,1,2,16683353,-1,1500000,30000000,70
Pminus1=N/A,1,2,16694423,-1,1500000,30000000,70
Pminus1=N/A,1,2,16652413,-1,1500000,30000000,70
Pminus1=N/A,1,2,16674349,-1,1500000,30000000,70
Pminus1=N/A,1,2,16683791,-1,1500000,30000000,70
Pminus1=N/A,1,2,16696021,-1,1500000,30000000,70
Pminus1=N/A,1,2,16679939,-1,1500000,30000000,70
Pminus1=N/A,1,2,16600421,-1,1500000,30000000,70
Pminus1=N/A,1,2,16600811,-1,1500000,30000000,70
Pminus1=N/A,1,2,16605443,-1,1500000,30000000,70
Pminus1=N/A,1,2,16609457,-1,1500000,30000000,70
Pminus1=N/A,1,2,16612891,-1,1500000,30000000,70
Pminus1=N/A,1,2,16617787,-1,1500000,30000000,70
Pminus1=N/A,1,2,16619507,-1,1500000,30000000,70
Pminus1=N/A,1,2,16619833,-1,1500000,30000000,70
Pminus1=N/A,1,2,16623407,-1,1500000,30000000,70
Pminus1=N/A,1,2,16623493,-1,1500000,30000000,70
Pminus1=N/A,1,2,16625383,-1,1500000,30000000,70
Pminus1=N/A,1,2,16628753,-1,1500000,30000000,70
Pminus1=N/A,1,2,16628863,-1,1500000,30000000,70
Pminus1=N/A,1,2,16631161,-1,1500000,30000000,70
Pminus1=N/A,1,2,16631731,-1,1500000,30000000,70
Pminus1=N/A,1,2,16634929,-1,1500000,30000000,70
Pminus1=N/A,1,2,16638277,-1,1500000,30000000,70
Pminus1=N/A,1,2,16638287,-1,1500000,30000000,70
Pminus1=N/A,1,2,16639853,-1,1500000,30000000,70
Pminus1=N/A,1,2,16642091,-1,1500000,30000000,70
Pminus1=N/A,1,2,16643989,-1,1500000,30000000,70
Pminus1=N/A,1,2,16646191,-1,1500000,30000000,70
Pminus1=N/A,1,2,16646521,-1,1500000,30000000,70
Pminus1=N/A,1,2,16648543,-1,1500000,30000000,70
Pminus1=N/A,1,2,16654601,-1,1500000,30000000,70
Pminus1=N/A,1,2,16659581,-1,1500000,30000000,70
Pminus1=N/A,1,2,16664783,-1,1500000,30000000,70
Pminus1=N/A,1,2,16665949,-1,1500000,30000000,70
Pminus1=N/A,1,2,16668361,-1,1500000,30000000,70
Pminus1=N/A,1,2,16670363,-1,1500000,30000000,70
Pminus1=N/A,1,2,16671269,-1,1500000,30000000,70
Pminus1=N/A,1,2,16674443,-1,1500000,30000000,70
Pminus1=N/A,1,2,16677389,-1,1500000,30000000,70
Pminus1=N/A,1,2,16677487,-1,1500000,30000000,70
Pminus1=N/A,1,2,16682177,-1,1500000,30000000,70
Pminus1=N/A,1,2,16682927,-1,1500000,30000000,70
Pminus1=N/A,1,2,16684853,-1,1500000,30000000,70
Pminus1=N/A,1,2,16686949,-1,1500000,30000000,70
Pminus1=N/A,1,2,16691111,-1,1500000,30000000,70
Pminus1=N/A,1,2,16691839,-1,1500000,30000000,70
Pminus1=N/A,1,2,16693123,-1,1500000,30000000,70
Pminus1=N/A,1,2,16693279,-1,1500000,30000000,70
Pminus1=N/A,1,2,16696369,-1,1500000,30000000,70
Pminus1=N/A,1,2,16697909,-1,1500000,30000000,70
Pminus1=N/A,1,2,16609777,-1,1500000,30000000,70
Pminus1=N/A,1,2,16622743,-1,1500000,30000000,70
Pminus1=N/A,1,2,16633459,-1,1500000,30000000,70
Pminus1=N/A,1,2,16602847,-1,1500000,30000000,70
Pminus1=N/A,1,2,16606519,-1,1500000,30000000,70
Pminus1=N/A,1,2,16612121,-1,1500000,30000000,70
Pminus1=N/A,1,2,16614877,-1,1500000,30000000,70
Pminus1=N/A,1,2,16621723,-1,1500000,30000000,70
Pminus1=N/A,1,2,16622159,-1,1500000,30000000,70
Pminus1=N/A,1,2,16629703,-1,1500000,30000000,70
Pminus1=N/A,1,2,16653431,-1,1500000,30000000,70
Pminus1=N/A,1,2,16656043,-1,1500000,30000000,70
Pminus1=N/A,1,2,16657183,-1,1500000,30000000,70
Pminus1=N/A,1,2,16663831,-1,1500000,30000000,70
Pminus1=N/A,1,2,16665769,-1,1500000,30000000,70
Pminus1=N/A,1,2,16672657,-1,1500000,30000000,70
Pminus1=N/A,1,2,16674509,-1,1500000,30000000,70
Pminus1=N/A,1,2,16676263,-1,1500000,30000000,70
Pminus1=N/A,1,2,16677211,-1,1500000,30000000,70
Pminus1=N/A,1,2,16689061,-1,1500000,30000000,70
Pminus1=N/A,1,2,16694963,-1,1500000,30000000,70
Pminus1=N/A,1,2,16696733,-1,1500000,30000000,70
Pminus1=N/A,1,2,16697683,-1,1500000,30000000,70[/CODE]

pinhodecarlos 2020-11-18 07:50

Queued them and thank you.

pinhodecarlos 2020-11-20 18:50

I have a problem. I had to reboot my machine and then lost the majority of the outstanding 30M P-1 WUs, plus the ones from the above list. The client decided to communicate with the server and messed things up by downloading other new work. I've re-queued the 16M range.

Ensigm 2020-11-20 18:59

[QUOTE=pinhodecarlos;563855]The client decided to communicate with the server and messed things up by downloading other new work[/QUOTE]
From my experience, adding [C]NoMoreWork=1[/C] in [I]prime.txt[/I] might prevent this from happening again.

petrw1 2020-11-20 18:59

[QUOTE=pinhodecarlos;563855]I have a problem. I had to reboot my machine and then lost the majority of the outstanding 30M P-1 WUs, plus the ones from the above list. The client decided to communicate with the server and messed things up by downloading other new work. I've re-queued the 16M range.[/QUOTE]

OK....do you want more 30M or do you prefer these 16M?

pinhodecarlos 2020-11-20 19:13

[QUOTE=petrw1;563857]OK....do you want more 30M or do you prefer these 16M?[/QUOTE]

Will stay with the 16M, thank you both.

petrw1 2020-11-21 04:26

36.2M taken for P1
 
Thanks

petrw1 2020-11-23 07:08

Looks like all the TF help is getting ahead of P1.
 
I guess it's a balancing act....How much TF power we have vs. how much P1 (ECM) power.

If I knew the actual ratio, then suggesting where each would best help would be possible. However, I can only guess; and it changes week by week.

Recently, while the P-1 help is certainly increasing (YAY), there is still much more TF capacity being applied here... but the question on my mind is where it is best applied. (Of course, at best I can offer an opinion... no more, no less.)

The following quote bubble is my deep thinking; feel free to skip it if you so desire.
[QUOTE]In a nutshell, I estimate how many GhzDays per expected factor P-1 costs, taking into account how much P-1 has already been done... AND... then I do the same for TF, knowing that GPUs are up to 100 times faster at TF than CPUs and that each extra bit of TF takes twice as long as the one before.

In 4x.xM and 5x.xM I'm seeing about 150 GDs per factor for P-1 where B1=B2, and about twice that where B1/B2 are still relatively low. Based on the typical P-1 that has been done, I expect 15-25 factors at the 150-GDs-per-factor rate and another 5-25 factors at 300-500 GDs per factor.

As for TF, due to the aggressive P-1 I'm seeing about a 1% success rate. For example, in the entire 25.0M range, TF70-71 gives 20 factors for 20,000 GhzDays: 1,000 GDs per factor. But by TF74-75 the next 20 factors will take 320,000 GDs: 16,000 per factor.

Then, as the exponents get lower the cost per factor for P-1 will decrease and for TF will increase.

So depending how many more factors are required to get under 2,000 I can determine how much more TF or P-1 to suggest and in what order. Often I'll do 1 or 2 bits of TF; then easier P-1; another bit of TF; harder P-1; more TF or some variation.
[/QUOTE]

There are NOT a lot of ranges right now where I consider all the necessary extra P-1 complete; though there are a few more that could be close enough as long as we have excess TF capacity.

I am P-1'ing in the 4x.xM ranges aggressively:
44.6 and 46.3 are done P-1; 49.4 will be done in a couple weeks.
I am TF'ing 44.6; 46.3 is available for TF.
The remaining 8 have P-1 scheduled though it takes me about 3 weeks per range.
That said, these remaining 8 ranges will all need at least TF75 so it could be done before or after P-1 as long as there is no toe-stepping.
40.1, 41.7, 43.3, 43.4, 43.0, 42.6, 49.6, 48.4

Others are TF'ing and P-1'ing in the 2x.xM and 3x.xM ranges.
There are about a half dozen ranges in 3x.xM that have completed P-1 where B1=B2 and could be tackled by TF, but a few still need enough factors that I think more P-1 is warranted first.

There are another half dozen or so in 3x.xM where B1=B2 P-1 will be enough to release them to TF.

As for 2x.xM I think the first priority is TF to at least 72 bits where necessary; then aggressive P-1.

That's enough for now ... probably way too much! :smile:

If necessary I can provide more specifics for the aforementioned ranges.

Thanks all and enjoy the hunt.

axn 2020-11-24 06:27

3.6M done
 
After about 6 months of deep P-1 and one month of TF, the 3.6M range is done. There are still about two weeks of P-1 left to bring the entire range to a logical closure, so maybe another 10 factors will be found.

Why 3.6? 3-4M was the first 1M range with > 20,000 unfactored, and out of that 3.6M was the 0.1M range with the largest number of unfactored exponents. So I picked it as a challenge.

What next? I will do some more work around the 3-4M range. There is already some activity in that area, so I will try not to step on toes.

petrw1 2020-11-24 18:16

[QUOTE=axn;564182]After about 6 months of deep P-1 and one month of TF, the 3.6M range is done.

Why 3.6? 3-4M was the first 1M range with > 20,000 unfactored, and out of that 3.6M was the 0.1M range with the largest number of unfactored exponents. So I picked it as a challenge.[/QUOTE]

Yay!!!

pinhodecarlos 2020-11-30 17:40

1 Attachment(s)
Having lots of fun running P-1 with other than the recommended optimized B1/B2 bounds; even so, I am getting a satisfying rate of factors for the range below 7.8M. Work was taken from mersenne.ca at the "poorly P-1 blablabla" link.

petrw1 2020-11-30 18:15

[QUOTE=pinhodecarlos;564846]Having lots of fun running P-1 with other than the recommended optimized B1/B2 bounds; even so, I am getting a satisfying rate of factors for the range below 7.8M. Work was taken from mersenne.ca at the "poorly P-1 blablabla" link.[/QUOTE]

Cool... it all helps

pinhodecarlos 2020-11-30 18:23

[QUOTE=petrw1;564850]Cool... it all helps[/QUOTE]

Will go back to your 16M once I’m done with these, 10 days to go.

petrw1 2020-12-08 04:18

[QUOTE=pinhodecarlos;564853]Will go back to your 16M once I’m done with these, 10 days to go.[/QUOTE]

No longer required... someone else cleared the 16.6M range.

But when you are ready for other small ranges let me know.

Thanks

petrw1 2020-12-16 05:07

Few more TF ranges up for grabs.....
 
Available now (P-1 done)
29.1
35.5
49.4

A little more P-1 is in progress
30.1
36.2

A little more P-1 would be beneficial but not essential
36.5
38.2

firejuggler 2020-12-16 06:48

I was thinking about doing a P-1 run with B1=500e3, B2=10e6 on the 27-27.1 range... starting at about the new year. Is the range available?

petrw1 2020-12-16 15:49

[QUOTE=firejuggler;566352]I was thinking about doing a P-1 run with B1=500e3, B2=10e6 on the 27-27.1 range... starting at about the new year. Is the range available?[/QUOTE]

It is free and I absolutely appreciate any help. But if I may offer some experience:

In my humble opinion, if you mean 27.0, that is a tough one. I'm not trying to talk you out of it; I just want to be sure you know what you are getting into.
If you mean 27.1 that is much closer/easier.
Based on a lot of math I use to determine the best way to clear these ranges, I think your bounds could be higher, though.

HERE IS THE MATH I USE:
The 27.0 range needs 176 more factors. TF alone would take too much work.
Each bit of TF clears a little over 1% (of 2175 ... 25 would be generous); so we expect something like:
TF70-71: 2175-25=2150 (8.85 GhzDays per assignment)
TF71-72: 2150-25=2125 (17.7)
TF72-73: 2125-25=2100 (35.5)
TF73-74: 2100-25=2075 (71.0)
TF74-75: 2075-25=2050 (142.0)
I'm not sure we want to go this high so TF alone is not the answer.
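Here is that projection as a quick sketch, for anyone who wants to rerun it with different assumptions (the ~25 factors per bit is the generous yield above, and the per-assignment cost simply doubles each bit, which is why it rounds slightly differently from the table):

[CODE]# Sketch of the TF-only projection for 27.0M: per-assignment cost
# doubles each bit level; yield is taken as ~25 factors per bit.
remaining, cost = 2175, 8.85   # unfactored candidates, GhzDays at TF70-71
for bits in range(70, 75):
    print(f"TF{bits}-{bits + 1}: {remaining} assignments, "
          f"{remaining * cost:,.0f} GhzDays for ~25 factors")
    remaining -= 25
    cost *= 2
[/CODE]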

So we need at least 50 factors from P-1; either before TF or after.

There are 860 that have had a low B1=B2, around 420e3.
For these 860 your proposed bounds have an expected success rate of close to 3% or about 25 factors according to these:
[url]https://www.mersenne.ca/prob.php?exponent=27050039&factorbits=70&b1=420000&b2=420000[/url]
[url]https://www.mersenne.ca/prob.php?exponent=27050039&factorbits=70&b1=500000&b2=10000000[/url].

The remaining 1300 or so have current higher B1/B2 which would diminish your success rate if you continue to P-1 these as well.
Something like a 2% success rate here gives another 26 factors....we're getting closer. But I'm not sure we want to P-1 all 2175 exponents either.

On the other hand if you used bounds of 1e6,20e6 you'll get another 1% (granted for double the work; 2.7 vs 1.35 GhzDays each).
[url]https://www.mersenne.ca/prob.php?exponent=27050039&factorbits=70&b1=1000000&b2=20000000[/url]
So out of the 860 you would get about 34 factors.
And if your goal is 50 P-1 factors you need to P-1 a lot less of the remaining 1300 exponents.

On the third hand (hahaha), sometimes I will use the lower B1/B2 for the exponents that have a current B1=B2 and more aggressive B1/B2 for the others.
I have a spreadsheet that helps me choose.

CONFUSED.....or maybe you are way better at stats than me and know all this and more.
Anyway thanks a lot and anything you choose is greatly appreciated.

petrw1 2020-12-16 23:47

I've got 29.1
 
[QUOTE=petrw1;566350]Available now (P-1 done)
29.1 .... I'm working on this one
35.5
49.4

A little more P-1 is in progress
30.1
36.2

A little more P-1 would be beneficial but not essential
36.5
38.2[/QUOTE]

I'll share. Use GPU72 to avoid toe-stepping OR start at the top (i.e. 29100000).

Prime95 2020-12-17 19:45

I'm looking for volunteers to test a pre-release of version 30.4. 30.4 is a nearly complete rewrite of the ECM and P-1 code.

Useful testing would include:
1) rerun successful P-1 and ECM attempts to make sure the new code does not miss any factors.
2) make sure save and restore work
3) find bugs reading old save files
4) testing various high and low memory configurations
5) report back on whether (how much) the new code is faster

and finally...

6) run the improved code to see if it is finding about the expected number of new factors.

Any interest? Please specify Windows or Linux.

New features include:
- better use of available memory for ECM and P-1 stage 2
- small speed improvements in ECM stage 1
- selection of optimal B2 in both ECM and P-1
- deprecate Brent-Suyama (more efficient to put that effort into a larger B1)

kruoli 2020-12-17 21:16

Wonderful! I'm looking forward to this. I could try Windows and Linux both, 64 bits. Thank you! :smile:

chalsall 2020-12-17 21:22

[QUOTE=kruoli;566476]Wonderful! I'm looking forward to this. I could try Windows and Linux both, 64 bits. Thank you! :smile:[/QUOTE]

I've currently got a couple of machines doing DC-P1 in 104M.

Would love to give it a whirl if you don't think there's (much) risk of missed factors. (If a bug is found they can simply be re-run.)

kruoli 2020-12-17 21:33

I'd be willing to implement a script for re-finding known factors of Mersenne numbers, at least those that were found by P-1 or ECM.

@James et al., could you compile a list of those factors? Not all of them, but some?

Edit 1: The script should create worktodo.txt entries and then I'll be running them.

@George, do we have to test different machines, or is it sufficient to test on a single machine, assuming the big-number arithmetic itself is rock solid?

Edit 2: I may have forgotten; how do I specify sigma when running ECM with Prime95 or mprime?

masser 2020-12-17 22:49

[QUOTE=Prime95;566466]
Any interest? Please specify Windows or Linux.
[/QUOTE]

I can test a linux version. Might not get to it until the weekend, though.

nordi 2020-12-17 22:54

[QUOTE=Prime95;566466]I'm looking for volunteers to test a pre-release of version 30.4. 30.4 is a nearly complete rewrite of the ECM and P-1 code.[/QUOTE]I'm interested in trying it out on Linux. Roughly when will the testing begin, and when do you need the results?

petrw1 2020-12-17 22:55

[QUOTE=Prime95;566466]I'm looking for volunteers to test a pre-release of version 30.4. 30.4 is a nearly complete rewrite of the ECM and P-1 code.
[/QUOTE]

Is this for PFACTOR only or also for PMINUS1?

Prime95 2020-12-17 23:18

[QUOTE=petrw1;566503]Is this for PFACTOR only or also for PMINUS1?[/QUOTE]

Pfactor and Pminus1 are the same (in both the existing and 30.4 versions). The only difference is that Pfactor computes the P-1 bounds based on the expected number of LL tests saved, while Pminus1 uses user-specified bounds.

30.4 will prefer that you use this format for your P-1 work:
Pminus1=1,2,n,-1,B1,B2_which_will_be_ignored,TF_sieve_depth
This is different from what is displayed on James' "poorly factored" page.
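For example (illustrative, adapting the first 16M entry posted earlier in the thread):

[CODE]Pminus1=1,2,16686743,-1,1500000,30000000,70[/CODE]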

[QUOTE=kruoli;566484]@George, do we have to test different machines or is it sufficient to test on a single machine, when assuming the big number arithmetic itself is rock solid?[/QUOTE]

The FFT code changed a little, but I'm more worried about the ECM and P-1 C code.
The change to FFT code is an assembly implementation of (a+b)*c and (a-b)*c using less memory bandwidth. I only implemented this for AVX, FMA3, and AVX-512 -- for older machines that operation is emulated.

[QUOTE=nordi;566502]I'm interested in trying it out on Linux. Roughly when will the testing begin, and when do you need the results?[/QUOTE]

Let me make a few more tweaks and tests, then I'll build a Windows and Linux executable.

Uncwilly 2020-12-17 23:24

I am willing to run it on a couple of Win64 machines.

LaurV 2020-12-18 04:36

[QUOTE=Prime95;566466]I'm looking for volunteers to test a pre-release of version 30.4. [/QUOTE]
Put me on the list and PM me the link if Win7/i7-6950X is what you're looking for as an OS/CPU.
I'm on holiday next week, and not going anywhere (moving the rubbish around the house from here to there and back, etc.).

ET_ 2020-12-18 12:59

[QUOTE=Prime95;566466]I'm looking for volunteers to test a pre-release of version 30.4. 30.4 is a nearly complete rewrite of the ECM and P-1 code.


Any interest? Please specify Windows or Linux.
[/QUOTE]


If you still need Linux testers, I'm here to help.

Luigi :et_:

firejuggler 2020-12-18 13:58

Might I suggest 32148013 as one of the P-1 tests?
B1=451519 / B2=2846113
This ought to find a composite 172-bit factor (a product of 2 primes).

lycorn 2020-12-18 15:37

@George: count me in. Win 10 64-bit on a Kaby Lake 16 GB machine, and Linux on several Colab instances.

Prime95 2020-12-18 22:48

Warnings:
1) Resuming stage 2 P-1 or ECM from a v30.3 save file will not work. Stage 2 will start again from scratch.
2) I've not tested resuming from a P-1 v30.3 save file. It is supposed to work, but may not due to substantial changes in save files. Report a bug if you encounter an error.

I recommend installing v30.4 in a new directory and copying over your prime.txt, local.txt, worktodo.txt and save files. Thanks for your help in locating bugs or suggestions for more improvements.

Win64 version:

[url]https://www.dropbox.com/s/ilw7k3x3omja13s/p95v304b2.win64.zip?dl=0[/url]

Linux64 soon.

P.S. Calculating optimal B2 for P-1 and ECM is based upon estimates of the work involved in B1 and B2. If there are bugs in the cost estimating code, wrong optimal B2 will be calculated. Thus, if you find a "head scratcher" such as "B2 is faster with 6GB of memory than with 8GB" please report your observations.

Prime95 2020-12-18 23:45

Linux 64:

[url]https://www.dropbox.com/s/9ksfnkwtx5r4fwx/p95v304b2.linux64.tar.gz?dl=0[/url]

masser 2020-12-19 02:03

If a factor is not found, does the new code report the requested B2 or the computed B2? In my initial results, [STRIKE]it appears to report the requested B2, which seems inaccurate.[/STRIKE]

These are the lines I tried:
[QUOTE]Pminus1=1,2,21150827,-1,6133,596857,66
Pminus1=1,2,31919773,-1,1901,84737,66[/QUOTE]


NEVERMIND: I'm wrong - misread the outputs. Code reported the B2 it used, which was not adequate in this case... I'll try to see how much memory to allocate to get the desired factor.

Prime95 2020-12-19 02:34

A present for your project -- I was comparing P-1 speeds for 30.3 vs. 30.4 and found this:

processing: P-1 factor 74415394148849438918449121 for M16963013 (B1=500,000, B2=20,500,000)

petrw1 2020-12-19 04:28

[QUOTE=Prime95;566632]A present for your project[/QUOTE]

:bow:

And....inquiring minds want to know...how do they compare please?

Prime95 2020-12-19 06:19

[QUOTE=petrw1;566638]:bow:

And....inquiring minds want to know...how do they compare please?[/QUOTE]

Was not a good test as stage 2 was limited to 300MB memory. Stage 1 was same speed.
30.3 did 17.3% of stage 2 to B2=10M in 470 seconds.
30.4 did 43.4% of stage 2 to B2=20.5M in 1940 seconds.

My extrapolation is 30.3 would have taken 2480 seconds to do what 30.4 did.

axn 2020-12-19 06:52

@George: Can you share the B2 selection logic (source code)? I want to understand/predict how memory allocation will affect this. Does the logic take into account MaxHighMemWorkers / worker-specific memory settings?

petrw1 2020-12-19 17:35

Are the save files compatible? I'm currently at 29.8

Prime95 2020-12-19 20:06

[QUOTE=axn;566642]@George: Can you share the B2 selection logic (source code)? I want to understand/predict how memory allocation will affect this. Does the logic take into account MaxHighMemWorkers / worker-specific memory settings?[/QUOTE]

B2 is selected when stage 2 starts. It uses the available memory according to Day/Night MaxHighMemWorkers/etc settings at the time stage 2 begins.

The source code will be made available, but emulating the B2 selection logic will not be easy.

[QUOTE=petrw1;566688]Are the save files compatible? I'm currently at 29.8[/QUOTE]

See post #242

kruoli 2020-12-19 20:13

[QUOTE=axn;566642]@George: [...] Does the logic take into account MaxHighMemWorkers / worker-specific memory settings?[/QUOTE]

That would be especially interesting if, e.g., an extra 100 MB caused the algorithm to select a higher B2 with a relatively small increase in ETA.

An ECM run got: M630901, B1 = 250k, B2 = 155 * B1 @ 6 GB available. Stage 1 took around 0.5 h, stage 2 took 0.25 h.

Edit: @George, thanks for the rough clarifications.

masser 2020-12-20 00:28

Could someone teach me how to find the sigma that was used to find an ECM factor from the mersenne.org website?

I'm trying to test that the new ECM in version 30.4 recovers known factors without running hundreds of curves.

axn 2020-12-20 02:40

Potential P-1 (display) bug:

Stage 2 % resets to zero once a batch of relative primes is processed.

At least, that's what I think is happening -- haven't completed a full P-1 run yet.

axn 2020-12-20 13:17

First impressions. All testing done using Linux version. ECM tests were done in Colab VMs running both FMA3 & AVX-512. P-1 was done with my Ryzen 5 3600. More details on these tests can be provided upon request.

ECM
Good
1. Finds new factors (haven't checked for existing factors)
2. Stage 1 & Stage 2 are faster (despite stage 2 being slightly bigger)
3. Uses more memory.


P-1
Good
1. Finds existing factors
2. Stage 2 is faster (about 50% faster)

Bad
1. B2 chosen is much too big (Pminus1). Needs a way to control B2 selection / honor specified B2
2. Potential % display bug.
3. Saw some weird output (sort of like an infinite loop) when multiple workers entered Stage 2 together
4. Potential issue with MaxHighMemWorkers setting - more than the specified number of workers proceeded to Stage 2 (CPU: Ryzen 5 3600, 6 workers)

Prime95 2020-12-20 19:37

[QUOTE=axn;566762]
1. B2 chosen is much too big (Pminus1). Needs a way to control B2 selection / honor specified B2
2. Potential % display bug.
3. Saw some weird output (sort of like an infinite loop) when multiple workers entered Stage 2 together
4. Potential issue with MaxHighMemWorkers setting - more than the specified number of workers proceeded to Stage 2 (CPU: Ryzen 5 3600, 6 workers)[/QUOTE]

1. Why do you say B2 is too big? The algorithm tries to find the B2 such that the last increment of CPU time yields the same increase in the chance of finding a factor whether it is spent raising B1 or raising B2. I'm not saying you're not right; there could be a code bug or an inaccurate estimation of P-1 B1 or B2 costs.
The new B2 selection can be circumvented by leaving off the sieve depth in worktodo.txt or by Pminus1BestB2=0 in prime.txt

2. Can you send a screenshot or a cut/paste of the screen output?

3. Same request as 2.

4. I'll try to reproduce. I did not test that.

lycorn 2020-12-20 23:39

[QUOTE=masser;566727]Could someone teach me how to find the sigma that was used to find an ECM factor from the mersenne.org website?

I'm trying to test that the new ECM in version 30.4 recovers known factors without running hundreds of curves.[/QUOTE]

Have a look at the results.txt file. The sigma used is displayed along with the relevant parameters of the successful run. The following is a copy/paste from my results file upon finding a factor:

(...)

[Sat Dec 19 18:01:38 2020]
UID: lycorn/supernova, M544837 completed 427 ECM curves, B1=250000, B2=25000000, Wg4: 0FA45DC6, AID: 091211D7E0C858E32CCEB53CD41ACA9E
[Sat Dec 19 19:23:21 2020]
ECM found a factor in curve #63, stage #2
[B]Sigma=7091917254266823[/B], B1=250000, B2=25000000.
UID: lycorn/supernova, M544721 has a factor: 479701949456122248252251360609 (ECM curve 63, B1=250000, B2=25000000), AID: 56052C6C12AC548F71FBD1F0B34779CF

(...)

Happy5214 2020-12-21 01:18

[QUOTE=masser;566727]Could someone teach me how to find the sigma that was used to find an ECM factor [B]from the mersenne.org website[/B]?[/QUOTE]

[QUOTE=lycorn;566817]Have a look at the results.txt file. The sigma used is displayed along with the relevant parameters of the successful run. The following is a copy/paste from my results file upon finding a factor:
[/QUOTE]

That's not what was asked. Most ECM results on mersenne.org don't display the sigma, though I'm not sure if the server has a record of it. A few ECM results (like Ryan Propper's) list the sigma after the B2. For example, on [M]1399[/M], you'll see the line "Factor: 9729831901051958663829453004687723271026191923786080297556081 / (ECM curve 1, B1=850000000, B2=15892628251516, [I]Sigma=16318523421442679557[/I])".

masser 2020-12-21 01:47

[QUOTE=Happy5214;566822]Most ECM results on mersenne.org don't display the sigma, though I'm not sure if the server has a record of it.[/QUOTE]

Thanks for clarifying my question: does anyone know if the server keeps a record of the sigma value?

axn 2020-12-21 04:34

[QUOTE=Prime95;566792]1. Why do you say B2 is too big? The algorithm tries to find the B2 such that the last increment of CPU time yields the same increase in the chance of finding a factor whether it is spent raising B1 or raising B2. I'm not saying you're not right; there could be a code bug or an inaccurate estimation of P-1 B1 or B2 costs.[/quote]
Too long from my perspective. I'm running P-1 with (30m, 600m) which completes in about 12 hours (7+5) for a probability of 11.1%. With the new code, it is running (30m,4080m) in about 28 hours (7+21) for a probability of 14.5%. If it would just run the given B2, it would complete it in about 10 hours (7+3) which would be a much better deal for me.
However, I think I understand what you're saying. If I'm to spend 10 hours on a P-1, my best chance would not be a (30m, 600m) (7+3) split, but rather smaller B1, bigger B2. So, I guess the problem was that I didn't understand how the logic was selecting B2. My guess is, something like (10m, whatever program selects) might be a better use of my compute time. Anyway, something to play with.
I would still want to reduce the B2, because of a different problem -- I want to limit the number of stage 2 runs going in parallel, because too many concurrent stage 2 runs can cause up to a 15% slowdown. However, this is probably too much for the s/w to take into account.
Incidentally, it appears that the formula for probability has been tweaked. It is reporting p(30m, 4080m, 69 bits) as 14.5% whereas mersenne.ca is giving it as 14.0%.

[QUOTE=Prime95;566792]The new B2 selection can be circumvented by leaving off the sieve depth in worktodo.txt or by Pminus1BestB2=0 in prime.txt[/quote]
Thanks. Leaving off the tf depth is working fine! That helps.
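Concretely, an entry like the following (illustrative, using one of the exponents above) has no trailing sieve-depth field, so the stated B2 is honored:

[CODE]Pminus1=1,2,3699277,-1,30000000,600000000[/CODE]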

[QUOTE=Prime95;566792]2. Can you send a screenshot or a cut/paste of the screen output?[/quote]
Sorry, I don't have it. But it's easy enough to describe. With the (30m, 4080m) run, I have allocated enough memory per worker for about 6300 temps. That means about 5.3 passes are needed. EDIT: I might have misunderstood the relation between the temps and passes. Anyway, based on the number of stage 2 primes it reported, I estimated about 5-and-a-bit passes.
In the first pass, the stage 2 % goes from 0 to about 19%. However, when the second pass starts, instead of the % going to 20% and on, it restarts at 0% and keeps going till another 19%, and again restarts on next pass and so on. Looks like the % variable is getting reset for every pass. ECM is fine, btw.

[QUOTE=Prime95;566792]3. Same request as 2.[/quote]
Don't have this one either. But it was sort of an infinite loop of "x memory being used" shown by the final worker entering stage 2. It happened when I didn't have per-worker memory configured, and all the workers entered stage 2 in quick succession. The first one took everything, then when the second one came, there was some adjustment, and then the third came in and more adjustment, and so on until, bam, all hell broke loose. Infinite loop, and it wouldn't even respond to ^C; I had to kill -9 it.
Anyway, just thought I'd let you know. But I have no interest in trying to reproduce this one.

[QUOTE=Prime95;566792]4. I'll try to reproduce. I did not test that.[/QUOTE]
Sure, thanks. Could it be a Ryzen thing? Reason I'm asking is, when I gave that setting as 2, it enforced it as 4 (i.e. allowed 4/6 stage 2, but stopped two of them), so wondering if it enforced two per L3 block or some such weirdness.
Anyway, for now, I am looking at enforcing memory at per-worker level. However if this could be sorted out, I could be allocating more per stage 2 safely.


One other (potential) [B]bug[/B]. On AVX-512, doing ECM on Wagstaff, it didn't display the [C]x bits-per-word below FFT limit (more than 0.5 allows extra optimizations)[/C] text. Only this particular combination. I have done both Mersenne & Wagstaff ECM on both FMA3 & AVX-512, and all the other combinations show this.

Prime95 2020-12-21 19:56

[QUOTE=axn;566826]Too long from my perspective. I'm running P-1 with (30m, 600m) which completes in about 12 hours (7+5) for a probability of 11.1%. With the new code, it is running (30m,4080m) in about 28 hours (7+21) for a probability of 14.5%.
However, I think I understand what you're saying. If I'm to spend 10 hours on a P-1, my best chance would not be a (30m, 600m) (7+3) split, but rather smaller B1, bigger B2.[/QUOTE]

Can you tell me what the exponent, TF level, and memory settings were? On the samples I've run I'm getting a B2 of 40x to 60x B1 (8GB memory). I'm curious if the B2=136*B1 is an indication of a bug.

Yes, you understand correctly. The program is saying (if the code is bug-free) that for a 10 hour run you'd be better off with a smaller B1 and larger B2.

I've got a fix in place for the stage 2 % complete. Thanks.

firejuggler 2020-12-21 21:32

I may have misunderstood something.
Back in the day, for P-1, a reasonable multiplier between B1 and B2 was 30.
More recently, with the advent of reliable PRP, that mult was lowered to 20.
And now that 'we' drop the BS extension, we go back to a mult of 60, with a slightly lowered B1?

masser 2020-12-21 22:02

[QUOTE=firejuggler;566903]
And now that 'we' drop the BS extension, we go back to a mult of 60, with a slightly lowered B1?[/QUOTE]

I don't believe that is the case here. Now, we drop the BS extension and use an optimal multiplier, with optimal determined on a case-by-case basis given the available system memory and perhaps other hardware considerations.

Prime95 2020-12-21 22:22

[QUOTE=firejuggler;566903]I may have misunderstood something.
Back in the day, for P-1, a reasonable multiplier between B1 and B2 was 30.
More recently, with the advent of reliable PRP, that mult was lowered to 20.
And now that 'we' drop the BS extension, we go back to a mult of 60, with a slightly lowered B1?[/QUOTE]

Version 30.4 is more efficient at stage 2 than earlier versions. Thus, it makes sense that the optimal B2/B1 ratio will go up.

I've not looked at wavefront PRP/LL tests, where far fewer temporaries can be allocated (because the numbers are so much larger). Stage 2 efficiency will not increase nearly as much there, thus the best B2/B1 ratio will not go up as much.

Also, the 20 and 30 ratios you quote were guidelines. This is the first time we've gone to the effort of accurately predicting the best B2 value.

axn 2020-12-22 02:49

[QUOTE=Prime95;566892]Can you tell me what the exponent, TF level, and memory settings were? On the samples I've run I'm getting a B2 of 40x to 60x B1 (8GB memory). I'm curious if the B2=136*B1 is an indication of a bug.[/quote]
These are the ones I tested:
M3699277
M3801709
M3802999
M3804763
M3804937
M3805183

TF depth was 69 bits. Memory allocated was 9.5 GB per worker. The prime-pairing % ranged about 90-93% between different blocks.

IF, for some reason, you want to test one of these exponents, let me know; I have save files from near the end of stage 1.

Incidentally, in the first run, I had allocated 24GB RAM, but without the per-worker restriction. The first worker entering stage 2 took all 24 GB and selected about a 200x multiplier!!

[QUOTE=Prime95;566892]I've got a fix in place for the stage 2 % complete. Thanks.[/QUOTE]
Thanks. Looking forward to the next build. If there are no glaring issues, I will start using that for my "production" P-1. I am already using the current one for ECM.

Prime95 2020-12-22 04:03

[QUOTE=Prime95;566892]I'm curious if the B2=136*B1 is an indication of a bug.[/QUOTE]

I stepped through the code with your exponent, B1, memory and it is operating correctly. My 40x to 60x values were using a "puny" B1 of 500000. Apparently as B1 gets larger, it makes sense for the B2/B1 ratio to go up as well.

Looking at mersenne.ca [url]https://www.mersenne.ca/prob.php?exponent=3699277&factorbits=69&prob=12.75[/url] you might try a B1 in the 15M to 20M area with B2 auto-computed for your 10 hour runs and see which gives a higher chance of finding a factor.

LaurV 2020-12-22 05:09

[QUOTE=axn;566826]I'm running P-1...[/QUOTE]
:goodposting: Very good post, kinda my issues too, but you said it better than I could say it!

If we are to trust RDS' papers, which he always pushes in front as much as he can :razz: (albeit they refer mostly to ECM), the best choice (i.e. most efficient, wall-clock time per probability of finding a factor) is when the program spends about the same amount of time in stage 1 as it does in stage 2, regardless of how fast one stage is compared with the other. If stage 2 becomes more efficient in the newer version, then it seems common sense that B2 will grow relative to B1. But 150 times seems a bit too much...

Just saying...

nordi 2020-12-22 11:34

I got an out of memory error during my test run. Amongst many other things, the Linux kernel said[INDENT] Killed process 22440 ([B]mprime[/B]) total-vm:112100576kB, anon-[B]rss:109294088kB[/B], file-rss:0kB, shmem-rss:4kB
[/INDENT]The memory limit in local.txt is set to "100000" which should be 100,000,000,000 bytes or 104,857,600,000 bytes if MiB are used instead of MB. But apparently, mprime was using 111,917,146,112 bytes, which is either 11.9 or 7 GB more than it should.
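Sanity-checking those numbers (the kernel reports RSS in KiB):

[CODE]# Quick check of the arithmetic above; kernel RSS figures are in KiB.
rss_bytes = 109294088 * 1024          # 111,917,146,112 bytes actually used
limit_mb  = 100000 * 10**6            # 100,000,000,000 if "Memory" is MB
limit_mib = 100000 * 2**20            # 104,857,600,000 if "Memory" is MiB
print((rss_bytes - limit_mb) / 1e9)   # ~11.9 GB over the limit
print((rss_bytes - limit_mib) / 1e9)  # ~7.1 GB over the limit
[/CODE]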

That was probably triggered by stage 2 of ECM needing more RAM than before. I was running 32 threads of ECM of all sizes, from M1277 to M9,100,919. Also, I had some other programs still running, so mprime really had to observe the memory limit. So not necessarily a new issue, just one that surfaces now.

nordi 2020-12-22 12:26

[QUOTE=nordi;566962]Killed process 22440 ([B]mprime[/B]) total-vm:112100576kB, anon-[B]rss:109294088kB[/B], file-rss:0kB, shmem-rss:4kB
[/QUOTE]
I freed up some memory and started a second test run, which aborted after a few minutes with the kernel saying

[quote]
[1310480.143387] Killed process 28361 (mprime) total-vm:127209968kB, anon-[B]rss:123919728kB[/B], file-rss:0kB, shmem-rss:0kB
[/quote]I'll set "Memory=50000" instead of 100000 and keep testing.

nordi 2020-12-22 13:19

[QUOTE=nordi;566968]I'll set "Memory=50000" instead of 100000 and keep testing.[/QUOTE]
Even with that setting, mprime consumes up to ~100GB instead of 50GB, i.e. twice as much as it should.

I'm monitoring memory usage with[INDENT]while true; do grep "RssAnon" /proc/$(pidof mprime)/status; sleep 10; done
[/INDENT]and the highest I got so far was "RssAnon: 105544236 kB".


Edit:
And a bit later, mprime segfaulted. Kernel log says
[quote]
[1315248.083098] show_signal_msg: 38 callbacks suppressed
[1315248.083102] mprime[29045]: segfault at 7f7878287f26 ip 00000000004166e5 sp 00007f777eff2d60 error 6 in mprime[400000+2190000]
[/quote]

masser 2020-12-23 00:09

3 Attachment(s)
[QUOTE=Prime95;566466]
1) rerun successful P-1 and ECM attempts to make sure the new code does not miss any factors.
[/QUOTE]

I feel confident that the new code is not missing any factors. I've attached results from two machines; the new code found all of the known factors I sought. For P-1, I first used the minimal B1 and B2, and a lot of factors were missed because B2 was ignored. On the second pass, I used B1=B2/10, which found the remaining factors. On the Haswell i5, I used 7 GB of RAM. On the Skylake i7, I used 3 GB of RAM. For ECM, I had to make several passes over the remaining candidates to find all of the factors. With each pass I increased the available memory, and the code responded with higher B2 values, as expected.

[QUOTE=Prime95;566466]
2) make sure save and restore work
[/QUOTE]

I feel less confident about this point. See the attached file, bad_read_result.txt. I ran a short P-1 attempt on M21150827 and then something went wrong when I later tried a longer P-1 attempt. I checked to see if the error was reproducible; running the two P-1 attempts back-to-back in a clean directory with the new executable worked fine the second time. Maybe it was a fluke on my local system or maybe others will report a similar occurrence.

I will re-run some of the longer attempts above with version 30.3, collect some timing comparisons and report back later.

James Heinrich 2020-12-24 03:34

Running some P-1 in the 17M range. v30.3 was using 7-8GB RAM (out of 40GB available) and 960 relative primes.
v30.4:[quote]Optimal P-1 factoring of M17847311 using up to 40960MB of memory.
Assuming no factors below 2^65 and 4 primality tests saved if a factor is found.
Optimal bounds are B1=404000, B2=27115000
Chance of finding a factor is an estimated 8.4%
...
Starting stage 1 GCD - please be patient.
Stage 1 GCD complete. Time: 4.691 sec.
D: 2310, relative primes: 4918, stage 2 primes: 1655650, pair%=95.05
Using [b]38019MB[/b] of memory.
Stage 2 init complete. 52544 transforms. Time: 57.675 sec.[/quote]Stage 2 memory initialization seems faster than I'm used to from previous versions, which is a very good thing.

Comparative result output for adjacent assignments:[quote]v30.3: M17843899 completed P-1, B1=370000, B2=12533000, E=12
v30.4: M17847311 completed P-1, B1=404000, B2=27115000[/quote]Somewhat higher B1, vastly higher B2, no Brent-Suyama?

axn 2020-12-24 05:09

[QUOTE=James Heinrich;567189]Stage2 memory initialization seems faster than I'm used to for previous versions[/quote]
It creates more temporaries, yet is faster, because it is not doing B-S.

[QUOTE=James Heinrich;567189]Comparative result output for adjacent assignments:[/quote]
How do the runtimes and probabilities compare?

[QUOTE=James Heinrich;567189]Somewhat higher B1, vastly higher B2, no Brent-Suyama?[/QUOTE]

Higher B2 (owing to faster stage 2) & no B-S are the key features.

EDIT:
[quote]Assuming no factors below 2^65 and 4 primality tests saved if a factor is found.[/quote]
This is not good. For some reason it is thinking the exponent has been factored to 2^65 when it has been factored to 2^70. This means the bounds it has calculated won't be optimal.

Prime95 2020-12-24 05:28

[QUOTE=axn;567200]
This is not good. For some reason it is thinking the exponent has been factored to 2^65 when it has been factored to 2^70. This means the bounds it has calculated won't be optimal.[/QUOTE]

The 2^65 value comes from worktodo.txt.

Pfactor= in worktodo.txt will NOT calculate optimal bounds as far as this project is concerned. Pfactor optimizes bounds to maximize the number of LL/PRP tests saved per unit of P-1 time invested. Pminus1= lines in worktodo.txt optimize the B2 bound to maximize the chance of finding a factor per unit of P-1 time invested (the user is responsible for picking the B1 bound). I know -- it is all very confusing.
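To make the distinction concrete, two made-up lines for the same exponent (the B2 field in the Pminus1 line is just a placeholder):

[CODE]Pfactor=N/A,1,2,17847311,-1,70,2
Pminus1=N/A,1,2,17847311,-1,500000,0,70[/CODE]

The first says "TF is done to 2^70 and a factor saves 2 tests -- pick both bounds for me"; the second says "here is my B1, optimize B2 for the best chance of a factor".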

axn 2020-12-24 05:31

[QUOTE=Prime95;567201]The 2^65 value comes from worktodo.txt.[/QUOTE]

Ok, so James gave it wrong values? Regardless, this affects the optimality of bounds (by affecting the probability calculations).

James Heinrich 2020-12-24 05:40

[QUOTE=axn;567202]Ok, so James gave it wrong values? Regardless, this affects the optimality of bounds (by affecting the probability calculations).[/QUOTE]Yes, I gave it "wrong" values on purpose, specifically to affect the bounds. Using [c]Pfactor[/c] worktodo lines, specifying the "PrimeNet-default" TF level and a large number of tests-saved (anywhere from 2-10, depending on exponent size) forces Prime95 to choose extra-large bounds that I deem suitable for redoing P-1 work. This would of course be inappropriate for 100M/wavefront P-1 work, but I think it's entirely appropriate at <20M.
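For example, a hypothetical line that would produce the "2^65 / 4 tests saved" bounds shown in my output above:

[CODE]Pfactor=N/A,1,2,17847311,-1,65,4[/CODE]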

BTW, at George's request, my [url=https://www.mersenne.ca/pm1_worst.php]Worst P-1[/url] page now includes your choice of [c]Pfactor[/c] or [c]Pminus1[/c] worktodo formats.
Note that I make no claim that Prime95 will select the same bounds from both variants, just that either is a reasonably suitable starting point for P-1 redo work.

James Heinrich 2020-12-24 05:56

I happened to have a number of other programs open at one point, and when v30.4 started up stage 2, Windows complained about low memory. This seems to have gotten Prime95 into a "stuck" state, where the worker window says "P-1 stage 2 init" but it just sits there at 100% of a single core indefinitely (I noticed it after it had run for 53 minutes getting nowhere). I force-closed Prime95 (it wouldn't close normally) and restarted it, but the hang is reproducible. I've sent George the save file for debugging.

axn 2020-12-24 06:11

[QUOTE=James Heinrich;567204]Yes, I gave it "wrong" values on purpose[/quote]
You no longer need to do this. Even "Pminus1" will calculate an optimal B2 -- you just need to pick a B1 for it. But giving it the correct TF depth is essential, else the choice will be suboptimal.

Actually, I wanted you to modify the calculator to take the improved stage 2 into account and give the optimal B1/B2 for a given probability / GHz-day target.

[QUOTE=James Heinrich;567205]I happened to have a number of other programs open at one point, and when v30.4 started up stage 2, Windows complained about low memory. This seems to have gotten Prime95 into a "stuck" state, where the worker window says "P-1 stage 2 init" but it just sits there at 100% of a single core indefinitely (I noticed it after it had run for 53 minutes getting nowhere). I force-closed Prime95 (it wouldn't close normally) and restarted it, but the hang is reproducible. I've sent George the save file for debugging.[/QUOTE]
This sounds very similar to the "infinite loop" I encountered. Anyway, I'm currently using hard per-worker limits to avoid this.

Dylan14 2020-12-24 14:51

1 Attachment(s)
There appears to be a crash when running an assignment that has a save file from a previous version. I've attached a picture of the output prior to the crash, which occurs after stage 2 init completes.

Note, this is with the worktodo line

[code]Pfactor=<aid>,1,2,28009823,-1,70,3[/code]Trying to see if it crashes after a fresh start: nope, seems to work fine at this point.

James Heinrich 2020-12-24 14:57

[QUOTE=Dylan14;567222]There appears to be a crash when running an assignment that has a save file from a previous version.[/QUOTE]But not necessarily always -- when I upgraded mid-assignment it also said "cannot continue stage 2", added a bit more B1, and then completed stage 2 without problem.
(It did get stuck later on a different assignment, as described above, so it's not entirely stable.)

James Heinrich 2020-12-25 00:02

Just mentioning I found my first factor with v30.4:
[M]M17840447[/M] has a 75.272-bit (23-digit) factor: [url=https://www.mersenne.ca/M17840447]45606749097226437406729[/url] (P-1,B1=404000,B2=27105000)

Prime95 2020-12-25 02:26

Win64 version 30.4 build 3: [url]https://www.dropbox.com/s/1nbpfh37tzd57gb/p95v304b3.win64.zip?dl=0[/url]

I fixed 5 bugs found by you folks:
1) Bad checksum for submitting ECM results manually.
2) There were cases where prime95 did not reduce stage 2 memory to conform to current memory settings.
3) Possible infinite loop during ECM stage 2 init (and maybe P-1 too).
4) Rare memory corruption re-figuring a stage 2 plan.
5) Percent complete in stage 2 corrected.

I'm not convinced these explain all the undesirable behaviors described by James, nordi, and axn. Give it a try and let me know of any troubles.

I may not be able to make a linux build until after Christmas.

Prime95 2020-12-26 18:49

Linux64 30.4 build 3: [url]https://www.dropbox.com/s/9yadeo8nn9aeajw/p95v304b3.linux64.tar.gz?dl=0[/url]

nordi 2020-12-27 14:24

[QUOTE=Prime95;567284]
2) There were cases where prime95 did not reduce stage 2 memory to conform to current memory settings.
[/QUOTE]
I tried again on Linux and mprime kept running much longer than before, but was still stopped by the kernel's OOM killer. It used 121GB when configured to use just 50.


One thing I noticed is that Stage 2 init frequently needs a long time (~1 minute instead of 5 seconds):
[quote]
[Worker #22 Dec 27 13:49] Stage 2 init complete. 62496 transforms, 1 modular inverses. Time: 59.057 sec.
[Worker #30 Dec 27 13:49] Stage 2 init complete. 62496 transforms, 1 modular inverses. Time: 54.013 sec.
[Worker #28 Dec 27 13:50] Stage 2 init complete. 62496 transforms, 1 modular inverses. Time: 57.211 sec.
[/quote]I also saw this in previous versions, but only during startup. It gives the impression that the threads are competing for, or waiting on, a global lock. Maybe that waiting time confuses the allocation logic?

Prime95 2020-12-27 16:42

[QUOTE=nordi;567446]I tried again on Linux and mprime kept running much longer than before, but was still stopped by the kernel's OOM killer. It used 121GB when configured to use just 50
[/QUOTE]

Can you describe your setup? 32 workers? 50GB memory. Worktodo.txt is? MaxHighMemWorkers?

Can you provide the screen output (say 200 lines of output) at the time the OOM occurred?

I'll try to reproduce on my dinky quad-core.

nordi 2020-12-27 17:48

[QUOTE=Prime95;567454]Can you describe your setup? 32 workers? 50GB memory. Worktodo.txt is? MaxHighMemWorkers?
[/QUOTE]
Yes, 32 workers with "Memory=50000" in local.txt. MaxHighMemWorkers is not set in my config. The machine has 128GB of RAM.
[QUOTE=Prime95;567454]
Can you provide the screen output (say 200 lines of output) at the time the OOM occurred?[/QUOTE]That and the worktodo were sent via PM.
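For the next run I plan to also cap the number of simultaneous high-memory workers -- assuming MaxHighMemWorkers does what its name suggests, something like this in local.txt:

[CODE]Memory=50000
MaxHighMemWorkers=4[/CODE]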

ixfd64 2020-12-27 20:14

I've seen an issue where P-1 continues past 100% in stage 1 until the worker is stopped. However, this has only happened a couple of times. I have no idea if it's related to any of the bugs above.

firejuggler 2020-12-28 17:15

[M]M32159551[/M] has a 77.841-bit (24-digit) factor: [URL="https://www.mersenne.ca/M32159551"]270719245854611997909647[/URL] (P-1,B1=1100000,B2=61600000)
Found it with 30.4 b3, using about 8169 MB of RAM.

Also, maybe move the discussion about 30.4b3 into its own thread?

petrw1 2020-12-29 03:27

Initial observations
 
All of my current work is N/A (no assignment IDs).
As soon as I started Prime95, it fetched 36 ECM assignments (that is my default).
However, there were only 18 new ECM assignments in Worker #1; none anywhere else.
There were 36 on my Assignments page.

It no longer reports the number of relative primes processed in Stage 2; only percent complete.

In my case it chose B2 = 48*B1 = 48,000,000 (I previously had 20*B1).
I have 2 workers with 2 CPUs each.
I have 4000MB of RAM allocated.

With the prior B1/B2 it was taking about 9 hours per assignment.
Based on preliminary results, it appears it will now take:
3 hours for Stage 1
9 hours for Stage 2.

More to follow

Part 2: The 2nd worker finished Stage 1 and of course split the RAM with Worker 1.
However, this assignment was given B2=43*B1.

The exponents are very close: 41,778,xxx and 41,780,xxx.
Both would have had a prior P-1 with B1=B2=685,000 (or very close).

Part 3:
So far it seems that:
If Worker y finishes Stage 1 while Worker x is still on Stage 1, Worker y gets B2=48xB1 for Stage 2.
If Worker x is then ready for Stage 2 while Worker y is still on Stage 2, Worker x gets B2=41x or 43xB1.
Is that because it detects less RAM available (i.e. the workers now have to share the RAM)?

petrw1 2020-12-30 04:31

Format suggestion
 
Any chance you could display this on 2 lines? I believe on most of our windows it will scroll off into the Right Abyss.


[CODE]Dec 29 18:17] With trial factoring done to 2^74, optimal B2 is 41*B1 = 32800000. If no prior P-1, chance of a new factor is 4.57%[/CODE]

Like this instead...

[CODE]Dec 29 18:17] With trial factoring done to 2^74, optimal B2 is 41*B1 = 32800000.
Dec 29 18:17] If no prior P-1, chance of a new factor is 4.57%[/CODE]

Prime95 2020-12-30 06:23

[QUOTE=petrw1;567696]Any chance you could display this on 2 lines;[/QUOTE]

Will do

Prime95 2020-12-30 06:29

[QUOTE=petrw1;567595]As soon as I started Prime95 it fetched 36 ECM assignments (that is my default). However there were only 18 new ECM assignments in Worker #1; none anywhere else.[/quote]

Weird. That code hasn't changed.

[quote]It no longer reports the number of relative primes processed in Stage 2; only percent complete.[/quote]

Relative primes are no longer processed in passes. They are all done at once as prime95 steps from B1 to B2 in steps of size D.
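Roughly -- a sketch of the standard pairing idea, not necessarily the exact internals -- with D = 2310, every prime q in (B1, B2] is expressed relative to the nearest multiple of D:

[CODE]q = k*D +/- r,  where gcd(r, D) = 1
when k*D - r and k*D + r are both prime, the pair shares one multiplication[/CODE]

That is where the pair% figure in the stage 2 output comes from.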

[quote]
If Worker y finishes Stage 1 while Worker x is still on Stage 1, Worker y gets B2=48xB1 for Stage 2.
If Worker x is then ready for Stage 2 while Worker y is still on Stage 2, Worker x gets B2=41x or 43xB1.
Is that because it detects less RAM available (i.e. the workers now have to share the RAM)?[/QUOTE]

Your guess is correct.

