[QUOTE=pinhodecarlos;561656]I would like to run TF on the range 0-20M from the lowest bit to 69-70 bits but I can’t seem to manage to download a sample file to try it. Also I have an account on GPU72 and can’t get them from them. Help please?[/QUOTE]
Here you go. No one has touched the 14.2M range in the past year. It will certainly need TF attention to get below 2000 unfactored. I've attached the first subrange below (about 200 candidates). Note: user lycorn is working his way down from 20M to 10M, factoring up to 70 bits. So, if you want to test your GPU with these candidates, do it now. Don't wait a month or two, for there will be a good chance they've already been completed. It's pretty simple to use the times you record on these assignments to estimate times on other TF assignments. EDIT: Ach, just saw axn's post. Sorry, sir. I'm echoing axn's response. [CODE]Factor=N/A,14200001,69,70 Factor=N/A,14200003,69,70 Factor=N/A,14200009,69,70 Factor=N/A,14200031,69,70 Factor=N/A,14200037,69,70 Factor=N/A,14200097,69,70 Factor=N/A,14200099,69,70 Factor=N/A,14200103,69,70 Factor=N/A,14200283,69,70 Factor=N/A,14200451,69,70 Factor=N/A,14200513,69,70 Factor=N/A,14200541,69,70 Factor=N/A,14200567,69,70 Factor=N/A,14200603,69,70 Factor=N/A,14200789,69,70 Factor=N/A,14200799,69,70 Factor=N/A,14200811,69,70 Factor=N/A,14200817,69,70 Factor=N/A,14200819,69,70 Factor=N/A,14200829,69,70 Factor=N/A,14200951,69,70 Factor=N/A,14200987,69,70 Factor=N/A,14200993,69,70 Factor=N/A,14201027,69,70 Factor=N/A,14201063,69,70 Factor=N/A,14201183,69,70 Factor=N/A,14201221,69,70 Factor=N/A,14201263,69,70 Factor=N/A,14201287,69,70 Factor=N/A,14201351,69,70 Factor=N/A,14201431,69,70 Factor=N/A,14201449,69,70 Factor=N/A,14201573,69,70 Factor=N/A,14201591,69,70 Factor=N/A,14201611,69,70 Factor=N/A,14201657,69,70 Factor=N/A,14201687,69,70 Factor=N/A,14201699,69,70 Factor=N/A,14201773,69,70 Factor=N/A,14201807,69,70 Factor=N/A,14201809,69,70 Factor=N/A,14201839,69,70 Factor=N/A,14201897,69,70 Factor=N/A,14201899,69,70 Factor=N/A,14201989,69,70 Factor=N/A,14202053,69,70 Factor=N/A,14202073,69,70 Factor=N/A,14202143,69,70 Factor=N/A,14202197,69,70 Factor=N/A,14202211,69,70 Factor=N/A,14202271,69,70 Factor=N/A,14202313,69,70 
Factor=N/A,14202323,69,70 Factor=N/A,14202347,69,70 Factor=N/A,14202457,69,70 Factor=N/A,14202467,69,70 Factor=N/A,14202473,69,70 Factor=N/A,14202491,69,70 Factor=N/A,14202509,69,70 Factor=N/A,14202613,69,70 Factor=N/A,14202623,69,70 Factor=N/A,14202641,69,70 Factor=N/A,14202719,69,70 Factor=N/A,14202737,69,70 Factor=N/A,14202739,69,70 Factor=N/A,14202763,69,70 Factor=N/A,14202767,69,70 Factor=N/A,14202863,69,70 Factor=N/A,14202913,69,70 Factor=N/A,14203039,69,70 Factor=N/A,14203181,69,70 Factor=N/A,14203219,69,70 Factor=N/A,14203223,69,70 Factor=N/A,14203289,69,70 Factor=N/A,14203291,69,70 Factor=N/A,14203339,69,70 Factor=N/A,14203349,69,70 Factor=N/A,14203429,69,70 Factor=N/A,14203457,69,70 Factor=N/A,14203477,69,70 Factor=N/A,14203547,69,70 Factor=N/A,14203711,69,70 Factor=N/A,14203727,69,70 Factor=N/A,14203747,69,70 Factor=N/A,14203759,69,70 Factor=N/A,14203769,69,70 Factor=N/A,14203771,69,70 Factor=N/A,14203867,69,70 Factor=N/A,14203877,69,70 Factor=N/A,14203883,69,70 Factor=N/A,14203901,69,70 Factor=N/A,14204059,69,70 Factor=N/A,14204101,69,70 Factor=N/A,14204131,69,70 Factor=N/A,14204149,69,70 Factor=N/A,14204237,69,70 Factor=N/A,14204257,69,70 Factor=N/A,14204353,69,70 Factor=N/A,14204447,69,70 Factor=N/A,14204683,69,70 Factor=N/A,14204711,69,70 Factor=N/A,14204803,69,70 Factor=N/A,14204849,69,70 Factor=N/A,14204881,69,70 Factor=N/A,14204923,69,70 Factor=N/A,14204947,69,70 Factor=N/A,14204963,69,70 Factor=N/A,14205031,69,70 Factor=N/A,14205127,69,70 Factor=N/A,14205137,69,70 Factor=N/A,14205221,69,70 Factor=N/A,14205241,69,70 Factor=N/A,14205479,69,70 Factor=N/A,14205493,69,70 Factor=N/A,14205539,69,70 Factor=N/A,14205593,69,70 Factor=N/A,14205599,69,70 Factor=N/A,14205629,69,70 Factor=N/A,14205671,69,70 Factor=N/A,14205757,69,70 Factor=N/A,14205823,69,70 Factor=N/A,14205869,69,70 Factor=N/A,14205899,69,70 Factor=N/A,14205949,69,70 Factor=N/A,14205979,69,70 Factor=N/A,14205991,69,70 Factor=N/A,14206043,69,70 Factor=N/A,14206051,69,70 
Factor=N/A,14206081,69,70 Factor=N/A,14206109,69,70 Factor=N/A,14206117,69,70 Factor=N/A,14206123,69,70 Factor=N/A,14206183,69,70 Factor=N/A,14206219,69,70 Factor=N/A,14206327,69,70 Factor=N/A,14206369,69,70 Factor=N/A,14206421,69,70 Factor=N/A,14206427,69,70 Factor=N/A,14206561,69,70 Factor=N/A,14206579,69,70 Factor=N/A,14206651,69,70 Factor=N/A,14206657,69,70 Factor=N/A,14206837,69,70 Factor=N/A,14206919,69,70 Factor=N/A,14206957,69,70 Factor=N/A,14206991,69,70 Factor=N/A,14207059,69,70 Factor=N/A,14207101,69,70 Factor=N/A,14207143,69,70 Factor=N/A,14207177,69,70 Factor=N/A,14207227,69,70 Factor=N/A,14207293,69,70 Factor=N/A,14207411,69,70 Factor=N/A,14207423,69,70 Factor=N/A,14207483,69,70 Factor=N/A,14207507,69,70 Factor=N/A,14207509,69,70 Factor=N/A,14207527,69,70 Factor=N/A,14207533,69,70 Factor=N/A,14207539,69,70 Factor=N/A,14207549,69,70 Factor=N/A,14207663,69,70 Factor=N/A,14207747,69,70 Factor=N/A,14207749,69,70 Factor=N/A,14207783,69,70 Factor=N/A,14207797,69,70 Factor=N/A,14207873,69,70 Factor=N/A,14207899,69,70 Factor=N/A,14207911,69,70 Factor=N/A,14207939,69,70 Factor=N/A,14207983,69,70 Factor=N/A,14207993,69,70 Factor=N/A,14208001,69,70 Factor=N/A,14208031,69,70 Factor=N/A,14208083,69,70 Factor=N/A,14208113,69,70 Factor=N/A,14208127,69,70 Factor=N/A,14208149,69,70 Factor=N/A,14208253,69,70 Factor=N/A,14208367,69,70 Factor=N/A,14208539,69,70 Factor=N/A,14208697,69,70 Factor=N/A,14208739,69,70 Factor=N/A,14208743,69,70 Factor=N/A,14208749,69,70 Factor=N/A,14208809,69,70 Factor=N/A,14208829,69,70 Factor=N/A,14208833,69,70 Factor=N/A,14208847,69,70 Factor=N/A,14208863,69,70 Factor=N/A,14208907,69,70 Factor=N/A,14208937,69,70 Factor=N/A,14208947,69,70 Factor=N/A,14209051,69,70 Factor=N/A,14209081,69,70 Factor=N/A,14209123,69,70 Factor=N/A,14209163,69,70 Factor=N/A,14209241,69,70 Factor=N/A,14209291,69,70 Factor=N/A,14209313,69,70 Factor=N/A,14209339,69,70 Factor=N/A,14209513,69,70 Factor=N/A,14209537,69,70 Factor=N/A,14209541,69,70 
Factor=N/A,14209609,69,70 Factor=N/A,14209633,69,70 Factor=N/A,14209691,69,70 Factor=N/A,14209693,69,70 Factor=N/A,14209703,69,70 Factor=N/A,14209709,69,70 Factor=N/A,14209771,69,70 Factor=N/A,14209801,69,70 Factor=N/A,14209859,69,70 Factor=N/A,14209861,69,70 Factor=N/A,14209927,69,70 Factor=N/A,14209999,69,70[/CODE] 
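The timing extrapolation masser mentions can be roughed out with a simple model: each extra bit level roughly doubles the TF work, and for a fixed bit range the work is roughly inversely proportional to the exponent (trial divisors have the form 2kp+1, so a larger p means fewer k values to test). The scaling assumptions here are mine, a back-of-the-envelope sketch rather than anything official:

```python
# Rough TF time extrapolation: each extra bit level ~doubles the work,
# and for a fixed bit range the work is roughly inversely proportional
# to the exponent (trial divisors are of the form 2*k*p + 1).
# These scaling assumptions are approximations, not an official formula.

def estimate_tf_seconds(measured_seconds, measured_exp, measured_bits,
                        target_exp, target_bits):
    """Extrapolate one measured Factor=...,b,b+1 run to another assignment."""
    bit_scale = 2.0 ** (target_bits - measured_bits)  # ~2x per bit level
    exp_scale = measured_exp / target_exp             # fewer k's for larger p
    return measured_seconds * bit_scale * exp_scale

# Example (made-up timing): a 14.2M assignment at 69->70 bits took 170 s;
# the same exponent taken 70->71 should need about twice that.
t = estimate_tf_seconds(170.0, 14200001, 69, 14200001, 70)
print(round(t))  # 340
```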
[QUOTE=pinhodecarlos;561674]Just wanted to trial on my old GPU, thank you guys.[/QUOTE]
Do you use Misfit? If you go to GPU72.COM you can choose work type "Double check tests" and "What Makes Sense". You might have to notify Chris.
[QUOTE=masser;561693]..So, if you want to test your GPU with these candidates, do it now. Don't wait a month or two, for there will be a good chance they've already been completed....[/QUOTE]
Anyone with a good GPU should be able to knock these out in short order. I sampled one on my 2080, < 3 minutes. About Misfit, it left me in a mess a couple of years back. 900+ assignments, if memory serves. It took close to a month to clean those up. 
@pinhodecarlos: As per S485122's post, it's indeed straightforward to get exponents to TF. Just pick your range of choice and TF level, and mersenne.org will format the worktodo file for you.
Regarding masser's warning about me TFing my way down to 10M: that won't be a concern, as I have put that activity on hold for now, and when I resume it I will coordinate with people in this thread to avoid toe-stepping. FYI, it will certainly be several weeks until I resume work in those ranges, most probably not until next year. 
1 Attachment(s)
[QUOTE=VBCurtis;560448]I've started ECM on the suggested range. My first set is 5 curves at B1=250K; I may adjust that later. A curve at this size on the Surface tablet I'm using takes about 25k seconds, so ought to progress at roughly 5 candidates per week. I'll give it a couple weeks to start.[/QUOTE]
Thanks again for the kind offer of help. I've attached a P-1 worktodo.txt for the 14.01M range in case the ECM work becomes too boring. I'll ensure that my devices avoid that range for a few months (at least until mid-January). 
I accept your work!
I may inflate the bounds a little after I time the first test or two. "Optimal" P-1 is out the window when we're going to need so much ECM to hit the factors-found target... 
I went with B1=8M, B2=200M for the first few P-1 jobs.
It's on a really slow laptop (but with 8 GB RAM), though I expect a desktop core to open up later. 
2020-11-09 Update ... GREAT PROGRESS!!!
Since last update (July 24)
17 more ranges cleared: 4.0, 7.9, 8.8, 19.8, 22.5, 27.6, 29.4, 37.0, 37.3, 40.3, 40.5, 40.8, 41.0, 41.1, 45.5, 56.8, 59.4.
And 4 bonus ranges: 68.4, 73.1, 73.5, 75.8.
There is only 1 bonus range to go (58.7)... and it has had good progress; maybe, just maybe. If or when it is cleared, I am confident that every range 50M+ will be taken care of by natural GIMPS processes. That allows this project to focus on the lower ranges only.
TOTALS to date: 195 ranges cleared, or 39.24%. 22 ranges with less than 20 to go. 1,601 more factored (25,598 total)... 46.35% total factored.
Continuing to get lots of great help. THANKS. Thanks again to everyone contributing. 
Once again, where is the most help needed?
Note that while there is lots of room for more TF in the remaining ranges, the lower the exponent the more beneficial P-1 becomes (or ECM for VERY low ones), and the less beneficial TF becomes.
My general process is:
1. TF a couple of extra bits, while they are still relatively fast for GPUs.
2. Aggressively P-1 where the current B1=B2. (I choose new B1/B2 that give about a 2.5-3% improvement over the current P-1. You can use this tool: [url]https://www.mersenne.ca/prob.php[/url])
3a. Aggressively P-1 where the current B1/B2 are relatively low. (Any where I can gain at least 2% with reasonable new B1/B2.)
3b. TF another bit level.
...(3a and 3b are numbered as such because the order I do them in depends on which I estimate will help with the least effort/time. Often I'll do some of both.)
That said, if you have TF-grade GPUs the immediate need is:
- TF the 20-29M ranges to at least 72 bits (or until a range is under 2,000 remaining). ===> If you use GPU72, it is already making these assignments available.
- IF you have a more powerful GPU (or lots of them), consider helping in the more stubborn 30M and 40M ranges where aggressive P-1 is already done. ===> In the 40's: 44.6, 46.3 and 47.8 (they should be on GPU72 shortly). ===> In the 30's: 30.2, 31.6, 32.5, 33.0, 38.0, 38.6, 38.9. ===> I (and others) are finishing P-1 on about 1 or 2 ranges a month to add to the above list.
- If you have CPUs or LL/P-1-grade GPUs, consider doing aggressive P-1 as I defined above (or ECM if your RAM is limited). Pick a range and reserve it HERE and/or here: [url]https://www.mersenneforum.org/showthread.php?t=23152&highlight=redoing[/url]
Thanks in advance. 
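The probability check in step 2 can be scripted as a URL builder for the mersenne.ca tool. The parameter names (exponent, factorbits, b1, b2) are taken from the prob.php links posted elsewhere in this thread; treat this as a convenience sketch rather than a documented API:

```python
from urllib.parse import urlencode

# Build a mersenne.ca P-1 probability URL, for comparing the success
# chance of current bounds against proposed deeper bounds.
# Parameter names copied from links in this thread; not a documented API.
def prob_url(exponent, factored_bits, b1, b2):
    query = urlencode({"exponent": exponent, "factorbits": factored_bits,
                       "b1": b1, "b2": b2})
    return "https://www.mersenne.ca/prob.php?" + query

# Compare existing B1=B2 bounds against a proposed deeper run, then
# eyeball whether the gain is in the ~2.5-3% target zone.
print(prob_url(27050039, 70, 420000, 420000))
print(prob_url(27050039, 70, 1000000, 20000000))
```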
Notice of working unreserved in the 999M-1G range (TF)
@Everyone:
Due to the difficulty of predicting Colab availability, I've decided to still help this effort but work far, far away from where SRBase is crunching, climbing 1 bit at a time. The choice fell on the range 999M-1G (1 million n). Currently I've loaded all exponents trial factored to 72 bits into my worktodo.txt file. If I can complete about 100 a day, it will take me about 107 days using Colab to completely test 72 to 73 bits. Once that completes, I'm going to load all candidates trial factored to 73 bits and take them to 74 bits. My hope is that someone gives me a heads-up if they reserve any large amount of work, so I can remove those candidates from my list. Even though Colab resources are free, it is just not meaningful to run 2 trial factoring tests on the same n's. Also I will keep an eye on the Active Assignments page and see if anyone reserves anything for trial factoring. Take care. 
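The 107-day figure above is simple throughput arithmetic; a minimal sketch (the candidate count below is a placeholder for illustration, since the real number comes from the loaded worktodo file):

```python
import math

# ETA for working one bit level through a range at a fixed daily rate.
# The candidate count is a made-up placeholder; the real figure comes
# from the worktodo file loaded for the 999M-1G range.
def days_to_finish(num_candidates, per_day):
    return math.ceil(num_candidates / per_day)

# ~10,700 candidates at 100/day gives roughly the 107 days quoted above.
print(days_to_finish(10700, 100))  # 107
```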
[QUOTE=KEP;562909]
My hope is that someone gives me a heads up, if they reserve any large amount of work, so I can remove the candidates from my list.[/QUOTE] Why don't you simply reserve the batches you're willing to test, like the SRBASE folks do? 
[QUOTE=lycorn;563006]Why don't you simply reserve the batches you're willing to test, like the SRBASE folks do?[/QUOTE]
Good and fair question. As far as I know, there is no way for me to reserve all candidates tested to 72 bits while leaving those tested to higher bit levels unreserved. I would have to make potentially thousands of reservations to get just those tested to 72 bits and none of those tested to higher bits. I tried using the manual GPU reservation page a while back and it gave me 100 n's in the low n range, near the wavefront. If there is a way for me to reserve all candidates in 999M-1G tested to 72 bits, please let me know, either here or in PM :smile: 
[QUOTE=KEP;563031]Good and fair question. As far as I know, there is no way for me to reserve all candidates tested to 72 bits while leaving those tested to higher bit levels unreserved. I would have to make potentially thousands of reservations to get just those tested to 72 bits and none of those tested to higher bits. I tried using the manual GPU reservation page a while back and it gave me 100 n's in the low n range, near the wavefront. If there is a way for me to reserve all candidates in 999M-1G tested to 72 bits, please let me know, either here or in PM :smile:[/QUOTE]
[URL="https://www.mersenne.org/report_factoring_effort/?exp_lo=999000000&exp_hi=999999999&bits_lo=72&bits_hi=&exassigned=1&tfonly=1&worktodo=1&tftobits=73"]Is this what you seek?[/URL] 
[QUOTE=masser;563032][URL="https://www.mersenne.org/report_factoring_effort/?exp_lo=999000000&exp_hi=999999999&bits_lo=72&bits_hi=&exassigned=1&tfonly=1&worktodo=1&tftobits=73"]Is this what you seek?[/URL][/QUOTE]
That link allows you to get exponents to test, but it doesn't actually reserve them on the server. You can do that using the Manual GPU Assignments form:
i) Choose the number of assignments you wish (I don't know whether or not there is a limit)
ii) Leave the Preferred Work Range at "What makes sense"
iii) Enter 999000000 and 999999999 in the Optional Exponent range fields
iv) Leave Work Preference at "What makes sense"
v) In the "Optional bit level to factor to" field, enter 73.
Note: I tried it and it gave me assignments from 72 to [B]74[/B] instead. It's not a problem, though, just a matter of editing the worktodo file, changing [B]",74"[/B] to [B]",73"[/B]. Notepad will suffice.
Even if there are limits on the number of exponents you may reserve, that's not a problem. After all, you don't really need to reserve [B]all[/B] available exponents in one go, do you? :smile: HTH 
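The ",74" to ",73" edit described above is also easy to script instead of doing by hand in Notepad. A minimal sketch that rewrites only the final (target bit level) field of Factor= lines; the sample exponents are made up for illustration:

```python
# Rewrite the target bit level (the last field) of Factor= worktodo
# lines, e.g. Factor=N/A,999000067,72,74 -> Factor=N/A,999000067,72,73.
# Sample exponents below are made up; run over your real worktodo.txt.
def retarget(line, old_bits="74", new_bits="73"):
    line = line.rstrip("\n")
    if line.startswith("Factor=") and line.endswith("," + old_bits):
        line = line[: -len(old_bits)] + new_bits
    return line

sample = ["Factor=N/A,999000067,72,74", "Factor=N/A,999000103,73,74"]
for line in sample:
    print(retarget(line))
# Factor=N/A,999000067,72,73
# Factor=N/A,999000103,73,73
```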
[QUOTE=masser;563032][URL="https://www.mersenne.org/report_factoring_effort/?exp_lo=999000000&exp_hi=999999999&bits_lo=72&bits_hi=&exassigned=1&tfonly=1&worktodo=1&tftobits=73"]Is this what you seek?[/URL][/QUOTE]
Not exactly, but it was a good shot. As lycorn mentions, it did not give me the possibility to reserve anything, but it was very useful for my initial plan, since it was just copy-paste and then pushing "play" in the notebook. [QUOTE=lycorn;563042]That link allows you to get exponents to test, but it doesn't actually reserve them on the server. You can do that using the Manual GPU Assignments form: i) Choose the number of assignments you wish (I don't know whether or not there is a limit) ii) Leave the Preferred Work Range at "What makes sense" iii) Enter 999000000 and 999999999 in the Optional Exponent range fields iv) Leave Work Preference at "What makes sense" v) In the "Optional bit level to factor to" field, enter 73. Note: I tried it and it gave me assignments from 72 to [B]74[/B] instead. It's not a problem, though, just a matter of editing the worktodo file, changing [B]",74"[/B] to [B]",73"[/B]. Notepad will suffice. Even if there are limits on the number of exponents you may reserve, that's not a problem. After all, you don't really need to reserve [B]all[/B] available exponents in one go, do you? :smile: HTH[/QUOTE] It appears that I must have done something wrong on my last try at reserving manual trial factoring assignments ahead of the wavefront. The described plan worked; however, I have now changed my own plan and decided to trial factor the currently 2,000 reserved tasks from 72 to 74 bits. You are right, I do not need to reserve all work at once :wink: 
Hi "Wayne", it's me again... lol. Almost done with the P-1 work you sent across last month; can I now have another batch of 100 P-1 assignments, but for a range lower than 20M? If not, I'm happy to proceed with the 30M, 40M, 50M exponent ranges. TIA.

[QUOTE=pinhodecarlos;563534]Hi "Wayne", it's me again... lol. Almost done with the P-1 work you sent across last month; can I now have another batch of 100 P-1 assignments, but for a range lower than 20M? If not, I'm happy to proceed with the 30M, 40M, 50M exponent ranges. TIA.[/QUOTE]
No one seems to be in the 16M range: I think there are 121 here. About 2.3 GhzDays each [CODE]Pminus1=N/A,1,2,16686743,1,1500000,30000000,70 Pminus1=N/A,1,2,16608569,1,1500000,30000000,70 Pminus1=N/A,1,2,16611989,1,1500000,30000000,70 Pminus1=N/A,1,2,16616207,1,1500000,30000000,70 Pminus1=N/A,1,2,16623379,1,1500000,30000000,70 Pminus1=N/A,1,2,16637987,1,1500000,30000000,70 Pminus1=N/A,1,2,16641263,1,1500000,30000000,70 Pminus1=N/A,1,2,16641413,1,1500000,30000000,70 Pminus1=N/A,1,2,16644701,1,1500000,30000000,70 Pminus1=N/A,1,2,16651951,1,1500000,30000000,70 Pminus1=N/A,1,2,16653269,1,1500000,30000000,70 Pminus1=N/A,1,2,16658461,1,1500000,30000000,70 Pminus1=N/A,1,2,16659217,1,1500000,30000000,70 Pminus1=N/A,1,2,16659917,1,1500000,30000000,70 Pminus1=N/A,1,2,16667857,1,1500000,30000000,70 Pminus1=N/A,1,2,16674529,1,1500000,30000000,70 Pminus1=N/A,1,2,16675639,1,1500000,30000000,70 Pminus1=N/A,1,2,16679081,1,1500000,30000000,70 Pminus1=N/A,1,2,16680943,1,1500000,30000000,70 Pminus1=N/A,1,2,16686947,1,1500000,30000000,70 Pminus1=N/A,1,2,16687493,1,1500000,30000000,70 Pminus1=N/A,1,2,16691089,1,1500000,30000000,70 Pminus1=N/A,1,2,16691099,1,1500000,30000000,70 Pminus1=N/A,1,2,16691453,1,1500000,30000000,70 Pminus1=N/A,1,2,16692157,1,1500000,30000000,70 Pminus1=N/A,1,2,16694263,1,1500000,30000000,70 Pminus1=N/A,1,2,16695697,1,1500000,30000000,70 Pminus1=N/A,1,2,16697389,1,1500000,30000000,70 Pminus1=N/A,1,2,16699589,1,1500000,30000000,70 Pminus1=N/A,1,2,16643647,1,1500000,30000000,70 Pminus1=N/A,1,2,16644403,1,1500000,30000000,70 Pminus1=N/A,1,2,16645283,1,1500000,30000000,70 Pminus1=N/A,1,2,16647041,1,1500000,30000000,70 Pminus1=N/A,1,2,16648469,1,1500000,30000000,70 Pminus1=N/A,1,2,16649063,1,1500000,30000000,70 Pminus1=N/A,1,2,16649651,1,1500000,30000000,70 Pminus1=N/A,1,2,16651111,1,1500000,30000000,70 Pminus1=N/A,1,2,16653787,1,1500000,30000000,70 Pminus1=N/A,1,2,16654993,1,1500000,30000000,70 Pminus1=N/A,1,2,16655081,1,1500000,30000000,70 
Pminus1=N/A,1,2,16655879,1,1500000,30000000,70 Pminus1=N/A,1,2,16656307,1,1500000,30000000,70 Pminus1=N/A,1,2,16656569,1,1500000,30000000,70 Pminus1=N/A,1,2,16659919,1,1500000,30000000,70 Pminus1=N/A,1,2,16661417,1,1500000,30000000,70 Pminus1=N/A,1,2,16661927,1,1500000,30000000,70 Pminus1=N/A,1,2,16662179,1,1500000,30000000,70 Pminus1=N/A,1,2,16662187,1,1500000,30000000,70 Pminus1=N/A,1,2,16663177,1,1500000,30000000,70 Pminus1=N/A,1,2,16663529,1,1500000,30000000,70 Pminus1=N/A,1,2,16664867,1,1500000,30000000,70 Pminus1=N/A,1,2,16665221,1,1500000,30000000,70 Pminus1=N/A,1,2,16665469,1,1500000,30000000,70 Pminus1=N/A,1,2,16665619,1,1500000,30000000,70 Pminus1=N/A,1,2,16666057,1,1500000,30000000,70 Pminus1=N/A,1,2,16666367,1,1500000,30000000,70 Pminus1=N/A,1,2,16666577,1,1500000,30000000,70 Pminus1=N/A,1,2,16667383,1,1500000,30000000,70 Pminus1=N/A,1,2,16670987,1,1500000,30000000,70 Pminus1=N/A,1,2,16671019,1,1500000,30000000,70 Pminus1=N/A,1,2,16672463,1,1500000,30000000,70 Pminus1=N/A,1,2,16673519,1,1500000,30000000,70 Pminus1=N/A,1,2,16676027,1,1500000,30000000,70 Pminus1=N/A,1,2,16677329,1,1500000,30000000,70 Pminus1=N/A,1,2,16679527,1,1500000,30000000,70 Pminus1=N/A,1,2,16680163,1,1500000,30000000,70 Pminus1=N/A,1,2,16680199,1,1500000,30000000,70 Pminus1=N/A,1,2,16680731,1,1500000,30000000,70 Pminus1=N/A,1,2,16681421,1,1500000,30000000,70 Pminus1=N/A,1,2,16681703,1,1500000,30000000,70 Pminus1=N/A,1,2,16683353,1,1500000,30000000,70 Pminus1=N/A,1,2,16694423,1,1500000,30000000,70 Pminus1=N/A,1,2,16652413,1,1500000,30000000,70 Pminus1=N/A,1,2,16674349,1,1500000,30000000,70 Pminus1=N/A,1,2,16683791,1,1500000,30000000,70 Pminus1=N/A,1,2,16696021,1,1500000,30000000,70 Pminus1=N/A,1,2,16679939,1,1500000,30000000,70 Pminus1=N/A,1,2,16600421,1,1500000,30000000,70 Pminus1=N/A,1,2,16600811,1,1500000,30000000,70 Pminus1=N/A,1,2,16605443,1,1500000,30000000,70 Pminus1=N/A,1,2,16609457,1,1500000,30000000,70 Pminus1=N/A,1,2,16612891,1,1500000,30000000,70 
Pminus1=N/A,1,2,16617787,1,1500000,30000000,70 Pminus1=N/A,1,2,16619507,1,1500000,30000000,70 Pminus1=N/A,1,2,16619833,1,1500000,30000000,70 Pminus1=N/A,1,2,16623407,1,1500000,30000000,70 Pminus1=N/A,1,2,16623493,1,1500000,30000000,70 Pminus1=N/A,1,2,16625383,1,1500000,30000000,70 Pminus1=N/A,1,2,16628753,1,1500000,30000000,70 Pminus1=N/A,1,2,16628863,1,1500000,30000000,70 Pminus1=N/A,1,2,16631161,1,1500000,30000000,70 Pminus1=N/A,1,2,16631731,1,1500000,30000000,70 Pminus1=N/A,1,2,16634929,1,1500000,30000000,70 Pminus1=N/A,1,2,16638277,1,1500000,30000000,70 Pminus1=N/A,1,2,16638287,1,1500000,30000000,70 Pminus1=N/A,1,2,16639853,1,1500000,30000000,70 Pminus1=N/A,1,2,16642091,1,1500000,30000000,70 Pminus1=N/A,1,2,16643989,1,1500000,30000000,70 Pminus1=N/A,1,2,16646191,1,1500000,30000000,70 Pminus1=N/A,1,2,16646521,1,1500000,30000000,70 Pminus1=N/A,1,2,16648543,1,1500000,30000000,70 Pminus1=N/A,1,2,16654601,1,1500000,30000000,70 Pminus1=N/A,1,2,16659581,1,1500000,30000000,70 Pminus1=N/A,1,2,16664783,1,1500000,30000000,70 Pminus1=N/A,1,2,16665949,1,1500000,30000000,70 Pminus1=N/A,1,2,16668361,1,1500000,30000000,70 Pminus1=N/A,1,2,16670363,1,1500000,30000000,70 Pminus1=N/A,1,2,16671269,1,1500000,30000000,70 Pminus1=N/A,1,2,16674443,1,1500000,30000000,70 Pminus1=N/A,1,2,16677389,1,1500000,30000000,70 Pminus1=N/A,1,2,16677487,1,1500000,30000000,70 Pminus1=N/A,1,2,16682177,1,1500000,30000000,70 Pminus1=N/A,1,2,16682927,1,1500000,30000000,70 Pminus1=N/A,1,2,16684853,1,1500000,30000000,70 Pminus1=N/A,1,2,16686949,1,1500000,30000000,70 Pminus1=N/A,1,2,16691111,1,1500000,30000000,70 Pminus1=N/A,1,2,16691839,1,1500000,30000000,70 Pminus1=N/A,1,2,16693123,1,1500000,30000000,70 Pminus1=N/A,1,2,16693279,1,1500000,30000000,70 Pminus1=N/A,1,2,16696369,1,1500000,30000000,70 Pminus1=N/A,1,2,16697909,1,1500000,30000000,70 Pminus1=N/A,1,2,16609777,1,1500000,30000000,70 Pminus1=N/A,1,2,16622743,1,1500000,30000000,70 Pminus1=N/A,1,2,16633459,1,1500000,30000000,70 
Pminus1=N/A,1,2,16602847,1,1500000,30000000,70 Pminus1=N/A,1,2,16606519,1,1500000,30000000,70 Pminus1=N/A,1,2,16612121,1,1500000,30000000,70 Pminus1=N/A,1,2,16614877,1,1500000,30000000,70 Pminus1=N/A,1,2,16621723,1,1500000,30000000,70 Pminus1=N/A,1,2,16622159,1,1500000,30000000,70 Pminus1=N/A,1,2,16629703,1,1500000,30000000,70 Pminus1=N/A,1,2,16653431,1,1500000,30000000,70 Pminus1=N/A,1,2,16656043,1,1500000,30000000,70 Pminus1=N/A,1,2,16657183,1,1500000,30000000,70 Pminus1=N/A,1,2,16663831,1,1500000,30000000,70 Pminus1=N/A,1,2,16665769,1,1500000,30000000,70 Pminus1=N/A,1,2,16672657,1,1500000,30000000,70 Pminus1=N/A,1,2,16674509,1,1500000,30000000,70 Pminus1=N/A,1,2,16676263,1,1500000,30000000,70 Pminus1=N/A,1,2,16677211,1,1500000,30000000,70 Pminus1=N/A,1,2,16689061,1,1500000,30000000,70 Pminus1=N/A,1,2,16694963,1,1500000,30000000,70 Pminus1=N/A,1,2,16696733,1,1500000,30000000,70 Pminus1=N/A,1,2,16697683,1,1500000,30000000,70[/CODE] 
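Wayne's "121 here, about 2.3 GhzDays each" is easy to double-check by parsing the block; a small sketch (the per-assignment GhzDays figure is the one he quotes, not computed from first principles):

```python
# Tally a Pminus1 worktodo block: count assignments and estimate total
# effort from a per-assignment GhzDays figure. The 2.3 GhzDays figure
# is Wayne's estimate from the post above, not derived here.
def tally(worktodo_text, ghzdays_each=2.3):
    exponents = []
    for line in worktodo_text.splitlines():
        if line.startswith("Pminus1="):
            exponents.append(int(line.split(",")[3]))  # 4th field = exponent
    return len(exponents), len(exponents) * ghzdays_each

sample = ("Pminus1=N/A,1,2,16686743,1,1500000,30000000,70\n"
          "Pminus1=N/A,1,2,16608569,1,1500000,30000000,70\n")
count, total = tally(sample)
print(count, round(total, 1))  # 2 4.6
```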
Queued them and thank you.

Have a problem: I had to reboot my machine and then lost the majority of the outstanding 30M P-1 WUs, plus the ones from the above list. The client decided to communicate with the server and messed up by downloading other new work. I've re-queued the 16M range. 

[QUOTE=pinhodecarlos;563855]Client decided to communicate with the server and messed up downloading other new work[/QUOTE]
From my experience, adding [C]NoMoreWork=1[/C] to [I]prime.txt[/I] might prevent this from happening again. 
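The NoMoreWork suggestion can be applied idempotently with a few lines of script; a sketch, assuming prime.txt lives in the client's working directory (adjust the path for your install):

```python
import os

# Ensure NoMoreWork=1 is present in prime.txt so the client stops
# fetching fresh assignments from the server on its own.
# Path assumed to be the client's working directory; adjust as needed.
def ensure_no_more_work(path="prime.txt"):
    lines = []
    if os.path.exists(path):
        with open(path) as f:
            lines = f.read().splitlines()
    if "NoMoreWork=1" not in lines:
        lines.append("NoMoreWork=1")
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")

ensure_no_more_work()
```

Running it twice is harmless: the second call sees the setting already present and leaves the file alone.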
[QUOTE=pinhodecarlos;563855]Have a problem: I had to reboot my machine and then lost the majority of the outstanding 30M P-1 WUs, plus the ones from the above list. The client decided to communicate with the server and messed up by downloading other new work. I've re-queued the 16M range.[/QUOTE]
OK....do you want more 30M or do you prefer these 16M? 
[QUOTE=petrw1;563857]OK....do you want more 30M or do you prefer these 16M?[/QUOTE]
Will stay with the 16M, thank you both. 
36.2M taken for P-1
Thanks

Looks like all the TF help is getting ahead of P-1.
I guess it's a balancing act... how much TF power we have vs. how much P-1 (ECM) power.
If I knew the actual ratio, then suggesting where each would best help would be possible. However, I can only guess, and it changes week by week. While the P-1 help is certainly increasing (YAY), there is still much more TF capacity being applied here... but the question on my mind is where it is best applied. (Of course, at best I can offer an opinion... no more, no less.) The following quote bubble is my deep thinking; feel free to skip it if you so desire.
[QUOTE]In a nutshell, I estimate how many GhzDays per expected factor via P-1, taking into account how much P-1 has already been done... AND... then I do the same for TF, knowing that GPUs are up to 100 times faster at TF than CPUs, and that each extra bit of TF takes twice as long as the one before.
In 4x.xM and 5x.xM I'm seeing about 150 GhzDays per factor for P-1 where B1=B2, and about twice that where B1/B2 are still relatively low. Based on the typical P-1 that has been done, I expect 15-25 factors at the ~150 GhzDays-per rate and another 5-25 factors at 300-500 GhzDays per.
As for TF, due to the aggressive P-1 I'm seeing about a 1% success rate per bit. For example, in the entire 25.0M range, TF 70-71: 20 factors for 20,000 GhzDays, i.e. 1,000 GhzDays per factor. But by TF 74-75, the next 20 factors will take 320,000 GhzDays, or 16,000 per factor. Then, as the exponents get lower, the cost per factor for P-1 will decrease and for TF will increase.
So, depending on how many more factors are required to get under 2,000, I can determine how much more TF or P-1 to suggest, and in what order. Often I'll do 1 or 2 bits of TF, then easier P-1, another bit of TF, harder P-1, more TF, or some variation.[/QUOTE]
There are NOT a lot of ranges right now where I consider all the necessary extra P-1 complete, though there are a few more that could be close enough as long as we have excess TF capacity. I am P-1'ing the 4x.xM ranges aggressively: 44.6 and 46.3 are done P-1; 49.4 will be done in a couple of weeks. I am TF'ing 44.6; 46.3 is available for TF. 
The remaining 8 have P-1 scheduled, though it takes me about 3 weeks per range. That said, these remaining 8 ranges will all need TF to at least 75 bits, so it could be done before or after P-1 as long as there is no toe-stepping: 40.1, 41.7, 43.3, 43.4, 43.0, 42.6, 49.6, 48.4.
Others are TF'ing and P-1'ing in the 2x.xM and 3x.xM ranges. There are about a half dozen in 3x.xM that have completed P-1 where B1=B2 and could be tackled by TF, but a few have enough factors remaining that I think more P-1 is warranted first. There are another half dozen or so in 3x.xM where B1=B2 P-1 will be enough to release them to TF. As for 2x.xM, I think the first priority is TF to at least 72 bits where necessary, then aggressive P-1.
That's enough for now ... probably way too much! :smile: If necessary I can provide more specifics for the aforementioned ranges. Thanks all and enjoy the hunt. 
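The "balancing act" above boils down to comparing GhzDays per expected factor for each option. A sketch of that comparison using the ballpark rates quoted in the post (1% TF success per bit in heavily P-1'd ranges, ~150 GhzDays per P-1 factor where B1=B2; the ~10 GhzDays per 70-71 assignment is my own rough division of the quoted 20,000 GhzDays by ~2,000 candidates):

```python
# Compare GhzDays-per-expected-factor for the next TF bit level versus
# more P-1. All input numbers are illustrative ballparks from the post
# above (or derived from it), not measurements.
def tf_cost_per_factor(unfactored, ghzdays_per_assignment, success_rate=0.01):
    expected_factors = unfactored * success_rate
    return unfactored * ghzdays_per_assignment / expected_factors

def cheaper_option(tf_cpf, p1_cpf):
    return "TF" if tf_cpf < p1_cpf else "P-1"

# 25.0M example: ~2,000 candidates at ~10 GhzDays each for TF 70-71,
# 1% success -> ~1,000 GhzDays per factor, vs ~150 per P-1 factor.
tf = tf_cost_per_factor(2000, 10.0)
print(tf, cheaper_option(tf, 150.0))
```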
3.6M done
After about 6 months of deep P-1 and one month of TF, the 3.6M range is done. There is still about two weeks of P-1 left to bring the entire range to a logical closure, so maybe another 10 factors will be found.
Why 3.6? 3-4M was the first 1M range with > 20,000 unfactored, and out of that, 3.6M was the range with the largest number of unfactored exponents. So I picked it as a challenge. What next? I will do some more work around the 3-4M range. There is already some activity in that area, so I will try not to step on toes. 
[QUOTE=axn;564182]After about 6 months of deep P-1 and one month of TF, the 3.6M range is done.
Why 3.6? 3-4M was the first 1M range with > 20,000 unfactored, and out of that, 3.6M was the range with the largest number of unfactored exponents. So I picked it as a challenge.[/QUOTE] Yay!!! 
1 Attachment(s)
Having lots of fun running P-1, not with the recommended optimized B1/B2 bounds, but even so I am getting a satisfying rate of factors for the range below 7.8M. Work was taken from mersenne.ca at the "poorly P-1 blablabla" link.

[QUOTE=pinhodecarlos;564846]Having lots of fun running P-1, not with the recommended optimized B1/B2 bounds, but even so I am getting a satisfying rate of factors for the range below 7.8M. Work was taken from mersenne.ca at the "poorly P-1 blablabla" link.[/QUOTE]
Cool... it all helps 
[QUOTE=petrw1;564850]Cool... it all helps[/QUOTE]
Will go back to your 16M once I’m done with these, 10 days to go. 
[QUOTE=pinhodecarlos;564853]Will go back to your 16M once I’m done with these, 10 days to go.[/QUOTE]
No longer required... someone else cleared the 16.6M range. But when you are ready for other small ranges, let me know. Thanks. 
A few more TF ranges up for grabs...
Available now (P-1 done):
29.1, 35.5, 49.4
A little more P-1 is in progress: 30.1, 36.2
A little more P-1 would be beneficial but not essential: 36.5, 38.2 
I was thinking about doing P-1 with B1=500e3, B2=10e6 on the 27-27.1M range, starting at about the new year. Is the range available?

[QUOTE=firejuggler;566352]I was thinking about doing P-1 with B1=500e3, B2=10e6 on the 27-27.1M range, starting at about the new year. Is the range available?[/QUOTE]
It is free, and I absolutely appreciate any help. But if I may offer some experience: in my humble opinion, if you mean 27.0, that is a tough one. I'm not trying to talk you out of it, just want to be sure you know what you are getting into. If you mean 27.1, that is much closer/easier. Based on a lot of math I use to determine the best way to clear these ranges, I think your bounds could be higher, though.
HERE IS THE MATH I USE:
The 27.0 range needs 176 more factors. TF alone would take too much work. Each bit of TF clears a little over 1% (of 2,175 ... 25 would be generous), so we expect something like:
TF 70-71: 2175-25=2150 (8.85 GhzDays per assignment)
TF 71-72: 2150-25=2125 (17.7)
TF 72-73: 2125-25=2100 (35.5)
TF 73-74: 2100-25=2075 (71.0)
TF 74-75: 2075-25=2050 (142.0)
I'm not sure we want to go this high, so TF alone is not the answer. We need at least 50 factors from P-1, either before TF or after.
There are 860 that have had a low B1=B2, about 420e3. For these 860, your proposed bounds have an expected success rate of close to 3%, or about 25 factors, according to these:
[url]https://www.mersenne.ca/prob.php?exponent=27050039&factorbits=70&b1=420000&b2=420000[/url]
[url]https://www.mersenne.ca/prob.php?exponent=27050039&factorbits=70&b1=500000&b2=10000000[/url]
The remaining 1,300 or so have higher current B1/B2, which would diminish your success rate if you continue to P-1 these as well. Something like a 2% success rate here gives another 26 factors... we're getting closer. But I'm not sure we want to P-1 all 2,175 exponents either.
On the other hand, if you used bounds of 1e6,20e6 you'd get another 1% (granted, for double the work: 2.7 vs 1.35 GhzDays each):
[url]https://www.mersenne.ca/prob.php?exponent=27050039&factorbits=70&b1=1000000&b2=20000000[/url]
So out of the 860 you would get about 34 factors. And if your goal is 50 P-1 factors, you need to P-1 a lot less of the remaining 1,300 exponents. 
On the third hand (hahaha), sometimes I will use the lower B1/B2 for the exponents that have a current B1=B2 and more aggressive B1/B2 for the others. I have a spreadsheet that helps me choose. CONFUSED..... or maybe you are way better at stats than me and know all this and more. Anyway, thanks a lot; anything you choose is greatly appreciated. 
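The TF arithmetic in the table above (start at 2175 unfactored, clear ~25 per bit, per-assignment GhzDays doubling from 8.85 at 70-71) is mechanical and easy to reproduce; a sketch:

```python
# Reproduce the TF projection for the 27.0M range: each bit level
# clears ~25 exponents and costs about twice the previous level per
# assignment (8.85 GhzDays at 70-71). Inputs come from the post above.
def tf_projection(unfactored=2175, per_bit=25, ghd_70_71=8.85, bits=(70, 75)):
    rows = []
    for b in range(*bits):
        ghd = ghd_70_71 * 2 ** (b - 70)  # per-assignment cost doubles per bit
        rows.append((b, b + 1, unfactored, unfactored - per_bit, ghd))
        unfactored -= per_bit
    return rows

for lo, hi, before, after, ghd in tf_projection():
    print(f"TF {lo}-{hi}: {before}-25={after} ({ghd:.2f} GhzDays each)")
```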
I've got 29.1
[QUOTE=petrw1;566350]Available now (P-1 done)
29.1 .... I'm working on this one
35.5
49.4

A little more P-1 is in progress
30.1
36.2

A little more P-1 would be beneficial but not essential
36.5
38.2[/QUOTE]
I share. Use GPU72 to avoid toe-stepping, OR start at the top (i.e. 29100000).
I'm looking for volunteers to test a prerelease of version 30.4. 30.4 is a nearly complete rewrite of the ECM and P1 code.
Useful testing would include:
1) rerun successful P-1 and ECM attempts to make sure the new code does not miss any factors.
2) make sure save and restore work
3) find bugs reading old save files
4) testing various high and low memory configurations
5) report back on whether (how much) the new code is faster, and finally...
6) run the improved code to see if it is finding about the expected number of new factors.

Any interest? Please specify Windows or Linux.

New features include:
- better use of available memory for ECM and P-1 stage 2
- small speed improvements in ECM stage 1
- selection of optimal B2 in both ECM and P-1
- deprecate Brent-Suyama (more efficient to put that effort into a larger B1)
Wonderful! I'm looking forward to this. I could try Windows and Linux both, 64 bits. Thank you! :smile:

[QUOTE=kruoli;566476]Wonderful! I'm looking forward to this. I could try Windows and Linux both, 64 bits. Thank you! :smile:[/QUOTE]
I've currently got a couple of machines doing DC P-1 in 104M. Would love to give it a whirl if you don't think there's (much) risk of missed factors. (If a bug is found, they can simply be rerun.)
I'd be willing to implement a script for finding known factors of Mersenne numbers, at least (if they were found by P-1 or ECM).
@James et al., could you compile a list of those factors? Not all of them, but some?

Edit 1: The script should create worktodo.txt entries and then I'll run them. @George, do we have to test different machines, or is it sufficient to test on a single machine, assuming the big-number arithmetic itself is rock solid?

Edit 2: I may have forgotten; how does one specify the sigma when executing ECM with Prime95 or mprime?
[QUOTE=Prime95;566466]
Any interest? Please specify Windows or Linux. [/QUOTE] I can test a linux version. Might not get to it until the weekend, though. 
[QUOTE=Prime95;566466]I'm looking for volunteers to test a prerelease of version 30.4. 30.4 is a nearly complete rewrite of the ECM and P1 code.[/QUOTE]I'm interested in trying it out on Linux. Roughly when will the testing begin, and when do you need the results?

[QUOTE=Prime95;566466]I'm looking for volunteers to test a prerelease of version 30.4. 30.4 is a nearly complete rewrite of the ECM and P1 code.
[/QUOTE] Is this for PFACTOR only or also for PMINUS1? 
[QUOTE=petrw1;566503]Is this for PFACTOR only or also for PMINUS1?[/QUOTE]
Pfactor and Pminus1 are the same (in both existing and 30.4 versions). The only difference is that Pfactor computes the P-1 bounds based on the expected number of LL tests saved, while Pminus1 uses user-specified bounds. 30.4 will prefer you use this format for your P-1 work:

Pminus1=1,2,n,-1,B1,B2_which_will_be_ignored,TF_sieve_depth

This is different than what is displayed on James' "poorly factored" page.

[QUOTE=kruoli;566484]@George, do we have to test different machines or is it sufficient to test on a single machine, when assuming the big number arithmetic itself is rock solid?[/QUOTE]
The FFT code changed a little, but I'm more worried about the ECM and P-1 C code. The change to the FFT code is an assembly implementation of (a+b)*c and (a-b)*c using less memory bandwidth. I only implemented this for AVX, FMA3, and AVX-512; for older machines that operation is emulated.

[QUOTE=nordi;566502]I'm interested in trying it out on Linux. Roughly when will the testing begin, and when do you need the results?[/QUOTE]
Let me make a few more tweaks and tests, then I'll build Windows and Linux executables.
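As a sketch of the preferred worktodo format above (the helper name and bounds are illustrative, not part of Prime95; k=1, b=2, c=-1 is the standard worktodo encoding of a Mersenne number 2^n-1):

```python
def pminus1_line(exponent, b1, tf_depth, b2=0):
    """Build a v30.4-style P-1 worktodo line (k=1, b=2, c=-1 selects 2^p-1).
    When tf_depth is supplied, the program recomputes its own optimal B2,
    so the b2 field here is effectively a placeholder."""
    return f"Pminus1=1,2,{exponent},-1,{b1},{b2},{tf_depth}"

# Illustrative exponents and bounds only, not real assignments:
for n in (21150827, 31919773):
    print(pminus1_line(n, b1=500000, tf_depth=66))
```

Leaving the TF sieve depth off the end (or setting Pminus1BestB2=0 in prime.txt, per George's later post) makes the program honor the given B2 instead.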
I am willing to run it on a couple of Win64 machines.

[QUOTE=Prime95;566466]I'm looking for volunteers to test a prerelease of version 30.4. [/QUOTE]
Put me on the list and PM the link if Win7/i7-6950X is what you're looking for as an OS/CPU. I'm on holiday next week, and not going anywhere (moving the rubbish around the house from here to there and back, etc.)
[QUOTE=Prime95;566466]I'm looking for volunteers to test a prerelease of version 30.4. 30.4 is a nearly complete rewrite of the ECM and P1 code.
Any interest? Please specify Windows or Linux. [/QUOTE] If you still need Linux testers, I'm here to help. Luigi :et_:
Might I suggest 32148013 as one of the P-1 tests?
B1=451,519 / B2=2,846,113. This ought to find a composite 172-bit factor (2).
@George: count me in. Win 10 64-bit on a Kaby Lake 16 GB machine, and Linux on several Colab instances.

Warnings:
1) Resuming stage 2 P-1 or ECM from a v30.3 save file will not work. Stage 2 will start again from scratch.
2) I've not tested resuming from a P-1 v30.3 save file. It is supposed to work, but may not due to substantial changes in save files. Report a bug if you encounter an error.

I recommend installing v30.4 in a new directory and copying over your prime.txt, local.txt, worktodo.txt and save files. Thanks for your help in locating bugs or suggestions for more improvements.

Win64 version: [url]https://www.dropbox.com/s/ilw7k3x3omja13s/p95v304b2.win64.zip?dl=0[/url]
Linux64 soon.

P.S. Calculating optimal B2 for P-1 and ECM is based upon estimates of the work involved in B1 and B2. If there are bugs in the cost-estimating code, a wrong optimal B2 will be calculated. Thus, if you find a "head scratcher" such as "B2 is faster with 6GB of memory than with 8GB", please report your observations.
Linux 64:
[url]https://www.dropbox.com/s/9ksfnkwtx5r4fwx/p95v304b2.linux64.tar.gz?dl=0[/url] 
If a factor is not found, does the new code report the requested B2 or the computed B2? In my initial results, [STRIKE]it appears to report the requested B2, which seems inaccurate.[/STRIKE]
These are the lines I tried: [QUOTE]Pminus1=1,2,21150827,-1,6133,596857,66
Pminus1=1,2,31919773,-1,1901,84737,66[/QUOTE] NEVERMIND: I'm wrong - misread the outputs. The code reported the B2 it used, which was not adequate in this case... I'll try to see how much memory to allocate to get the desired factor.
A present for your project  I was comparing P1 speeds for 30.3 vs. 30.4 and found this:
processing: P-1 factor 74415394148849438918449121 for M16963013 (B1=500,000, B2=20,500,000)
[QUOTE=Prime95;566632]A present for your project[/QUOTE]
:bow: And....inquiring minds want to know...how do they compare please? 
[QUOTE=petrw1;566638]:bow:
And....inquiring minds want to know...how do they compare please?[/QUOTE] It was not a good test, as stage 2 was limited to 300MB of memory. Stage 1 was the same speed.

30.3 did 17.3% of stage 2 to B2=10M in 470 seconds.
30.4 did 43.4% of stage 2 to B2=20.5M in 1940 seconds.

My extrapolation is that 30.3 would have taken 2480 seconds to do what 30.4 did.
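As a quick sanity check on the comparison above, using only the quoted figures (1940 s measured for 30.4, ~2480 s extrapolated for 30.3 to do the same stage 2 work; note stage 2 was memory-starved at 300MB, so this likely understates the improvement):

```python
# Figures quoted in the post above; stage 1 was the same speed in both versions.
t_v304 = 1940.0               # seconds, measured (43.4% of stage 2 to B2=20.5M)
t_v303_extrapolated = 2480.0  # seconds, George's extrapolation for 30.3

speedup = t_v303_extrapolated / t_v304
print(f"stage 2 speedup: ~{speedup:.2f}x "
      f"({(1 - 1 / speedup) * 100:.0f}% less wall-clock time)")
```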
@George: Can you share the B2 selection logic (source code)? I want to understand/predict how memory allocation will affect this. Does the logic take into account MaxHighMemWorkers / worker-specific memory settings?

Are the save files compatible? I'm currently at 29.8.

[QUOTE=axn;566642]@George: Can you share the B2 selection logic (source code)? I want to understand/predict how memory allocation will affect this. Does the logic take into account MaxHighMemWorkers / workerspecific memory settings?[/QUOTE]
B2 is selected when stage 2 starts. It uses the available memory according to Day/Night MaxHighMemWorkers/etc settings at the time stage 2 begins. The source code will be made available, but emulating the B2 selection logic will not be easy. [QUOTE=petrw1;566688]Are the save files compatible. I'm currently at 29.8[/QUOTE] See post #242 
[QUOTE=axn;566642]@George: [...] Does the logic take into account MaxHighMemWorkers / worker-specific memory settings?[/QUOTE]
That would be especially interesting if, e.g., an extra 100 MB would cause the algorithm to select a higher B2 with a relatively small increase in ETA. An ECM run got: M630901, B1 = 250k, B2 = 155 * B1 @ 6 GB available. Stage 1 took around 0.5 h, stage 2 took 0.25 h. Edit: @George, thanks for the rough clarification.
Could someone teach me how to find the sigma that was used to find an ECM factor from the mersenne.org website?
I'm trying to test that the new ECM in version 30.4 recovers known factors without running hundreds of curves. 
Potential P-1 (display) bug:
Stage 2 % resets to zero once a batch of relative primes is processed. At least, that's what I think is happening - I haven't completed a full P-1 run yet.
First impressions. All testing done using the Linux version. ECM tests were done in Colab VMs running both FMA3 & AVX-512. P-1 was done with my Ryzen 5 3600. More details on these tests can be provided upon request.

ECM

Good
1. Finds new factors (haven't checked for existing factors)
2. Stage 1 & Stage 2 are faster (despite stage 2 being slightly bigger)
3. Uses more memory.

P-1

Good
1. Finds existing factors
2. Stage 2 is faster (about 50% faster)

Bad
1. B2 chosen is much too big (Pminus1). Needs a way to control B2 selection / honor the specified B2
2. Potential % display bug.
3. Saw some weird output (sort of like an infinite loop) when multiple workers entered stage 2 together
4. Potential issue with the MaxHighMemWorkers setting - more than the specified number of workers proceeded to stage 2 (CPU: Ryzen 5 3600, 6 workers)
[QUOTE=axn;566762]
1. B2 chosen is much too big (Pminus1). Needs a way to control B2 selection / honor the specified B2
2. Potential % display bug.
3. Saw some weird output (sort of like an infinite loop) when multiple workers entered stage 2 together
4. Potential issue with the MaxHighMemWorkers setting - more than the specified number of workers proceeded to stage 2 (CPU: Ryzen 5 3600, 6 workers)[/QUOTE]

1. Why do you say B2 is too big? The algorithm tries to find the B2 such that if you were to invest more CPU time you'd get the same increase in chance of finding a factor if you invested that CPU time increasing B1 or B2. I'm not saying you're not right, there could be a code bug or inaccurate estimation of P-1 B1 or B2 costs. The new B2 selection can be circumvented by leaving off the sieve depth in worktodo.txt or by Pminus1BestB2=0 in prime.txt.

2. Can you send a screenshot or a cut/paste of the screen output?

3. Same request as 2.

4. I'll try to reproduce. I did not test that.
[QUOTE=masser;566727]Could someone teach me how to find the sigma that was used to find an ECM factor from the mersenne.org website?
I'm trying to test that the new ECM in version 30.4 recovers known factors without running hundreds of curves.[/QUOTE] Have a look at the results.txt file. The sigma used is displayed along with the relevant parameters of the successful run. The following is a copy/paste from my results file upon finding a factor: (...) [Sat Dec 19 18:01:38 2020] UID: lycorn/supernova, M544837 completed 427 ECM curves, B1=250000, B2=25000000, Wg4: 0FA45DC6, AID: 091211D7E0C858E32CCEB53CD41ACA9E [Sat Dec 19 19:23:21 2020] ECM found a factor in curve #63, stage #2 [B]Sigma=7091917254266823[/B], B1=250000, B2=25000000. UID: lycorn/supernova, M544721 has a factor: 479701949456122248252251360609 (ECM curve 63, B1=250000, B2=25000000), AID: 56052C6C12AC548F71FBD1F0B34779CF (...) 
[QUOTE=masser;566727]Could someone teach me how to find the sigma that was used to find an ECM factor [B]from the mersenne.org website[/B]?[/QUOTE]
[QUOTE=lycorn;566817]Have a look at the results.txt file. The sigma used is displayed along with the relevant parameters of the successful run. The following is a copy/paste from my results file upon finding a factor: [/QUOTE] That's not what was asked. Most ECM results on mersenne.org don't display the sigma, though I'm not sure if the server has a record of it. A few ECM results (like Ryan Propper's) list the sigma after the B2. For example, on [M]1399[/M], you'll see the line "Factor: 9729831901051958663829453004687723271026191923786080297556081 / (ECM curve 1, B1=850000000, B2=15892628251516, [I]Sigma=16318523421442679557[/I])". 
[QUOTE=Happy5214;566822]Most ECM results on mersenne.org don't display the sigma, though I'm not sure if the server has a record of it.[/QUOTE]
Thanks for clarifying my question: does anyone know if the server keeps a record of the sigma value? 
[QUOTE=Prime95;566792]1. Why do you say B2 is too big? The algorithm tries to find the B2 such that if you were to invest more CPU time you'd get the same increase in chance of finding a factor if you invested that CPU time increasing B1 or B2. I'm not saying you're not right, there could be a code bug or inaccurate estimation of P1 B1 or B2 costs.[/quote]
Too long from my perspective. I'm running P-1 with (30m, 600m), which completes in about 12 hours (7+5) for a probability of 11.1%. With the new code, it is running (30m, 4080m) in about 28 hours (7+21) for a probability of 14.5%. If it would just run the given B2, it would complete in about 10 hours (7+3), which would be a much better deal for me.

However, I think I understand what you're saying. If I'm to spend 10 hours on a P-1, my best chance would not be a (30m, 600m) (7+3) split, but rather a smaller B1 and bigger B2. So, I guess the problem was that I didn't understand how the logic was selecting B2. My guess is, something like (10m, whatever the program selects) might be a better use of my compute time. Anyway, something to play with.

I would still want to reduce the B2, because of a different problem - I want to limit the number of stage 2 runs happening in parallel, because it can cause up to a 15% slowdown when too many stage 2 runs are going at once. However, this is probably too much for the s/w to take into account.

Incidentally, it appears that the formula for probability has been tweaked. It is reporting p(30m, 4080m, 69 bits) as 14.5% whereas mersenne.ca gives it as 14.0%.

[QUOTE=Prime95;566792]The new B2 selection can be circumvented by leaving off the sieve depth in worktodo.txt or by Pminus1BestB2=0 in prime.txt[/quote]
Thanks. Leaving off the TF depth is working fine! That helps.

[QUOTE=Prime95;566792]2. Can you send a screenshot or a cut/paste of the screen output?[/quote]
Sorry, don't have it. But it's easy enough to describe. With the (30m, 4080m) run, I have allocated enough memory per worker for about 6300 temps. That means about 5.3 passes are needed. EDIT: I might have misunderstood the relation between the temps and the passes. Anyway, based on the number of stage 2 primes it reported, I estimated about 5-and-a-bit passes. In the first pass, the stage 2 % goes from 0 to about 19%. However, when the second pass starts, instead of the % continuing from 20% onwards, it restarts at 0% and climbs to another 19%, then restarts again on the next pass, and so on. Looks like the % variable is getting reset on every pass. ECM is fine, btw.

[QUOTE=Prime95;566792]3. Same request as 2.[/quote]
Don't have this one either. But it was sort of an infinite loop of "x memory being used" messages shown by the final worker entering stage 2. It happened when I didn't have per-worker memory configured, and all the workers entered stage 2 in quick succession. The first one took everything; then when the second one came, there was some adjustment, and then the third came in and more adjustment, and so on until, bam, all hell broke loose. Infinite loop, and it wouldn't even respond to ^C; I had to kill -9 it. Anyway, just thought I'd let you know. But I have no interest in trying to reproduce this one.

[QUOTE=Prime95;566792]4. I'll try to reproduce. I did not test that.[/QUOTE]
Sure, thanks. Could it be a Ryzen thing? The reason I'm asking is, when I gave that setting as 2, it enforced it as 4 (i.e. allowed 4/6 in stage 2, but stopped two of them), so I'm wondering if it enforced two per L3 block or some such weirdness. Anyway, for now, I am looking at enforcing memory at the per-worker level. However, if this could be sorted out, I could safely allocate more per stage 2.

One other (potential) [B]bug[/B]: on AVX-512, doing ECM on a Wagstaff number, it didn't display the [C]x bits-per-word below FFT limit (more than 0.5 allows extra optimizations)[/C] text. Only this particular combination. I have done both Mersenne & Wagstaff ECM on both FMA3 & AVX-512 and all the other combinations show this.
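For what it's worth, the bounds trade-off described above can be reduced to one crude number. This is a sketch using only the figures quoted in this post ("probability per wall-clock hour"; it deliberately ignores the better smaller-B1/larger-B2 splits George alludes to):

```python
# The three scenarios quoted above: (total wall-clock hours, estimated
# probability of finding a factor). Labels are descriptive, not official.
runs = {
    "(30M, 600M) v30.3-style": (12.0, 0.111),
    "(30M, 4080M) v30.4 auto-B2": (28.0, 0.145),
    "(30M, 600M) v30.4 speed": (10.0, 0.111),  # if the given B2 were honored
}

for name, (hours, prob) in runs.items():
    print(f"{name}: {prob / hours * 100:.2f}% per hour")
```

By this crude metric, honoring the given B2 is the best per-hour deal of the three, which is the complaint being made, though it says nothing about the low-B1/high-B2 splits the auto-selection is actually optimizing for.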
[QUOTE=axn;566826]Too long from my perspective. I'm running P1 with (30m, 600m) which completes in about 12 hours (7+5) for a probability of 11.1%. With the new code, it is running (30m,4080m) in about 28 hours (7+21) for a probability of 14.5%.
However, I think I understand what you're saying. If I'm to spend 10 hours on a P-1, my best chance would not be a (30m, 600m) (7+3) split, but rather a smaller B1 and bigger B2.[/QUOTE] Can you tell me what the exponent, TF level, and memory settings were? On the samples I've run I'm getting a B2 of 40x to 60x B1 (8GB memory). I'm curious if the B2=136*B1 is an indication of a bug.

Yes, you understand correctly. The program is saying (if the code is bug-free) that for a 10-hour run you'd be better off with a smaller B1 and larger B2.

I've got a fix in place for the stage 2 % complete. Thanks.
I may have misunderstood something.
Back in the day, for P-1, a reasonable multiplier between B1 and B2 was 30. More recently, with the advent of reliable PRP, that multiplier was lowered to 20. And now that 'we' drop the BS extension, we go back to a multiplier of 60, with a slightly lowered B1?
[QUOTE=firejuggler;566903]
And now that 'we' drop the BS extension, we go back to a multiplier of 60, with a slightly lowered B1?[/QUOTE] I don't believe that is the case here. Now, we drop the BS extension and use an optimal multiplier, with "optimal" determined on a case-by-case basis given the available system memory and perhaps other hardware considerations.
[QUOTE=firejuggler;566903]I may have misunderstood something.
Back in the day, for P-1, a reasonable multiplier between B1 and B2 was 30. More recently, with the advent of reliable PRP, that multiplier was lowered to 20. And now that 'we' drop the BS extension, we go back to a multiplier of 60, with a slightly lowered B1?[/QUOTE] Version 30.4 is more efficient at stage 2 than earlier versions. Thus, it makes sense that the optimal B2/B1 ratio will go up.

I've not looked at wavefront PRP/LL tests, where far fewer temporaries can be allocated (because the numbers are so much larger). Stage 2 efficiency will not increase nearly as much, thus the best B2/B1 ratio will not go up as much.

Also, the 20 and 30 ratios you quote were guidelines. This is the first time we've gone to the effort of accurately predicting the best B2 value.
[QUOTE=Prime95;566892]Can you tell me what the exponent, TF level, and memory settings were? On the samples I've run I'm getting a B2 of 40x to 60x B1 (8GB memory). I'm curious if the B2=136*B1 is an indication of a bug.[/quote]
These are the ones I tested: M3699277 M3801709 M3802999 M3804763 M3804937 M3805183. TF depth was 69 bits. Memory allocated was 9.5 GB per worker. The prime-pairing % ranged about 90-93% between different blocks. If, for some reason, you want to test one of these exponents, let me know; I have save files from near the end of stage 1.

Incidentally, in the first run, I had allocated 24GB RAM, but without the per-worker restriction. The first worker entering stage 2 took all 24 GB of RAM and selected about a 200x multiplier!!

[QUOTE=Prime95;566892]I've got a fix in place for the stage 2 % complete. Thanks.[/QUOTE]
Thanks. Looking forward to the next build. If there are no glaring issues, I will start using it for my "production" P-1. I am already using the current one for ECM.
[QUOTE=Prime95;566892]I'm curious if the B2=136*B1 is an indication of a bug.[/QUOTE]
I stepped through the code with your exponent, B1, and memory, and it is operating correctly. My 40x to 60x values were using a "puny" B1 of 500000. Apparently, as B1 gets larger, it makes sense for the B2/B1 ratio to go up as well.

Looking at mersenne.ca [url]https://www.mersenne.ca/prob.php?exponent=3699277&factorbits=69&prob=12.75[/url] you might try a B1 in the 15M to 20M area with B2 auto-computed for your 10-hour runs and see which gives a higher chance of finding a factor.
[QUOTE=axn;566826]I'm running P1...[/QUOTE]
:goodposting: Very good post; kinda my issues too, but you said it better than I could! If we are to trust RDS' papers, which he always pushes in front as much as he can :razz: (albeit they refer mostly to ECM), the best choice (i.e. most efficient, wall-clock time per probability of finding a factor) is when the program spends about the same amount of time in stage 1 as it does in stage 2, regardless of how fast one stage is done compared with the other. If stage 2 becomes more efficient in the newer version, then it seems common sense that B2 will grow relative to B1. But 150 times seems a bit too much... Just saying...
I got an out-of-memory error during my test run. Amongst many other things, the Linux kernel said[INDENT]Killed process 22440 ([B]mprime[/B]) total-vm:112100576kB, anon-[B]rss:109294088kB[/B], file-rss:0kB, shmem-rss:4kB
[/INDENT]The memory limit in local.txt is set to "100000", which should be 100,000,000,000 bytes, or 104,857,600,000 bytes if MiB are used instead of MB. But apparently mprime was using 111,917,146,112 bytes, which is either 11.9 or 7 GB more than it should. That was probably triggered by stage 2 of ECM needing more RAM than before. I was running 32 threads of ECM of all sizes, from M1277 to M9,100,919. Also, I had some other programs still running, so mprime really had to observe the memory limit. So this is not necessarily a new issue, just one that surfaces now.
[QUOTE=nordi;566962]Killed process 22440 ([B]mprime[/B]) total-vm:112100576kB, anon-[B]rss:109294088kB[/B], file-rss:0kB, shmem-rss:4kB
[/QUOTE] I freed up some memory and started a second test run, which aborted after a few minutes with the kernel saying [quote][1310480.143387] Killed process 28361 (mprime) total-vm:127209968kB, anon-[B]rss:123919728kB[/B], file-rss:0kB, shmem-rss:0kB[/quote]I'll set "Memory=50000" instead of 100000 and keep testing.
[QUOTE=nordi;566968]I'll set "Memory=50000" instead of 100000 and keep testing.[/QUOTE]
Even with that setting, mprime consumes up to ~100GB instead of 50GB, i.e. twice as much as it should. I'm monitoring memory usage with[INDENT]while true; do grep "RssAnon" /proc/$(pidof mprime)/status; sleep 10; done
[/INDENT]and the highest I got so far was "RssAnon: 105544236 kB".

Edit: And a bit later, mprime segfaulted. The kernel log says
[1315248.083098] show_signal_msg: 38 callbacks suppressed
[1315248.083102] mprime[29045]: segfault at 7f7878287f26 ip 00000000004166e5 sp 00007f777eff2d60 error 6 in mprime[400000+2190000]
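In case it helps others reproduce this, here is a slightly more structured sketch of the monitoring loop above. The RssAnon field name and kB unit follow the /proc/<pid>/status format quoted in this post; the peak tracking is an addition of mine, not part of mprime or the original one-liner:

```python
import re
import time

def rss_anon_kb(status_text):
    """Extract the RssAnon value (in kB) from /proc/<pid>/status content."""
    m = re.search(r"^RssAnon:\s*(\d+)\s*kB", status_text, re.MULTILINE)
    return int(m.group(1)) if m else None

def monitor(pid, interval=10):
    """Print a line whenever the process's anonymous RSS reaches a new peak."""
    peak = 0
    while True:
        with open(f"/proc/{pid}/status") as f:
            kb = rss_anon_kb(f.read())
        if kb is not None and kb > peak:
            peak = kb
            print(f"new peak RssAnon: {peak} kB")
        time.sleep(interval)

# Parsing the sample line quoted above:
print(rss_anon_kb("RssAnon:\t105544236 kB"))  # prints 105544236
```

Run it as e.g. `monitor(int(subprocess.check_output(["pidof", "mprime"])))` on a Linux box; it only reads /proc, so it cannot disturb the run.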
3 Attachment(s)
[QUOTE=Prime95;566466]
1) rerun successful P-1 and ECM attempts to make sure the new code does not miss any factors. [/QUOTE]
I feel confident that the new code is not missing any factors. I've attached results from two machines; the new code found all of the known factors I sought. For P-1, I first used the minimal B1,B2, and so a lot of factors were missed, as B2 was ignored. On the second pass, I used B1=B2/10 and that worked to find the remaining factors. On the Haswell i5, I used 7 GB of RAM. On the Skylake i7, I used 3 GB of RAM. For ECM, I had to make several passes over the remaining candidates to find all of the factors. With each pass I increased the available memory and the code responded with higher B2 values, as expected.

[QUOTE=Prime95;566466]2) make sure save and restore work[/QUOTE]
I feel less confident about this point. See the attached file, bad_read_result.txt. I ran a short P-1 attempt on M21150827 and then something went wrong when I later tried a longer P-1 attempt. I checked to see if the error was reproducible; running the two P-1 attempts back-to-back in a clean directory with the new executable worked fine the second time. Maybe it was a fluke on my local system, or maybe others will report a similar occurrence.

I will rerun some of the longer attempts above with version 30.3, collect some timing comparisons, and report back later.
Running some P-1 in the 17M range. v30.3 was using 7-8GB RAM (out of 40GB available) and 960 relative primes.
v30.4:[quote]Optimal P-1 factoring of M17847311 using up to 40960MB of memory. Assuming no factors below 2^65 and 4 primality tests saved if a factor is found.
Optimal bounds are B1=404000, B2=27115000
Chance of finding a factor is an estimated 8.4%
...
Starting stage 1 GCD - please be patient.
Stage 1 GCD complete. Time: 4.691 sec.
D: 2310, relative primes: 4918, stage 2 primes: 1655650, pair%=95.05
Using [b]38019MB[/b] of memory.
Stage 2 init complete. 52544 transforms. Time: 57.675 sec.[/quote]Stage 2 memory initialization seems faster than I'm used to for previous versions, which is a very good thing.

Comparative result output for adjacent assignments:[quote]v30.3: M17843899 completed P-1, B1=370000, B2=12533000, E=12
v30.4: M17847311 completed P-1, B1=404000, B2=27115000[/quote]Somewhat higher B1, vastly higher B2, no Brent-Suyama?
[QUOTE=James Heinrich;567189]Stage2 memory initialization seems faster than I'm used to for previous versions[/quote]
It creates more temporaries, yet is faster, because it is not doing BS.

[QUOTE=James Heinrich;567189]Comparative result output for adjacent assignments:[/quote]
How do the runtimes and probabilities compare?

[QUOTE=James Heinrich;567189]Somewhat higher B1, vastly higher B2, no Brent-Suyama?[/QUOTE]
Higher B2 (owing to faster stage 2) & no BS are the key features.

EDIT: [quote]Assuming no factors below 2^65 and 4 primality tests saved if a factor is found.[/quote]
This is not good. For some reason it is thinking the exponent has been factored to 2^65 when it has been factored to 2^70. This means the bounds it has calculated won't be optimal.
[QUOTE=axn;567200]
This is not good. For some reason it is thinking the exponent has been factored to 2^65 when it has been factored to 2^70. This means the bounds it has calculated won't be optimal.[/QUOTE] The 2^65 value comes from worktodo.txt. A Pfactor= line in worktodo.txt will NOT calculate optimal bounds as far as this project is concerned. Pfactor optimizes bounds to maximize the number of LL/PRP tests saved per unit of P-1 time invested. Pminus1= lines in worktodo.txt optimize the B2 bound to maximize the chance of finding a factor per unit of P-1 time invested (the user is responsible for picking the B1 bound). I know - it is all very confusing.
[QUOTE=Prime95;567201]The 2^65 value comes from worktodo.txt.[/QUOTE]
Ok, so James gave it wrong values? Regardless, this affects the optimality of bounds (by affecting the probability calculations). 
[QUOTE=axn;567202]Ok, so James gave it wrong values? Regardless, this affects the optimality of bounds (by affecting the probability calculations).[/QUOTE]Yes, I gave it "wrong" values on purpose, specifically to affect the bounds. Using [c]Pfactor[/c] worktodo lines, specifying the PrimeNet-default TF level and a large number of tests-saved (anywhere from 2-10 depending on exponent size) forces Prime95 to choose extra-large bounds that I deem suitable for redoing P-1 work. This would of course be inappropriate at 100M/wavefront P-1 work, but I think entirely appropriate at <20M.
BTW, at George's request, my [url=https://www.mersenne.ca/pm1_worst.php]Worst P-1[/url] page now includes your choice of [c]Pfactor[/c] or [c]Pminus1[/c] worktodo formats. Note that I make no claim that Prime95 will select the same bounds from both variants, just that either is a reasonably suitable starting point for P-1 redo work.
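For illustration, the "wrong values on purpose" trick above could be scripted like this. The helper name is mine, not part of any tool; the 2^65 level and 4 tests-saved echo the earlier M17847311 output, and k=1, b=2, c=-1 is the standard worktodo encoding of 2^p-1:

```python
def pfactor_redo_line(exponent, claimed_tf=65, tests_saved=4):
    """Build a Pfactor= worktodo line (k=1, b=2, c=-1 selects 2^p-1).
    Understating the TF level and inflating tests-saved pushes Prime95
    toward extra-large P-1 bounds, per the post above."""
    return f"Pfactor=1,2,{exponent},-1,{claimed_tf},{tests_saved}"

print(pfactor_redo_line(17847311))  # prints Pfactor=1,2,17847311,-1,65,4
```

As axn notes in the next post, with v30.4's Pminus1 auto-B2 this workaround is no longer necessary, and understating the TF depth there would actually hurt the B2 choice.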
I happened to have a number of other programs open at one point, and when v30.4 started up stage 2, Windows complained about low memory. This seems to have gotten Prime95 into a "stuck" state, where the worker window says "P-1 stage 2 init" but it just sits there at 100% of a single core indefinitely (I noticed it after it had run for 53 minutes getting nowhere). I force-closed Prime95 (it wouldn't close normally) and restarted it, but the hang is reproducible. I've sent George the save file for debugging.

[QUOTE=James Heinrich;567204]Yes, I gave it "wrong" values on purpose[/quote]
You no longer need to do this. Even "Pminus1" will calculate an optimal value - you just need to pick a B1 for it. But giving it the correct TF depth is essential, else the choice will be suboptimal. Actually, I wanted you to modify the calculator to take into account the improved stage 2 and give the optimal B1/B2 for a given probability / GHz-day target.

[QUOTE=James Heinrich;567205]I happened to have a number of other programs open at one point, and when v30.4 started up stage 2, Windows complained about low memory. This seems to have gotten Prime95 into a "stuck" state, where the worker window says "P-1 stage 2 init" but it just sits there at 100% of a single core indefinitely (I noticed it after it had run for 53 minutes getting nowhere). I force-closed Prime95 (it wouldn't close normally) and restarted it, but the hang is reproducible. I've sent George the save file for debugging.[/QUOTE]

This sounds almost similar to the "infinite loop" I encountered. Anyway, currently I'm using hard per-worker limits to avoid this.
1 Attachment(s)
There appears to be a crash when running an assignment that has a save file from a previous version. I've attached a picture of the output prior to it crashing after it completes stage 2 init.
Note, this is with the worktodo line [code]Pfactor=<aid>,1,2,28009823,-1,70,3[/code]Trying to see if it crashes after a fresh start: nope, it seems to work fine at this point.
[QUOTE=Dylan14;567222]There appears to be a crash when running an assignment that has a save file from a previous version.[/QUOTE]But not necessarily always - when I upgraded mid-assignment it also said "cannot continue stage 2", added a bit more B1, and then completed stage 2 without problem.
(It did get stuck later on a different assignment, as described above, so it's not entirely stable.)
Just mentioning I found my first factor with v30.4:
[M]M17840447[/M] has a 75.272-bit (23-digit) factor: [url=https://www.mersenne.ca/M17840447]45606749097226437406729[/url] (P-1, B1=404000, B2=27105000)
Win64 version 30.4 build 3: [url]https://www.dropbox.com/s/1nbpfh37tzd57gb/p95v304b3.win64.zip?dl=0[/url]
I fixed 5 bugs found by you folks:
1) Bad checksum when submitting ECM results manually.
2) There were cases where prime95 did not reduce stage 2 memory to conform to current memory settings.
3) Possible infinite loop during ECM stage 2 init (and maybe P-1 too).
4) Rare memory corruption when refiguring a stage 2 plan.
5) Percent complete in stage 2 corrected.

I'm not convinced these explain all the undesirable behaviors described by James, nordi, and axn. Give it a try and let me know of any troubles. I may not be able to make a Linux build until after Christmas.
Linux64 30.4 build 3: [url]https://www.dropbox.com/s/9yadeo8nn9aeajw/p95v304b3.linux64.tar.gz?dl=0[/url]

[QUOTE=Prime95;567284]
2) There were cases where prime95 did not reduce stage 2 memory to conform to current memory settings. [/QUOTE]
I tried again on Linux and mprime kept running much longer than before, but it was still stopped by the kernel's OOM killer. It used 121GB when configured to use just 50GB. One thing I noticed is that stage 2 init frequently needs a long time (~1 minute instead of 5 seconds): [quote][Worker #22 Dec 27 13:49] Stage 2 init complete. 62496 transforms, 1 modular inverses. Time: 59.057 sec.
[Worker #30 Dec 27 13:49] Stage 2 init complete. 62496 transforms, 1 modular inverses. Time: 54.013 sec.
[Worker #28 Dec 27 13:50] Stage 2 init complete. 62496 transforms, 1 modular inverses. Time: 57.211 sec.[/quote]I also saw this in previous versions, but only during startup. It gives the impression that the threads are competing/waiting for a global lock. Maybe that waiting time confuses the allocation logic?
[QUOTE=nordi;567446]I tried again on Linux and mprime kept running much longer than before, but was still stopped by the kernel's OOM killer. It used 121GB when configured to use just 50
[/QUOTE] Can you describe your setup? 32 workers? 50GB memory? What's in worktodo.txt? MaxHighMemWorkers? Can you provide the screen output (say, 200 lines) from around the time the OOM occurred? I'll try to reproduce on my dinky quad-core.
[QUOTE=Prime95;567454]Can you describe your setup? 32 workers? 50GB memory. Worktodo.txt is? MaxHighMemWorkers?
[/QUOTE] Yes, 32 workers with "Memory=50000" in local.txt. MaxHighMemWorkers is not set in my config. The machine has 128GB of RAM. [QUOTE=Prime95;567454] Can you provide the screen output (say 200 lines of output) at the time the OOM occurred?[/QUOTE]That and the worktodo were sent via PM.
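For reference, both settings in play here live in local.txt. The sketch below echoes nordi's Memory line; the MaxHighMemWorkers value is hypothetical (it is unset in his config), but capping it is the usual way to keep many workers from entering memory-hungry stage 2 at the same time:

[CODE]Memory=50000
MaxHighMemWorkers=8[/CODE]

With MaxHighMemWorkers unset, all 32 workers may run stage 2 simultaneously, so even a correctly enforced split of the 50GB budget leaves little headroom if any worker overshoots its share.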
I've seen an issue where P1 continues past 100% in stage 1 until the worker is stopped. However, this has only happened twice, and I have no idea whether it's related to any of the above bugs.

[M]M32159551[/M] has a 77.841-bit (24-digit) factor: [URL="https://www.mersenne.ca/M32159551"]270719245854611997909647[/URL] (P1,B1=1100000,B2=61600000)
Found it with 30.4b3 using about 8169MB of RAM. Also, maybe move the discussion about 30.4b3 into its own thread?
Initial observations
All of my current work is N/A.
As soon as I started Prime95 it fetched 36 ECM assignments (that is my default). However, there were only 18 new ECM assignments in Worker #1 and none anywhere else, while all 36 appeared on my Assignments page.

It no longer reports the number of relative primes processed in Stage 2, only percent complete.

In my case it chose B2 = 48*B1 = 48,000,000 (I had 20*B1). I have 2 workers with 2 CPUs each and 4000MB of RAM allocated. With the prior B1/B2 it was taking about 9 hours per assignment. Based on preliminary results it appears it will take 3 hours for Stage 1 and 9 hours for Stage 2. More to follow.

Part 2: The 2nd worker finished Stage 1 and of course split the RAM with Worker 1. However, this assignment was given B2 = 43*B1. The exponents are very close: 41,778,xxx and 41,780,xxx. Both would have had a prior P1 with B1=B2=685,000 (or very close).

Part 3: So far it seems that: if Worker x is still on Stage 1 when Worker y finishes Stage 1, Worker y gets B2 = 48*B1 for Stage 2. If Worker x is then ready for Stage 2 while Worker y is still on Stage 2, Worker x gets B2 = 41 or 43*B1. Is that because it detects less RAM available (i.e., the workers now have to share RAM)?
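The Part 3 observation, that the chosen B2 multiplier shrinks when workers must share RAM, can be caricatured with a toy cost model (entirely hypothetical; prime95's real planner weighs transform counts and factor probabilities, which is why the observed drop is only 48x to 41-43x rather than linear):

```python
def optimal_b2_multiplier(avail_mb, solo_mb=4000.0, max_mult=48, min_mult=20):
    """Toy model: scale the B2 multiplier with the fraction of the
    solo-worker RAM budget actually available to this worker."""
    frac = min(1.0, avail_mb / solo_mb)
    return max(min_mult, round(max_mult * frac))

# One worker with all 4000MB gets the big multiplier; two workers
# splitting it get a smaller B2, qualitatively matching the observation.
print(optimal_b2_multiplier(4000))  # 48
print(optimal_b2_multiplier(2000))  # 24
```

The point is only the direction of the effect: less available memory means fewer stage 2 buffers, which makes a large B2 less efficient, so the planner picks a smaller multiplier.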
Format suggestion
Any chance you could display this on 2 lines? I believe on most of our windows it will scroll off into the Right Abyss.
[CODE]Dec 29 18:17] With trial factoring done to 2^74, optimal B2 is 41*B1 = 32800000. If no prior P1, chance of a new factor is 4.57%[/CODE] Like this instead... [CODE]Dec 29 18:17] With trial factoring done to 2^74, optimal B2 is 41*B1 = 32800000. Dec 29 18:17] If no prior P1, chance of a new factor is 4.57%[/CODE] 
[QUOTE=petrw1;567696]Any chance you could display this on 2 lines;[/QUOTE]
Will do 
[QUOTE=petrw1;567595]As soon as I started Prime95 it fetched 36 ECM assignments (that is my default). However there were only 18 new ECM assignments in Worker #1; none anywhere else.[/quote]
Weird. That code hasn't changed. [quote]It no longer reports number of relative factors processed in Stage 2; only percent complete.[/quote] Relative primes are no longer processed in passes. They are all done at once as prime95 steps from B1 to B2 in steps of size D. [quote] If Worker x is still on Stage 1 when Worker y finishes Stage 1 it gets a B2=48xB1 for Stage 2. If Worker x then is ready for Stage 2 while Worker y is still on Stage 2 then Worker x gets B2=41 or 43xB1. Is that because it detects less RAM available (ie. the workers now have to share RAM)?[/QUOTE] Your guess is correct. 
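The single-pass scheme described above, stepping once from B1 to B2 in increments of size D with every relative prime handled at each step, can be sketched as follows (illustrative only; the real code pairs primes and works with FFT transforms rather than plain integers):

```python
from math import gcd

def primes_between(lo, hi):
    """Simple sieve of Eratosthenes: primes p with lo < p <= hi."""
    sieve = bytearray([1]) * (hi + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(hi ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [p for p in range(lo + 1, hi + 1) if sieve[p]]

def stage2_blocks(b1, b2, d):
    """Write each stage-2 prime as p = k*D + r and group by k.
    One pass over increasing k covers every prime, and the residues r
    are automatically coprime to D, which is why no separate
    'relative primes processed' passes are reported anymore."""
    blocks = {}
    for p in primes_between(b1, b2):
        blocks.setdefault(p // d, []).append(p % d)
    return blocks

# Tiny example with made-up bounds: primes in (10, 50] with D = 12.
blocks = stage2_blocks(10, 50, 12)
assert all(gcd(r, 12) == 1 for rs in blocks.values() for r in rs)
print(blocks)
```

Each key is one step of the single B1-to-B2 walk; the values are the relative primes dealt with at that step.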