[QUOTE=Prime95;527836]A look at the history [url]https://www.mersenne.org/report_exponent/?exp_lo=97388611&exp_hi=&full=1[/url] shows that GPUs had only taken the exponent to 2^74[/QUOTE]
Yup... A few weeks ago someone with some ***serious*** compute behind them reserved tens of thousands of P-1 assignments. This has resulted in a situation where P-1'ing is now often done well before TF'ing has been done "optimally". Not the end of the world -- the TF'ers will take any candidates still standing (read: not factored) up to the optimal TF level before they're handed out to LL'ers. |
P-1 found a factor in stage #1, B1=905000.
UID: Jwb52z/Clay, M97453399 has a factor: 163141033136105093126137 (P-1, B1=905000), 77.110 bits. |
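As a side note, the bit size quoted after each factor is just the base-2 logarithm of the factor, and any factor of M[I]p[/I] must have the form 2[I]kp[/I]+1. A quick sanity check in Python on the result above:

```python
import math

# Factor reported above for M97453399
p = 97453399
factor = 163141033136105093126137

# The "bits" figure in the report is log2 of the factor
bits = math.log2(factor)
print(f"{bits:.3f} bits")  # roughly 77.11

# Every factor of M_p is of the form 2*k*p + 1
print((factor - 1) % (2 * p) == 0)
```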
[QUOTE=chalsall;527837]Yup... A few weeks ago someone with some ***serious*** compute behind them reserved tens of thousands of P-1 assignments...[/QUOTE]
[B]James Heinrich[/B] has this going on with his project. His latest stats show he has nearly 2.6 million reserved exponents. That's simply ridiculous. His 10,000-assignment limit is being bypassed, I believe. A looping batch file can do this. I know because I tried it, but with only 10 exponents per pass. He has a fetch example on his page. Modify it a little, put a time-out of a few seconds in the batch, then run it. That's all it takes. The only way I know of that he could prevent this is by tracking IP addresses. Other than that, I have no idea. |
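For the curious, the looping fetch described above is nothing exotic. Here is a minimal Python sketch of the idea; the URL and parameters are placeholders, [I]not[/I] the real mersenne.ca API:

```python
import time
import urllib.request

# Hypothetical work-fetch endpoint -- a placeholder, not the real API.
FETCH_URL = "https://example.org/fetch_work?count=10"

def http_fetch():
    """Request one small batch of assignments, with a short timeout."""
    with urllib.request.urlopen(FETCH_URL, timeout=10) as resp:
        return resp.read().decode()

def fetch_loop(fetch, passes=5, pause=3.0):
    """Call `fetch` repeatedly, pausing a few seconds between passes.
    On a network error, log it and carry on instead of dying."""
    batches = []
    for _ in range(passes):
        try:
            batches.append(fetch())
        except OSError as err:
            print(f"fetch failed: {err}")  # log and keep going
        time.sleep(pause)
    return batches

# fetch_loop(http_fetch)  # would grab 5 batches of 10 assignments
```

The same structure also covers the "flaky network" case described further down the thread: a failed pass is logged and the loop simply moves on to the next one.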
It's quite different for the TF1G project, where assignments can still take under a second each... and anything that is reserved will expire within 10 days. And yes, I reserve quite a bit more than 10k at a time, with a fetch loop much like you described, and have been doing it for months now. It is not rocket science, really. The reason I do it like this is that both machines have a bit of a flaky network connection, one more than the other, so if there is a network outage, my script will just log what happened and continue with the next block. If I run the provided script as is (fetch work, run it through mfaktc, report results, rinse and repeat) there will be times when mfaktc runs out of work and the card will then run idle until network connectivity is restored. It's quite rare that I don't finish what I reserve. A couple hardware failures, some power outages and a few silly human errors here and there, but stuff happens.
Sure, there would be reason to frown upon this behaviour if someone did big reservations without a proven track record, but the way things are running now, I don't see the problem? I haven't logged the total amount of work done, and as you know, there are no credits for TF1G work anyway, but split between two cards I should be above 30 million factorization attempts already, mostly from 67 to 68 bits. I've only kept logs on one machine, and even there only since June 19th; the statistics are now 209384 factors found on 14074599 attempts. Of all the assigned exponents, I seem to have about 1.4 million, but that's only 8 days of work for those two cards. Too much at a time? Maybe... but is it really a problem for the >1G work? |
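As an aside, those statistics line up nicely with the standard GIMPS rule of thumb that the chance of a Mersenne number having a factor between 2^[I]b[/I] and 2^([I]b[/I]+1) is roughly 1/[I]b[/I]. A quick check against the numbers quoted above (mostly 67 to 68 bits):

```python
# Observed TF1G results from the post above (since June 19th)
factors_found = 209384
attempts = 14074599
observed = factors_found / attempts

# Rule of thumb: P(factor in [2^b, 2^(b+1))) is about 1/b, here b = 67
expected = 1 / 67

print(f"observed {observed:.2%}, heuristic {expected:.2%}")
```

The observed rate of about 1.5% is within a few hundredths of a percent of the 1/67 heuristic.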
[QUOTE=nomead;528228]but is it really a problem for the >1G work?[/QUOTE]Not at all. You're welcome to take as many assignments as you can reasonably complete in a reasonable time (standard assignments expire after 10 days, but you should generally be able to return them much sooner since they're so short-running).
The large number of assignments out at any given time is largely due to a single user who has a large amount of GPU power available, but offline. He has special dispensation to get assignments for longer than 10 days, and grabs a million or so assignments and returns them approx once per month. You can see on the graph at the bottom of the [URL="https://www.mersenne.ca/tf1G.php"]page[/URL] the monthly spikes of about 200,000 GHz-days of results being submitted at once. You are welcome (indeed encouraged) to loop through assignment requests to get as many as you want, the 10k limit is just to be nice to my server and ensure assignment requests are returned in a timely manner. |
[QUOTE=nomead;528228]...Of all the assigned exponents, I seem to have about 1.4 million, but that's only 8 days of work for those two cards. Too much at a time? Maybe... but is it really a problem for the >1G work?[/QUOTE]
I really did not understand the logic for getting so many, but you clearly have the capability of running these en masse. A problem for the >1G work? No, I don't see any. James doesn't see any either, so let it hammer away. :smile: |
P-1 found a factor in stage #1, B1=905000.
UID: Jwb52z/Clay, M97735681 has a factor: 22459699301317591337180449 (P-1, B1=905000), 84.216 bits. |
P-1 found a factor in stage #2, B1=700000, B2=12425000.
UID: Jwb52z/Clay, M92430739 has a factor: 1439732501488765602199388883281 (P-1, B1=700000, B2=12425000), 100.184 bits. |
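Reports like these come from the P-1 method: stage 1 raises 3 to an exponent E packed with every prime power up to B1 (times 2p, since factors of M[I]p[/I] have the form 2[I]kp[/I]+1) modulo M[I]p[/I], then takes a GCD; stage 2 extends the reach to one extra prime up to B2. A toy stage-1 sketch on a small Mersenne number, not the optimized code Prime95 or CUDAPm1 actually use:

```python
from math import gcd

def small_primes(limit):
    """Primes up to `limit` via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def pminus1_stage1(p, B1):
    """Toy P-1 stage 1 for M_p = 2^p - 1: build an exponent E from
    2*p and every prime power <= B1, compute 3^E mod M_p, take a GCD."""
    N = (1 << p) - 1
    E = 2 * p  # factors of M_p are of the form 2*k*p + 1
    for q in small_primes(B1):
        qk = q
        while qk * q <= B1:
            qk *= q  # include the full prime power <= B1
        E *= qk
    x = pow(3, E, N)
    return gcd(x - 1, N)

print(pminus1_stage1(37, 100))  # finds the factor 223 of 2^37 - 1
```

It works here because 223 - 1 = 2·3·37 is entirely covered by E even with B1 = 100; the real runs above need B1 near a million because the k in 2[I]kp[/I]+1 must be similarly smooth.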
[CODE]UID: storm5510/7700_Kaby_Lake, M5789947 has a factor: 113935630231502890065274318991 (P-1, B1=720000, B2=11520000, e=12, n=324K, aid=6A11....5607 CUDAPm1 v0.22)[/CODE]30 digits, 96.524 bits. My personal best is 39 digits. This one is worth a mention. I do not often find one of this size.
|
P-1 found a factor in stage #2, B1=705000, B2=12513750.
UID: Jwb52z/Clay, M92655257 has a factor: 92250276233360973210007799 (P-1, B1=705000, B2=12513750), 86.254 bits. |
I have been doing a fair bit of P-1 recently and had yet to find a factor. I started to wonder what was going on. And this morning I see that a new (newly 'infected') machine found a 112-bit factor of a number in the 95,000,000 range. :fusion:
Meanwhile: another machine running some ECM found an 85.8-bit co-factor of a number in the 255,000 range. (6th factor overall for that number.) :lavalamp: I am trying to raise my lifetime ECM ranking (goal: 99th percentile) and my current P-1 ranking. |
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.