[QUOTE=nordi;523358]The following numbers are shown as "Remaining cofactor is PRP status unknown" on mersenne.ca, even though PRP checks were made in 2017/2018:
[URL]https://www.mersenne.org/report_exponent/?exp_lo=18261863&full=1[/URL] [URL]https://www.mersenne.org/report_exponent/?exp_lo=10172717&full=1[/URL] [URL]https://www.mersenne.org/report_exponent/?exp_lo=10969577&full=1[/URL] What they all have in common is that one of the PRP checks was done by "Gabriel Lignelli" in July 2018. Any idea why this is happening?[/QUOTE] Not sure if this has anything to do with the issue, but I also noticed there is no history of who/when found the smallest factors on these. Maybe they were known before more detailed records were being recorded? |
[QUOTE=nordi;523358]The following numbers are shown as "Remaining cofactor is PRP status unknown" on mersenne.ca, even though PRP checks were made in 2017/2018[/QUOTE]They were done with an older version of Prime95, which did not format the output correctly when doing PRP against a long list of known factors. The data has since been fixed up on PrimeNet, but the fixed results were not new results, so they never made it into my data.
I have manually verified and fixed the results for the 3 exponents you quoted. [QUOTE=hansl;523368]I also noticed there is no history of who/when found the smallest factors on these. Maybe they were known before more detailed records were being recorded?[/QUOTE]That is exactly the case. Detailed records were not kept before 2008 or thereabouts.
I'm on the final stretch of my factoring effort to 55 bits now; should be done in the next day or two. Just a few more in the 4G, 7G, and 8G ranges left to finish. :whee:
Also, I have another small nitpick about the many factors page again: [url]https://www.mersenne.ca/manyfactors.php[/url] When clicking any of the headers to sort by, or navigating to the next page of 50, the other filters (min/max exponent range and # of factors) get dropped from the query. Likewise, updating those filter fields resets the sort-by preferences. I can manually edit the URLs to get these features to work together, but it would be nice if clicking around worked too.
[QUOTE=hansl;523492]Also I have another small nitpick about the many factors page again
..it would be nice if clicking around worked too.[/QUOTE]It should work as intended now. Thanks for pointing that out. |
[QUOTE=James Heinrich;523504]It should work as intended now. Thanks for pointing that out.[/QUOTE]
It looks like the sorting parts work now, but clicking "Next 50 >>" still clears the min/max filters. |
[QUOTE=hansl;523512]It looks like the sorting parts work now, but clicking "Next 50 >>" still clears the min/max filters.[/QUOTE]I've included that too now, thanks.
[QUOTE=James Heinrich;523515]I've included that too now, thanks.[/QUOTE]
Thank you for being so responsive! One last thing I'm still seeing: the Next/Prev links seem to reset the "s" (sort by) and "o" (order, ascending/descending) parameters to the default "s=p&o=d". Sorry to be so nitpicky, but the improvements are greatly appreciated!
Sorry, copy-paste-didn't-check...
I just fixed what I think was the error, but again didn't test fully, you can tell me if it's still broken :smile: |
[QUOTE=hansl;523518]Thank you for being so responsive!
... Sorry to be so nitpicky, but the improvements are greatly appreciated![/QUOTE] QA by another with attention to detail can be a great help to the coders and maintainers. And you are using positive tone and expressing gratitude. ATTABOY! :thumbs-up: |
[QUOTE=kriesel;523530]QA by another with attention to detail can be a great help to the coders and maintainers.[/QUOTE]I appreciate users who take the time to report problems in detail that make it easy for me to fix, so thank you to all who do. :smile:
James, as usual, very responsive!
He still has some work to do to beat Scott, who was fixing the bugs just before we reported them :razz: Anyhow, James, just to let you know that I synced all the factors from your DB up to and including 2018, downloaded them all, and saved them in a corner of an HDD (the almost-4GB torrent). I pray that you never need them back from me! (But just in case, because life sucks sometimes, you know I have them.)
Finished up my factoring to 55 bits, and the results are in: about 38.3M new factors, with about 179K being first-time factors.

The total new-factor count is not exact because I grabbed the export of known factors a few days after I began trial factoring, so I can't filter out which ones were already known before I submitted them in the first few days; I extrapolated based on other results in those ranges. The first-time-factor total is based on the summary of the past 55 days from 4G up, which should be mostly from me: [url]https://www.mersenne.ca/status/tf/0/55/0/400000[/url] (link only really valid for today, 8/12, since it's a relative date-based report). Most of those new factors (167K of 179K) were in the exponent range around 7875M-8000M, where it looks like factors between 2^50 and 2^53 got skipped on the first run.

Also, James added a full 10G TF progress graph a bit after I started, but I don't think it was "announced", and it doesn't show directly in the right-hand side menu; you can find the link at the bottom of this page: [url]https://www.mersenne.ca/graphs/[/url] Here's the direct link for today: [url]https://www.mersenne.ca/graphs/factor_bits_10G/factor_bits_10G_20190812.png[/url] I had submitted all my results by the time this image was generated, but the factor queue was still working through some of them, so tomorrow's will show the final result. The main visual difference I've been watching is that the greenish-colored bit ranges between about 40 and 55 have expanded to take up a larger percentage of the vertical space of known factors. Compare the above with the first day the 10G graph was generated: [url]https://www.mersenne.ca/graphs/factor_bits_10G/factor_bits_10G_20190708.png[/url] (Some of the ranges I worked on were already completed at that point too.)

So, now that that's done... anyone up for helping to take the whole range from 55 to an almost completely arbitrary 62 bits? :devil: (62 is the lower limit that the summary page counts/displays for >4G.) If I've done my math right, this would be about 128 times the effort that I just did.

I am wondering if I can improve the PARI script's efficiency by further sieving out composite factor candidates. I've also been thinking there may be some way to bulk-test multiple exponents at once (those belonging to the same "class"), but I'm still not sure the math works out for that to be possible or beneficial. These are just vague ideas so far; I still need to sit down and take the time to really develop them. Not sure if there's any real interest in this from others, but maybe a catchy sub-project name would help: I'm thinking either "OBF" or "PARIto62" :smile:
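For anyone wanting to check that "128 times" estimate, here's a quick back-of-envelope sketch in Python. The doubling-per-bit cost model is my own simplification (not anything from the project's tooling), but it's the standard rule of thumb for trial factoring:

```python
# Back-of-envelope model (my own, not from the project's code): the cost
# of trial factoring bit level b is roughly proportional to 2**b, so
# each extra bit level costs about as much as all previous levels combined.

def tf_effort(lo_bits, hi_bits):
    """Relative cost of extending TF from 2^lo_bits to 2^hi_bits."""
    return sum(2**b for b in range(lo_bits + 1, hi_bits + 1))

done = tf_effort(0, 55)    # everything up to 55 bits, already finished
todo = tf_effort(55, 62)   # the proposed extension to 62 bits

print(todo / done)              # roughly 127, i.e. "about 128 times the effort"
print(tf_effort(55, 56) / done) # bit level 56 alone costs about as much as all of 1-55
```

The second ratio is James's point in the reply below this post: pushing just one more bit level, 55 to 56, is already nearly the same amount of work as everything done so far.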
[QUOTE=hansl;523614]anyone up for helping to take the whole range from 55 to an almost completely arbitrary 62bits?[/QUOTE]I think it would perhaps be saner to simply try to push it all to 56-bit to start. Which sounds insignificant but is nearly exactly the same effort you just did from 1-55. Of course, if you stop looking at [i]all[/i] exponents and look efficiently for new factors (skip already-factored exponents, stop once you find a factor) then it becomes a much easier task.
[URL]https://www.mersenne.ca/exponent/12732431[/URL] produces this error:
[CODE]PHP error encountered on line functions.local.inc.php:745, admin has been alerted: Undefined offset: 0[/CODE]Everything else on mersenne.ca works fine, though. The page for this exponent worked just a few days ago; maybe my recent PRP result broke something? :unsure:
[QUOTE=nordi;523649][URL]https://www.mersenne.ca/exponent/12732431[/URL] produces this error:[/QUOTE]
The same happens for [URL]https://www.mersenne.ca/exponent/19479277[/URL] and [URL]https://www.mersenne.ca/exponent/19822043[/URL] |
It would happen on many, many exponents. It was just a display artifact; other than the error message, no incorrect data should have been displayed. Should be fixed now, thanks.
Hey James, quick question about the exports. At the top of the page it says the current year's archives are updated every weekend; is this the same for the "Factors n,k" files too?
Would it be much work to add some "Last Updated" info for those on the page? I wanted to do a little final check of my previous effort based on those files; I'm just not sure how long I should wait to make sure all the data gets included.
[QUOTE=hansl;523657]Would it be much work to add some "Last Updated" info for those on the page?[/QUOTE]None at all; the info's already there. If you mouse over any displayed file size on the export page, you'll see a tooltip with the file-modified time (as well as the exact byte size). The export process is actually pretty intensive: it takes the better part of 3 hours to run every Sunday morning, and should normally be done by 0400h UTC.
[QUOTE=James Heinrich;523659]the info's already there[/QUOTE]
Oh, can't believe I missed that. :doh!: Thanks |
I have noticed that sending results for completed exponents takes a lot longer than it used to; recently it has typically been in the range of 20 to 30 seconds. Requesting work is nearly immediate, as always. I am wondering if this is something on my end? :question:
[QUOTE=storm5510;523911]I have noticed the sending results time for completed exponents is a lot longer. Recent typical has been in the range of 20 to 30 seconds.[/QUOTE]There was an issue that could exhibit this symptom that I should have resolved about 2 days ago. Are you reporting something from the recent past, or something that's still happening now?
[QUOTE=Mark Rose;485793]I was able to download full copies fine yesterday. They decompressed without issue.
Are the dumps being updated weekly now? What time and day of week (UTC?) are the torrents being updated? I'll cron downloading them to seed them automatically.[/QUOTE] So these have been 404'ing since the beginning of the year. |
[QUOTE=Mark Rose;524204]So these have been 404'ing since the beginning of the year.[/QUOTE]What, in specific, are you getting a 404 error on?
[QUOTE=James Heinrich;524213]What, in specific, are you getting a 404 error on?[/QUOTE]
Previously I was fetching these URLs weekly for backup purposes, to avoid an SoB situation: [url]https://www.mersenne.ca/known_factors.sql.gz[/url] [url]https://www.mersenne.ca/prime_numbers.sql.gz[/url] Around New Year's they stopped working. I only noticed yesterday; I haven't had much time to play with Mersenne things lately.
All the export data now lives at [url]https://www.mersenne.ca/export/[/url]
[QUOTE=James Heinrich;524274]All the export data now lives at [url]https://www.mersenne.ca/export/[/url][/QUOTE]
Gotcha. Thanks! |
[QUOTE=James Heinrich;523913]There was an issue that could exhibit this symptom that I should have resolved about 2 days ago. Are you reporting something from the recent past, or something that's still happening now?[/QUOTE]
It's in the six to seven second area now. I do not see this as an issue. :smile: |
I took some chunks of exponents through mfaktc with bit levels 2 to 64 (since it's not significantly slower than 55 to 64) and Stages=0 to catch 'em all. Then I double checked against the export file of known factors. Weird luck, but the first chunks I did were all fully factored already (2900-2908 or so). Then I looked elsewhere, in the 2850-2851 million range, and wow, 2326 new factors. Of course, all were on exponents that had smaller factors found earlier. One chunk took about 1h 40min on an RTX 2080.
Should I continue? Just so that I'm not duplicating anyone else's effort or "stepping on toes" or whatever... Also retested 2900-2901 for bit levels 64-67, mostly for fun. No new factors found as such, but I found out that mfaktc doesn't care if the factor found is prime. For example, M2900030879 has a factor 134562865615456511047, but that is 5800061759 * 23200247033, both of those are already in the database. 32 cases between 2900-2901. |
[QUOTE=nomead;525468]I took some chunks of exponents through mfaktc with bit levels 2 to 64 (since it's not significantly slower than 55 to 64) and Stages=0 to catch 'em all.[/QUOTE]
While Stages=0 is fine and good for a large range of bits, I hope you also mean that you set StopAfterFactor=0, or else you won't catch 'em all.

[QUOTE=nomead;525468]Should I continue? Just so that I'm not duplicating anyone else's effort or "stepping on toes" or whatever... [/QUOTE]It's as good a use as any for a GPU, in my opinion :smile: I would be surprised and very interested if you find anything below 55 bits, since I recently double-checked all of that. But if the difference in compute time is negligible, a double/triple check doesn't hurt, I suppose.

[QUOTE=nomead;525468] Also retested 2900-2901 for bit levels 64-67, mostly for fun. No new factors found as such, but I found out that mfaktc doesn't care if the factor found is prime. For example, M2900030879 has a factor 134562865615456511047, but that is 5800061759 * 23200247033, both of those are already in the database. 32 cases between 2900-2901. [/QUOTE]I don't know a ton about the internal workings of mfaktc, but I think if the smallest factor-of-a-factor is greater than your max sieve prime, it's not going to catch the compositeness in those cases. Also, since each factor must be at least 2p+1 (k=1 at a minimum), this only becomes an issue once you factor past log2((2p+1)^2) ~= 2*(log2(p)+1) (64.866 for the 2900M range). Furthermore, if a Mersenne number were divisible by the square of a prime, you would have found a new Wieferich prime, which is highly improbable (but would be a monumental find!), so we can basically assume that k[SUB]1[/SUB] != k[SUB]2[/SUB]. So if k=1 for one of the factors, the lowest possible other factor is k=4, and the actual limit is log2((2*p+1)*(2*4*p+1)) ~= 2*(log2(p)+2). So for the 2900M range, you won't find composite factors below 66.866 bits. And looking at your example factor, that is exactly what you've found! (k=1 and k=4)
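Since the claim above is fully concrete, it can be checked numerically. This Python snippet is my own verification, relying only on the standard fact that any factor of M(p) = 2^p - 1 has the form 2kp+1; it confirms the reported composite is exactly the k=1 and k=4 factors multiplied together, at about 66.87 bits:

```python
import math

# My own sanity check of the arithmetic above, using the standard fact
# that any factor of M(p) = 2^p - 1 has the form 2*k*p + 1.
p = 2900030879
reported = 134562865615456511047   # the composite "factor" mfaktc reported

f1 = 2 * 1 * p + 1                 # k = 1
f2 = 2 * 4 * p + 1                 # k = 4
assert f1 * f2 == reported         # exactly the k=1 times k=4 product

# Both prime parts genuinely divide M(p): 2^p mod f == 1
assert pow(2, p, f1) == 1
assert pow(2, p, f2) == 1

# The smallest possible composite factor here is ~66.87 bits,
# matching the 2*(log2(p)+2) bound derived above.
print(math.log2(reported))
```

So any factor mfaktc reports below that bit size for this range is automatically prime, with no extra primality check needed.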
[QUOTE=nomead;525468]Should I continue? Just so that I'm not duplicating anyone else's effort or "stepping on toes" or whatever...[/QUOTE]In theory, [i]all[/i] factors below 2[sup]55[/sup] should be known. Beyond that, TF effort has usually been stopped after the first factor was found, so there's undoubtedly a lot of previously-unknown factors between 55-64 for exponents with one known factor in that range. By all means, submit whatever new factors you find, but please report here if you happen to find any [i]new[/i] factors smaller than 2[sup]55[/sup], since there shouldn't be any undiscovered ones.
If you have comprehensively TF'd a significant contiguous range to a fixed limit without stopping after finding a factor (such that the range is fully checked up to the specified bit limit), please send me an email and I can update the fully-checked-TF limit for those exponents.
[QUOTE=hansl;525470]While Stages=0 is fine and good for a large range of bits, I hope you also mean that you set StopAfterFactor=0 or else you won't catch 'em all.[/QUOTE]
Whoops. Yes, of course, that too. Just forgot to mention it. :blush: [QUOTE]I would be surprised and very interested if you find anything below 55 bits, since I recently double checked all of this. But if the difference in compute time is negligible, a double/triple check doesn't hurt I suppose.[/QUOTE] Yeah, I would be surprised too. I had to measure the difference because I started doubting myself, and it seems that doing 55-64 is about 5% faster than 2-64. Maybe it's worth it after all to choose the narrower bit range, dunno. [QUOTE=James Heinrich;525493]If you have comprehensively TF'd a significant contiguous range to a fixed limit and not stopped after finding a factor (such that the range is fully checked up to the specified bit limit) you can please send me an email and I can update the fully-checked-TF limit for those exponents.[/QUOTE] Will do, when I have some bigger range to report. |
Is mersenne.ca down? Haven't been able to access it today.
[QUOTE=kracker;525538]Is mersenne.ca down? Haven't been able to access it today.[/QUOTE]
Looks like it. Was about to ask the same question. |
I concur: [url]https://downforeveryoneorjustme.com/mersenne.ca[/url]
isitdownrightnow .com agrees |
It's just the front page that doesn't load, most other things that I tested seem to work fine.
It should be working fine now.
The site seems to be down again.
[QUOTE=xx005fs;528334]The site seems to be down again.[/QUOTE]
It's only the front page, again. |
[QUOTE=xx005fs;528334]The site seems to be down again.[/QUOTE]Thanks, should be happier now.
[QUOTE=James Heinrich;528342]Thanks, should be happier now.[/QUOTE]
...the site now: :bow wave::banana: Sorry, couldn't resist. |
[QUOTE=nomead;525468]I took some chunks of exponents through mfaktc with bit levels 2 to 64 (since it's not significantly slower than 55 to 64) and Stages=0 to catch 'em all.[/QUOTE]@nomead: It looks from my data like you have probably worked through the entire range 2800M-3000M to 2[sup]64[/sup], not stopping after a factor found -- can you confirm (or refute) this, please?
[QUOTE=James Heinrich;528454]@nomead: It looks from my data that you have probably worked through the entire range of 2800M < 3000M to 2[sup]64[/sup], not stopping after a factor found -- can you confirm (or refute) this, please?[/QUOTE]
I can confirm that, and a bit more now, actually. That was the situation a month ago (I thought I sent an e-mail about it back then?), but now everything is done from M2800000051 to M3229999991, from 2^2 to 2^64, not stopping after any factor found, so that should find everything there. The project is actually on hold while I do ordinary factoring from 67 to 68 bits on reserved exponents, but it will continue after that's done.
[QUOTE=nomead;528456]I can confirm that and a bit more now, actually.
... everything is done from M2800000051 to M3229999991, from 2^2 to 2^64[/QUOTE]Thanks. I had your effort recorded from 2800M-2999M but not the 3000M-3229M range. I have updated my data now. |
Someone seems to be poaching / working off the books at 1370M. [URL="https://www.mersenne.ca/exponent/1370666131"]Example.[/URL] There's some minor duplication of effort, but still, why bother?
[QUOTE=nomead;529593]Someone seems to be poaching / working off the books at 1370M. [URL="https://www.mersenne.ca/exponent/1370666131"]Example.[/URL] There's some minor duplication of effort, but still, why bother?[/QUOTE]
I didn't think this was possible with James' current setup. Anything pulled would be marked and not available to another person to run until they were submitted back or had timed out. With a bit of programming skill, a person could generate their own lists and [I]worktodo[/I] files. I don't see any other way to work off-the-grid, so to speak. Again, why bother? |
[QUOTE=storm5510;529621]I didn't think this was possible[/QUOTE]It could happen if the user generated their own assignment list, or if they previously had assignments that expired after 10 days and were reassigned to someone else, and then the original person reported results after the assignment had expired.
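To make the second scenario concrete, here is a purely hypothetical illustration in Python; the names, dates, and placement of the 10-day constant are mine, not mersenne.ca's actual code:

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the expiry rule described above.
EXPIRY = timedelta(days=10)

def is_expired(assigned_at, now):
    """An assignment lapses 10 days after it was handed out."""
    return now - assigned_at > EXPIRY

# Scenario: user A takes an exponent, lets it lapse, the server
# reassigns it to user B, then A reports a result anyway.
assigned_to_a = datetime(2019, 10, 1)
reassigned_to_b = datetime(2019, 10, 12)   # valid: A's slot had expired
late_report_by_a = datetime(2019, 10, 14)  # accepted, so effort is duplicated

assert is_expired(assigned_to_a, reassigned_to_b)
assert is_expired(assigned_to_a, late_report_by_a)
```

The late result is still accepted (a found factor is a found factor), which is why the work can look like off-the-books poaching even when it started as a normal assignment.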
As of 2020-Jan-01, all result lines for TF >1G will need to include the "UID: [user]/[comp]" section at the beginning.
Please configure your mfakt[i]X[/i].ini to set "V5UserID" and "ComputerID". |
[QUOTE=James Heinrich;530279]As of 2020-Jan-01, all result lines for TF >1G will need to include the "UID: [user]/[comp]" section at the beginning.
Please configure your mfakt[I]X[/I].ini to set "V5UserID" and "ComputerID".[/QUOTE] This is interesting. Hopefully this will allow participants in your project to go back and look at what they have run. I will assume gigahertz-days will be part of this? Either way, it would be nice if [I]PrimeNet[/I] would follow suit and not dump other things into the "Manual Testing" category. Specifically, anything run with [I]mfaktc[/I] and its relatives.
Very Strange
[B]For James Heinrich
[/B] Late yesterday evening, after your data regeneration, my batch file executed the following: [CODE]C:\Wget\wget.exe "https://www.mersenne.ca/tf1G.php?download_worktodo=1&tf_min=68&tf_limit=69&min_exponent=3700000000&max_exponent=3709999999&max_assignments=100&biggest=0" -O - > worktodo.txt [/CODE]Instead of retrieving 100 assignments, as specified, it pulled in over 61,000. This was on my HP. I compared this line with what I have on my i7: no difference. I have no idea what happened. [U]These will be completed[/U]. It might take a day or so. :confused:
Haha, I think James got pissed off by me requesting one thousand assignments every 7 minutes, and he decided to give sixty times more or so, because I also got about 60k of them. Since last night I am calling base every 7 hours instead...
Already PM'd James about it earlier, before realizing what probably causes it. There's a new way of requesting "GHz-days of work" instead of a number of assignments. Unfortunately, requesting by number of assignments doesn't seem to work now; instead it gives out the default 1000 GHz-days of work, or at least that's what I got on three separate occasions already. Two of them aren't a problem, since they were on machines that are fast enough, but the third one happened on a Jetson Nano that can do about 25 GHz-days per day. :yawn: I'll have to rearrange things a bit...
[QUOTE=nomead;530461]But unfortunately that number of assignments doesn't seem to work now, instead it gives out the default (1000) GHz-days of work[/QUOTE]You correctly diagnosed my typo :redface:
The new [i]max_ghd[/i] parameter overrides [i]max_assignments[/i] if present, but I made a small typo that ended up making it hardcoded to 1000 GHz-days, hence the unexpectedly large number of assignments. It should be fixed now; please let me know if you still have problems (once you get through your longer-than-usual assignments). @LaurV: You can request as many assignments as you want as often as you want, it doesn't bother me in the slightest. :smile:
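For clarity, the intended precedence can be sketched like this. The site itself is PHP and its internals aren't shown here; this is just a Python model of the documented behavior:

```python
# Python illustration (my own model, not the site's PHP): max_ghd, when
# present, overrides max_assignments, and the default of 1000 GHz-days
# applies only when neither parameter is given.
DEFAULT_GHD = 1000

def work_limit(params):
    """Return which limit applies: ('ghd', n) or ('count', n)."""
    if "max_ghd" in params:
        return ("ghd", int(params["max_ghd"]))
    if "max_assignments" in params:
        return ("count", int(params["max_assignments"]))
    return ("ghd", DEFAULT_GHD)

# The bug amounted to always handing out the default constant, which is
# why a request for 100 assignments yielded ~1000 GHz-days of work.
assert work_limit({"max_assignments": "100"}) == ("count", 100)
assert work_limit({"max_assignments": "100", "max_ghd": "50"}) == ("ghd", 50)
assert work_limit({}) == ("ghd", 1000)
```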
[QUOTE=LaurV;530457]Haha, I think James got pissed off by me requested one thousand assignments every 7 minutes, and he decided to give sixty times more or so, because I also got like 60k of them and now I am calling base every 7 hours instead, since last night...[/QUOTE]
I split this between two computers. I periodically stop everything and do a manual submission. There is nothing on James' submission page about a size limit (bytes). For reference, [I]PrimeNet[/I] is 2MB. |
[QUOTE=storm5510;530470]There is nothing on James' submission page about a size limit (bytes).[/QUOTE]There is no hardcoded limit, but there seems to be a server timeout (45s, I believe) that I can't figure out how to bypass, which practically limits submissions to somewhere in the vicinity of 300k-400k lines. As a guideline, limit your results submission to 200k results at a time or fewer to ensure they all get processed. Not that this is a major issue anymore, since all the easy pickings have been done.
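For anyone automating large submissions, a small helper along these lines can split an oversized results file into chunks under the guideline. The file naming here is just an example, not part of anyone's existing tooling:

```python
# Example helper (the 200k figure follows the guideline above; nothing
# here is mersenne.ca's own tooling): split a results file into
# numbered chunks of at most `chunk_size` lines each.

def split_results(path, chunk_size=200_000):
    """Write path.000, path.001, ... and return how many chunks were made."""
    with open(path) as f:
        lines = f.readlines()
    chunks = [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]
    for n, chunk in enumerate(chunks):
        with open(f"{path}.{n:03d}", "w") as out:
            out.writelines(chunk)
    return len(chunks)
```

Each numbered chunk can then be submitted separately, staying safely under the observed timeout.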
[QUOTE=James Heinrich;530474]...As a guideline, limit your results submission to 200k results at a time or fewer to ensure they all get processed...[/QUOTE]
Doing a bit of averaging, 200,000 results would make a 22MB data file, more or less. This includes the user information in each line. I believe we are all [U]really[/U] safe here. :smile: |
Kudos to James
Hey, I just wanted to publicly thank James Heinrich for a change he made quickly today.
Couldn't find a better place, so it's going here. Take a look at the output of his P-1 bounds/odds page: pick your application (prime95/mprime, gpuowl, or cudapm1) and either worktodo format or command-line output. It's also very nicely formatted. Load the following to see what I mean. [URL]https://www.mersenne.ca/prob.php?exponent=333001043&b1=3360000&b2=84000000&guess_saved_tests=2&factorbits=81&K=1&C=-1[/URL]
Note, however, that there's currently a (significant) disagreement between my calculations and the bounds that Prime95 (the program) selects. Presumably this is an error on my part, but I'm waiting to hear back from George for his take on the discrepancy. It may be related to the switch from LL to PRP.
[I]I assume it is safe to restart my batch process and not get 60,000 exponents[/I]?
[U]Off-topic, but related[/U]: I did not know George had increased the maximum exponent by nearly one billion for [I]Prime95[/I]. This means I can run some of these on my laptop, which has no GPU. For what it is, it is not doing badly at all with 1010's. [U]On-topic[/U]: The manual assignment page seems to have a bit of an issue. It says you can select the number of exponents you want [U]or[/U] specify the work in gigahertz-days. It refused to do the "or" function for me; it insisted on both, but used the gigahertz-days part to do the assignment. I requested 10 assignments for my laptop and then put 10 in GHz-days, and I got 170 assignments. I did not know what I would get using GHz-days. Running this many is not a problem.
[QUOTE=storm5510;530554]I assume it is safe to restart my batch process and not get 60,000 exponents?[/QUOTE]Yes, the [i]max_assignments[/i] parameter will be respected again.
[QUOTE=storm5510;530554]It says you can select the number of exponents you want [U]or[/U] specify it in gigahertz-days. It refused to do the "or" function for me[/quote]It's behaving as intended: if you specify GHz-days it will give you that many GHz-days, if you leave that blank it will give you the requested number of assignments. I've added a little javascript trickery to make it more clear for the human-facing HTML form, but be aware that if you're requesting assignments via GET then you need to specify only one of ([i]max_assignments[/i] | [i]max_ghd[/i]), the latter will override the former if both are present. [QUOTE=storm5510;530554]I did not know George had increased the maximum exponent by nearly one-billion for [I]Prime95[/I][/QUOTE]I don't think this is anything new, as far as I know Prime95 has always supported exponents up to 2-billion (note: not 2[sup]31[/sup]). I fired up a copy of v24.13 from 2005 and it supported such exponents. |
[QUOTE=James Heinrich;530528]Noting, however, that there's currently a (significant) disagreement between my calculus and what Prime95 (the program) selects as bounds. Presumably this is an error on my part, but I'm waiting to hear back from George on his take on the discrepancy.[/QUOTE]Thanks to George for pointing my brain back in the right direction. The bounds calculated on the P-1 probability calculator should be sane now. Let me know if they appear otherwise.
[QUOTE=James Heinrich;530559]...be aware that if you're requesting assignments via GET then you need to specify only one of ([I]max_assignments[/I] | [I]max_ghd[/I]), the latter will override the former if both are present....[/QUOTE]
Remember, this is for my laptop; no GPU. I did this manually on your request page. It seemed to want something in both of those boxes; putting a red outline on the GHz days box if there was nothing in it. I could modify one of my batch files to do this: I would have to leave everything out but the fetch part and trigger it when I need to, with no loop back. To my knowledge, there is not an option which will cause [I]Prime95[/I] to automatically unload itself when it runs out of work. There may be some little gimmick in Windows which could unload it from a batch process. If there is, then I could run it like [I]mfaktc[/I] less-classes. This would need to be done by monitoring the current size of the [I]worktodo[/I] file: once it is just a few bytes in length, kill the process, and the batch would then continue. I set [I]Prime95[/I] to manual communication to prevent it trying to send the results to [I]PrimeNet[/I]. I suppose this is a test, of sorts. I don't know if I would want to run it all the time like this. Being a laptop, I generally rest it after three days of running so I can unplug it.
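The file-size check described above is easy enough to script. Here's a hedged sketch of just the detection step; the 8-byte threshold is a guess at "just a few bytes", and the actual process-killing (e.g. taskkill on Windows) is deliberately left out:

```python
import os

# Sketch of the "worktodo is effectively empty" check described above.
# The threshold is an assumption, not a tested recipe.

def worktodo_empty(path, threshold=8):
    """True once the worktodo file is effectively empty or missing."""
    try:
        return os.path.getsize(path) < threshold
    except FileNotFoundError:
        return True

# A batch loop would poll this every few minutes and, when it returns
# True, terminate the worker and carry on with fetching/submitting.
```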
[QUOTE=storm5510;530581]It seemed to want something in both of those boxes; putting a red outline on the GHz days box if there was nothing in it.[/quote]That shouldn't be. What browser are you using?
[QUOTE=storm5510;530581]To my knowledge, there is not an option which will cause [I]Prime95[/I] to automatically unload itself when it runs out of work.[/QUOTE]Such an option was [url=https://www.mersenneforum.org/showpost.php?p=529677&postcount=399]recently added to mprime[/url], but I don't think there is an equivalent option for Windows. It's your hardware, but I would suggest that TF'ing with a laptop is perhaps not the best idea; it would spend many hours of hard-on-the-hardware work to get done what a GPU could do in a few minutes.
[QUOTE=James Heinrich;530583]That shouldn't be. What browser are you using?
It's your hardware, but I would suggest that TF'ing with a laptop is perhaps not the best idea, it would spend many hours of hard-on-the-hardware work to get done what a GPU could do in a few minutes.[/QUOTE] Browser: Mozilla Firefox. I have already stopped it. I will finish what it has on a GPU. :smile: |
Just found-and-fixed another case where it might've returned an unexpected number of assignments.
[QUOTE=James Heinrich;530623]Just found-and-fixed another case where it might've returned an unexpected number of assignments.[/QUOTE]
Of the 60,000 I received, I've been sending them back in groups of around 2,500, give or take a little. They are in the 3700M area. I have a bit over 15,000 left to run.
[QUOTE=James Heinrich;530583]...It's your hardware, but I would suggest that TF'ing with a laptop is perhaps not the best idea, it would spend many hours of hard-on-the-hardware work to get done what a GPU could do in a few minutes.[/QUOTE]
Just a short follow-up. I have found a very good use for my laptop: I have two [I]Google Colaboratory[/I] instances which I alternate between, allowing each to age 24 hours or more. It is monitoring a remote process, so there is no excessive heat or undue stress anywhere. This is far better than what I was trying to do.
[QUOTE=James Heinrich;528460]Thanks. I had your effort recorded from 2800M-2999M but not the 3000M-3229M range. I have updated my data now.[/QUOTE]
I will now continue with this project, at least for now with a two-pronged approach. One machine continues from 3230M and goes up until it eventually reaches 2^32; after that, it works down from 2800M. Another machine started from 1000M on up. They'll meet at some point, some months in the future, but I'll keep you updated on the progress.
[QUOTE=nomead;530977]I will now continue with this project, at least for now with a two-pronged approach. One machine continues from 3230M and goes up, until it eventually reaches 2^32, after that it's down from 2800M. Another machine started from 1000M on up. And they'll meet at some point in time, some months in the future. But I'll keep you updated on the progress.[/QUOTE]Just to refresh my memory, and to make sure we're all on the same page, please describe what you're doing and your workflow (factoring from where to where, submitting what results, and how often).
[QUOTE=James Heinrich;530990]Just to refresh my memory, and make sure we're all on the same page, please describe what you're doing and your workflow (factoring from where to where, submitting what results and how often.[/QUOTE]
I'm factoring bit levels 2 to 64 with mfaktc, with Stages=0 plus StopAfterFactor=0. I'm doing this for all prime exponents, checking against the database exports, and submitting only the newly found factors (those not in the export), manually and semi-irregularly, one "chunk" at a time. A chunk happens to be everything within a million of the starting point (same four leading digits of the exponent). The checking is automated; I just need to run the submit script manually. But I try to submit at least once a day to avoid a big pileup in the submission queue.
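A minimal sketch of that checking step might look like the following. The file layout here is assumed (one comma-separated exponent/factor pair per line), which may well differ from mersenne.ca's actual export format; the filtering logic is the point:

```python
# Assumed layout: one "exponent,factor" pair per line. This is an
# illustration of the workflow described, not anyone's actual script.

def load_known(path):
    """Set of (exponent, factor) string pairs from a known-factors export."""
    with open(path) as f:
        return {tuple(line.strip().split(",")[:2]) for line in f if line.strip()}

def new_factors(found, known):
    """Keep only the factors from this run that are not in the export."""
    return [pair for pair in found if pair not in known]

# Example: one of the two freshly found factors is already known.
known = {("2900030879", "5800061759")}
found = [("2900030879", "5800061759"), ("2900030879", "23200247033")]
assert new_factors(found, known) == [("2900030879", "23200247033")]
```

Only the surviving pairs would then go into the manual-submission file.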
[QUOTE=nomead;530994]submitting only the remaining factor found results manually and semi-irregularly, one "chunk" at a time.[/QUOTE]Thanks. I also need to manually update the how-far-fully-factored flag for the exponents you've thoroughly checked. If you could send me an email every so often (once a week, once a month, whatever) letting me know what ranges are completed, I can run the update.
|
The entire exponent range from 1000M to 2[sup]32[/sup] (~4294M) has now been TF'd to (at least) 2[sup]67[/sup] (in most cases stopping after first factor found). The [url=https://www.mersenne.ca/tf1G]tf1G[/url] page has been updated to show 68-77 columns.
|
[QUOTE=James Heinrich;531036]The entire exponent range from 1000M to 2[sup]32[/sup] (~4294M) has now been TF'd to (at least) 2[sup]67[/sup] (in most cases stopping after first factor found). The [URL="https://www.mersenne.ca/tf1G"]tf1G[/URL] page has been updated to show 68-77 columns.[/QUOTE]
I thought something was different. Recently, I have been looking at the bottom of this page. It seems the effort (GHz-days per day) has dropped off. This could also be an illusion, as the time required to factor to higher bit levels increases. |
[QUOTE=storm5510;531042]It seems the effort has dropped off, (GHz days per day).[/QUOTE]The exact opposite in fact. Three months ago the daily average was about 12500GHd, now it's just over double that (25800).
More people are TF'ing to higher bit depths (70 or 71 instead of min+1) so the throughput of cleared exponents might be lower, but the expended effort is definitely up over time. |
One thing we found a bit odd about those tables (from the beginning, not just now) is that there are a lot of assignments in ranges where either all the work was already done (and they are green) or there are no exponents available. Could you please explain how those little red numbers work? Maybe our understanding is wrong.
|
[QUOTE=LaurV;531050]there are a lot of assignments in ranges where either all the work was already done (and they are green) or there are no exponents available. Could you please explain how those little red numbers work?[/QUOTE]The big blue (clickable) numbers are the count of exponents available for assignment in that range. The small red numbers are the count already assigned. The sum of the two (not explicitly displayed) is the total number of exponents in that range/bitlevel.
The green background color was buggy (it should not extend to any cell with available or assigned exponents). This has been fixed. |
[QUOTE=James Heinrich;531054]This has been fixed.[/QUOTE]
Much better now, well done Sir! If the 6000 assignments on "3730M to 68" are mine, then you can delete them (or they will expire anyhow in a few days). I have no assignments at (or under) 68 right now, and they may be remnants from the past when I was trying to understand how your script works and most probably did a few wrong rounds of "wget". I have no history nor backup and no way to restore them. |
[QUOTE=LaurV;531055]If the 6000 assignments on "3730M to 68" are mine, then you can delete them
I have no history nor backup and no way to restore them.[/QUOTE]I can tell what the assignments are, but they're truly anonymous so I don't know if they're yours or not. Just let them expire if they're yours and you (or someone) will get them reassigned in a few days. |
[QUOTE=LaurV;531055]Much better now, well done Sir!
If the 6000 assignments on "3730M to 68" are mine, then you can delete them (or they will expire anyhow in a few days). I have no assignments at (or under) 68 right now, and they may be remnants from the past when I was trying to understand how your script works and most probably did a few wrong rounds of "wget". I have no history nor backup and no way to restore them.[/QUOTE] Ah, but the 6000 assignments in the 68 column aren't "to" 68, but "from" 68, to whatever was specified when reserving them. To 70, to 71, who knows. I think I saw the assignment expired and free to reserve again fairly recently, so I guess someone else wants to fill the gap now. And whatever the mistake, it will expire in 10 days, which seems to be a good time limit for these short assignments. I know I've made my share of mistakes in the past... :blush: and it's sometimes a bit painful to wait for any sign of them to disappear from the table, but in the long term, luckily it doesn't matter. |
TF verification
Hey James,
Not sure if this is the correct place, let me know if you think there's a better forum. I wrote up and [URL="https://www.mersenneforum.org/showpost.php?p=520512&postcount=199"]coded a verifiable version of GPU trial factoring[/URL]. The idea is that you save the k that minimizes pow(2, candidate, k) as a proof of correct computation / work. The math is covered in the post. That conversation petered out when more manual methods of detecting cheating were deployed. Obviously not all programs will support this and there will be some fun caveats (e.g. mfaktc with the larger kernel doesn't compute the final modulo, so there will need to be multiple verification methods based on a kernel lookup). The value would be limited at first, but it feels like it's a good thing to increase verification at a small(?) data cost. I'm wondering what/who it would take for mersenne.ca / mersenne.org to parse and save an additional log line per TF-NF result. And if this is of interest to the community. Thanks |
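As a rough illustration of the scheme as I read it (a hypothetical sketch, not SethTro's or mfaktc's actual code): candidate factors of M(p) = 2[sup]p[/sup]-1 have the form q = 2kp+1, the prover records the k whose residue 2^p mod q came out smallest, and a verifier can spot-check that claim with a single modular exponentiation. Function names here are made up, and bash integer arithmetic only works for tiny p:

```shell
# Simplified, hypothetical sketch of verifiable trial factoring of M(p) = 2^p - 1.
# Real TF runs on a GPU, sieves candidates to primes q = +/-1 (mod 8), and
# needs >64-bit arithmetic; this skips all of that for clarity.
modpow() {  # modpow base exp mod -> prints (base^exp) mod mod
    local base=$1 e=$2 m=$3 r=1
    base=$(( base % m ))
    while (( e > 0 )); do
        (( e % 2 == 1 )) && r=$(( r * base % m ))
        base=$(( base * base % m ))
        e=$(( e / 2 ))
    done
    echo "$r"
}

tf_with_proof() {  # tf_with_proof p k_max -> "factor k" or "no_factor best_k"
    local p=$1 k_max=$2 k q r best_k=0 best_r=-1
    for (( k = 1; k <= k_max; k++ )); do
        q=$(( 2 * k * p + 1 ))
        r=$(modpow 2 "$p" "$q")    # residue of 2^p mod candidate q
        if (( r == 1 )); then echo "factor $k"; return; fi
        if (( best_r < 0 || r < best_r )); then best_k=$k; best_r=$r; fi
    done
    echo "no_factor $best_k"       # best_k is the proof-of-work witness
}

tf_with_proof 11 20    # M11 = 2047 = 23*89; k=1 gives q=23 -> prints: factor 1
tf_with_proof 13 10    # M13 is prime -> prints: no_factor <witness k>
```

The point of the saved witness: someone who skipped part of the k range is unlikely to guess which k minimizes the residue, so one saved value gives cheap evidence that the whole range was actually scanned.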
[QUOTE=SethTro;531897]I'm wondering what/who it would take for mersenne.ca / mersenne.org to parse and save an additional log line per TF-NF results. And if this is of interest to the community.[/QUOTE]It is of interest to me. I can work on incorporating the parsing into mersenne.ca first, once that's working as intended I can port it to mersenne.org
I'm going to ask you to modify the output to JSON format; we can discuss the details by email. Please email [email]james@mersenne.ca[/email] with the details, including a simple-language explanation of how to verify your checksum. |
It remains curious to me that there is an apparent intention for .ca to remain distinct from .org, yet the icon that appears in web browser tabs is identical.
Would it not be reasonable to make that distinction clearer by way of the icon? Perhaps a different colour? Or a maple-leaf background if you especially want! |
[QUOTE=snme2pm1;532322]yet the icon that appears in web browser tabs is identical. Perhaps a different colour?[/QUOTE]My artistic skills verge on nonexistent, but I have made a blue version of the favicon. It will start appearing as your browser cache expires.
If anyone wants to make a mersenne.ca logo of sorts, or at least a favicon, feel free to submit such. |
[QUOTE=James Heinrich;532326]My artistic skills verge on nonexistent, but I have made a blue version of the favicon. It will start appearing as your browser cache expires.
If anyone wants to make a mersenne.ca logo of sorts, or at least a favicon, feel free to submit such.[/QUOTE] I reckoned that this subject might come down to a popular vote! Still, what source format would be best for you, something you could keep and edit further? Or is further editing perhaps not an option at all? |
2 Attachment(s)
Generated from the front page graph using [url]https://www.favicon-generator.org/[/url]
Zip file contains the various icons, and the png is the source image. Here's their suggested html code: [CODE]<link rel="apple-touch-icon" sizes="57x57" href="/apple-icon-57x57.png">
<link rel="apple-touch-icon" sizes="60x60" href="/apple-icon-60x60.png">
<link rel="apple-touch-icon" sizes="72x72" href="/apple-icon-72x72.png">
<link rel="apple-touch-icon" sizes="76x76" href="/apple-icon-76x76.png">
<link rel="apple-touch-icon" sizes="114x114" href="/apple-icon-114x114.png">
<link rel="apple-touch-icon" sizes="120x120" href="/apple-icon-120x120.png">
<link rel="apple-touch-icon" sizes="144x144" href="/apple-icon-144x144.png">
<link rel="apple-touch-icon" sizes="152x152" href="/apple-icon-152x152.png">
<link rel="apple-touch-icon" sizes="180x180" href="/apple-icon-180x180.png">
<link rel="icon" type="image/png" sizes="192x192" href="/android-icon-192x192.png">
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="96x96" href="/favicon-96x96.png">
<link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png">
<link rel="manifest" href="/manifest.json">
<meta name="msapplication-TileColor" content="#ffffff">
<meta name="msapplication-TileImage" content="/ms-icon-144x144.png">
<meta name="theme-color" content="#ffffff">[/CODE] |
[QUOTE=axn;532334]Generated from the front page graph[/QUOTE]I like it.
The favicon (in all its myriad incarnations) is regenerated from the most recent graph once a week. It won't change much of course, but since I wrote the code already to generate everything it may as well be fresh. :smile: |
[QUOTE=James Heinrich;532377]I like it.
[/QUOTE] I am pleased that axn: [url]https://www.mersenneforum.org/member.php?u=528[/url] was so helpful. Yet it is your choice as to what graphic to use for icon generation. Perhaps the top portion is unwanted noise. Perhaps the line graph could be exaggerated so as not to be muted at low resolution. Anyhow, you now have a multicolour flag with slightly lifted left edges... What will Aaron consider in response? PS: Apologies, s/Arron/Aaron/ |
Back to business, even if only for a little while.
Some of you may be aware that I converted my HP workstation to [I]Ubuntu 19.10[/I] so I can learn how to use it, a little. I will skip the rest of the story here. I finally managed to get a CUDA10 variant of [I]mfaktc[/I] to run. It just needed a library update. I have been feeding [I]mfaktc[/I] by hand, doing manual downloads from James' request page.

I studied his bash script. A lot of what was in there related to using a RAM drive. With this in mind, I removed all the code relative to using one. Being an antique programmer, it did not take me long to understand the flow. First off, the [I]/bin/bash[/I] folder only appears as a link to another location. So I changed that part. [I]Wget[/I] and [I]curl[/I] were already on my system. Only [I]curl[/I] needed an update. Doing some searching on the web, I found the proper file name extension. I called my greatly modified script [I]mfa.sh[/I]. The web page said to make it executable like so: [CODE]chmod +x mfa.sh[/CODE]That proceeded without any errors. I try to run it: [CODE]./mfa.sh[/CODE]It returns this: [CODE]bash: ./mfa.sh: /home/norman/mfaktc/^M: bad interpreter: No such file or directory[/CODE]Using the full path name, [I]/home/norman/mfaktc/mfa.sh[/I], returns basically the same. The path above is what I put in the first line of the script. The script is in the same folder as [I]mfaktc[/I]. The only part of this I am not sure about is the "LD_LIBRARY_PATH" statement. The location does not seem correct. The [I]Wget[/I] argument appears basically similar to the one in my Windows batch file, so I replaced it with mine. I left the original as a reference comment. The script is below, so if anyone spots anything, please comment. [CODE]#!/home/norman/mfaktc/
# Simple bash file to automatically retrieve TF >1000M worktodo from mersenne.ca
# and submit results back. Should be called in your mfaktc working directory.
# Will loop infinitely, submitting results and getting more work as needed.
# Requires Wget: sudo apt-get install wget
# Requires cURL: sudo apt-get install curl
#
# To exit: 1) Press ctrl-C once while mfaktc is running
#          2) Press ctrl-C once while script is sleeping 10 seconds before downloading more work
#          3) Do not press ctrl-C otherwise. It could be dangerous for your work.

# Work fetch options
#TFLimit=69
#maxAssignments=10
#useRAMDisk=0

if [ $? -eq 0 ]
then
    echo "mfaktc is already running, exiting"
    sleep 30
    exit
fi

trap "echo received signal \"SIGINT\"$'\n'script will exit once the results are sent." SIGINT

while true
do
    LD_LIBRARY_PATH="./lib:${LD_LIBRARY_PATH}" ./mfaktc.exe # Use your command for mfaktc

    if [ -s worktodo.txt ]; then break; fi
    echo "mfaktc has run out of work or ^C was pressed!. Downloading more work in 5 seconds"
    sleep 5
    [ $? -eq 130 ] && break

    if [ -e results.txt ]
    then
        echo "Sending results to mersenne.ca"
        echo $(date '+%d/%m/%Y %H:%M:%S') >> ~/mfaktc-results-submitted.log
        curl --form "results_file=@results.txt" https://www.mersenne.ca/bulk-factors.php >> ~/mfaktc-results-submitted.log
        if [ $? -ge 1 ]
        then
            echo "cURL failed"
            break
        fi
        cat results.txt >> ~/mfaktc-results-submitted.txt
        rm results.txt
    fi

    echo "Retrieving worktodo from mersenne.ca"
    # wget "https://www.mersenne.ca/tf1G.php?download_worktodo=1&tf_limit=$TFLimit&max_assignments=$maxAssignments&biggest=0" -O - >> worktodo.txt
    wget "https://www.mersenne.ca/tf1G.php?download_worktodo=1&tf_min=68&tf_limit=69&min_exponent=3400000000&max_exponent=3409999999&max_assignments=50&biggest=0" -O - >> worktodo.txt
done[/CODE] |
Have you been editing it on Windows? The ^M looks like the CR from the Windows default CRLF line ending (UNIX uses just a LF).
The first line should point to the interpreter you want to run the script. In most of my bash scripts it's: [code] #!/bin/bash [/code] So if /home/norman/mfaktc/ is a directory you would be telling it to run a file called ^M in that directory. Which probably doesn't exist. Chris |
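For what it's worth, here is one way to reproduce and repair the problem on a throwaway file (assuming Linux with GNU sed; `demo.sh` is just an illustrative name, not the actual mfa.sh):

```shell
# Simulate a script saved with Windows CRLF line endings: the shebang
# invisibly ends in a carriage return, so bash looks for "/bin/bash^M".
printf '#!/bin/bash\r\necho "hi"\r\n' > demo.sh
chmod +x demo.sh

file demo.sh              # should report "... with CRLF line terminators"
sed -i 's/\r$//' demo.sh  # strip the trailing CR from every line (GNU sed -i)
./demo.sh                 # now runs normally and prints: hi
```

The `dos2unix` utility, if installed, does the same job, and Notepad++ can convert a file via Edit → EOL Conversion.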
[QUOTE=chris2be8;532556]Have you been editing it on Windows? The ^M looks like the CR from the Windows default CRLF line ending (UNIX uses just a LF).
The first line should point to the interpreter you want to run the script. In most of my bash scripts it's: [code] #!/bin/bash [/code]So if /home/norman/mfaktc/ is a directory you would be telling it to run a file called ^M in that directory. Which probably doesn't exist. Chris[/QUOTE] Actually, I did edit it on Windows using Notepad++. It has the ability to change the line-ending encoding, so I changed it and placed [I]/bin/bash[/I] back. It now appears to run. The script is exiting where [I]$? -eq 0[/I] is. I am guessing this is some type of command-line parameter. I do not know what this needs to be. |
[QUOTE=storm5510;532559]The script is exiting where [I]$? -eq 0[/I] is.[/QUOTE]
The "$?" gets replaced with the exit code of the previous command. An exit code of "0" indicates that the program ran successfully. Doing this as the very first command seems quite pointless. Maybe this code was checking the outcome of a command that you removed while editing the file? |
[QUOTE=nordi;532573]The "$?" gets replaced with the exit code of the previous command. An exit code of "0" indicates that the program ran successfully.
Doing this as the very first command seems quite pointless. Maybe this code was checking the outcome of a command that you removed while editing the file?[/QUOTE] And to add to that answer, the -eq means "equals". |
[QUOTE=nordi;532573]The "$?" gets replaced with the exit code of the previous command. An exit code of "0" indicates that the program ran successfully.
Doing this as the very first command seems quite pointless. Maybe this code was checking the outcome of a command that you removed while editing the file?[/QUOTE] I commented out the section, saved it, and it is running the way it should, from what I see. It is placing its results save files in the folder above: [CODE]cat results.txt >> ~/mfaktc-results-submitted.txt[/CODE]I changed it to this: [CODE]cat results.txt >> ~/mfaktc/mfaktc-results-submitted.txt[/CODE]A log file was also being created in the same location. I changed its path to point to the program folder, like above. I imagine James made this rather generic because of not knowing what anyone's folder structure would be. Perhaps he can elaborate on his usage of catching an exit code at the top. Now that I know what it is, it seems this would only be useful when calling the script from another process and passing it the code. This HP is not my primary computer, which means I can leave it running for days, or weeks, if I wanted to. It would only be periodically necessary to change the exponent request range. I changed this part because the GPU in the HP is rather slow. |
[QUOTE=storm5510;532585]I imagine James made this rather generic because of not knowing what anyone's folder structure would be. Perhaps he can elaborate on his usage of catching an exit code at the top.[/QUOTE]I am not the author of the bash version (nor the Python version), I don't remember offhand who the original author was.
|
[QUOTE=James Heinrich;532592]I am not the author of the bash version (nor the Python version), I don't remember offhand who the original author was.[/QUOTE]
I had to study it for a while to determine what I could remove, which was anything to do with a RAM drive. It is not like days gone by when I could run assignments in just a scant few seconds. At the current level, it takes 11 seconds for each, give or take a fraction. Of course, the smaller the exponents become, the longer each will take. I am not overly concerned about hard-drive damage at this pace. I considered using a USB drive for this, and may yet. I would have to determine what needs to be changed. |
[QUOTE=storm5510;532641]considered using a USB drive for this, and may yet.[/QUOTE]Don't. USB flash drives are cheaply made and cannot sustain repeated write cycles. They are NOT the same as SSDs. Use your HDD by all means. Use your SSD if you want. Do not use a flash drive.
|
[QUOTE=James Heinrich;532650]Don't. USB flash drives are cheaply made and cannot sustain repeated write cycles. They are NOT the same as SSDs. Use your HDD by all means. Use your SSD if you want. Do not use a flash drive.[/QUOTE]
Understood. That machine is not SSD capable. Too old. It has a spinner in it. |
[QUOTE=storm5510;532718]That machine is not SSD capable. Too old.[/QUOTE]I find that hard to believe. It has no SATA ports? They've been on every motherboard since, what, 2006 or so?
|