Mlucas v20.1.1 (latest) available
[b]06 Jul 2022: Patch Alert:[/b] Small patch which fixes an infrequent (but run-killing when it occurs) issue with multithreaded runs of p-1 stage 2, discovered by tdulcet during his large-batch p-1 work. The version number remains v20.1.1. As always, details and download info at the [url=http://www.mersenneforum.org/mayer/README.html]README page[/url].
Those of you using tdulcet's mlucas.sh install script will want to grab the [url=https://github.com/tdulcet/Distributed-Computing-Scripts/blob/master/mlucas.sh]latest version[/url], but check the SUM-field value and if it differs from the one (8d8851f5e383d8a74cf067192474256a) for the current-download of v20.1.1, manually change it to the md5 checksum listed for the latter at the README. As always, post bug reports, usage comments and whatnot here, and thanks for the builds and compute cycles! |
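For anyone doing the checksum comparison by hand, here is a minimal sketch of verifying a downloaded tarball's md5 in Python (the tarball filename is an assumption, use whatever you actually downloaded; the expected value is the one quoted in the post above):

```python
import hashlib

# md5 listed for the current v20.1.1 download (from the post above):
EXPECTED = "8d8851f5e383d8a74cf067192474256a"

def md5_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream the file through MD5 in chunks so large tarballs need not fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

# Usage (filename hypothetical):
# if md5_of("mlucas_v20.1.1.txz") != EXPECTED:
#     raise SystemExit("md5 mismatch -- re-download, or update the SUM field per the README")
```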
[QUOTE=ewmayer;592258]Those of you using tdulcet's mlucas.sh install script will want to grab the [URL="https://github.com/tdulcet/Distributed-Computing-Scripts/blob/master/mlucas.sh"]latest version[/URL], but check the SUM-field value and if it differs from the one (e3302de913e7bf65a83985d68a1193e1) for the current-download of v20.1.1, manually change it to the md5 checksum listed for the latter at the README.[/QUOTE]
Thanks for providing instructions for users while they waited for me to update the script. I just pushed the new md5sum for Mlucas v20.1.1 to my repository. See [URL="https://github.com/tdulcet/Distributed-Computing-Scripts/commit/fd73c1de915e1f573212491ebec4c7e62f9ebfd9"]here[/URL] for the change and [URL="https://github.com/tdulcet/Distributed-Computing-Scripts/commit/b81b13c6204738f08ccf4b754755712e246b3b4a#diff-2fde743102cce4f537eee46f9b6b82d18d7b7bae2c3c2726d3605c09318c26c7"]here[/URL] for the original changes. |
I noticed that mlucas now supports exponents up to around nine billion. Although I don't expect anyone to test those numbers anytime soon, this is really cool nonetheless. :smile:
|
Note to 20.1.1 users: there is a small messaging bug in v20.1.1 where a missing mlucas.ini entry for one of the supported options triggers a "User set unsupported value = NaN ... ignoring" message. The warning is benign and will go away in the next release.
[QUOTE=ixfd64;592352]I noticed that mlucas now supports exponents up to around nine billion. Although I don't expect anyone to test those numbers anytime soon, this is really cool nonetheless. :smile:[/QUOTE] Well, not LL or PRP-test, anyway - p-1 on such behemoths is feasible, however, if the user's system has at least 128GB of memory. |
[b]Patch Alert:[/b] There was [a] a PRP-postprocessing bug leading to a sanity-check assertion-exit, and [b] a p-1 premature-exit-at-end-of-stage-1 bug in the initial v20.1.1 release. (01 Nov, md5 = e3302de913e7bf65a83985d68a1193e1). Please make sure you get the current version (06 Nov, md5 = c917eb8faa6ff643d359b335cdacbfda) if you do either of these kinds of GIMPS assignments. If you hit either of these bugs, restarting the aborted assignment using an updated build should get you back on track.
|
[QUOTE=ewmayer;592258]Those of you using tdulcet's mlucas.sh install script will want to grab the [URL="https://github.com/tdulcet/Distributed-Computing-Scripts/blob/master/mlucas.sh"]latest version[/URL], but check the SUM-field value and if it differs from the one (c917eb8faa6ff643d359b335cdacbfda) for the current-download of v20.1.1, manually change it to the md5 checksum listed for the latter at the README.[/QUOTE]
Thanks. I just pushed the new md5sum to my repository. See [URL="https://github.com/tdulcet/Distributed-Computing-Scripts/commit/66de4840e271e482188fa9c92b72bc436505b9a9#diff-2fde743102cce4f537eee46f9b6b82d18d7b7bae2c3c2726d3605c09318c26c7"]here[/URL] for the change. |
Just uploaded an updated version of the v20.1.1 tarball with several not-critical-but-nice-to-have bugfixes, and 2 feature-adds:
1. Signal-handling has been restored;
2. More-flexible workfile parsing: assignment ID no longer required for p-1 assignments, and may be "n/a" for all supported work types.
As always, see the [url=http://www.mersenneforum.org/mayer/README.html]README page[/url] for more details and download link, and the help.txt file in the unpacked code archive for full details. |
[QUOTE=ewmayer;592258]Those of you using tdulcet's mlucas.sh install script will want to grab the [URL="https://github.com/tdulcet/Distributed-Computing-Scripts/blob/master/mlucas.sh"]latest version[/URL], but check the SUM-field value and if it differs from the one (81655bf24742b22dcd853741e3ebaefe) for the current-download of v20.1.1, manually change it to the md5 checksum listed for the latter at the README.[/QUOTE]
Thanks. I just pushed the updated script to my repository. See [URL="https://github.com/tdulcet/Distributed-Computing-Scripts/commit/c97347f46233a4cd77e78a439d06db74d9ee4ef2#diff-2fde743102cce4f537eee46f9b6b82d18d7b7bae2c3c2726d3605c09318c26c7"]here[/URL] for the full changes. It will now automatically save the “Benchmark Summary” table that is output at the end of the script to a [C]bench.txt[/C] file for future reference. Here is an example of this file on a 4 core ARM system: [CODE]~/mlucas_v20.1.1/obj$ cat bench.txt
#  Workers/Runs  Threads  -cpu arguments
1  4             1        0 1 2 3
2  2             2        0:1 2:3
3  1             4        0:3

Adjusted msec/iter times (ms/iter) vs Actual iters/sec total throughput (iter/s) for each combination
  FFT        #1                #2                #3
length   ms/iter  iter/s   ms/iter  iter/s   ms/iter  iter/s
 2048K    36.89   98.255    38.72   96.242    40.48   98.551
 2304K    43.87   81.526    45.94   79.443    48.88   83.452
 2560K    48.57   75.435    52.12   69.893    53.08   75.775
 2816K    54.65   68.347    58.7    62.614    59.16   68.125
 3072K    60.74   61.914    63.04   57.576    66.32   60.968
 3328K    65.18   57.619    68.58   54.557    71.4    56.944
 3584K    71.09   53.280    74.96   51.005    77.6    52.751
 3840K    77.28   49.119    80.9    47.089    83.92   48.010
 4096K    73.49   51.111    80.7    48.070    84.8    48.704
 4608K    92.59   41.217    96.66   40.683   101.4    39.866
 5120K   101.62   37.520   106.44   36.146   110.96   36.089
 5632K   113.13   33.656   118.4    33.558   124.88   32.784
 6144K   122.81   30.797   128.5    30.565   134.56   30.017
 6656K   134.6    28.203   141.3    28.122   147.84   27.560
 7168K   145.57   25.910   152.58   26.069   157.6    25.673
 7680K   160.37   23.800   166.2    23.887   172.68   23.492[/CODE] |
hi,
can Mlucas test Wagstaff numbers (like Prime95/mprime)? |
No. Mersenne & Fermat numbers only: P-1, LL & PRP for the former, and P-1 & the Pepin test for the latter.
[url]https://www.mersenneforum.org/showpost.php?p=488291&postcount=2[/url] [url]https://www.mersenneforum.org/showpost.php?p=585582&postcount=11[/url] |
[QUOTE=kriesel;593802]No. Mersenne & Fermat numbers; P-1 LL PRP, & P-1 Pepin test respectively.
[url]https://www.mersenneforum.org/showpost.php?p=488291&postcount=2[/url] [url]https://www.mersenneforum.org/showpost.php?p=585582&postcount=11[/url][/QUOTE] thank you ! |
[b]Patch Alert:[/b] Some recent code changes to clean up the messaging and file-writing left a few dangling fclose() calls in the two *_mod_square.c source files, potentially leading to a null-pointer fclose crash following emission of a roundoff error warning. Fixed. Also a few help.txt file changes to improve coherence of the how-to-kill text.
The md5 value in the OP has been updated to match this upload (1686232 bytes, md5 = dc5487e984196a32b47a8066ec9a6803). |
[QUOTE=ewmayer;592258]Those of you using tdulcet's mlucas.sh install script will want to grab the [URL="https://github.com/tdulcet/Distributed-Computing-Scripts/blob/master/mlucas.sh"]latest version[/URL], but check the SUM-field value and if it differs from the one (dc5487e984196a32b47a8066ec9a6803) for the current-download of v20.1.1, manually change it to the md5 checksum listed for the latter at the README.[/QUOTE]
I just pushed the updated script to my repository. See [URL="https://github.com/tdulcet/Distributed-Computing-Scripts/commit/e824abcf53a04f019d09a97abdd256303245c81c"]here[/URL] for the full changes. It will now automatically add lines to the [C]bench.txt[/C] file for future reference in the same format as those added by Prime95/MPrime to the respective [C]results.bench.txt[/C] file when running a throughput benchmark. This is in addition to the benchmark summary table I added to that file in my previous update of the script (see post above). Here is an example of these lines on a 4 core ARM system: [CODE]Timings for 2048K FFT length (4 cores, 1 threads, 4 workers): 40.41, 41.10, 40.81, 40.32 ms. Throughput: 98.381 iter/sec.
Timings for 2304K FFT length (4 cores, 1 threads, 4 workers): 48.30, 48.96, 48.34, 49.33 ms. Throughput: 82.083 iter/sec.
Timings for 2560K FFT length (4 cores, 1 threads, 4 workers): 52.98, 53.24, 53.09, 53.06 ms. Throughput: 75.340 iter/sec.
Timings for 2816K FFT length (4 cores, 1 threads, 4 workers): 58.45, 58.60, 58.59, 58.59 ms. Throughput: 68.306 iter/sec.
Timings for 3072K FFT length (4 cores, 1 threads, 4 workers): 64.35, 64.80, 64.45, 64.69 ms. Throughput: 61.947 iter/sec.
Timings for 3328K FFT length (4 cores, 1 threads, 4 workers): 69.29, 69.48, 69.34, 69.57 ms. Throughput: 57.621 iter/sec.
Timings for 3584K FFT length (4 cores, 1 threads, 4 workers): 75.27, 75.47, 75.24, 75.52 ms. Throughput: 53.067 iter/sec.
Timings for 3840K FFT length (4 cores, 1 threads, 4 workers): 81.52, 81.82, 81.66, 81.73 ms. Throughput: 48.970 iter/sec.
Timings for 4096K FFT length (4 cores, 1 threads, 4 workers): 78.06, 78.70, 78.06, 78.87 ms. Throughput: 51.007 iter/sec.
Timings for 4608K FFT length (4 cores, 1 threads, 4 workers): 97.03, 97.62, 97.09, 97.78 ms. Throughput: 41.076 iter/sec.
Timings for 5120K FFT length (4 cores, 1 threads, 4 workers): 107.23, 107.67, 107.00, 107.60 ms. Throughput: 37.253 iter/sec.
Timings for 5632K FFT length (4 cores, 1 threads, 4 workers): 118.54, 119.28, 118.63, 119.31 ms. Throughput: 33.630 iter/sec.
Timings for 6144K FFT length (4 cores, 1 threads, 4 workers): 129.88, 130.56, 128.54, 129.97 ms. Throughput: 30.832 iter/sec.
Timings for 6656K FFT length (4 cores, 1 threads, 4 workers): 141.49, 142.41, 141.57, 142.44 ms. Throughput: 28.174 iter/sec.
Timings for 7168K FFT length (4 cores, 1 threads, 4 workers): 153.81, 155.76, 155.40, 155.35 ms. Throughput: 25.794 iter/sec.
Timings for 7680K FFT length (4 cores, 1 threads, 4 workers): 167.57, 169.11, 168.69, 168.74 ms. Throughput: 23.736 iter/sec.[/CODE] |
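For anyone wondering how the Throughput figures relate to the per-worker timings: each line's total throughput is just the sum of 1000/(ms/iter) over the workers. A quick arithmetic check against the 2048K line (numbers taken from the output above):

```python
# Per-worker ms/iter for the 2048K FFT line (4 workers, 1 thread each):
timings_ms = [40.41, 41.10, 40.81, 40.32]

# Total throughput in iter/sec = sum over workers of (1000 ms/s) / (ms/iter):
throughput = sum(1000.0 / t for t in timings_ms)

print(round(throughput, 3))  # ~98.383, matching the reported 98.381 up to rounding
assert abs(throughput - 98.381) < 0.01
```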
[b]Patch Alert:[/b] Due to a user report of bad behavior of a regular (non '-9') kill with his multithreaded run, signal handling has been changed to immediate exit without a savefile write. Suggest users 'killall -9 Mlucas' any ongoing jobs at their earliest convenience and switch to the updated code; using that, a regular 'kill' should work. Also, workfile assignments are now echoed to the per-exponent log (.stat) file, not just to stderr (e.g. to the nohup.out file).
The md5 value in the OP has been updated to match this upload (1688188 bytes, md5 = 970c4dde58417bd7f6be0e4af4b59b4e). |
[QUOTE=ewmayer;592258]Those of you using tdulcet's mlucas.sh install script will want to grab the [URL="https://github.com/tdulcet/Distributed-Computing-Scripts/blob/master/mlucas.sh"]latest version[/URL], but check the SUM-field value and if it differs from the one (970c4dde58417bd7f6be0e4af4b59b4e) for the current-download of v20.1.1, manually change it to the md5 checksum listed for the latter at the README.[/QUOTE]
I just pushed the new md5sum to my repository. See [URL="https://github.com/tdulcet/Distributed-Computing-Scripts/commit/2f5a683d1cdeeab8ce85767b25b790c006348d77"]here[/URL] for the change. |
[QUOTE=ixfd64;592352]I noticed that mlucas now supports exponents up to around nine billion.[/QUOTE]Testing indicates the empirical limits are work type dependent. Something about signed and unsigned int32 in places. See attachment of [URL]https://www.mersenneforum.org/showpost.php?p=594668&postcount=36[/URL]
I think the limit situation may improve somewhat with the next release. |
[QUOTE=kriesel;599483]Testing indicates the empirical limits are work type dependent. Something about signed and unsigned int32 in places. See attachment of [URL]https://www.mersenneforum.org/showpost.php?p=594668&postcount=36[/URL]
I think the limit situation may improve somewhat with the next release.[/QUOTE] There should be no issue running LL, PRP or p-1 with exponents approaching 9 billion - I think you may be alluding to the current 32-bit limit on residue shift. From the README: " exponents > 2^32, thus FFT lengths 256-512M, require '-shift 0' to run." I've been running p-1 on F33, exponent ~8.6 billion, for around 6 months now, max ROEs are a little under 0.10 for that. |
[QUOTE=ewmayer;599607]There should be no issue running LL, PRP or p-1 with exponents approaching 9 billion - I think you may be alluding to the current 32-bit limit on residue shift. From the README: " exponents > 2^32, thus FFT lengths 256-512M, require '-shift 0' to run."
I've been running p-1 on F33, exponent ~8.6 billion, for around 6 months now, max ROEs are a little under 0.10 for that.[/QUOTE] Or started testing with an earlier version of 20.1.1, or forgot about -shift 0, or both. IIRC there was a time when we were ferreting out exponent limits and IIRC the usable exponent range varied. Lots of code paths as you well know. Will retest a few Mn cases & worktypes with v20.1.1 2022-12-02 and PM re any anomalies found. |
Ken PMed me with some questions and examples of issues he hit for M(p) with p > 2^32 - for the benefit of any other users wanting to play with such stuff, copy of my reply to him:
[quote]o Anyone who wishes to, can change the 'if(p > (1ull << 32))' used to limit #iters for expos > 2^32 to 'if(0)' in Mlucas.h, or up the value of MAX_SELFTEST_ITERS in Mdata.h. I just want users to be darn sure they want to burn that kind of runtime; consider it a "feature must be manually enabled by user" caution.

o Re. your attempt with 8937021997, that is just above the (FMA-build mode) default limit for 512M FFT, and the ensuing error message prints just the low 32 bits of p - fixed in local build. You'll need to force '-fft 512M' for such cases.

o Re. "check_kbnc: Mersenne exponent must be prime!" for p = 8937021689 - yep, that's a bug, some incorrectly nested logic. In Mlucas.c::check_kbnc(), the clause starting with 'if(i == -1)' needs to be modified thusly:
[code]
	if(i == -1) {
		uint32 phi32 = (*p >> 32);
		if((phi32 && !isPRP64(*p)) || (!phi32 && !is_prime((uint32)*p))) {
			fprintf(stderr,"%s: Mersenne exponent must be prime!\n",func);
			break;
		}
		MODULUS_TYPE = MODULUS_TYPE_MERSENNE;
	} else if(i == 1) {
[/code]
In the current release version, if the exponent > 2^32 the left-clause isPRP64 call returns 1 as expected, but the or (||)-following clause gets tested next, i.e. the code checks whether the low 32 bits of p are prime. That latter clause must be taken only if p < 2^32. I was wondering why I didn't hit this issue for the p-1 example run of p = 8589934693 I PMed you and Teal about at the end of last October, but the low 32 bits of that p happen to == 101, which is prime. Fixed via the above in my local branch; since it does not affect large-exponent self-testing, the fix will roll out in v21.

[QUOTE=kriesel]Did that 2[SUP]32[/SUP] exponent cap get fully undone, all Mersenne worktypes? If the tweak was modified to 1E6 iterations as selftest now states, [CODE]Full-length LL/PRP/Pepin tests on exponents this large not supported; will exit at self-test limit of 1000000.[/CODE]and applies for Mersenne P-1 stage 1 also, that could be trouble for large-p attempts on Mersennes. OBD B1 ~17M so iter count ~25M. 1Gbit B1 ~5.5M so iter count ~8M. With so many cases and subcases, and F33 stage 1 behind you and lots of proof work ahead, it would be easy to miss a case or more.[/QUOTE]

If you search for the "will exit at" error-print in Mlucas.c, you'll see it's inside an [i]if(TEST_TYPE == TEST_TYPE_PRIMALITY || TEST_TYPE == TEST_TYPE_PRP)[/i] clause, whose outermost if() is for Mersenne moduli, i.e. that limit applies only to Mersenne LL/PRP tests. The overall iteration < 2^32 limit for all test types still applies, though. Should anyone decide to manually tweak the above to allow LL/PRP to iter > 10^6 and start such a run which hits the 2^32 limit before I get around to relaxing it, I'll be happy to let them accuse me of false advertising. :)[/quote] |
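The low-32-bits masking described above is easy to check numerically. A pure-Python sketch (the naive is_prime here is a simple stand-in for Mlucas's primality check, not a copy of it; the two exponents are those quoted in the post):

```python
def is_prime(n: int) -> bool:
    """Naive trial division -- adequate for these small low-32-bit values."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The two exponents from the post above, both > 2^32:
p_ok, p_bad = 8589934693, 8937021689

low32 = lambda p: p & 0xFFFFFFFF

# The buggy clause ended up testing is_prime(low 32 bits of p) even for p > 2^32:
assert low32(p_ok) == 101 and is_prime(101)   # prime by luck, so the bug was masked
assert low32(p_bad) == 347087097              # divisible by 3...
assert not is_prime(low32(p_bad))             # ...so the bogus "must be prime!" error fires
```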
[QUOTE=ewmayer;599672]Ken PMed me with some questions and examples of issues he hit for M(p) with p > 2^32 - for the benefit of any other users wanting to play with such stuff, copy of my reply to him:[/QUOTE]
"Should anyone decide to manually tweak the above to allow LL/PRP to iter > 10^6 and start such a run which hits the 2^32 limit before I get around to relaxing it, I'll be happy to let them accuse me of false advertising. :)" Yeah, you're pretty safe there, since at an estimated 88.5 years to 2[SUP]32[/SUP] iterations on 256M fft length on one of my "faster" test systems, we'd potentially need to leave instructions for our heirs, and theirs! (Other than your source code that is.) |
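As a quick check of the arithmetic behind that 88.5-year figure (assuming a Julian year of 365.25 days; the per-iteration time below is derived from the quoted estimate, not a measured timing):

```python
SECONDS_PER_YEAR = 365.25 * 86400  # Julian year, an assumption

years, iters = 88.5, 2**32
ms_per_iter = years * SECONDS_PER_YEAR / iters * 1000

# ~650 ms/iter at 256M FFT length on that test system:
assert 640 < ms_per_iter < 660
```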
I am trying to get the [C]pm1[/C] standalone binary to build on Ubuntu. Getting the following errors:
[CODE]$ clang -c -DPM1_STANDALONE -O3 pm1.c
pm1.c:34:3: warning: Building pm1.c in PM1_STANDALONE mode. [-W#warnings]
#warning Building pm1.c in PM1_STANDALONE mode.
 ^
pm1.c:898:4: warning: Building pm1_stage2() in standalone (modmul-counting) mode! [-W#warnings]
 #warning Building pm1_stage2() in standalone (modmul-counting) mode!
  ^
pm1.c:1004:2: error: use of undeclared identifier 'dtmp'; did you mean 'tmp'?
        dtmp = mlucas_getOptVal(MLUCAS_INI_FILE,"InterimGCD");  // Any failure-to-find-or-parse can be checked for via isNaN(dtmp)
        ^~~~
        tmp
pm1.c:956:9: note: 'tmp' declared here
        uint64 tmp,q,q0,q1,q2, qlo = 0ull,qhi, reloc_start, pinv64 = 0ull;
               ^
pm1.c:1005:5: error: use of undeclared identifier 'dtmp'; did you mean 'tmp'?
        if(dtmp == 0) {
           ^~~~
           tmp
pm1.c:956:9: note: 'tmp' declared here
        uint64 tmp,q,q0,q1,q2, qlo = 0ull,qhi, reloc_start, pinv64 = 0ull;
               ^
pm1.c:1008:3: error: use of undeclared identifier 'interim_gcd'
                interim_gcd = 0;
                ^
pm1.c:1715:2: error: use of undeclared identifier 'q_old_10M'
        q_old_10M = (uint32)(q0 * inv10m);
        ^
pm1.c:1715:28: error: use of undeclared identifier 'inv10m'
        q_old_10M = (uint32)(q0 * inv10m);[/CODE] Am I missing some necessary [C]-D[/C] flag, or is something else wrong? For reference I have: [CODE]$ clang --version
Debian clang version 13.0.1-+rc1-1~exp4
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin[/CODE] |
[QUOTE=mathwiz;601266]I am trying to get the [C]pm1[/C] standalone binary to build on Ubuntu. Getting the following errors:[/QUOTE]
That's a developer-only flag I used to help gauge performance of various stage 2 prime-pairing-related settings during my algorithm development work, by way of counting modmuls without actually doing them. Looks like since I last turned it on, other parts of the code have changed. There is no "p-1 standalone" mode for the actual user build - if p-1 is all you want to do, you use the regular build as described on the README webpage ('bash makemake.sh' inside the unzipped source tarball) and simply restrict the assignment types to Pminus1 and/or Pfactor ones. |
In case anyone did not see [URL="https://www.mersenneforum.org/showthread.php?t=27648"]this thread[/URL], @Prime95 updated the PrimeNet API to return 1.3 instead of 2 for the [C]ll_tests_saved_if_factor_found[/C] parameter of P-1 assignments. This unfortunately broke our PrimeNet script, as it uses a regular expression to parse the assignment lines, which was expecting an integer. However, I just pushed a simple update to fix it. See [URL="https://github.com/tdulcet/Distributed-Computing-Scripts/commit/606956261ba97573ea280f6c3d86a977c5a06002#diff-1557cc1ac12b10f59513e25db23a4c33b47f9348943ab2f4ead78b46282c54c4"]here[/URL] for the full changes.
Anyone who is using our PrimeNet script to get P-1 assignments (worktype 4) will need to update as soon as possible. The PrimeNet script ignores lines in the worktodo file that it cannot parse, so this issue will cause it to get a new assignment for every run/worker every 6 hours by default... Linux users could just run this from their [C]mlucas_v20.1.1[/C] directory to update: [CODE]killall python3 rm primenet.py wget https://raw.github.com/tdulcet/Distributed-Computing-Scripts/master/primenet.py -nv[/CODE]@ewmayer - This may cause problems for Mlucas as well, as from looking at your code, it is expecting an unsigned integer. |
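For illustration, here is the kind of regex breakage involved. The patterns below are simplified stand-ins for the PrimeNet script's actual regular expressions, not copies of them:

```python
import re

# P-1 assignment line with the new fractional tests_saved field:
line = "PFactor=9A0F34B929B282C41C75682C9496D92B,1,2,109343671,-1,76,1.3"

# Old-style pattern expecting tests_saved to be a plain integer -- fails on "1.3":
old = re.match(r"PFactor=([0-9A-F]{32}),(\d+),(\d+),(\d+),(-?\d+),(\d+),(\d+)$", line)
assert old is None  # line would be ignored, triggering a fresh assignment fetch

# Accepting an optional fractional part fixes the parse:
new = re.match(r"PFactor=([0-9A-F]{32}),(\d+),(\d+),(\d+),(-?\d+),(\d+),(\d+(?:\.\d+)?)$", line)
assert new is not None and new.group(7) == "1.3"
```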
Indeed. Only 3 tests_saved values are supported: 0, 1, 2.
In Mlucas.c, dated 2021-10-18, starting at line 561, the following fragment assumes the PRP worktodo line will contain a tests_saved value of 0, 1 or 2 only: [CODE]tests_saved = strtoul(++char_addr, (char**)NULL, 10);
if(tests_saved > 2) {
	sprintf(cbuf, "ERROR: the specified tests_saved field [%u] should be 0,1 or 2!\n",tests_saved);
	fprintf(stderr,"%s",cbuf);[/CODE]Line 377 types tests_saved as uint32 and initializes it: tests_saved = 0. Starting at line 580, the code that splits an assignment with nonzero tests_saved into separate P-1 and PRP entries will fail, because it assumes tests_saved occupies a single character, not 3 as in "1.3", "0.9" or "1.1", or 4 as in "1.05": [CODE]// Copy up to the final (tests_saved) char of the assignment into cstr and append
tests_saved = 0;
// A properly formatted tests_saved field is 1 char wide and begins at the current value of char_addr:
i = char_addr - in_line;
strncpy(cstr,in_line, i);
cstr[i] = '0';	cstr[i+1] = '\0';
// Append the rest of the original assignment. If original lacked a linefeed, add one to the edited copy:
strcat(cstr,in_line + (i + 1));
if(cstr[strlen(cstr)-1] != '\n') { strcat(cstr,"\n"); }[/CODE] |
@tdulcet: I saw the thread in question and had a look at what happens with such fractional-test-saved assignment. Long story short - there should be no problem for Mlucas users.
Step-through debug of George's example Pfactor assignment did, however, turn up a bug in the parsing code for that assignment type, which leads to nonsense values for the TF_BITS and tests_saved variables. For instance, for the example assignment [i] PFactor=9A0F34B929B282C41C75682C9496D92B,1,2,109343671,-1,76,1.3 [/i] after the k,b,n,c data quartet ('1,2,109343671,-1' here) used to encode the modulus as k*b^n+c is (correctly) processed, the char* used as a placeholder for where-to-resume-parsing, instead of advancing to the ',76', gets reset to the '=9A0F...'. That results in both TF_BITS and tests_saved getting assigned the largest parseable decimal-int following the '=', which is 9 in this case. Again, not an issue, since the ensuing p-1 bounds-setting currently ignores those values, but fixed in my dev-branch version of the code, obviously.

I also tried swapping the leading '9' in the above AID from the left to the right end, yielding A0F34B929B282C41C75682C9496D92B9, to see what happens when the first char after the '=' is not a decimal digit; that causes TF_BITS and tests_saved to retain their default init values of 0, again not a problem.

Ken's code-snips which check that tests_saved <= 2 are for PRP-assignment parsing, which is not being changed to support such fractional values, at least as yet. 
But just to see what would happen with the current v20.1.1 release were such an assignment to occur, I modified the assignment type in the above example from PFactor to PRP: [i] PRP=A0F34B929B282C41C75682C9496D92B9,1,2,109343671,-1,76,1.3 [/i] The parsing code again ignored the '.3' fractional part, yielding tests_saved = 1, and the original assignment was overwritten by the following split form: [i] Pminus1=A0F34B929B282C41C75682C9496D92B9,1,2,109343671,-1,1200000,0 PRP=A0F34B929B282C41C75682C9496D92B9,1,2,109343671,-1,76,0.3 [/i] That is still potentially problematic, since the split-off PRP assignment's tests_saved field gets set not to the intended 0, but to 0 followed by the unmodified .3 fraction from the original 1.3. But once the p-1 assignment completes, assuming no factor is found, we move on to the PRP assignment, and the same parsing code again ignores the .3, yielding tests_saved = 0 and proceeding with the PRP test. For PRP assignments, the only problematic tests_saved values would be ones >= 3, with or without a nonzero fractional part. If anyone does encounter any crash or untowardness-in-practice resulting from such fractional tests_saved values, please post details here, as usual. |
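The "ignores the fractional part" behavior follows from C's strtoul(), which parses the leading run of decimal digits and stops at the first non-digit. A small Python model of that behavior (strtoul_prefix is my own illustrative helper, not Mlucas code):

```python
import re

def strtoul_prefix(s: str) -> int:
    """Model of C strtoul(s, NULL, 10): parse the leading run of digits, 0 if none."""
    m = re.match(r"\d+", s)
    return int(m.group()) if m else 0

assert strtoul_prefix("1.3") == 1   # why tests_saved "1.3" is read as 1
assert strtoul_prefix("0.3") == 0   # and the split-off "0.3" field as 0
assert strtoul_prefix("2") == 2
assert strtoul_prefix("n/a") == 0   # no leading digits at all
```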
Small patch uploaded. The bug it fixes only manifests for users doing PRP tests who have set CheckInterval = 1000, the minimal allowable value, in their mlucas.ini file. (Such a low value is not generally recommended, since on all but the slowest systems it wastes 1-2% of runtime doing excessively frequent savefile writes.)
As always, post #1 in this thread has updated MD5 value for the release tarball, and the README linked there has details about the issue addressed. |
Hey, I just finished a PRP on 113091931. For whatever reason, my primenet.py has stopped working so I have had to upload the result manually. Do I need to upload a proof file anywhere? Sorry if I am mistaken somehow.
|
[QUOTE=christian_;602818]Do I need to upload a proof file anywhere?[/QUOTE]Mlucas proof file generation has not been released yet, AFAIK, so, no, not if it was an Mlucas PRP result. A PRPDC will be necessary someday.
|
Small patch which fixes an infrequent (but run-killing when it occurs) issue with multithreaded runs of p-1 stage 2.
As always, post #1 in this thread has updated MD5 value for the release tarball, and the README linked there has details about the issue addressed, and download links. |
[QUOTE=ewmayer;592258]Those of you using tdulcet's mlucas.sh install script will want to grab the [URL="https://github.com/tdulcet/Distributed-Computing-Scripts/blob/master/mlucas.sh"]latest version[/URL], but check the SUM-field value and if it differs from the one (8d8851f5e383d8a74cf067192474256a) for the current-download of v20.1.1, manually change it to the md5 checksum listed for the latter at the README.[/QUOTE]
I just pushed the new md5sum to my repository. See [URL="https://github.com/tdulcet/Distributed-Computing-Scripts/commit/f33127b5b5496bd1e4a6217692439ad97cf5f60e"]here[/URL] for the change. |
[QUOTE=christian_;602818]Hey, I just finished a PRP on [M]113091931[/M].[/QUOTE]Result was confirmed via gpuowl PRP DC & proof gen & CERT.
|