[QUOTE=RichD;496707]What did filtering finally say was the total unique relations? I've had tens of thousands of these but still manage to have enough good relations out of the remaining 200M+ to build a matrix.
I still think someone is hacking the BOINC batch files and submitting previous results for a different job to "pad" their stats. No one seems to be able to identify it. Here is a related [url=https://www.mersenneforum.org/showpost.php?p=432038&postcount=5]post[/url] to getting tons of error -11.[/QUOTE] [url]https://www.mersenneforum.org/showpost.php?p=493961&postcount=2[/url]
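As an aside on the filtering step RichD mentions: duplicate removal in filtering keys on the leading a,b pair of each relation line, so bad or replayed submissions mostly show up as duplicates or parse errors rather than wrecking the run. A toy stdlib-Python sketch of that idea (the sample lines are made up for illustration, not real relations):

```python
def count_unique(relations):
    """Return (total, unique) counts for an iterable of relation lines.

    GGNFS/msieve relation lines begin with "a,b:", so the a,b prefix
    identifies a relation; exact duplicates are counted once.
    """
    seen = set()
    total = 0
    for line in relations:
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blanks and comment lines
        total += 1
        ab = line.split(':', 1)[0]  # the "a,b" prefix
        seen.add(ab)
    return total, len(seen)

# Made-up toy lines; only the a,b prefix matters for dedup:
sample = [
    "13,1:3:b",
    "13,1:3:b",      # exact duplicate, removed by filtering
    "-7,2:2,3:b",
]
print(count_unique(sample))  # -> (3, 2)
```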
11611_144 factored
[QUOTE=richs;496109]Reserving 11611_144 on 14e.[/QUOTE]
[CODE]p72 factor: 112045998427219090871953040117667258301569847152699559565607504451497301
p86 factor: 63264787949930725243445414832953024183897855766018170835974420877209157378834269266207[/CODE] Approximately 47.1 hours on 2 threads of a Core i3-2310M with 4 GB memory for a 5.70M matrix at TD = 70. Log attached and also at [URL="https://pastebin.com/KsDS7zDh"]https://pastebin.com/KsDS7zDh[/URL]. Factors added to FDB.
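For anyone who wants to double-check reported factors locally before trusting them, a stdlib-only probable-prime check is enough. This is just a Miller-Rabin sketch, not what msieve itself does; the two factors are the ones from the post above:

```python
import random

def is_probable_prime(n, rounds=16):
    """Miller-Rabin probable-prime test using only the stdlib."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n-1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is composite
    return True

p72 = 112045998427219090871953040117667258301569847152699559565607504451497301
p86 = 63264787949930725243445414832953024183897855766018170835974420877209157378834269266207

assert is_probable_prime(p72) and is_probable_prime(p86)
print(len(str(p72)), len(str(p86)), len(str(p72 * p86)))  # -> 72 86 157
```

So the product is a 157-digit cofactor, consistent with the p72/p86 labels.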
In a poly file, the deg: line is indeed superfluous, but I usually leave it in when it's there.
I've downloaded the .poly/.fb/.ini files from 50009_239 and run a job with good old factMsieve.pl - it goes well and produces valid relations. Let's try queueing a slightly modified poly (the deg and lss parameters were removed, and m was changed to Y0/Y1 as in the other polys) for a small range (say, 20M-30M) and see if it helps.
[CODE]n: 500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009
skew: 1.62
c6: 1
c0: 18
Y0: -10000000000000000000000000000000000000000
Y1: 1
type: snfs
rlim: 134000000
alim: 134000000
lpbr: 31
lpba: 31
mfbr: 62
mfba: 62
rlambda: 2.7
alambda: 2.7[/CODE]
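As a quick sanity check that the modified poly still targets the same number: with Y1 = 1 and Y0 = -m, SNFS requires n to divide c6*m^6 + c0. A short stdlib-Python sketch, assuming I'm reading the n: line correctly as 5*10^239 + 9 (240 digits):

```python
# Check n | c6*m^6 + c0 for the modified 50009_239 poly.
n = 5 * 10**239 + 9    # my reading of the 240-digit n: line
m = 10**40             # -Y0, with Y1 = 1
value = 1 * m**6 + 18  # c6*m^6 + c0 = 10^240 + 18
assert value % n == 0
print(value // n)      # -> 2, i.e. f(m) = 2n
```

So dropping deg/lss and switching m to Y0/Y1 leaves the algebraic relationship intact.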
Well... Multiple near-repdigit numbers have used "m:" lines over time just fine on both RSALS and NFS@Home, so there's something else going on :smile:
I haven't yet managed to convince myself that it's time to resieve, but maybe I'm just dense. In the DB, I noticed that there was another, older entry mistakenly named 50009_239 (should have been "50009_238"), which reads: [code](1131,1489406743,1489407441,1489407441,'50009_239','50009_239',4,0,0,'Near-repdigit','SNFS','',
'n: 50000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009\r\nm: 5000000000000000000000000000000000000000\r\ndeg: 6\r\nc6: 16\r\nc0: 45\r\nskew: 1.19\r\ntype: snfs\r\nlss: 1\r\nrlim: 120000000\r\nalim: 120000000\r\nlpbr: 30\r\nlpba: 30\r\nmfbr: 60\r\nmfba: 60\r\nrlambda: 2.7\r\nalambda: 2.7\r\n',
20000000,120000000,120000000,'239.40',30,'Dmitry Domanov: p55 * p75 * p110',0,2,6248,'',6285215001,109771593,'HGSCp9xB')[/code] [url]https://pastebin.com/HGSCp9xB[/url] matches [url]http://stdkmd.com/nrr/c.cgi?q=50009_238[/url].
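For what it's worth, the poly stored in that misnamed row checks out against the 238 number, not the 239 one: with m = 5*10^39, c6 = 16, and c0 = 45, SNFS needs n | c6*m^6 + c0. A quick stdlib-Python sketch, assuming the de-wrapped n in the row is the 239-digit value 5*10^238 + 9:

```python
# Verify the misnamed DB row's poly against its own (de-wrapped) n.
n = 5 * 10**238 + 9      # my reading of the row's 239-digit n value
m = 5 * 10**39           # the m: line in the row
value = 16 * m**6 + 45   # c6*m^6 + c0
assert value % n == 0
print(value // n)        # -> 5, i.e. f(m) = 5n
```

So the row's contents really do belong to 50009_238.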
Lionel, that's the point! We've sieved the 50009_238 job once again; I checked this by running the current dataset with the 50009_238 poly:
[CODE]Msieve v. 1.54 (SVN 1025)
Wed Sep 26 11:43:59 2018
random seeds: 2bde25a4 300ea441
factoring 50000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009 (239 digits)
no P-1/P+1/ECM available, skipping
commencing number field sieve (239-digit input)
R0: -5000000000000000000000000000000000000000
R1: 1
A0: 45
A1: 0
A2: 0
A3: 0
A4: 0
A5: 0
A6: 16
skew 1.62, size 6.797e-12, alpha 0.213, combined = 4.894e-13 rroots = 0
commencing relation filtering
estimated available RAM is 7631.3 MB
commencing duplicate removal, pass 1
error -1 reading relation 9144250
read 10M relations
error -9 reading relation 11675596
error -1 reading relation 19265553
error -5 reading relation 19400233
read 20M relations
error -9 reading relation 24810475
error -15 reading relation 24810782
error -15 reading relation 26006753
...[/CODE]So it is necessary to resieve the whole 50009_239 job.
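To illustrate why relations sieved for one poly fail verification under a different one, here is a toy stdlib-Python sketch. The numbers are made up for illustration (not relations from this job), and it checks only a simplified rational-side condition: |a*Y1 + b*Y0| must equal the product of the listed rational primes.

```python
from functools import reduce

def rational_side_ok(a, b, primes, Y0, Y1):
    """Simplified check: do the listed rational primes reproduce the
    rational norm |a*Y1 + b*Y0| of the relation (a, b)?"""
    norm = abs(a * Y1 + b * Y0)
    return norm == reduce(lambda x, y: x * y, primes, 1)

# A toy relation built for m = 10 (i.e. Y0 = -10, Y1 = 1) ...
assert rational_side_ok(13, 1, [3], Y0=-10, Y1=1)      # |13 - 10| = 3
# ... fails to verify under a different poly with m = 12:
assert not rational_side_ok(13, 1, [3], Y0=-12, Y1=1)  # |13 - 12| = 1 != 3
```

A dataset full of such mismatches is what the stream of read errors above suggests, hence the resieve.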
Meh. In this case, some files were not regenerated, and IIRC this isn't the first time it has happened. Too bad we need to resieve.
We probably want some way to detect or prevent this in the production versions, before the queue management improvements ( [url]https://www.mersenneforum.org/showthread.php?t=21356[/url] ), which have never been worked on, become available. It looks like the production code base has changes that are not in the internal, privately shared version derived from it over 2 years ago: a page linked from the management interface doesn't exist in the internal version...
[QUOTE=unconnected;496807]I've downloaded .poly/.fb/.ini files from 50009_239 and run job with good old factMsieve.pl - it goes well and produces legal relations. Let's try queue slightly modified poly (deg and lss params were removed, m changed to Y0/Y1 as in others polys) for small range (say, 20M-30M) and see if it helps.
[CODE]n: 500000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000009
skew: 1.62
c6: 1
c0: 18
Y0: -10000000000000000000000000000000000000000
Y1: 1
type: snfs
rlim: 134000000
alim: 134000000
lpbr: 31
lpba: 31
mfbr: 62
mfba: 62
rlambda: 2.7
alambda: 2.7[/CODE][/QUOTE] Well, I've test sieved this and it seems to work correctly. On 14e, the yield is > 2.7 at Q=20M. I'd be happy to enqueue this job as you describe over a small range, and then you can test the resulting .dat file. If it works, I can then increase the upper bound of Q to give the full result. I assume you want me to delete the current job entitled "50009_239", correct?
Yes, that's right. And please give the new job a unique name so it doesn't interfere with the erroneously named 50009_239 task in the database that Lionel mentioned.
[QUOTE=unconnected;496916]Yes, that's right. And please give the new job a unique name so it doesn't interfere with the erroneously named 50009_239 task in the database that Lionel mentioned.[/QUOTE]
Done.
I've just expanded the upper bound for 50009_239A from 30M to 100M.
Also, Greg sent me a copy of the latest production code.