Well, I deleted the 89999_243 relations from the RSALS server as soon as Tom posted the result. However, he used those relations afterwards for the TARGET_DENSITY experiments he has just posted about, so he may still have a copy even after those experiments.
[b]Status update[/b]: the number of WUs in progress has stayed steadily between 4000 and 5000 over the last week or so, which is a good thing. As usual, we're keeping the raw relations files until the complete factorization, so the server's disk is filling up, even though the growth is mitigated by recompression with pbzip2. I'm currently downloading 53_139_minus1-compressed-with-pbzip2.dat.bz2 and C242_125_86-compressed-with-pbzip2.dat.bz2 (both 30-bit large primes tasks, in the post-processing stage) to my computer. Deleting those from the server will free more than 13 GB.
To answer Jason's question, I suspect I was absent-mindedly running msieve -nc2 on one computer and msieve -ncr on another, writing to the same output file - that would explain the alternation of error-reading-relation lines and output from the linear algebra, and the fact that I didn't get any relation-reading errors on the several repeat runs for the TARGET_DENSITY experiment.
Oops
60001-247 = P47 * P200
[url]http://www.chiark.greenend.org.uk/~twomack/60001-247.mlog[/url]
That's a couple of CPU-years of sieving (Q=25M .. 225M, 222M raw relations, at about 0.3 CPU-seconds per relation on my hardware), plus a real-time month on my part, to get a factor that probably ought to have dropped out during ECM pre-testing (curves at 4e7 take twenty minutes on the same hardware, and 20k of them would have found it with 90% probability).
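A quick back-of-the-envelope check of those figures (a rough sketch in Python; the 0.3 s/relation, 20 minutes/curve and 20k-curve numbers are the ones quoted above, everything else is simple arithmetic):

```python
# Rough cost comparison between the sieving actually done and the ECM
# pre-testing that would likely have found the P47 first.
# All figures are taken from the post above; this is only an estimate.

CPU_YEAR = 365 * 24 * 3600  # CPU-seconds in one CPU-year

# Sieving: 222M raw relations at ~0.3 CPU-seconds each
sieve_seconds = 222e6 * 0.3
sieve_years = sieve_seconds / CPU_YEAR

# ECM: 20k curves at 4e7, ~20 minutes per curve,
# ~90% chance of finding a 47-digit factor
ecm_seconds = 20_000 * 20 * 60
ecm_years = ecm_seconds / CPU_YEAR

print(f"sieving: {sieve_years:.1f} CPU-years")
print(f"ECM:     {ecm_years:.2f} CPU-years (90% chance of the P47)")
```

So the missed ECM pre-test would have cost well under half of what the sieving did, which is the whole point of the complaint.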
Well, thanks Tom. This was the second-to-last 31-bit large primes task that was taking up a lot of space on the server, so we're seeing the light at the end of the tunnel.
But duh, this means that there was another ECM miss among the insufficiently ECM-ized integers that I fed the grid with when we got a surge of clients in early March :'(

[EDIT: I see that at nearly 223M raw relations, this one was somewhat oversieved, and could be used for further experiments on TARGET_DENSITY. I have slightly reduced oversieving since then; I'm now targeting 214-215M raw relations for 31-bit large primes tasks.]

As I wrote above, since then - more than a month before getting the results for the near-repdigit-related tasks ECM-ized to t45 - I have raised the RSALS ECM standards, [i]so this won't happen again in the future[/i]:
* all recent Aliquot composites have received ECM to at least 2/7 at LORIA;
* all OddPerfect integers have received t50 (wblipp is using nearly all of the OP quota on yoyo@home for RSALS);
* I ECM-ized C242_125_86 (XYYXF) to 8000 curves at B1=43e6, plus hundreds of curves at B1=11e7, i.e. slightly beyond t50;
* last but not least, Andrey of XYYXF pushed the two remaining RSALS reservations to t50 with yoyo@home, which completed the factorization of one of them - and thus avoided another ECM miss for RSALS.

To sum up: I'm getting closer and closer to the "2/9 of SNFS difficulty" recommendation, in order to use the (fluctuating) power of the grid more efficiently - but of course, this restricts the set of integers with which I can feed the grid to keep it busy enough :smile: Please bear with me for a little while longer :wink:
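For concreteness, the "2/9 of SNFS difficulty" rule of thumb works out as follows (a small sketch; the example difficulties are my own illustration, chosen so the results land near the t45/t50 levels mentioned above):

```python
# ECM pre-testing depth suggested by the "2/9 of SNFS difficulty"
# rule of thumb: pre-test to a t-level of roughly 2/9 of the SNFS
# difficulty expressed in decimal digits.

def suggested_ecm_depth(snfs_difficulty_digits):
    """Return the suggested ECM t-level (in digits)."""
    return 2 * snfs_difficulty_digits / 9

# Hypothetical example difficulties, not taken from the thread:
for difficulty in (202, 225, 247):
    print(f"SNFS difficulty {difficulty}: pre-test to ~t{suggested_ecm_depth(difficulty):.0f}")
```

A difficulty-247 task would thus warrant roughly t55 of pre-testing, which shows why stopping at t45 left room for misses.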
125_86 is done. Not a miss this time! :smile:
[CODE]prp64 factor: 3330422832790132204089998016244685741797960680366832018245775669
prp178 factor: 6496594728518001227096555253238374883003248586791767510452036060266914359229264866730284384278892170493524131830206026104775304432776935883454541025852968360372399933772506019143[/CODE]
Great, thanks :smile:
Now crossing fingers that our last potential strong ECM miss, 13333_247, isn't...
[QUOTE=frmky;217838]125_86 is done. Not a miss this time! :smile:
[CODE]prp64 factor: 3330422832790132204089998016244685741797960680366832018245775669
prp178 factor: 6496594728518001227096555253238374883003248586791767510452036060266914359229264866730284384278892170493524131830206026104775304432776935883454541025852968360372399933772506019143[/CODE][/QUOTE]Wow, that's a new record for XYYXF! :-) [url]http://xyyxf.at.tut.by/records.html#snfs[/url]
[QUOTE=debrouxl;217846]
Now crossing fingers that our last potential strong ECM miss, 13333_247, isn't...[/QUOTE] It isn't. :smile:
[CODE]prp59 factor: 87249439463317686306278126817198501218079480690148830679591
prp82 factor: 5679765354472201425140929676909637167619008713237391381467962527094500614971973921
prp106 factor: 2069675909056361501853777600970096153959677279407383140871559133287811109230729021458603923025287248790031[/CODE]
Thanks Greg, this completes our oldest task :smile:
2 ECM misses out of 5 insufficiently ECM-ized integers... it could have been better, but it could also have been worse.
Status update: thanks to help from lots of people, our list of integers queued for post-processing is currently pretty small... this is a good thing! :smile:
A general question: [i]does GNFS tend to produce a higher proportion of duplicate relations than SNFS does[/i]? I'm asking because two Aliquot 29-bit large primes tasks recently sieved by RSALS and post-processed by Lionel Muller at LORIA turned out to be undersieved, which prompted me to raise the target number of raw relations for RSALS. Specifically, the older of the two tasks had 52M raw relations (of which one third were duplicates; he had to produce 19M more relations!), and the more recent one had 57M raw relations (~15M duplicates; he had to produce at least 4M more relations).

So I made the RSALS grid produce ~60M raw relations for 127^97-1 and the other current 29-bit large primes tasks, all of which are SNFS. But judging by the difference between the number of relations and the number of ideals after the first removal pass, the number of clique removal passes, and the need to increase the weight of ideals twice before the matrix became dense enough, 127^97-1 was significantly oversieved...
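The two undersieved cases can be summarised with a little arithmetic (a sketch using only the figures quoted above; the "needed in total" column is simply raw + extra relations):

```python
# Duplicate fraction and total relations eventually needed for the two
# undersieved 29-bit large primes Aliquot jobs described above.

def duplicate_fraction(raw, duplicates):
    """Fraction of the raw relations that were duplicates."""
    return duplicates / raw

# (name, raw relations, duplicates, extra relations that had to be sieved)
jobs = [
    ("older task",  52e6, 52e6 / 3, 19e6),
    ("recent task", 57e6, 15e6,      4e6),
]

for name, raw, dups, extra in jobs:
    print(f"{name}: {duplicate_fraction(raw, dups):.0%} duplicates, "
          f"{(raw + extra) / 1e6:.0f}M raw relations needed in total")
```

In other words, the duplicate rates were roughly 33% and 26%, and the real requirement was around 71M and 61M raw relations, which is why ~60M became the new target.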
[quote=debrouxl;220050]
A general question: [I]does GNFS tend to produce a higher proportion of duplicate relations than SNFS does[/I]?[/quote]No obvious reason why it should. The only difference between SNFS and GNFS is that in the former it is possible to find polynomials with "small" coefficients. I can expand on what "small" means in this context if there is interest.

Paul