C149 was cracked by ECM (p50 after ~400@11e6 curves - I'm lucky here).
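For reference, that kind of run with GMP-ECM looks roughly like the line below; this is illustrative only (the curve count and B1 come from the summary above, and <the C149> is a placeholder for the actual number), not the exact command used.

[CODE]# illustrative only: ~400 curves at B1 = 11e6 with GMP-ECM
# <the C149> stands in for the actual 149-digit composite
echo "<the C149>" | ecm -c 400 11e6
[/CODE]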
Now there's a C158 again; I can run polyselect for it, but not the whole GNFS job.
Poly for C158.
[CODE]# norm 2.829565e-15 alpha -9.674228 e 1.832e-12 rroots 1
n: 79933515103235306815732304856672491074680074688676425625899406692334007147016700671551909312123530653854370338778979049814698834222425825561192039057975774481
skew: 45476477.92
c0: 2104403158320717301966750387083391826024832
c1: 167898735083323752751067777774833704
c2: -1989821115747940481877005858
c3: -151701621080234560409
c4: 351786144456
c5: 36900
Y0: -4646655632492287543013586279001
Y1: 119797584633535873
rlim: 36800000
alim: 36800000
lpbr: 30
lpba: 30
mfbr: 60
mfba: 60
rlambda: 2.6
alambda: 2.6
[/CODE]
If no one jumps up to take this on soon, I might want to experiment with it. It's been a while since I did this type of single-number work, and I can't find (or remember) anything on choosing the proper siever. Would this be lasieve4I14e?
Or do I need to test? Maybe that's why I can't find anything...
158 digits is squarely in 14e territory, though I suggest 31LP rather than 30. I think NFS@home often runs one large-prime bit small because, for them, a data set 40% smaller is worth 4-8% extra computation; for an individual effort the tradeoff runs the other way, and the reduced sieving effort is worth more.
I believe 31 is faster than 30 at 155 digits, and 32 is faster than 31 at 166 digits. The transition to 15e is somewhere around 170 digits, well above the typical single-machine project. Basically, I run almost all my projects one LP bit higher than NFS@home chooses, with very good results. Something near 150M raw relations should allow you to build a matrix with target density 96 or 100. If the architecture you run LA on is older than Haswell, I'd set target density at 100-110, while LA on Haswell is fast enough that 90 or 96 will save you more sieve time than it costs you in LA (compared to, say, 104 or 110).
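To make that concrete, the target density ends up as an argument to msieve's filtering step, roughly like the line below (a sketch; the file names are placeholders, not anything from this job):

[CODE]# sketch: msieve filtering with an explicit target density
# c158.ini / c158.dat / c158.fb are placeholder input, relation, and factor-base files
msieve -v -i c158.ini -s c158.dat -nf c158.fb -l c158.log -nc1 "target_density=96"
[/CODE]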
For excessive detail on which siever:
[url]http://mersenneforum.org/showpost.php?p=426120&postcount=30[/url]
[QUOTE=VBCurtis;443039]158 digits is squarely in 14e territory, though I suggest 31LP rather than 30. I think NFS@home often runs one large-prime bit small because, for them, a data set 40% smaller is worth 4-8% extra computation; for an individual effort the tradeoff runs the other way, and the reduced sieving effort is worth more.
I believe 31 is faster than 30 at 155 digits, and 32 is faster than 31 at 166 digits. The transition to 15e is somewhere around 170 digits, well above the typical single-machine project. Basically, I run almost all my projects one LP bit higher than NFS@home chooses, with very good results. Something near 150M raw relations should allow you to build a matrix with target density 96 or 100. If the architecture you run LA on is older than Haswell, I'd set target density at 100-110, while LA on Haswell is fast enough that 90 or 96 will save you more sieve time than it costs you in LA (compared to, say, 104 or 110).[/QUOTE]
I have several machines, all 64-bit multi-core, but not very new and I found my Team Sieving scripts on several. I started some of them last night to see what I would have this morning. I used the original poly and the scripts had been set to use siever 15 from the last time I ran it, so I left it. I currently have about 7M unique relations. Would it be worth restarting from scratch, or should I just let it go?
I guess I'll work this number and see if I can actually complete it.
[QUOTE=EdH;443053]I have several machines, all 64-bit multi-core, but not very new and I found my Team Sieving scripts on several. I started some of them last night to see what I would have this morning. I used the original poly and the scripts had been set to use siever 15 from the last time I ran it, so I left it. I currently have about 7M unique relations. Would it be worth restarting from scratch, or should I just let it go?
I guess I'll work this number and see if I can actually complete it.[/QUOTE]
You can switch from 15e to 14e midjob. 15e searches a larger region and will have found more relations than 14e. These should be useful relations in the postprocessing. It isn't like you are reducing the large prime bound.
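For reference, each siever assignment is just a range of special-q, so switching binaries between ranges is painless. A 14e work unit looks roughly like the line below (a sketch; the binary name depends on your build, and the job file name and q-range are made up for illustration):

[CODE]# sketch: one 14e work unit on the algebraic side
# c158.job, the starting special-q, and the range length are placeholders
gnfs-lasieve4I14e -v -a c158.job -f 40000000 -c 10000 -o rels_40000000.out
[/CODE]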
[QUOTE=henryzz;443054]You can switch from 15e to 14e midjob. 15e searches a larger region and will have found more relations than 14e. These should be useful relations in the postprocessing. It isn't like you are reducing the large prime bound.[/QUOTE]
I have swapped to 14e and some of the machines are already running with it. The rest should swap over when they finish their current assignments. How much RAM will I need for the LA step? (I'm actually thinking about resurrecting my OpenMPI setup for that - nah, probably not... at least for now...) Thanks to both VBCurtis and henryzz!
Well, that depends on how far you oversieve, what target density you choose, and (of course) some luck. If you have to use a 4GB system, you might need some extra relations to get the matrix small enough.
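For what it's worth, the memory all goes to the matrix that msieve builds and solves in the LA stage, which you can spread across cores with the thread flag - a sketch, with an arbitrary thread count:

[CODE]# sketch: msieve linear algebra on one machine, 4 threads
# peak RAM here is driven by the size of the matrix
msieve -v -nc2 -t 4
[/CODE]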
[QUOTE=VBCurtis;443093]Well, that depends on how far you oversieve, what target density you choose, and (of course) some luck. If you have to use a 4GB system, you might need some extra relations to get the matrix small enough.[/QUOTE]
Hmmm... I have some 4GB machines... And one 6GB. But I'm not sure about the 6GB right now. The CPU temps are not right - they're about ten degrees C apart and hovering around 65-75°C. I've repasted the CPU twice with no change. Maybe I will have to set up a miniature cluster again... Thanks...
Edit: I think I have it figured out.
Sorry for the following, but my memory is failing me badly (or is it actually failing me very well?). I can't find any notes, and it wasn't clear to me from the readmes. [strike]How do I manually invoke msieve when I finally have some relations to try? It seems like I need to make some other files and then run -nc1, etc.[/strike] I did set up a two-machine cluster to help with the LA. I think I can get that running.
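For anyone who lands here later with the same struck-out question: the manual invocation I had in mind looks roughly like this (a sketch; the file names are placeholders, and it assumes the number, polynomial, and relations are already in the .ini, .fb, and .dat files):

[CODE]# sketch: manual msieve postprocessing (placeholder file names)
msieve -v -i c158.ini -nf c158.fb -s c158.dat -l c158.log -nc1        # filtering
msieve -v -i c158.ini -nf c158.fb -s c158.dat -l c158.log -nc2 -t 4   # linear algebra
msieve -v -i c158.ini -nf c158.fb -s c158.dat -l c158.log -nc3        # square root
[/CODE]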