C321_149_146: Ready for SNFS
C321_149_146 has survived years of ECM @ B1=850M [url=http://www.rechenkraft.net/yoyo/download/download/stats/ecm/xy/RESULTS]by yoyo[/url], plus additional work by Ryan Propper at higher levels. Is it time to ask Greg to consider it for the 16e V5 SNFS queue?
Is this a spot where Ryan might do the sieving and Greg the matrix?
Should we be test-sieving for params? 34LP?
[QUOTE=VBCurtis;477768]Is this a spot where Ryan might do the sieving and Greg the matrix?
Should we be test-sieving for params? 34LP?[/QUOTE] Ryan is out of the game for a while. This would be an all-16e V5 effort, if Greg chooses to take it on. ETA: the SNFS poly is straightforward, but I cannot test-sieve above 33 bits.
Forgive my earlier terse reply but I was running errands. Here are the best SNFS polynomials I could find, from hand calcs to YAFU to snfspoly.
[code]
n: 193722964580742095193679112018400904880033366451349783906408872285588397308741963455534137488539382992365208104296998636547195946612823276455827294598394515413146914770645305621675529776146314670471095698267793278948438037457917636834231422611091488453785250279215613653077320648102696969467657304169791933238400575470823
# 149^146+146^149, difficulty: 324.65, anorm: 3.60e+039, rnorm: 3.18e+059
# scaled difficulty: 327.98, suggest sieving rational side
# size = 1.329e-016, alpha = 0.000, combined = 1.268e-016, rroots = 0
type: snfs
size: 324
skew: 12.1652
c6: 1
c0: 3241346
Y1: -14337401323057856516844082686459887664364975087706401
Y0: 1284758189202133190947305919985473587873306082726117376
rlim: 536000000
alim: 536000000
lpbr: 33
lpba: 33
mfbr: 66
mfba: 66
rlambda: 3.0
alambda: 3.0
------------------------------------------------------------
skew: 12.0014
c6: 22201
c0: 66338290976
Y1: 8799713624672145143474698082092284848447301936480256
Y0: -14337401323057856516844082686459887664364975087706401
------------------------------------------------------------
skew: 7.371
c5: 1
c0: 21754
Y1: 1052935537988784885549647673691227628754695148931129696191869349
Y0: -85228662589089972900969749203283720565502083938995901803817598976
------------------------------------------------------------
skew: 19.8082
c5: 149
c0: 454371856
Y1: 583757962938972417129929789063587127160973177664355491806969856
Y0: -1052935537988784885549647673691227628754695148931129696191869349
[/code]
At SNFS 324, lpb is at least 34, though I can’t run above 33. To my mind the quintics are unlikely to yield well enough to be considered, but what do I know.
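The sextic above can be sanity-checked mechanically. Reading the coefficients, c0 = 3241346 = 146·149^2, and the digits of Y1 and Y0 appear (my own reverse-engineering, not stated in the post) to be -149^24 and 146^25, so the shared root is m = 146^25/149^24 mod n, and m^6 + c0 = 146·(146^149 + 149^146)/149^144 vanishes mod n. A quick Python sketch:

```python
# Sanity-check the degree-6 SNFS polynomial for N = 149^146 + 146^149.
# Assumed construction (reverse-engineered from the posted coefficients):
#   c6 = 1, c0 = 146 * 149^2, Y1 = -149^24, Y0 = 146^25
# so the common root is m = 146^25 / 149^24 mod N, and
#   m^6 + c0 = 146 * (146^149 + 149^146) / 149^144 = 0 (mod N).
N = 149**146 + 146**149

c0 = 3241346
assert c0 == 146 * 149**2

Y1 = -14337401323057856516844082686459887664364975087706401
Y0 = 1284758189202133190947305919985473587873306082726117376
assert Y1 == -(149**24) and Y0 == 146**25

# 149 is invertible mod N, since N mod 149 = 146^149 mod 149 is nonzero
m = (-Y0 * pow(Y1, -1, N)) % N
assert (pow(m, 6, N) + c0) % N == 0
print("sextic verified against n")
```

The same identity, shifted by one power of 146/149, accounts for the quintic pair as well.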
NFS@Home has never tried an SNFS 324.
True, though 2,1061- was an SNFS 319.7, factored back in 2012. C321_149_146 is a very difficult factoring job, I admit.
[QUOTE=swellman;477789]True, though 2,1061- was a SNFS 319.7 factored back in 2012. C321_149_146 is a very difficult factoring job I admit.[/QUOTE]
Can you ask Greg if it is possible? If so, note he still has more than 10 Cunningham composites to post-process, and when this composite is added to the 16e V5 queue I advise people not to queue 14e and 15e composites, since those can easily be sieved in under two weeks. I will also help and try to bring more sievers from other teams. Over the last three weeks, when the 14e/15e queues were empty, more than 120k 16e V5 WUs were processed daily.
What's the hurry? I mean, why alter any other work/queues/teams, when there's a large backlog waiting for postprocessing anyway? Who benefits if this is sieved in two weeks rather than six, if we're going to wait until April for a machine capable of LA to open up?
This job will be about twice as hard as 2,1061-, perhaps a bit worse because the poly coeffs are larger. I'll have a look at 34LP and 34/35 later this week and report my estimates.
@pinhodecarlos, I’ve sent a note; let’s wait for Greg’s response. He may have no interest in such an extended effort. Thank you for offering to rally more sievers.
@VBCurtis, thank you for poking at the polys. I tried to run comparative testing using 16e and lpb=33, but performance was too far down in the mud to be conclusive.
Sean, any updates with regards to your last message? Let me know if I need to escalate.
As for the LA backlog, I can guarantee it will be cleaned up, as you can see from the status page.
No word from Greg on my query. I just assumed he was busy/not interested in pursuing this job.
[QUOTE=swellman;480075]No word from Greg on my query. I just assumed he was busy/not interested in pursuing this job.[/QUOTE]
Just left him a message; hope he comes back here to the thread. Hey Curtis, any estimate figures yet? Edit: he has seen the thread....
I don't see any records for this; I think I decided to wait for an indication of interest from Greg, since Ryan is not available presently. I'll have some time this weekend for test-sieving.
This candidate is difficult enough that it may be worth using CADO with I = 17 on the smallest Q values, as a team sieve outside of NFS@Home. Of course, we'd need quite a few cores available to CADO to make a dent. Are parameters handy for the largest previous NFS@Home factorizations? I'd benefit from a starting point for testing, particularly for alim/rlim. Does 536M/800M seem reasonable?

Edit: I don't quite grasp poly creation for this form. Is a septic possible?
I don’t have access to the 16e V5 data server but I can ask Greg.
I was looking at previous jobs: for 2,1285- (GNFS 218), Greg queued a sieve range from Q=260M to 3700M. I’ll dig around for the alim and rlim values.
I won't have any room for it for at least a few months. I'll revisit it then.
[QUOTE=frmky;480144]I won't have any room for it for at least a few months. I'll revisit it then.[/QUOTE]
I suppose for LA, but my brain is already failing at this hour to understand that expression (capacity). Can you elaborate?
I spent a few hours deciding that the regular 16e siever would not be able to factor this number; as far as I know, it can only sieve Q up to 2G (not sure if binary or decimal G). 2000M special-Q range is not sufficient to gather enough relations, no matter the large-prime settings.
Using 16f with -d 1 (the setting to stick to prime special Q), I've done just a bit of testing. I haven't determined whether 3 large primes should be used on both sides, or just on the rational side. I think 35-bit LP is the highest we can safely go with msieve, as 36 might require more than 4G raw relations. I think a 35/36 hybrid needs fewer than 4G, and I'll test that against straight 35 as well. Initial tests suggest 35/99 on the rational side and 36/71 on the algebraic side might be best, or perhaps 35/69. Still to do is exploring exactly which mfbr is best with lpbr; I guessed 99 to get started, but anything from 97 to 102 could be best.

A short test (7 samples across the Q range) with 35LP and 99/69 mfb's suggests sieving 20M to 2100M should produce roughly 2600M raw relations. Yield above Q=2000M is roughly 0.8 to 1.0, so if 3000M relations are needed we can get there by sieving to 2500-2600M. A fast desktop should manage about 1 sec/rel with these settings. I'll have more to report about yields of various mfb settings when I have more time.
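As a back-of-envelope check, those numbers hang together (all inputs below are the figures from the test sieve above; "M" = 1e6):

```python
# Back-of-envelope check of the relation estimates in the post above.
q_start, q_end = 20e6, 2100e6
raw_rels = 2600e6                          # estimated relations over that range
avg_yield = raw_rels / (q_end - q_start)   # ~1.25 relations per unit of Q

target = 3000e6
shortfall = target - raw_rels              # 400M more relations needed
# yield above Q=2000M was observed at roughly 0.8-1.0 rels/Q:
extra_q = [shortfall / y for y in (1.0, 0.8)]
endpoints = [q_end + dq for dq in extra_q]
print([e / 1e6 for e in endpoints])        # roughly 2500M to 2600M
```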
Carlos emailed frmky and cc'ed me about the 16e NFS@home queue, and Greg explained that for reasons of client memory use and server space use he prefers to go no higher than 34LP for NFS@home projects.
So, we face a choice: wait for Greg to have interest and an available slot to run this number (with LA an open question) at 34LP and small lims, or try some sort of nutso team sieve with larger params. With my previous estimate of 2.5G thread-seconds of sieving, I don't think there's enough interest in a team sieve. I am willing to contribute ~100M thread-seconds worth of CADO 17e time on the smallest Q, which may be something like 50% more effective than GGNFS 16e; that's still only 5 or 6% of the required sieving, and LA would still be an open question.

A team sieve has the possible appeal of being among the first to use large LP bounds (bigger than 34) to complete a factorization; only the CADO group has done so previously, for their published papers. We'd be the first "users" to do so, whether we use CADO or GGNFS (or both, more likely) to sieve.

Before Greg's email, it hadn't occurred to me to check GGNFS memory usage. With rlim=800M and alim=1076M, 16f uses 2.5GB per process! No wonder Greg restricts alim and rlim much more tightly than my testing did. My next test sieve will be with alim=rlim=536M and 34LP to see how yield looks (and thus estimate the Q range needed to get 1600-1800M relations). Not sure 536M is gentle enough on RAM, but I'll report memory usage as well.
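For what it's worth, the "5 or 6%" figure follows directly, on my (assumed) reading that both totals are in GGNFS-16e-equivalent thread-seconds:

```python
# Rough arithmetic behind the "5 or 6%" contribution estimate above.
# Assumption: both totals are in GGNFS-16e-equivalent thread-seconds.
total_sieving = 2.5e9    # estimated total sieving effort, thread-seconds
cado_offer = 100e6       # offered CADO 17e time, thread-seconds
effectiveness = 1.5      # CADO 17e assumed ~50% more effective per thread-second

share = cado_offer * effectiveness / total_sieving
print(f"{share:.0%}")    # 6%
```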
Whoa, this is a BIG job.
I thought about offering an i5 or two but realized 100M thread-seconds would be a little over nine months for a single Core i5 box. I have 16GB on each machine, so a 2.5GB process is not a problem, but I would be removing them from the smallest NFS@Home post-processing queue during that time. Maybe not a good idea.

I didn’t realize a 34LP siever is in the public domain. (Shameless plug coming.) OPN has had a Most Wanted number for well over a decade at SNFS-301. Ryan tried his hardest to get enough relations as a 33LP job but failed. Perhaps this could be a stepping stone.

On a side note, I thought about upgrading to a Core i9. I have no experience with water-cooled systems. It would be nice to have a 12-, 14-, or 16-core box, but it would need 32 or 64GB of DDR4 memory. That may still not be enough for the bigger LA jobs.

Edit: The SNFS-301 is an octic by degree halving. Traditionally it would be an SNFS-338 with degree 6.
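The "little over nine months" figure checks out under the (assumed) reading of a quad-core, 4-thread i5 sieving around the clock:

```python
# Check of the "a little over nine months" estimate for one Core i5.
# Assumption: a 4-core/4-thread i5 running 24/7.
thread_seconds = 100e6
threads = 4
wall_days = thread_seconds / threads / 86400   # ~289 days
months = wall_days / 30.44                     # ~9.5 months
print(round(wall_days), round(months, 1))
```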
Henryzz has compiled regular 16e with the 33LP limit removed; it was as simple as removing a flag check, according to his posts. I believe he also sent me 15e with 33LP removed, which I plan to try on an SNFS 900-bit number as 34LP one of these days.
I got the 16f siever from an old thread around the time frmky expanded NFS@Home to use it. Both are Windows exe's; I haven't located a Linux copy of either one yet, or if I have, I've forgotten about it. So I do my test-sieving for 34LP+ on a Broadwell ultrabook, my only Windows machine.

CADO supports any large-prime size; their factorization of RSA-768 used mfba = 140 and four 40-bit large primes! mfbr was 110, and three 40-bit primes were admitted on the rational side.

Edit: I noticed that in my initial test sieve mentioned above, I failed at mental math and used 1076M as 2^30 rather than 1072M. So I managed to pick alim just *over* 2^30, which can't be good with GGNFS. I doubt I'll repeat the test sieve; NFS@Home is likely the only path to getting this done, so I'd rather put effort into smaller lims. I tested alim=rlim=536M, and memory use is 1550MB with 16f. Now I see why Greg suggested 268M for each lim!
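A one-liner confirms the mental-math slip: 2^30 = 1073741824, which in decimal millions sits between 1072M and 1076M, and the smaller lims mentioned are essentially powers of two (268M ~ 2^28, 536M ~ 2^29):

```python
# The mental-math slip above: 2^30 in decimal millions.
# 1076M is just over 2^30, while 1072M is just under it.
assert 1072e6 < 2**30 < 1076e6
print(2**30)                  # 1073741824, i.e. ~1073.7M

# the suggested lims are (approximately) clean powers of two:
assert 2**28 == 268435456     # ~268M
assert 2**29 == 536870912     # ~536M
```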
[QUOTE=RichD;480533]
I didn’t realize a 34LP siever is in the public domain. (Shameless plug coming.) OPN has had a Most Wanted number for well over a decade at SNFS-301. Ryan tried his hardest to get enough relations as a 33LP job but failed. Perhaps this could be a stepping stone. [/QUOTE] Do you have the relation set Ryan generated? Perhaps we can finish it with CADO, since the CADO tools do well with large Q values. We can just keep sieving above the last Q Ryan completed. If the relation set is almost sufficient (say, 800M or more raw 33LP relations), we should be able to finish this. My main concern is that CADO seems to demand a lot of memory for the LA phase, and my biggest machine only has 32GB. We may need to learn how to convert CADO-format relations to msieve format, and only use CADO for sieving.
[QUOTE=VBCurtis;481586]Do you have the relation set Ryan generated? Perhaps we can finish it with CADO, since the CADO tools do well with large Q values. We can just keep sieving above the last Q Ryan completed.
If the relation set is almost sufficient (say, 800M or more raw 33LP relations), we should be able to finish this. My main concern is that CADO seems to demand a lot of memory for the LA phase, and my biggest machine only has 32GB. We may need to learn how to convert CADO-format relations to msieve format, and only use CADO for sieving.[/QUOTE] I don't think he kept them. He ran the sievers until the yield dropped to near zero. This was six-to-eight-plus months ago. I haven't heard from him in quite a while.