mersenneforum.org C230 snfs almost stuck - please comment

2009-08-12, 18:14  #1
Syd
Sep 2008, Krefeld, Germany

Hi, I'm currently sieving 8*10^230-1. The parameters are:

c6: 25
c0: -2
skew: 0.66
type: snfs
rlim: 50000000
alim: 50000000
lpbr: 30
lpba: 30
mfbr: 59
mfba: 59
rlambda: 2.7
alambda: 2.7

Sieving is on the rational side only. So far I have done:

Q=1M to 15M: 14e siever, ~23M relations
Q=15M to 25M: 15e siever, ~40M relations

I'm currently working at Q=26M, about 131 CPU days so far, and still far from a usable matrix. The problem is the duplicate rate: the last 8M relations contained 6M duplicates, out of a total of 10M duplicates. I'm quite sure the sieving ranges don't overlap. At this rate it will take forever to complete the job. Any hints/suggestions? Thanks in advance.
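A quick sanity check (my addition, not part of the thread): the c6/c0 pair above is the SNFS polynomial 25x^6 - 2 with common root x = 2*10^38, since 25*(2*10^38)^6 - 2 = 16*10^230 - 2 = 2*(8*10^230 - 1), and the quoted skew matches (|c0|/c6)^(1/6).

```python
# Sanity check of the posted SNFS parameters (my own verification):
# the polynomial 25*x^6 - 2 evaluated at x = 2*10^38 is 2*N,
# so N = 8*10^230 - 1 divides it, as SNFS requires.
N = 8 * 10**230 - 1
x = 2 * 10**38                 # common root shared with the rational side
poly = 25 * x**6 - 2           # c6 = 25, c0 = -2 from the parameter file

assert poly == 2 * N           # 25*(2*10^38)^6 - 2 = 16*10^230 - 2 = 2*N
print(poly % N == 0)           # True

# The quoted skew 0.66 is (|c0|/c6)^(1/6) = (2/25)**(1/6) ~ 0.656
print(round((2 / 25) ** (1 / 6), 2))   # 0.66
```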
2009-08-12, 18:47  #2
henryzz (Just call me Henry)
"David", Sep 2007, Cambridge (GMT/BST)

If the algebraic side isn't too slow, you might try that.
2009-08-12, 18:58  #3
Syd
Sep 2008, Krefeld, Germany

Quote:
 Originally Posted by henryzz: if the algebraic side isn't too slow you might try that

About 2.5 times slower. I'll give it a try anyway.

2009-08-12, 19:50  #4
jasonp (Tribal Bullet)
Oct 2004

If you are sieving special-q far below the factor base limit, then I would expect a huge number of duplicates. How about special-q above the factor base limit?
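jasonp's point can be illustrated with a toy example (mine, with made-up numbers): when special-q lies inside the range where the sieve also finds factor-base primes, a relation whose norm contains several primes in the special-q range is rediscovered once for each such prime.

```python
# Toy illustration (not from the thread) of duplicate relations when
# special-q overlaps the factor base: a relation is reported once for
# every prime factor of its norm that falls in the special-q range.
def prime_factors(n):
    """Trial-division factorization; fine for this small example."""
    fs, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            fs.add(p)
            n //= p
        p += 1
    if n > 1:
        fs.add(n)
    return fs

Q_LO, Q_HI = 100, 10_000          # pretend special-q sieving range
norm = 101 * 103 * 9973           # a smooth norm with 3 primes in that range

hits = [p for p in prime_factors(norm) if Q_LO <= p <= Q_HI]
print(len(hits))  # 3: the same relation shows up for q=101, q=103 and q=9973
```

Sieving with special-q above the factor base limit avoids this, because then each relation contains at most one prime in the special-q range.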
2009-08-12, 19:52  #5
Batalov ("Serge")
Mar 2008, Phi(4,2^7658614+1)/2

You will need at least 92-95M unique relations (a bit more will make for a better matrix). I wouldn't recommend sieving on the algebraic side; that will give you even more duplicates. Most of the existing duplicates came from sieving from a very low starting point (it would have been better to start from 10-15M). The parameters look fine. Hopefully the future relations will not be as redundant. Use 15e.

For a job of this size you should not use the vanilla scripts as-is. If you are using them (and the script has MINRELS.txt spells in it), then add a file MINRELS.txt in the project directory with 90000000 in it. You will save a lot of time by not filtering until there's at least a chance of convergence. When you have 90M raw relations, filter and have a look at the redundancy, then raise the threshold for the next filtering attempt by putting a larger number in MINRELS.txt.

This is quite a big number for home computing, but not impossible. Good luck!
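Concretely, the MINRELS.txt setup Serge describes amounts to this (illustrative; whether your driver script honors the file depends on it having those "spells", and the 90M threshold is his suggested starting figure):

```shell
# In the project directory of the factorization job.
# Tell the driver script not to attempt filtering below 90M relations
# (threshold per Serge's suggestion; raise it after the first filtering pass).
echo 90000000 > MINRELS.txt
cat MINRELS.txt
```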
2009-08-12, 20:00  #6
frmky
Jul 2003, So Cal

You started sieving very low. You need to move to q above the factor base limits, and once you do, for this number the algebraic side will be faster than the rational side. Sieve on the algebraic side starting at q=50M. This should significantly reduce your rate of duplicates and give you plenty of relations to finish the factorization. Also, for future factorizations with 30-bit large primes, mfbr/a of 60 or 61 would be better than 59.
2009-08-12, 20:44  #7
Syd
Sep 2008, Krefeld, Germany

Thank you! 92M unique, that's quite a lot more than I expected. Anyway, too late to stop now. I just started sieving at Q=50M on the algebraic side; it yields only about 20% fewer relations than Q=15M on the rational side did. I hope that will give enough unique relations!

I always started low, with Q=1M or even lower, because it yields more relations per second. On small jobs that gave about 20% duplicates. Is this also a bad idea?
2009-08-12, 21:48  #8
frmky
Jul 2003, So Cal

Quote:
 Originally Posted by Syd: Thank you! 92M unique, that's quite a lot more than I expected.

Serge was referring to 92M total including duplicates when you start sieving at q of half the factor base limit. You should be able to build a matrix at about 78M unique, but a few more will improve it. I usually get at least 83M-85M unique before starting the LA.

For small numbers, if what you're doing works, keep doing it. For larger numbers, I usually start sieving at about half the factor base limit, sometimes a bit below, then keep going until I've got enough. You can also start really low and sieve to about half the factor base limit, then jump to above it. Sieving the entire range below the FB limit, though, leads to tons of duplicates, as you discovered.

2009-08-30, 17:41  #9
Syd
Sep 2008, Krefeld, Germany

Thanks again, I was finally able to finish it. 75M unique relations (92M total) resulted in this 3-way split:

prp70 factor: 7194989070351007241001770794481202899232920377811344337193524816903113
prp74 factor: 28405869825449471447004672858393144480378760112284964510299428948566224457
prp80 factor: 22777672993897316831397692267111749997397458530603944165484429112280289213344407
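As a sanity check (mine, assuming the factor digits above are transcribed exactly): each reported prime is a factor of the sieved number, so it must divide 8*10^230 - 1.

```python
# Verification (my own check, not from the thread): each reported prime
# factor of the composite cofactor must divide N = 8*10^230 - 1.
N = 8 * 10**230 - 1
p70 = 7194989070351007241001770794481202899232920377811344337193524816903113
p74 = 28405869825449471447004672858393144480378760112284964510299428948566224457
p80 = 22777672993897316831397692267111749997397458530603944165484429112280289213344407

for p in (p70, p74, p80):
    assert N % p == 0           # each prime divides 8*10^230 - 1
print(len(str(p70 * p74 * p80)))  # digit count of the composite that was split
```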
2009-08-30, 19:34  #10
Batalov ("Serge")
Mar 2008, Phi(4,2^7658614+1)/2

Nice job, and let me be the first to welcome you to the top of that list! Triple-splits all around the house.

