mersenneforum.org BOINC NFS sieving - NFS@Home

2021-09-04, 01:16   #1849
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

5×1,013 Posts

Quote:
 Originally Posted by swellman Agreed. If this challenge only lasts a few more days then the surge will fade quickly. And now that RichD's OPN is enqueued in 15e, that queue may be good to go. I just like the idea of a G208, but it may be too difficult for 16f_small. Perhaps a team sieve? It is a record poly, found over 4 years ago but never sieved.
Well, I hope nobody tries to test-sieve with the params provided with the poly! lims above 2^30, but 32-bit large primes?

GNFS-208 is not too hard for f-small; lims of 225M are a bit restrictive for that number, as is 33LP, but it would work. However, there's plenty of work on f-small right now, and that GNFS-208 job would take quite a long time to sieve. I think we should evaluate via test-sieving how much faster CADO would be with lims around twice as large and 33/34LP or 34/34LP, before throwing it on f-small.

Or, since it has waited 4 years already, let's wait until we run a 204-206 digit GNFS job on f-small before jumping to 208.

2021-09-04, 01:34   #1850
charybdis

Apr 2020

541 Posts

Quote:
 Originally Posted by VBCurtis Or, since it has waited 4 years already, let's wait until we run a 204-206 digit GNFS job on f-small before jumping to 208.
3,748+ c204, the smallest unfactored Cunningham composite, doesn't seem to have had any attention in a while...

2021-10-26, 02:24   #1851
swellman

Jun 2012

41×79 Posts

Bump.

I realize 3,748+ is currently being worked; I'm just trying to maintain visibility for this record poly, which supports a record-sized near-repdigit composite cofactor. If there is any interest I can run some test sieving (with suitable adjustments to the parameters), though perhaps this should be run as a 34-bit job(?). We could try it on 16f_small - this job seems to be at the limits of that siever.

Maybe Greg will comment.

Quote:
 Originally Posted by swellman
 Running with this idea, the poly listed (with a current record e-score) is given below:
Code:
n: 1122306776491337588607322631818708778200214577213206237089370253284687138834200770294167044440649977604444005064220843458726284245767449140856119245849745806395315477933380280425456889166505800707705642211253
skew: 326211947.89
type: gnfs
lss: 0
c0: 192558034193459742319046371398586259699707875364425
c1: 5387479895042599888816938205129089277819185
c2: -1098937059524264818729815697407187
c3: -104048030268009541344497393
c4: -47857156754642446
c5: 421635720
Y0: -4842112079991293167491670985200158992968
Y1: 4692246580297096789
# Murphy_E = 1.439e-15, selected by Erik Branger
# Polynomial selection took 4 months on a GTX760
# selected mechanically
rlim: 1180000000
alim: 1180000000
lpbr: 32
lpba: 32
mfbr: 69
mfba: 69
rlambda: 2.8
alambda: 2.8
 BUT, can this poly be spun or otherwise improved? And is a G208 too difficult for 16f_small?

2021-10-26, 03:43   #1852
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

5×1,013 Posts

The only difference between the f-small queue that we have access to and Greg's "big" queue is the lim restriction: 225M for f-small versus Greg's self-imposed 250M for the "big" one. That difference doesn't change the biggest-possible job much - maybe a digit or two? So GNFS-208 is no problem for the siever.

Opportunity cost is a bit of an issue - what jobs would wait while we run a 208-digit GNFS job?

I'd run it as 33/34LP, but I always like looser bounds and higher yields. Greg has pointed out in the past that disk space isn't unlimited and that going to arbitrarily large LP bounds can eat a ton of space, but with the e-small and e sievers having so few jobs in post-processing, it seems to me that using the disk for a 33/34 job with 1.3G relations instead of 33/33 with 1.0G relations is acceptable?
2021-10-26, 03:55   #1853
frmky

Jul 2003
So Cal

2,223 Posts

Yes, the only difference is the smaller factor-base limits I've requested for 16f_small. It'll take a while, but a C208 should be no problem for 16f_small. With the fast turnaround enabled by GPU LA, disk space is currently not a problem. Feel free to use 33/34 or even 34/34-bit LPs.
2021-10-27, 15:21   #1854
swellman

Jun 2012

41·79 Posts

71111_329

I ran some test sieving, and it appears the best case for this C208 near-repdigit is a 33/34 job with 3LPs on the algebraic side, sieved on the -a side:
Code:
n: 1122306776491337588607322631818708778200214577213206237089370253284687138834200770294167044440649977604444005064220843458726284245767449140856119245849745806395315477933380280425456889166505800707705642211253
skew: 326211947.89
type: gnfs
lss: 0
c0: 192558034193459742319046371398586259699707875364425
c1: 5387479895042599888816938205129089277819185
c2: -1098937059524264818729815697407187
c3: -104048030268009541344497393
c4: -47857156754642446
c5: 421635720
Y0: -4842112079991293167491670985200158992968
Y1: 4692246580297096789
# Murphy_E = 1.439e-15, selected by Erik Branger
rlim: 225000000
alim: 225000000
lpbr: 33
lpba: 34
mfbr: 66
mfba: 100
rlambda: 3.0
alambda: 3.7
Results for a block of 1kQ with Q0=100M using this version of the 16f siever:
Code:
Yield  # Spec_Q  Norm_Yield  sec/rel
3947   75        2857        1.030
For comparison, results for 34/34 with 2/3 LPs sieved on the -a side, all else the same:
Code:
Yield  # Spec_Q  Norm_Yield  sec/rel
5085   75        3681        0.839
The second case has a higher yield and faster speed, but its target number of raw relations is 1.8B versus 1.35B for the 33/34 case, giving the 33/34 case a slight edge. Still, the higher yield of the 34/34 makes it attractive, especially at higher values of Q. Full test sieving should clarify things.

In all cases I used rlim and alim of 225M, and mfb = 3*lpb - 2 on the 3LP side. Not sure if there are better refinements available here, but I was reluctant to completely (and tediously) test-sieve either scenario without pausing for suggestions or advice.
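As a rough cross-check of the tradeoff described above, the total sieving effort for each candidate can be estimated by multiplying its sec/rel figure by its target relation count. This is a sketch, not from the thread, and it assumes sec/rel stays roughly constant across the whole Q range, which real jobs won't strictly obey:

```python
# Back-of-the-envelope sieve-time comparison from the test-sieve numbers
# quoted in the post above. Assumption: sec/rel measured at Q0=100M is
# representative of the entire Q range.

SECONDS_PER_YEAR = 365 * 24 * 3600

jobs = {
    "33/34": {"sec_per_rel": 1.030, "target_rels": 1.35e9},
    "34/34": {"sec_per_rel": 0.839, "target_rels": 1.8e9},
}

for name, j in jobs.items():
    total_sec = j["sec_per_rel"] * j["target_rels"]
    years = total_sec / SECONDS_PER_YEAR
    print(f"{name}: {total_sec:.3g} thread-seconds ~ {years:.1f} thread-years")
```

Both come out in the mid-40s of thread-years at the test machine's speed, which is consistent with the later conclusion that the two parameter sets are close to a dead heat.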
2021-10-27, 15:57   #1855
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

5·1,013 Posts

I've had good results with smaller lim on the 3LP side, so I would try an alim of 182M and an rlim of 268M. If that is faster, then I would try alim 134M and rlim 316M.

My rule of thumb is that adding an LP to both sides needs 70% more relations, so adding an LP to one side needs 30% more (1.3 * 1.3 = 1.7, roughly). That's where I got the 1.3B raw-relations estimate - I'd aim for 1B if this were run as 33LP. Using that same scaling, I'd aim for 1.7B for this job as 34/34.

Edit: perhaps 70% more is reasonable at smaller sizes and 75% is better at this size, to compensate for the likely larger matrix that a 34LP job would produce compared to a 33LP job. Your estimates are as good as mine; I hadn't considered that the 33-to-34 step might scale differently.

mfba = 99 might be a bit faster, but it's not likely much of a difference.

Last fiddled with by VBCurtis on 2021-10-27 at 15:59
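The scaling rule of thumb above can be written out in a few lines. This is just a sketch of the stated arithmetic, using the 1B-relation 33/33 baseline quoted in the post:

```python
# VBCurtis's rule of thumb: adding an LP bit to both sides needs ~70% more
# raw relations; adding it to one side needs ~30% more (1.3 * 1.3 ~ 1.7).

BASE_33_33 = 1.0e9            # stated target for this job at 33/33
ONE_SIDE = 1.3                # factor for raising one side's LP bound by a bit
BOTH_SIDES = ONE_SIDE ** 2    # ~1.69, the "70% more" for raising both sides

target_33_34 = BASE_33_33 * ONE_SIDE     # ~1.3B raw relations
target_34_34 = BASE_33_33 * BOTH_SIDES   # ~1.7B raw relations
print(f"33/34 target: {target_33_34:.2g}  34/34 target: {target_34_34:.2g}")
```

These match the 1.3B and 1.7B figures in the post; the later "Edit" suggests nudging the per-side factor up toward 1.32 (75% for both sides) at this size.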
2021-10-27, 16:48   #1856
swellman

Jun 2012

41·79 Posts

Quote:
 Originally Posted by VBCurtis I've had good results with smaller lim on the 3LP side, so I would try alim of 182M and rlim of 268M. If that is faster, then I would try alim 134M and rlim 316M.
I will try this, thanks.

Quote:
 My rule of thumb is that adding an LP to both sides needs 70% more relations, so adding LP to one side needs 30% more (1.3 * 1.3 = 1.7, roughly). That's where I got 1.3B raw relations as estimate- I'd aim for 1B if this was run as 33LP. Using that same scaling, I'd aim for 1.7B for this job as 34/34. Edit: perhaps 70% more is reasonable at smaller sizes, and 75% is better at this size to compensate for the likely larger matrix that a 34LP job would make compared to a 33LP job. Your estimates are as good as mine, I didn't consider that 33 to 34 is different.
When I increase both lpbr and lpba, I use almost double the target number of raw rels, e.g. 470M for 32/32, 930M for 33/33. So I used 1.8B for 34/34. And with mixed lpbr/lpba, I have read that one should use the average, so 33/34 ~ (930M + 1800M)/2 = 1.365B; call it 1.3B, hedging down. So we are in violent agreement, it seems.

Quote:
 mfba = 99 might be a bit faster, but it's not likely much of a difference.
I’ll try it in the final iteration. Appreciate the tips.

2021-10-29, 14:24   #1857
swellman

Jun 2012

41×79 Posts

Still plugging away on 71111_329. My recent test-sieving effort used the following polynomial:
Code:
n: 1122306776491337588607322631818708778200214577213206237089370253284687138834200770294167044440649977604444005064220843458726284245767449140856119245849745806395315477933380280425456889166505800707705642211253
skew: 326211947.89
type: gnfs
lss: 0
c0: 192558034193459742319046371398586259699707875364425
c1: 5387479895042599888816938205129089277819185
c2: -1098937059524264818729815697407187
c3: -104048030268009541344497393
c4: -47857156754642446
c5: 421635720
Y0: -4842112079991293167491670985200158992968
Y1: 4692246580297096789
# Murphy_E = 1.439e-15, selected by Erik Branger
# Polynomial selection took 4 months on a GTX760
# selected mechanically
rlim: 225000000
alim: 225000000
lpbr: 33
lpba: 34
mfbr: 66
mfba: 100
rlambda: 3.0
alambda: 3.7
This set of parameters produced the best results:
Code:
MQ   Norm_yield
60   30519
110  27785
200  24873
300  22587
400  20463
500  18931
600  17836
700  16875
This suggests a sieving range of Q = 60-660M to produce 1.35B raw relations.

Changing rlim/alim, and changing mfba to 99, in all combinations, proved to be a bit less efficient. Maybe the lims are already so low for a 33/34 job that slight shifts in rlim and alim have little effect?

I looked at the 34/34 version of this job, but it is a beast. The estimated target number of raw relations is 1.8B, though it does sieve faster. The 33/34 job still seems best based on speed and relation count. I will attempt to fully test-sieve the 34/34 version over the weekend.
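For anyone wanting to reproduce the Q-range estimate above, one way is to trapezoid-integrate the Norm_yield table over the proposed range. This is a sketch, not from the thread; it assumes Norm_yield is relations per 10kQ test block (as in the later 34/34 post) and interpolates linearly between the test points:

```python
# Estimate total raw relations over a Q range from spot test-sieve yields.
# Assumption: each Norm_yield value is relations per 10kQ block at that Q,
# so there are 100 such blocks per 1M of Q; yields between test points are
# taken to vary linearly.

yield_table = [  # (Q in millions, Norm_yield per 10kQ block)
    (60, 30519), (110, 27785), (200, 24873), (300, 22587),
    (400, 20463), (500, 18931), (600, 17836), (700, 16875),
]

def total_relations(q_start, q_end, table):
    """Trapezoidal estimate of relations over [q_start, q_end] (in MQ)."""
    def density(q):  # relations per 1M of Q, linearly interpolated
        for (q0, y0), (q1, y1) in zip(table, table[1:]):
            if q0 <= q <= q1:
                y = y0 + (y1 - y0) * (q - q0) / (q1 - q0)
                return y * 100  # 100 blocks of 10kQ per MQ
        raise ValueError("Q outside table range")
    steps = 1000
    h = (q_end - q_start) / steps
    vals = [density(q_start + i * h) for i in range(steps + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

print(f"Q=60-660M: ~{total_relations(60, 660, yield_table)/1e9:.2f}B relations")
```

Running this over Q = 60-660M lands just above 1.3B, reasonably consistent with the 1.35B target quoted in the post.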
2021-11-03, 11:46   #1858
swellman

Jun 2012

41·79 Posts

Finished test sieving 71111_329 as a 34/34 job using the following poly:
Code:
n: 1122306776491337588607322631818708778200214577213206237089370253284687138834200770294167044440649977604444005064220843458726284245767449140856119245849745806395315477933380280425456889166505800707705642211253
skew: 326211947.89
type: gnfs
lss: 0
c0: 192558034193459742319046371398586259699707875364425
c1: 5387479895042599888816938205129089277819185
c2: -1098937059524264818729815697407187
c3: -104048030268009541344497393
c4: -47857156754642446
c5: 421635720
Y0: -4842112079991293167491670985200158992968
Y1: 4692246580297096789
# Murphy_E = 1.439e-15, selected by Erik Branger
# Polynomial selection took 4 months on a GTX760
# selected mechanically
rlim: 225000000
alim: 225000000
lpbr: 34
lpba: 34
mfbr: 68
mfba: 100
rlambda: 3.1
alambda: 3.7
Test-sieving results on the -a side, with Q in blocks of 10K:
Code:
MQ   Norm_yield
60   40284
110  36659
200  33153
300  29648
400  27000
500  24974
600  23503
700  22224
This suggests a sieving range of Q = 60-680M for a target of 1.8B raw relations.

Back-of-the-envelope calculations say the 33/34 job will sieve faster, and its LA will likely be easier to process, but either way it is still a behemoth of a job.

Greg - would you be willing to run the LA if we sieve this on 16f_small? I don't think even my best machine could digest this thing!
2021-11-03, 16:23   #1859
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

5·1,013 Posts

I agree that the data suggests a dead heat in sieving time between 34/34 and 33/34, so we should go with the smaller LP bound for the sake of disk space and expected matrix difficulty. Maybe two digits bigger would call for 34/34.

