2020-02-17, 21:24  #56
Jul 2003
So Cal
2^{5}·3^{2}·7 Posts 
For 2,1165+, the 16e tasks will use significantly more memory. We can't afford to keep it artificially low for this one. It will likely be a bit over 2GB per core.

2020-02-17, 21:35  #57
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
11^{2}·37 Posts 
Make an announcement, double the points for this case only, etc... I'm happy up to 4 GB/thread.

Last fiddled with by pinhodecarlos on 2020-02-17 at 21:47

2020-02-18, 02:23  #58
"Curtis"
Feb 2005
Riverside, CA
3916_{10} Posts 
Please post parameters here once you decide them: lims and mfb's/lp's. I don't think your client is 34-bit capable, so I imagine we're on 33/33 for LP. I'd like to start sieving very small Q locally on CADO. I'll use A=32, which is the equivalent of 16.5e (40% larger sieve area). I have just one 20-core machine available at present, so I won't get far, but I can start at Q=10M and contribute some relations.
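A rough way to see where the "40% larger" figure comes from, assuming (as the siever naming convention suggests) that each full "e" step roughly doubles the sieve area, so a half step scales it by sqrt(2):

```python
import math

# Assumption: each full step in lattice-siever naming (15e -> 16e)
# roughly doubles the sieve area, so a half step (16e -> "16.5e")
# multiplies the area by sqrt(2).
extra_area = math.sqrt(2) - 1
print(f"~{extra_area:.0%} larger sieve area")  # ~41%, i.e. the quoted "40% larger"
```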

2020-02-18, 21:39  #59
Jul 2003
So Cal
11111100000_{2} Posts 
rlim: 536000000
alim: 536000000
lpbr: 33
lpba: 33
mfbr: 96
mfba: 66
rlambda: 3.7
alambda: 2.8

I usually start at 20M, but I can up that a bit if you wish.
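A quick sanity check of these parameters (my reading, not from the post): lpbr/lpba are the large-prime bounds in bits, and mfbr/mfba cap the bits of the cofactor left after sieving, so ceil(mfb / lpb) bounds the number of large primes allowed on each side.

```python
import math

# Large-prime bounds (bits) and cofactor caps (bits) from the posted job.
lpbr, lpba = 33, 33
mfbr, mfba = 96, 66

# ceil(mfb / lpb) = maximum number of large primes on that side.
print(math.ceil(mfbr / lpbr))  # 3 large primes on the rational side
print(math.ceil(mfba / lpba))  # 2 large primes on the algebraic side
```

This is consistent with the lambda values: rlambda 3.7 suits a 3-large-prime rational side, alambda 2.8 a 2-large-prime algebraic side.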

2020-02-18, 22:28  #60
"Curtis"
Feb 2005
Riverside, CA
7514_{8} Posts 
Thanks!
How about you start at 40M, and I'll run Q=5-40M on CADO? I think I'll get yield 3-6x higher than ggnfs at those smaller Q (plus the extra 40% from using a larger siever), since CADO is fine with sieving Q values below lim. I'll get CADO fired up later this week, and will post yield and sec/rel data once I have a reasonable sample.