2013-07-13, 16:12  #100 
Sep 2009
3724_{8} Posts 

2013-07-13, 16:51  #101 
Apr 2010
Over the rainbow
3^{2}·281 Posts 
If anyone needs a poly in this range (150-170 digits), ask. Or if I forgot one in this thread, remind me.

2013-07-16, 10:02  #102 
Tribal Bullet
Oct 2004
2·3·19·31 Posts 
Please move requests for polynomials to be computed for specific numbers to the thread reserved for them.

2013-07-16, 13:27  #103 
I moo ablest echo power!
May 2013
1741_{10} Posts 
Thanks Jason!

2013-07-25, 14:51  #104 
Sep 2010
Scandinavia
3·5·41 Posts 
How should we be using the poly select speedup provided by GPUs?
Say the speedup is about ~50x (is it?); should we just divide the time spent in stage 1 by 50? We should probably use some of the speedup to get better polys... but how much better?

What is the current protocol? It seems to be: run poly select long enough to get a decent poly by the old standards, with the default stage1_norm (possibly higher, while focusing slightly more on the approximate range of coefficients that should yield the best polys?), then run nps with a stage2_norm smaller than the default by a factor of 20(?). Should that be done on the whole set or just the top x%? Then run npr on the top y% using... what norm? How should we go about optimizing this further? (I recently got a GPU with CC 3.0. Anyone have binaries for that?)

Unrelated: would it be possible to do "reverse" poly select? Find a number that would sieve exceptionally well with a given polynomial?
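Back to the protocol question: for concreteness, the pipeline I picture is below. A sketch only: the -np1/-nps/-npr split and the .m -> .ms -> .p file handoff are as I understand readme.nfs, the way the norms are passed on the command line is from memory (check your build), and the norm values are placeholders; the number to factor comes from worktodo.ini as usual.

Code:
import subprocess

# Placeholder norms -- choosing these is exactly the open question.
STAGE1_NORM = "1e25"   # loose stage-1 net (GPU)
STAGE2_NORM = "1e21"   # tightened size-optimization bound

def run(args):
    # Echo and run one msieve invocation.
    print(" ".join(args))
    subprocess.run(args, check=True)

# Stage 1 on the GPU: writes raw hits to msieve.dat.m
run(["msieve", "-v", "-np1", "stage1_norm=%s" % STAGE1_NORM])
# Size optimization: reads .m, writes survivors to msieve.dat.ms
run(["msieve", "-v", "-nps", "stage2_norm=%s" % STAGE2_NORM])
# Root optimization: reads .ms, writes scored candidates to msieve.dat.p
run(["msieve", "-v", "-npr"])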
2013-07-25, 15:06  #105 
Apr 2010
Over the rainbow
3^{2}×281 Posts 
I usually set a harsh enough stage 2 bound in nps to be left with only a few polys to be npr'ed.

2013-07-25, 16:06  #106 
Sep 2010
Scandinavia
267_{16} Posts 
But is that efficient? Or is it inefficient, in the same way that setting a strict stage1_norm is?

2013-07-25, 17:09  #107 
Apr 2010
Over the rainbow
3^{2}×281 Posts 
I would say it is.
Let's say you have 1000 polys that pass the (stock-bounded) nps stage in a range, and you want to keep only the 200 best: you have to sort your 1k polys and drop the last 800. With a strictly bounded nps you will end up with only 90 or 120 polys in your file, and the npr stage will be much faster.
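If you do want the top-N route instead, the reorder-and-trim step costs next to nothing anyway. A sketch, assuming (check your own files) that each line of the size-optimized msieve.dat.ms ends with its size score, smaller being better:

Code:
# Keep only the N best size-optimized candidates.
N = 200

with open("msieve.dat.ms") as f:
    lines = [line for line in f if line.strip()]

# Sort by the last field, assumed to be the size score (smaller = better).
lines.sort(key=lambda line: float(line.split()[-1]))

with open("msieve.dat.ms.top", "w") as out:
    out.writelines(lines[:N])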
2013-07-25, 17:11  #108  
Tribal Bullet
Oct 2004
2·3·19·31 Posts 
Your unrelated question is what the special number field sieve is all about :)

Stage 1 is a fairly loose 'net' for dragging in polynomials; passing the bound in stage 1 only means that one extra coefficient in the algebraic polynomial is very small. It doesn't mean, at all, that the polynomial is any good. Passing the bound in stage 2 means the polynomial has a high chance of being good, and might be very good if it wins the root score lottery.

So setting a strict bound for the size in stage 2 is a good idea, since the best overall polynomials will have both very good size and very good root scores. The only danger is that you ignore polynomials with a very good root score if you are greedy about the size score, because in a sense they are competing goals. 
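To make "competing goals" concrete: the standard heuristic (it goes back to Murphy's thesis, nothing msieve-specific) is that sieve values behave like random integers whose effective log-size is

Code:
\log|F(a,b)| + \alpha_F    % smaller is better; good polys have alpha well below zero

so a polynomial can only buy a better root score at the cost of some size, and the winners are the ones that minimize the sum, not either term alone.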

2013-07-25, 19:06  #109  
Sep 2010
Scandinavia
615_{10} Posts 
It seems to me that the high throughput of a GPU should move the sweet spot slightly upwards. It may be that it has moved a negligible amount. Is that what you're saying or am I missing something?
What do you think, by the way? Should we just divide the stage 1 deadline by the speedup factor?

Let's say you are dead set on using a certain poly to factor a number with GNFS: how would you choose the number to be factored if you wanted it to be fast?

What, if anything, do we know about the distribution of the (quality-quantifying) values produced by nps and npr? nps produces an alpha and a size score, if I'm not mistaken. Should we consider both, or only the size? npr produces (most importantly) an E-value.

Can it be assumed that there isn't a sizable penalty to defining the portion of the nps output that should be passed to npr as the "top n entries" or "top x percent"? Or does it have to be more complex than that? Given the assumptions above, we'd have to find the best combination of nps norm, portion of size-optimized candidates to pass on, and npr norm.

There's also the issue of how high to aim. Do we go for a one-in-a-million chance of an extreeemely good poly, or do we want to be 99% sure of finding a workable poly? Should it matter a lot in this context, or can we say (to simplify) that we are looking to maximize the portion of the E-score distribution that lies above the old "good score" cutoff?

Running npr on every candidate obviously isn't efficient. Running npr only on the one candidate with the best size score wouldn't be efficient either. The question: where is the sweet spot? 
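One empirical way to hunt for it, a sketch of an experiment rather than a recipe; it assumes, as before, that msieve.dat.ms lines end with the size score, and that you can run npr on one sample at a time and compare the best E-values that come back:

Code:
import random

# Which part of the size-score ranking do the good E-values come from?
with open("msieve.dat.ms") as f:
    cands = sorted((line for line in f if line.strip()),
                   key=lambda line: float(line.split()[-1]))

buckets = 10      # deciles of the ranking
per_bucket = 50   # candidates to sample from each decile
step = len(cands) // buckets

for b in range(buckets):
    pool = cands[b * step:(b + 1) * step]
    chunk = random.sample(pool, min(per_bucket, len(pool)))
    with open("sample_decile_%d.ms" % b, "w") as out:
        out.writelines(chunk)
    # Run npr on each sample (msieve works on fixed filenames, so swap
    # each sample in as msieve.dat.ms first) and note the best E-value.

If the winners essentially always come from the first decile or two, a harsh nps bound loses little; if they are spread out, it is throwing away winners.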
