#100 | Sep 2009 | 3724₈ Posts

#101 | Apr 2010 | Over the rainbow | 3²·281 Posts

If anyone needs a poly in this range (150-170 digits), ask. Or if I forgot one in this thread, remind me.

#102 | Tribal Bullet | Oct 2004 | 2·3·19·31 Posts

Please move requests for polynomials to be computed for specific numbers to the thread reserved for them.

#103 | I moo ablest echo power! | May 2013 | 1741₁₀ Posts

Thanks Jason!

#104 | Sep 2010 | Scandinavia | 3·5·41 Posts

How should we be using the poly select speed-up provided by GPUs?

Say the speed-up is about ~50x (is it?). Should we just divide the time spent in stage 1 by 50? We should probably use some of the speed-up to get better polys... but how much better? What is the current protocol? It seems to be:

- Run poly select long enough to get a decent poly by the old standards, with the default stage1_norm (possibly higher, while focusing slightly more on the approximate range of coefficients that should yield the best polys?).
- Run -nps with a stage2_norm smaller than the default by a factor of 20(?). Should this be done on the whole set or just the top x%?
- Run -npr on the top y% using... what norm?

How should we go about optimizing this further? (I recently got a GPU with CC 3.0. Anyone have binaries for that?)

Unrelated: would it be possible to do "reverse" poly select? Find a number that would sieve exceptionally well with a given polynomial?

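To make the trade-off concrete, here is a back-of-the-envelope sketch of the knobs being asked about. Every number in it (the 50x figure, the hour budgets, the norm tightening, the top-y% cut) is an assumption for illustration, not a measured msieve default.

```python
# Back-of-the-envelope sketch of the GPU trade-off asked about above.
# Every number here is an assumed example, not a measured msieve default.

GPU_SPEEDUP = 50            # assumed ~50x stage-1 throughput versus CPU
CPU_STAGE1_HOURS = 200.0    # hypothetical stage-1 budget under the old CPU protocol

# Option A: keep the old coefficient range and bank the speed-up as saved wall-clock time.
option_a_hours = CPU_STAGE1_HOURS / GPU_SPEEDUP

# Option B: spend part of the speed-up on a larger leading-coefficient range
# (more stage-1 hits, hence better odds downstream) and keep part as saved time.
range_multiplier = 10       # assumption: search 10x more leading coefficients
option_b_hours = CPU_STAGE1_HOURS * range_multiplier / GPU_SPEEDUP

# The filtering knobs from the questions above, expressed as parameters to tune:
default_stage2_norm = 1.0e20                   # stand-in value only
tight_stage2_norm = default_stage2_norm / 20   # "smaller than default by a factor of 20(?)"
npr_top_fraction = 0.05                        # "run -npr on the top y%" -- y is the open question

print(f"stage 1: {option_a_hours:.0f} h (same range) vs {option_b_hours:.0f} h (10x the range)")
```
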
#105 | Apr 2010 | Over the rainbow | 3²×281 Posts

I usually set a harsh enough stage 2 bound in nps to be left with only a few polys to be npr'ed.

#106 | Sep 2010 | Scandinavia | 267₁₆ Posts

But is that efficient? Or is it inefficient like setting a strict stage1_norm?

#107 | Apr 2010 | Over the rainbow | 3²×281 Posts

I would say it is.

Let's say you have 1000 polys that pass the (stock-bounded) nps stage in a range, and you want to keep only the 200 best: you will have to sort your 1k polys and remove the last 800. With a strictly bounded nps you will have only 90 or 120 polys in your file, and the npr stage will be much faster.

Last fiddled with by firejuggler on 2013-07-25 at 17:12

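The sorting itself is cheap either way; the real saving is in how many survivors reach -npr. A minimal sketch of the "keep the 200 best" step, assuming (hypothetically) one candidate per line with its size norm in the last whitespace-separated field; the file names are placeholders, so adjust both to whatever your -nps output actually looks like:

```python
# Minimal sketch: keep the best `keep` size-optimized candidates for -npr.
# Assumes (hypothetically) one candidate per line with the size norm in the last
# whitespace-separated field -- adjust the index/file names to your real -nps output.

def best_candidates(path, keep=200):
    with open(path) as f:
        lines = [ln for ln in f if ln.strip()]
    # smaller size norm = better candidate
    lines.sort(key=lambda ln: float(ln.split()[-1]))
    return lines[:keep]

if __name__ == "__main__":
    survivors = best_candidates("candidates.ms", keep=200)
    with open("candidates.for_npr", "w") as out:
        out.writelines(survivors)
```
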
#108 | Tribal Bullet | Oct 2004 | 2·3·19·31 Posts

Your unrelated question is what the special number field sieve is all about :)

Stage 1 is a fairly loose 'net' for dragging in polynomials; passing the bound in stage 1 only means that one extra coefficient in the algebraic polynomial is very small. It doesn't mean, at all, that the polynomial is any good. Passing the bound in stage 2 means the polynomial has a high chance of being good, and might be very good if it wins the root score lottery. So setting a strict bound for the size in stage 2 is a good idea, since the best overall polynomials will have both very good size and very good root scores. The only danger is that you ignore polynomials with a very good root score if you are greedy about the size score, because in a sense they are competing goals.

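One way to picture the "competing goals" point: alpha behaves roughly like an additive correction to the log of the size norm, so a candidate with mediocre size but an unusually good (very negative) alpha can still win on the combined score. A toy illustration with made-up numbers; the log(norm) + alpha figure of merit below is only the usual back-of-the-envelope combination, not the Murphy E value that -npr actually reports.

```python
import math

# (size norm, alpha) for three made-up stage-2 survivors.
# Smaller size norm is better; more negative alpha is better.
candidates = {
    "A": (2.0e19, -4.5),   # best size, average root properties
    "B": (3.5e19, -7.2),   # worse size, excellent root properties
    "C": (2.5e19, -5.0),
}

# Rough combined figure of merit: log(norm) + alpha (lower is better).
# This mimics the idea that good root properties shrink the *effective* norm;
# it is not the Murphy-E integral that -npr actually computes.
def combined(norm, alpha):
    return math.log(norm) + alpha

for name, (norm, alpha) in sorted(candidates.items(), key=lambda kv: combined(*kv[1])):
    print(name, f"log(norm)={math.log(norm):.2f}", f"alpha={alpha:+.1f}",
          f"combined={combined(norm, alpha):.2f}")

# A size-greedy cut at, say, norm <= 2.2e19 would have thrown away B before
# its root score was ever computed -- exactly the danger described above.
```
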
#109 | Sep 2010 | Scandinavia | 615₁₀ Posts

It seems to me that the high throughput of a GPU should move the sweet spot slightly upwards. It may be that it has moved a negligible amount. Is that what you're saying, or am I missing something?

What do you think, by the way? Should we just divide the stage 1 deadline by the speed-up factor?

Let's say you are dead set on using a certain poly to factor a number using GNFS: how would you choose the number to be factored if you wanted it to be fast?

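On that "reverse" question, as jasonp says this is essentially the SNFS setup: pick a tiny-coefficient polynomial and a root m first, and the numbers that sieve exceptionally well are the ones of the form f(m) (or divisors of it). A trivial illustration; the particular f and m below are arbitrary examples, not a recommendation.

```python
# "Reverse" selection: fix a tiny-coefficient polynomial and a root, then the
# number it factors well is determined by them (this is the SNFS setup).
# f and m below are arbitrary examples.

m = 2**106
f = [1, 0, 0, 0, 0, 0, 1]            # f(x) = x^6 + 1, coefficients from x^0 to x^6

N = sum(c * m**i for i, c in enumerate(f))   # N = 2^636 + 1

# Any N of this shape (or a cofactor of it) sieves far better than a random
# 192-digit number, because the algebraic norms stay tiny.
print(len(str(N)), "digit number built from a degree-6 poly with coefficients", f)
```
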
What, if anything, do we know about the distribution of the (quality-quantifying) values produced by -nps and -npr? -nps produces an alpha and a size score, if I'm not mistaken; should we consider both, or only the size? -npr produces (most importantly) an E-value.

Can it be assumed that there isn't a sizable penalty to defining the portion of -nps output that should be passed to -npr as the "top n entries" or "top x percent"? Or does it have to be more complex than that? Given the assumptions above, we'd have to find the best combination of nps-norm, portion of size-optimized candidates to pass on, and npr-norm.

There's also the issue of how high to aim. Do we go for a one-in-a-million chance of an extreeemely good poly, or do we want to be 99% sure to find a workable poly? Should it matter a lot in this context, or can we say (to simplify) that we are looking to maximize the portion of the E-score distribution that is above the old "good score" cut-off?

Running -npr on every candidate obviously isn't efficient. Running -npr only on the one candidate with the best size score wouldn't be efficient either. The question: where is the sweet spot?

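One hedged way to frame the "how sure do we want to be" part: if a fraction p of size-optimized survivors would clear the old "good score" cutoff after root optimization, then root-optimizing n of them gives a 1 - (1 - p)^n chance of at least one keeper, so n is roughly ln(0.01)/ln(1 - p) for 99% confidence. A sketch follows; the p values are invented, and in practice p would have to be estimated from earlier -npr output.

```python
import math

# How many size-optimized candidates must go through -npr to be 99% sure of
# at least one poly clearing the old "good E-score" cutoff, if a fraction p of
# candidates would clear it?  (p values below are invented for illustration;
# in practice p would be estimated from earlier -npr runs.)

def candidates_needed(p, confidence=0.99):
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p))

for p in (0.10, 0.01, 0.001):
    print(f"p = {p:>5}: root-optimize about {candidates_needed(p)} candidates")
```
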