mersenneforum.org > Factoring Projects > Msieve

Old 2013-07-13, 16:12   #100
chris2be8

Quote:
Originally Posted by firejuggler
I found a few good ones, but 2 in particular might be of interest.
Many thanks, I'll queue the first one for processing. I should have a result within a week.

Chris

Old 2013-07-13, 16:51   #101
firejuggler

If anyone needs a poly in this range (150-170 digits), ask. Or if I forgot one in this thread, remind me.

Old 2013-07-16, 10:02   #102
jasonp
Tribal Bullet

Please move requests for polynomials to be computed for specific numbers to the thread reserved for them.

Old 2013-07-16, 13:27   #103
wombatman
I moo ablest echo power!

Thanks Jason!

Old 2013-07-25, 14:51   #104
lorgix

How should we be using the poly select speed-up provided by GPUs?

Say the speed-up is about 50x (is it?); should we just divide the time in stage 1 by 50?
We should probably use some of the speed-up to get better polys... but how much better?

What is the current protocol?
It seems to be: run poly select long enough to get a decent poly by the old standards.

Default stage1_norm (possibly higher, while focusing slightly more on the approximate range of coefficients that should yield the best polys?).

Run -nps with a stage2_norm smaller than default by a factor of 20(?). Should this be done on the whole set or just the top x%?

Run -npr on the top y% using... what norm?

How should we go about optimizing this further?

(I recently got a GPU with CC 3.0. Anyone have binaries for that?)

Unrelated: Would it be possible to do "reverse" poly select? Find a number that would sieve exceptionally well with a given polynomial?
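
For concreteness, here is the three-pass protocol above as a script. A minimal sketch in Python: the -np1/-nps/-npr, -g, -s and -v options are standard msieve flags, but the composite, the savefile base, the coefficient range, and especially the "stage2_norm=..." override inside the bounds string are placeholders/assumptions that should be checked against your build's -h output.

Code:
# Sketch of the np1 -> nps -> npr pipeline discussed above.
# Stage 1 hits typically land in BASE.m, size-optimized candidates in
# BASE.ms, and root-optimized polys in BASE.p (names assumed from -s).
import subprocess

N = "1000000000000000000000000000000000000000000000000000057"  # placeholder composite
BASE = "poly_c150"            # placeholder savefile base
RANGE = "10000,20000"         # placeholder leading-coefficient range

def msieve(*extra):
    cmd = ["./msieve", "-v", "-s", BASE, *extra, N]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Pass 1: GPU stage 1 (GPU 0) with the default, loose stage1_norm.
msieve("-g", "0", "-np1", RANGE)

# Pass 2: size optimization, with stage2_norm tightened ~20x from the
# default. NOTE: the norm-override syntax inside the bounds string is an
# assumption; some builds only use their compiled-in defaults.
msieve("-nps", "stage2_norm=5e22 " + RANGE)

# Pass 3: root optimization on the surviving candidates.
msieve("-npr", RANGE)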

Old 2013-07-25, 15:06   #105
firejuggler

I usually set a harsh enough stage 2 bound in -nps that I'm left with only a few polys to be -npr'ed.

Old 2013-07-25, 16:06   #106
lorgix

But is that efficient? Or is it inefficient like setting a strict stage1_norm?

Old 2013-07-25, 17:09   #107
firejuggler

I would say it is.
Let's say you have 1000 polys that pass the (stock-bounded) -nps stage in a range, and you want to keep only the 200 best. You will have to sort your 1000 polys and remove the last 800.
Now with a strictly bounded -nps, you will have only 90 or 120 polys in your file. The -npr stage will be much faster.
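
The reorder-and-trim step is cheap to script either way. A minimal sketch, assuming the size-optimized candidates sit one per line in the .ms file that -nps writes, with the size score as the last whitespace-separated field (smaller is better); the filenames are placeholders:

Code:
# Keep only the best KEEP size-optimized candidates for the -npr pass.
KEEP = 200

with open("poly_c150.ms") as f:                # placeholder .ms file
    cands = [line for line in f if line.strip()]

# Sort by the trailing size score; smallest norm first.
cands.sort(key=lambda line: float(line.split()[-1]))

with open("poly_c150.best.ms", "w") as f:
    f.writelines(cands[:KEEP])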

Old 2013-07-25, 17:11   #108
jasonp
Tribal Bullet

Quote:
Originally Posted by lorgix
We should probably use some of the speed-up to get better polys... but how much better?
Nobody is sure of that.
Quote:
(I recently got a GPU with CC 3.0. Anyone have binaries for that?)

Unrelated: Would it be possible to do "reverse" poly select? Find a number that would sieve exceptionally well with a given polynomial?
The bulk of the GPU time uses a third-party sorting library, which AFAIK does not have specific optimizations for Kepler-class GPUs.

Your unrelated question is what the special number field sieve is all about :)

Stage 1 is a fairly loose 'net' for dragging in polynomials; passing the bound in stage 1 only means that one extra coefficient in the algebraic polynomial is very small. It doesn't mean, at all, that the polynomial is any good. Passing the bound in stage 2 means the polynomial has a high chance of being good, and might be very good if it wins the root score lottery. So setting a strict bound for the size in stage 2 is a good idea, since the best overall polynomials will have both very good size and very good root scores. The only danger is that you ignore polynomials with a very good root score if you are greedy about the size score, because in a sense they are competing goals.
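
For reference, the two competing scores jasonp describes get combined in Murphy's E value, the number -npr ultimately ranks by. Roughly, following the textbook definition from Murphy's thesis (this is the standard formula, not necessarily msieve's exact implementation):

Code:
% Murphy's E score: sample K points (x_i, y_i) on an ellipse covering the
% sieve region, estimate the chance that both polynomial values are
% smooth, and sum. rho is Dickman's function; B_1, B_2 are the
% rational/algebraic smoothness bounds.
E(F_1, F_2) = \sum_{i=1}^{K}
    \rho\!\left(\frac{\ln|F_1(x_i, y_i)| + \alpha(F_1)}{\ln B_1}\right)
    \rho\!\left(\frac{\ln|F_2(x_i, y_i)| + \alpha(F_2)}{\ln B_2}\right)

A good size score keeps ln|F| small; a good root score makes alpha very negative. Either shrinks rho's argument and raises E, so a candidate pool filtered too hard on one can easily lose the eventual winner on the other.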

Old 2013-07-25, 19:06   #109
lorgix

Quote:
Originally Posted by jasonp
Nobody is sure of that.
It seems to me that the high throughput of a GPU should move the sweet spot slightly upwards. It may be that it has moved a negligible amount. Is that what you're saying, or am I missing something?

What do you think, by the way? Should we just divide the stage 1 deadline by the speed-up factor?
Quote:
Originally Posted by jasonp
The bulk of the GPU time uses a third-party sorting library, which AFAIK does not have specific optimizations for Kepler-class GPUs.
OK, thanks. It is performing well anyway.
Quote:
Originally Posted by jasonp
Your unrelated question is what the special number field sieve is all about :)
I see what you mean, but:
Let's say you are dead set on using a certain poly to factor a number with GNFS: how would you choose the number to be factored if you wanted it to be fast?
Quote:
Originally Posted by jasonp
Stage 1 is a fairly loose 'net' for dragging in polynomials; passing the bound in stage 1 only means that one extra coefficient in the algebraic polynomial is very small. It doesn't mean, at all, that the polynomial is any good. Passing the bound in stage 2 means the polynomial has a high chance of being good, and might be very good if it wins the root score lottery. So setting a strict bound for the size in stage 2 is a good idea, since the best overall polynomials will have both very good size and very good root scores. The only danger is that you ignore polynomials with a very good root score if you are greedy about the size score, because in a sense they are competing goals.
I see. If we are too picky about size, then we will have fewer chances to "win the root score lottery". We want to find the parameters that produce the most "wins" per unit of time.

What, if anything, do we know about the distribution of the quality scores produced by -nps and -npr?
-nps produces an alpha and a size score, if I'm not mistaken. Should we consider both, or only the size?
-npr produces (most importantly) an E-value.
Can it be assumed that there isn't a sizable penalty in defining the portion of the -nps output passed to -npr as the "top n entries" or "top x percent"? Or does it have to be more complex than that?

Given the assumptions above, we'd have to find the best combination of nps-norm, portion of size-optimized candidates to pass on, and npr-norm.

There's also the issue of how high to aim: do we go for a one-in-a-million chance of an extremely good poly, or do we want to be 99% sure of finding a workable poly? Should that matter a lot in this context, or can we say (to simplify) that we are looking to maximize the portion of the E-score distribution above the old "good score" cut-off?
Quote:
Originally Posted by firejuggler
I would say it is.
Let's say you have 1000 polys that pass the (stock-bounded) -nps stage in a range, and you want to keep only the 200 best. You will have to sort your 1000 polys and remove the last 800.
Now with a strictly bounded -nps, you will have only 90 or 120 polys in your file. The -npr stage will be much faster.
I see your point, but I'm not sure how consistently (if at all) 90-120 candidates with somewhat better properties would actually beat 200 candidates.
Running -npr on every candidate obviously isn't efficient. Running -npr only on the one candidate with the best size-score wouldn't be efficient either. The question: where is the sweet spot?
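
One way to attack the sweet-spot question empirically: freeze one batch of size-optimized candidates, sweep the fraction passed to -npr, and compare the best E found against the time spent. A rough sketch; run_npr() and best_e() are hypothetical wrappers around the msieve -npr invocation and its log parsing:

Code:
import time

def sweep(ms_lines, fractions=(0.01, 0.05, 0.10, 0.25, 0.50)):
    """For each cutoff, record (fraction, best E score, -npr seconds)."""
    # Rank once by trailing size score (smaller norm = better candidate).
    ranked = sorted(ms_lines, key=lambda line: float(line.split()[-1]))
    results = []
    for frac in fractions:
        subset = ranked[:max(1, int(frac * len(ranked)))]
        start = time.time()
        run_npr(subset)    # hypothetical: run msieve -npr on just this subset
        results.append((frac, best_e(), time.time() - start))  # best_e(): hypothetical log scrape
    return results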

Old 2013-07-25, 19:23   #110
firejuggler

42, lorgix, 42... (H2G2 reference)

That sweet spot is our grail, and... nobody has found it yet; that's why we experiment.