mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Msieve (https://www.mersenneforum.org/forumdisplay.php?f=83)
-   -   Polynomial Request Thread (https://www.mersenneforum.org/showthread.php?t=18368)

RichD 2013-07-23 04:54

A Challenge
 
Aliquot Sequence 4788 sits at index 5154 with a c173 remaining. The best poly found so far is posted at: [URL]http://mersenneforum.org/showpost.php?p=337945&postcount=2102[/URL].

This is a very, very low-priority request, but can this poly be bettered? I believe the leading coefficient has been tried through 800K.

firejuggler 2013-07-23 05:27

I will try.

VBCurtis 2013-07-24 21:57

Another poly for Tom's C176:
[CODE] n: 23847813234751095518553790092375554156554053397317779773831395300907022889988067196688707035368393323174109573706580716877436977083912429179428455659750299481884888866554791173
# norm 3.445278e-017 alpha -6.820795 e 1.321e-013 rroots 5
skew: 3088331.89
c0: -498014898833129675206187724146860356135
c1: 24249301299856929460089682902161763
c2: -34320331326371745034488026956
c3: -7376626463852947241062
c4: 3051637511514764
c5: 214785480
Y0: -2565008026979220326617490926563632
Y1: 186995647017448237 [/CODE]
If I understand things correctly, for a job this big it's worth trial-sieving the polys within, say, 2% of the best score? I am still running the GPU on this in hopes of score 1.40; if I give up, I'll post the 1.30 for trial-sieving purposes also.
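In rough Python terms (the candidate names and all scores except the posted 1.32 are made up), that selection step might look like:
[CODE]# Keep for trial sieving only the candidates whose Murphy E score is
# within 2% of the best one (the cutoff suggested above).
# Names and most scores here are made-up placeholders.
candidates = {
    "deg5_a": 1.321e-13,   # the poly posted above
    "deg5_b": 1.305e-13,
    "deg5_c": 1.250e-13,
}

best = max(candidates.values())
to_trial_sieve = [name for name, e in candidates.items() if e >= 0.98 * best]
print("trial-sieve these:", to_trial_sieve)[/CODE]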

wombatman 2013-07-24 22:04

Stupid question about trial-sieving:

Is that essentially seeing how many relations/sec one gets with a given polynomial?

swellman 2013-07-24 22:52

[QUOTE=wombatman;347260]Stupid question about trial-sieving:

Is that essentially seeing how many relations/sec one gets with a given polynomial?[/QUOTE]

Yes, as well as relations/special_q.

ETA: See [url=http://www.mersenneforum.org/showpost.php?p=347141&postcount=56]this thread[/url] for a discussion on sieving estimates.
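As a rough sketch (assuming each candidate poly is test-sieved over the same short special-q range, and assuming the siever prints a summary line of the form "total yield: ..., (... sec/rel)" — the summary lines below are made up, not real siever output):
[CODE]import re

# Compare candidate polynomials by sec/rel and relations per special_q
# from short test-sieves over the same q range.
SUMMARY = re.compile(r"total yield:\s*(\d+).*\(([\d.]+) sec/rel\)")

def parse_summary(line, q_range):
    """Return (relations, sec/rel, relations per special_q) for one test range."""
    m = SUMMARY.search(line)
    rels, sec_per_rel = int(m.group(1)), float(m.group(2))
    return rels, sec_per_rel, rels / q_range

# Made-up summary lines for two candidate polys, each sieved over 2000 special_q.
outputs = {
    "poly_1.32": "total yield: 3021, q=16002000 (0.1421 sec/rel)",
    "poly_1.30": "total yield: 2887, q=16002000 (0.1498 sec/rel)",
}
for name, line in outputs.items():
    rels, spr, rpq = parse_summary(line, q_range=2000)
    print(f"{name}: {rels} rels, {spr:.4f} sec/rel, {rpq:.2f} rels/special_q")[/CODE]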

jasonp 2013-07-25 00:35

Trial sieving is supposed to minimize the time that the sieving stage needs to complete. That means picking the size and number of large primes, estimating from those the number of relations that will be needed, then sieving a few widely separated Q values (say 1/1000 of the complete job) and computing the estimated total sieving time from the relations/sec found across each range.
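In code form, that estimate might look roughly like the sketch below; the sample ranges, yields, and the 180M relation target are invented for illustration (in a real job the target comes from the chosen large-prime bounds, and the yields from actual test-sieves):
[CODE]def estimate_sieve_time(samples, q_min, q_max, rels_needed):
    """samples: list of (q_start, q_len, relations_found, cpu_seconds)
    from a few widely separated test ranges."""
    sampled_q = sum(q_len for _, q_len, _, _ in samples)
    total_rels = sum(r for _, _, r, _ in samples)
    total_secs = sum(s for _, _, _, s in samples)

    rels_per_q = total_rels / sampled_q      # average yield per special_q
    secs_per_rel = total_secs / total_rels   # average cost per relation

    if rels_per_q * (q_max - q_min) < rels_needed:
        print("warning: this q range may not yield enough relations")
    return rels_needed * secs_per_rel        # CPU-seconds to hit the target

# Invented numbers: three 2000-wide test ranges out of a 16M..60M q interval,
# aiming for ~180M relations.
samples = [(16_000_000, 2000, 9100, 1350.0),
           (38_000_000, 2000, 8300, 1280.0),
           (58_000_000, 2000, 7400, 1190.0)]
secs = estimate_sieve_time(samples, 16_000_000, 60_000_000, 180_000_000)
print(f"estimated sieving: {secs / 86400:.0f} CPU-days")[/CODE]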

It amazes me that folks here have the discipline to manage full-scale trial sieving for any job. I don't think I could do it.

wombatman 2013-07-25 01:01

Man, you're not kidding. Might be interesting to play around with, but at this point, I think I'm limited more by not having a cluster (quad core laptop processor! woooooo!) to use for the sieving than by tweaking the parameters for optimization.

VBCurtis 2013-07-25 01:26

I, too, have an i7 laptop as my factoring tool. You're mostly right about trial-sieving mattering less for us: if, say, 3 human-hours spent on optimizing parameters can save 10% of a job's sieve length, we'd have to be running a very lengthy job (or REALLY treasure our CPU time vs our human time) for the trial time to be worthwhile.
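As a back-of-the-envelope version of that trade-off (all numbers below are invented):
[CODE]# Is trial sieving worth the human time on a laptop-sized job?
job_cpu_days = 10          # expected sieving time without tuning (invented)
expected_savings = 0.10    # hoped-for improvement from better parameters
trial_cpu_hours = 2        # CPU cost of the test ranges themselves
human_hours = 3            # time spent setting up and comparing runs

cpu_days_saved = job_cpu_days * expected_savings - trial_cpu_hours / 24
print(f"net saving: {cpu_days_saved:.2f} CPU-days for {human_hours} human-hours")[/CODE]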

But as one moves up into GNFS-150s, the prospect of a 10% saving can mean days of CPU time; that's not to say we'll automatically find 10% efficiency...

Further, it strikes me that one ought to learn which knobs to turn to attempt to optimize while the cost of a mistaken "optimization" is days rather than weeks. When I do get around to tackling a GNFS-150, I plan to try just this even though best parameters are well known (?) in the 150s.

Do enough people use the factmsieve script for 150+ factorizations that we (I volunteer) should update the parameters files?

swellman 2013-07-25 01:44

[QUOTE=wombatman;347280]Man, you're not kidding. Might be interesting to play around with, but at this point, I think I'm limited more by not having a cluster (quad core laptop processor! woooooo!) to use for the sieving than by tweaking the parameters for optimization.[/QUOTE]

I've done over 40 factorizations with the same rig. Depending on memory, you're good for GNFS into the 160s, and equivalent SNFS. Don't get too wrapped up in the optimization: for factorizations of that size, the net sieving-time difference between a perfect and a suboptimal poly is only a few days. Personally, I don't worry if a factorization takes, say, 26 days instead of a theoretical optimum of 23 days.

Start hunting big game (e.g. GNFS 190s) and things are much different. [url=http://www.mersenneforum.org/showthread.php?t=15744]Sieving jobs can take CPU-centuries[/url]. :max:

All IMHO of course.

swellman 2013-07-25 01:54

[QUOTE=VBCurtis;347285]I, too, have an i7 laptop as my factoring tool. You're mostly right about trial-sieving mattering less for us: if, say, 3 human-hours spent on optimizing parameters can save 10% of a job's sieve length, we'd have to be running a very lengthy job (or REALLY treasure our CPU time vs our human time) for the trial time to be worthwhile.

But as one moves up into GNFS-150s, the prospect of a 10% saving can mean days of CPU time; that's not to say we'll automatically find 10% efficiency...

Further, it strikes me that one ought to learn which knobs to turn to attempt to optimize while the cost of a mistaken "optimization" is days rather than weeks. When I do get around to tackling a GNFS-150, I plan to try just this even though best parameters are well known (?) in the 150s.

Do enough people use the factmsieve script for 150+ factorizations that we (I volunteer) should update the parameters files?[/QUOTE]

I use YAFU exclusively and with great success. But its automated GNFS parameter table tops out at 155, as that was considered the bleeding edge when YAFU was first created. Manual intervention is required when factoring larger GNFS composites. While I find this manual process fun and educational, expanding the parameter base might be beneficial.
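For what it's worth, one way a script could fill in sizes its table doesn't list is to interpolate between the nearest rows; the rows and values below are illustrative placeholders, not YAFU's actual table or a vetted parameter set:
[CODE]import bisect

# Illustrative digits -> parameter rows. The values are rough placeholders,
# NOT a vetted table -- real choices should be confirmed by test-sieving.
TABLE = {
    150: {"lpb": 29, "fblim": 24_000_000, "mfb": 58},
    155: {"lpb": 29, "fblim": 30_000_000, "mfb": 58},
    160: {"lpb": 30, "fblim": 38_000_000, "mfb": 60},
    165: {"lpb": 30, "fblim": 48_000_000, "mfb": 60},
}

def params_for(digits):
    """Linearly interpolate between the bracketing rows; clamp at the ends
    (a fuller script would extrapolate past the last row instead)."""
    sizes = sorted(TABLE)
    if digits <= sizes[0]:
        return TABLE[sizes[0]]
    if digits >= sizes[-1]:
        return TABLE[sizes[-1]]
    i = bisect.bisect_left(sizes, digits)
    lo, hi = sizes[i - 1], sizes[i]
    t = (digits - lo) / (hi - lo)
    return {k: round(TABLE[lo][k] + t * (TABLE[hi][k] - TABLE[lo][k]))
            for k in TABLE[lo]}

print(params_for(158))   # e.g. a C158[/CODE]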

wombatman 2013-07-25 15:51

[QUOTE=swellman;347286]I've done over 40 factorizations with the same rig. Depending on memory, you're good for GNFS into the 160s, and equivalent SNFS. Don't get too wrapped up in the optimization: for factorizations of that size, the net sieving-time difference between a perfect and a suboptimal poly is only a few days. Personally, I don't worry if a factorization takes, say, 26 days instead of a theoretical optimum of 23 days.

Start hunting big game (e.g. GNFS 190s) and things are much different. [url=http://www.mersenneforum.org/showthread.php?t=15744]Sieving jobs can take CPU-centuries[/url]. :max:

All IMHO of course.[/QUOTE]

I'm with you. I've worked on various sizes of numbers. I don't recall the largest I've completed start-to-finish, but I think it was high C130s or low C140s. I have a C157, C158, and C159 I'm working on now. I suppose if I were to do a big number, I would just post the poly on here and ask for help. Thanks for all the information re: sieving time of larger numbers.

I hope that one day soon we'll be able to sieve using a CUDA-based system. That would be fantastic.

