mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Msieve (https://www.mersenneforum.org/forumdisplay.php?f=83)
-   -   Polynomial Request Thread (https://www.mersenneforum.org/showthread.php?t=18368)

firejuggler 2013-10-09 11:58

Another low-expo LD. Will now try a reasonable range (12M to 17M).
[code]
R0: -174549336556401481202069007537515
R1: 440721410024969593
A0: -281491614847856620351679317663874215083264
A1: -52465697511380707542888318448808376
A2: 12109969047207747484463577258
A3: 316512889878104730031
A4: -48637872015544
A5: 3201660
skew 11587470.20, size 1.937e-016, alpha -8.325, combined = 4.227e-013 rroots =3
[/code]

VBCurtis 2013-10-13 07:48

[QUOTE=LaurV;355717]He says that in the same period of time, one guy runs 100 meters, throwing one dollar out to the public every 10 meters he runs, while the second guy runs 133 meters (13.3% faster) but spits out one dollar only every 14.6 meters (14.6% slower). At the end, the public collects 10 or 11 dollars from the first guy (depending on where he threw out the first dollar), and 10 or 11 from the second (again, depending on luck), so there is no relevant comparison between the productivity of the two guys.[/QUOTE]

This is what I thought he was trying to say. Here's the problem: The sec/rel is seconds per relation, NOT seconds per Q-tested. My guy spits out more dollars per second, even though the dollars per meter measure is worse. Why do we care how many meters he covers, if we collect the money 13% faster?

It takes my guy 14.6% more meters to earn a dollar, but he runs roughly 28% faster, so he throws out dollars about 13% more often than the other guy. My point is that this is clearly better, and that you two are confusing sec/rel with sec/Q. Sec/rel is how often we get dollars, period. That holds as long as the number of relations required by the two polys matches (with all parameters equal, we should assume it does).
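The dollars-per-second point can be checked with a toy calculation. All numbers below are illustrative stand-ins following the runner analogy, not measurements from any actual sieving run:

```python
# Toy model of the runner analogy: "meters" stand for special-Q covered,
# "dollars" for relations found.  All numbers are illustrative.

def relations_per_second(meters_per_sec, meters_per_dollar):
    """Dollars (relations) produced per second of running (sieving)."""
    return meters_per_sec / meters_per_dollar

# Runner A: covers 100 m per unit time, drops a dollar every 10 m.
a = relations_per_second(100.0, 10.0)   # -> 10.0 dollars per unit time

# Runner B: covers 133 m in the same time, but needs ~14.6% more
# meters per dollar (11.46 m instead of 10 m).
b = relations_per_second(133.0, 11.46)  # -> ~11.6 dollars per unit time

# B's meters-per-dollar figure is worse, yet his dollars-per-second
# (sec/rel, the metric that matters) is clearly better.
print(b > a)  # True
```

The point of the sketch: sec/rel already folds both the running speed and the dollar spacing into one number, so comparing sec/rel alone is enough.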

EdH 2013-10-13 14:02

How do duplicate relations fit into the above scenarios?

firejuggler 2013-10-13 15:43

adverse wind?

VBCurtis 2013-10-14 20:08

[QUOTE=EdH;356125]How do duplicate relations fit into the above scenarios?[/QUOTE]

I am not sure, but my wild guess is that searching more Q might lead to more duplicates, which might require us to find yet more relations in order to build a matrix. However, we're talking about perhaps a single-digit percentage of add'l dups, when dups are a single-digit percentage of total rels found. So any effect would be on the order of 1% of extra sieve effort needed. That leads us to ignore the effect.
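The "single-digit percentage of a single-digit percentage" estimate works out as follows; both rates here are assumed for illustration, not measured:

```python
# Assumed figures for the back-of-envelope estimate: duplicates are a
# single-digit share of all relations, and wider Q-searching grows the
# duplicate count by another single-digit fraction.
dup_share = 0.08   # assumed: duplicates as a share of total relations
dup_growth = 0.09  # assumed: relative growth in duplicates from extra Q

# Extra sieve effort is the product of the two small fractions.
extra_sieve_effort = dup_share * dup_growth
print(extra_sieve_effort)  # roughly 0.007, i.e. well under 1% extra effort
```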

firejuggler 2013-10-14 20:13

I can't get anything good for swellman's C168.

henryzz 2013-10-14 22:37

[QUOTE=VBCurtis;356231]I am not sure, but my wild guess is that searching more Q might lead to more duplicates, which might require us to find yet more relations in order to build a matrix. However, we're talking about perhaps a single-digit percentage of add'l dups, when dups are a single-digit percentage of total rels found. So any effect would be on the order of 1% of extra sieve effort needed. That leads us to ignore the effect.[/QUOTE]
Dunno. Near the end of sieving with problematic numbers, I have seen figures like >30% of the new relations being duplicates.

VBCurtis 2013-10-15 01:19

[QUOTE=henryzz;356242]Dunno. Near the end of sieving with problematic numbers, I have seen figures like >30% of the new relations being duplicates.[/QUOTE]

Are there specific scenarios where this happens? I mean, can we predict which situations are high-risk for this to happen?

Possibly related: Do special-Q values above our preferred search region lose efficiency because we find relations more slowly, or because more of the relations we find are duplicates (or both??)?

My statements were meant as an average over the entire project, because our measurement of sieving speed is also an avg over the entire project. In other words, you may be right that the last, say, 10% of special-Q searched produce 30% duplicates, but that results in only a 3% increase in total duplicates (and thus total # of relations needed) vs a project that needs 10% fewer special-Q and so never sieves the high-duplicate region.
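A quick sketch of that averaging argument (the 10%/30% figures come from the paragraph above; the 5% baseline duplicate rate is an assumption):

```python
# If only the last slice of special-Q is duplicate-heavy, the overall
# duplicate rate barely moves.  baseline_dup_rate is assumed.
baseline_dup_rate = 0.05  # assumed dup rate over the first 90% of Q
tail_dup_rate = 0.30      # dup rate seen in the problematic tail
tail_fraction = 0.10      # share of raw relations coming from that tail

# Weighted average over the whole project.
overall = (1 - tail_fraction) * baseline_dup_rate + tail_fraction * tail_dup_rate
extra = overall - baseline_dup_rate

print(round(overall, 4))  # 0.075 -> 7.5% duplicates overall
print(round(extra, 4))    # 0.025 -> only ~2.5 points more raw relations needed
```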

RichD 2013-10-15 03:02

In my limited experience, I am happy with an 18-20% dup rate; even SNFS polys can't beat the 16% barrier. As you both pointed out, with a not-so-good poly, an extended range of special-Q can push the dup rate above 30%. Additionally, starting the range too low will also raise the dup rate, but my guess is it is nowhere near 30%.

I don't have empirical data to back my claim, just "gut feel" from doing several hundred NFS jobs including a few dozen for RSALS & NFS@Home.

swellman 2013-10-16 01:48

[QUOTE=sashamkrt;355601][CODE]
# norm 2.880607e-016 alpha -8.201338 e 4.765e-013 rroots 5
skew: 89100677.43
c0: -27857995650530594214542724173490926656056000
c1: 489092174862742972528340289017184040
c2: 30770925245570759449861491874
c3: -51337377580050003917
c4: -4407409358582
c5: 10200
Y0: -551154318026277087308020573543333
Y1: 109459496504263717

[/CODE][/QUOTE]

Is this the best poly? It has by far the best e-score, near the top of the expected range.

Thanks to all for the heavy lifting here.

swellman 2013-10-21 23:04

NFS@Home has the C168 prepped for queue. Thanks to all.

If folks are willing, there is another xyyxf C168 composite for factoring by GNFS, and it has survived 21k curves at B1=260M.

[code]C168_130_119

619210585289939300853894524032703690620745598172616026950373110134063171003372954902148138740033081843433800699129174962738134140333754085866583619110148324516327414223[/code]

Thanks in advance for any and all poly searching.

