As behind as the status page suggests. In the weeds. :max:
This is why I've been prioritizing the base-2 numbers. But 2,1165+ will give us some time to catch up a bit. And I don't think anyone is in a hurry to know these factors. They'll get done eventually. If anyone wants to solve a 70M+ matrix, send them my way! :smile:
I'll do any matrix around 60M for your queue; if you stumble into one in the low 60s, give me a holla.
A New Target (easy!)
[QUOTE=VBCurtis;537964]I'll do any matrix around 60M for your queue; if you stumble into one in the low 60s, give me a holla.[/QUOTE]
Here is a new GNFS target for everyone: A C202

6523 10,337- c268 906533749251005245151122204670351312590267105760052002862150546121.c202
[QUOTE=R.D. Silverman;538091]Here is a new GNFS target for everyone: A C202
6523 10,337- c268 906533749251005245151122204670351312590267105760052002862150546121.c202[/QUOTE] Noted. We can add it to the list. For reference, the decimal form of this C202 is
[CODE]
2076486865904164187880498803002833020624706055858258295123907760787910463183237701437319913688727165276132151609318284002818920807675158414601157967453931895433506042829474274993772412901816590191592923
[/CODE]
The record e-score poly (deg 5) for a C202 was 3.665e-15.
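For anyone who wants to double-check the transcription, here is a minimal Python sanity check. It only assumes the P66 and the C202 digits above were copied exactly; since they make up the c268 together, their product should divide 10^337 - 1.
[CODE]
# Sanity check of the posted values: the c268 of 10,337- was split as
# p66 * c202, so their product should divide 10^337 - 1 exactly.
# Assumes both decimal strings above were transcribed correctly.
p66 = 906533749251005245151122204670351312590267105760052002862150546121
c202 = 2076486865904164187880498803002833020624706055858258295123907760787910463183237701437319913688727165276132151609318284002818920807675158414601157967453931895433506042829474274993772412901816590191592923
n = 10**337 - 1
print(len(str(c202)))          # expect 202
print(n % (p66 * c202) == 0)   # expect True
[/CODE]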
Did it have a t60 worth of t65-sized curves? I mean, is any more ECM necessary?
Sean and I can poly select this within a couple weeks. We could imitate the 2,1165+ sieve approach, using CADO for Q under, say, 100M and the 15e queue for 100M-up. Or just a Spring team-CADO-sieve with A=30 (equivalent to I=15.5), which would need about 5GB ram per process.
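For anyone not used to CADO's area parameter: as I understand las's convention, the sieve region is 2^I by 2^(I-1), i.e. 2^(2I-1) points per special-q, so A = 2I-1 and A=30 is the "I=15.5" area. A throwaway illustration:
[CODE]
# Relation between las's -I and -A parameters, as I understand it:
# the sieve region is 2^I x 2^(I-1) = 2^(2I-1) points, so A = 2I - 1.
for A in (28, 29, 30, 31, 32):
    I = (A + 1) / 2
    print(f"A={A}  <->  I={I}   (2^{A} points per special-q)")
[/CODE]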
[QUOTE=VBCurtis;538118]Did it have a t60 worth of t65-sized curves? I mean, is any more ECM necessary?
[/QUOTE] The P66 factor was found with ECM by Sam. Between his work, Bruce Dodson's work, my work, and the work of others [extent unknown], it has had more than sufficient ECM. The total extent is unknown: too many different participants, each with an unknown amount of work. I do believe that Bruce did a t65 by himself. It was among the first 5 holes when he did his work.
[QUOTE=VBCurtis;538118]Did it have a t60 worth of t65-sized curves? I mean, is any more ECM necessary?
Sean and I can poly select this within a couple weeks. We could imitate the 2,1165+ sieve approach, using CADO for Q under, say, 100M and the 15e queue for 100M-up. Or just a Spring team-CADO-sieve with A=30 (equivalent to I=15.5), which would need about 5GB ram per process.[/QUOTE] I like the idea of a team sieve for low Q combined with a 15e queue for higher Q. But is this C202 GNFS too difficult for 15e? It seems to be stretching the bounds a bit. But 16e is fully tasked for the foreseeable future, so I would be happy to help in a team sieve if 15e proves “too suboptimal”.
[QUOTE=swellman;538127]I like the idea of a team sieve for low Q combined with a 15e queue for higher Q. But is this C202 GNFS too difficult for 15e? It seems to be stretching the bounds a bit. But 16e is fully tasked for the foreseeable future, so I would be happy to help in a team sieve if 15e proves “too suboptimal”.[/QUOTE]
I understand that the relative efficiencies of ggnfs sievers vs cado sievers are quite different, but recall that we sieved half the C207 job with I=15. I don't think it's too bad an idea to use a large siever area on small Q with CADO, while doing higher Q with ggnfs/15e. We'd use I=15 for the higher ranges on CADO anyway, and on higher Q yield is more similar between the software packages than it is at low Q. So, A=30 on CADO combined with 15e on nfs@home should nicely utilize both large-memory linux resources and lower-memory mass contributions. I think I'd pick 33/34LP if it were a pure CADO job, so going down half a large-prime to be compatible with the 15e queue is no big deal. Something like Q=5-150M on CADO and 150-600M on 15e ought to do the trick.
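Purely for a rough feel of how that split divides the special-q (raw prime counts only; the higher per-q yield at small Q, discussed later in the thread, is not modeled):
[CODE]
# Approximate number of special-q primes in each proposed range, using
# pi(x) ~ x / ln(x).  Rough illustration only; per-q yield is not modeled.
from math import log

def pi_approx(x):
    return x / log(x)

cado = pi_approx(150e6) - pi_approx(5e6)     # Q = 5M..150M on CADO
e15  = pi_approx(600e6) - pi_approx(150e6)   # Q = 150M..600M on 15e
print(f"~{cado/1e6:.1f}M special-q for CADO, ~{e15/1e6:.1f}M for 15e/nfs@home")
[/CODE]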
[QUOTE=VBCurtis;538130]I don't think it's too bad an idea to use a large siever area on small Q with CADO, while doing higher Q with ggnfs/15e.[/QUOTE]
I recall Bob saying something to the effect that sieving a larger area at smaller q is theoretically optimal.
On 2,1165+ I got wonderful feedback from the teams. They advise setting up a new app, with details on the memory requirements on the project preference page, and increasing the reward. I believe this is feasible; we may only need to change or add more intermediate badge levels. Right now individuals cannot reach the highest badge level.
[QUOTE=axn;538131]I recall Bob saying something to the effect that sieving a larger area at smaller q is theoretically optimal.[/QUOTE]
The following is a theorem. The total number of lattice points that are sieved is minimized when the sieve area for each q is proportional to the yield for that q. The constant of proportionality falls out of the analysis as an eigenvalue in the calc of variations problem. Its value depends on the total number of relations needed. Since smaller q have higher yields this means that the sieve area for small q should be larger.

One would think that smaller q would have smaller yield, but the following happens: There is a "seesaw" effect that takes place between the two norms that need to be smooth. As one makes one norm smaller (let's say the rational one), the other norm gets bigger [and vice versa]. The effect is non-linear; the rising norm increases faster than the decreasing norm decreases. To see this look at what happens for a fixed factorization when one changes the algebraic degree. Also, look at what happens as q changes size.

For example, we need (rational norm/q) to be smooth as q changes. As q gets bigger this gets smaller. But the [i]algebraic[/i] norm [i]increases[/i] as ~q^(d/2), where d is the degree, when we use q on the rational side.
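To put rough numbers on that seesaw, here is a throwaway sketch (arbitrary units; it only illustrates the q^(-1/2) versus q^(d/2) scaling described above, not the behavior of any real siever):
[CODE]
# Rough scaling of the two norms with the size of a rational-side special-q.
# Lattice vectors grow like sqrt(q), so (rational norm)/q shrinks like
# q^(-1/2) while the degree-d algebraic norm grows like q^(d/2).
# Arbitrary units; illustration only.
d = 5
base_q = 5_000_000
for q in (5_000_000, 50_000_000, 150_000_000, 600_000_000):
    r = q / base_q                  # growth of q relative to the smallest
    rational  = r ** -0.5           # (rational norm)/q, relative to base_q
    algebraic = r ** (d / 2)        # algebraic norm, relative to base_q
    print(f"q={q:>11,}   rational/q x{rational:6.3f}   algebraic x{algebraic:11.1f}")
[/CODE]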