That's the polynomial they use:
[CODE]skew: 1.41
c6: 1
c5: 0
c4: 0
c3: 0
c2: 0
c1: 0
c0: -2
Y1: 1
Y0: -191561942608236107294793378393788647952342390272950272
alim: 250000000
rlim: 250000000
lpbr: 33
lpba: 33
mfba: 67
mfbr: 96
alambda: 2.6
rlambda: 3.6[/CODE] +16e siever. I guess they'll need something like 1.5 to 2 billion raw relations.
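The parameters above encode the polynomial pair f(x) = x^6 - 2 (algebraic side) and g(x) = x - 2^177 (rational side). A quick sanity check, assuming the target is N = 2^1061 - 1 (the number NFS@Home lists as 2,1061-), confirms the two polynomials share a root modulo N:

```python
# Sanity-check the SNFS polynomial pair from the post above.
# Assumption (not spelled out in the thread): the target is
# N = 2^1061 - 1, listed by NFS@Home as 2,1061-.
N = 2**1061 - 1

# Rational side g(x) = Y1*x + Y0 = x - 2^177, so the common root is m = -Y0.
m = 191561942608236107294793378393788647952342390272950272
assert m == 2**177

# Algebraic side f(x) = x^6 - 2 (c6=1, c0=-2, other coefficients zero).
# f(m) = 2^1062 - 2 = 2 * (2^1061 - 1), which vanishes modulo N:
assert (pow(m, 6, N) - 2) % N == 0
print("f and g share the root 2^177 modulo 2^1061 - 1")
```

This is exactly why SNFS applies here: the special form 2^1061 - 1 yields a degree-6 polynomial with tiny coefficients, which GNFS could never find for a general 320-digit number.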
[QUOTE=firejuggler;257939]Another question: how many raw relations are we looking for? (For a [URL="http://mersenneforum.org/showpost.php?p=257919&postcount=17"]162-digit[/URL] number, Syd needed around 112M relations, and for a 135-digit one, 23.5M.) Naïve estimate: 162-135 = 27; 112/23.5 = 4.766, so doubling the number of relations for each 5.66 digits (let's say 6); 320-162 = 158; 158/6 = 26.33; 112M * 2^26.33 = 9 447 954 834 860 241 relations? Must have done something wrong.[/QUOTE]The number of relations depends almost entirely on the large prime bounds. If they are LPB1 and LPB2, the rule of thumb suggests that 0.8 * (pi(LPB1) + pi(LPB2)) unique relations are required. Paul
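Plugging the posted parameters (lpba = lpbr = 33, i.e. large prime bounds of 2^33) into that rule of thumb gives a rough figure. This sketch approximates the prime-counting function pi(x) by x/ln(x), a slight underestimate, so the result is ballpark only:

```python
from math import log

# Rule of thumb quoted above: ~0.8 * (pi(LPB1) + pi(LPB2)) unique
# relations. With lpba = lpbr = 33 from the posted poly file, both
# bounds are 2^33. pi(x) is approximated by x/ln(x).
def pi_approx(x: int) -> float:
    return x / log(x)

lpb = 2**33
unique_needed = 0.8 * (pi_approx(lpb) + pi_approx(lpb))
print(f"~{unique_needed / 1e6:.0f} million unique relations")
```

This lands around 600 million *unique* relations; the 1.5-2 billion figure mentioned earlier is for *raw* relations, which include substantial duplication from the sievers.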
2 billion? That seems far easier than my first estimate of 9 quadrillion.
The number of relations needed does not depend on the size of the input number, only on the size of the large primes used. Basically you have to get enough relations so that the construction of matrix columns can be made to cancel out (most of) the large primes.
The size of the input does affect how long it takes to find each relation, though.
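The cancellation idea can be shown with a toy example (not NFS itself): two "partial" relations that are smooth apart from one shared large prime can be multiplied together, so the large prime appears to an even power and vanishes from the matrix, which only tracks exponents mod 2:

```python
from collections import Counter

# Toy illustration of large-prime cancellation. Each relation is a
# factorization, stored as {prime: exponent}. Both share the large
# prime 101.
rel1 = Counter({2: 3, 5: 1, 101: 1})   # 2^3 * 5 * 101
rel2 = Counter({3: 2, 7: 1, 101: 1})   # 3^2 * 7 * 101

combined = rel1 + rel2                 # exponents add when multiplying
parity = {p: e % 2 for p, e in combined.items()}

# The shared large prime now has an even exponent, so it contributes
# nothing to the GF(2) matrix column.
assert parity[101] == 0
print(parity)
```

In the real algorithm this matching happens at scale via cycle-finding over millions of partial relations, but the parity bookkeeping is the same.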
[QUOTE=firejuggler;257939]Another question: how many raw relations are we looking for? (For a [URL="http://mersenneforum.org/showpost.php?p=257919&postcount=17"]162-digit[/URL] number, Syd needed around 112M relations, and for a 135-digit one, 23.5M.)[/QUOTE] Those were GNFS jobs, and this is an SNFS job. For SNFS, more relevant figures might be (from my records):

205 digits: about 40 million relations, about 400 CPU-hours on a K8/2500
228 digits: about 95 million relations, about 3500 CPU-hours
263 digits: about 300 million relations, about 46000 CPU-hours
283 digits: about 630 million relations, about 220000 CPU-hours

This is a 320-digit job, for which I'd extrapolate to about 2.5 billion relations and about three million CPU-hours. Say a thousand CPUs for four months.
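The extrapolation above can be reproduced with a log-linear fit over those four data points. Assuming cost grows roughly exponentially in the digit count (an assumption of this sketch, not necessarily the poster's exact method):

```python
from math import log10

# SNFS figures quoted above: (digits, relations, CPU-hours).
data = [(205, 40e6, 400), (228, 95e6, 3500),
        (263, 300e6, 46000), (283, 630e6, 220000)]

def fit_and_predict(xs, ys, x_new):
    """Least-squares line through (x, log10 y); return 10^prediction."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(log10(y) for y in ys) / n
    slope = (sum((x - mx) * (log10(y) - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 10 ** (my + slope * (x_new - mx))

digits = [d for d, _, _ in data]
rels = fit_and_predict(digits, [r for _, r, _ in data], 320)
hours = fit_and_predict(digits, [h for _, _, h in data], 320)
print(f"~{rels / 1e9:.1f} billion relations, "
      f"~{hours / 1e6:.1f} million CPU-hours")
```

The fit lands in the same neighborhood as the quoted estimate: a couple of billion relations and a few million CPU-hours for 320 digits.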
In another thread, I wondered if I should try to TF M1061-... or if the final Lanczos step of combining the relations could be done on lesser computers than the Lomonosov cluster. I think we were batting around 60M relations, but the gods will have to tell us for sure. NFS@Home seems to be significantly smaller than PrimeNet: I calculated that my 6-core AMD machine would be able to contribute about 0.5% of the sieving on a given day, whereas I get around 0.1% of the daily PrimeNet credit.
[QUOTE=M0CZY;257924]NFS@Home is currently sieving 2,1061-
[URL]http://escatter11.fullerton.edu/nfs/[/URL][/QUOTE] Is that so? I can't find it on the page.
[QUOTE=lorgix;262071]Is that so? I can't find it on the page.[/QUOTE]
I can.
[QUOTE=lorgix;262071]Is that so? I can't find it on the page.[/QUOTE]
Have a look at "Status of numbers" or just take the direct link: [url]http://escatter11.fullerton.edu/nfs/numbers.html[/url]
Thanks!
So... M1277 anyone? I have a feeling Silverman can't wait to get that started.
[QUOTE=lorgix;262075]Thanks!
So... M1277 anyone? I have a feeling Silverman can't wait to get that started.[/QUOTE] Your feeling would be incorrect. I have no desire to get that started.