#34
"Nancy"
Aug 2002
Alexandria
100110100011₂ Posts
I might do a smaller job, though; for example, 7,269- looks interesting. It's a prime-base, prime-exponent-minus-1 number, so the OPN folks might like it. Only the factor 2153 is known at the moment.

Alex
#35
Oct 2004
Austria
2·17·73 Posts
#36
(loop (#_fork))
Feb 2006
Cambridge, England
2·13²·19 Posts
OK, the polynomial for 3,499+ is obvious, and with alim=rlim=5e7, lpa=lpr=30, q around 5e7 takes ~0.55 s/relation on a 2.2 GHz K8 I have lying around, so the relation collection sounds as if it was around 12,000 CPU-hours, ~1,000 GHz-days.

7,269- with those alim/rlim parameters is taking about 0.31 s/relation on the same machine and so would be about 7,000 CPU-hours.

2,841- is a much more interestingly exotic prospect: you start running into yield issues with gnfs-lasieve4I14e (though gnfs-lasieve4I15e with small-prime size 100M has 'only' an 850M-virtual / 400M-resident memory usage, and most fast-enough machines will by now have 1G per CPU); you probably have to use large-prime size 2^31, meaning you've got 150M relations to collect and manipulate; the matrix will be a challenge; and after all that it wouldn't even be the second-largest Cunningham SNFS job done. But what's the point of projects that can easily be done?

I'm doing a little pre-sieving; 25 GHz-years feels like the right order of estimate. Will post some figures in a couple of hours when the jobs are over.
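The CPU-hour and GHz-day figures above are straightforward arithmetic from the per-relation timings. A minimal sketch; the relation count (~80M for 3,499+) is my assumption inferred from the quoted 12,000 CPU-hours at 0.55 s/relation, not a figure stated in the post:

```python
# Back-of-envelope sieving-cost arithmetic, as in the post above.
# ASSUMPTION: ~80M relations for 3,499+, inferred from the quoted
# 12,000 CPU-hours at 0.55 s/relation; not stated in the post.

def sieving_cost(sec_per_rel, relations, clock_ghz=2.2):
    """Convert a measured seconds-per-relation into total effort."""
    cpu_hours = sec_per_rel * relations / 3600
    ghz_days = cpu_hours * clock_ghz / 24   # scale by the 2.2 GHz K8 clock
    return cpu_hours, ghz_days

hours, ghz_days = sieving_cost(0.55, 80e6)
print(f"{hours:.0f} CPU-hours, {ghz_days:.0f} GHz-days")  # ~12222 CPU-hours, ~1120 GHz-days
```

This reproduces the "around 12,000 CPU-hours, ~1,000 GHz-days" estimate from the post.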
#37
Bamboozled!
"πΊππ·π·π"
May 2003
Down not across
47·229 Posts
That said, 3,512+ really ought to be done some time reasonably soon.

Paul
#38
"Sander"
Oct 2002
52.345322,5.52471
29·41 Posts
I can contribute 15-25 GHz of sieving (GGNFS Franke siever) if needed.
#39
(loop (#_fork))
Feb 2006
Cambridge, England
1100100010110₂ Posts
OK, the pre-sieving for 2,841- gives some rather odd results.

I'm assuming that the yield per Q drops off as x^(-1/3), which is the best fit to the yield figures I have obtained on 7,263-, then solving

integral_{sieve_max}^{N} measured_yield_per_Q · x^(-1/3) dx = expected_relations_needed.

The yield-dropoff exponent is not straightforward to measure - I've needed to analyse most of a 75-million-relation sieving job to do the statistics to the point that I'm confident that the first decimal place is a 3 - though this may just be that I'm doing the stats wrong. Fortunately, values between 0.3 and 0.4 don't alter the conclusion below. For lp=2^30, expected_relations_needed is 85M; for lp=2^31 it's 170M.

I then measured the yield of relations for 10000 Q starting at sieve_max, and the time per relation to get those, for various parameter choices sieve_max / lpb / sieve_size; figures are yield, time per relation in seconds, and time for enough relations in GHz-years given the assumptions above. Hardware is a K8/2200; I was running one job on each core of a dual-core, but previous experience suggests this doesn't affect the timings significantly. Software is the Franke siever from the ggnfs build, with the makefile modified to build gnfs-lasieve4I15e as well as 12..14.

sieve_max/lpb/size   yield   s/rel   GHz-years
 50/30/14             4247   1.26      11.3
 50/31/14             8276   0.66      11.8
 50/30/15             8987   1.49      11.2
 50/31/15            16364   0.78      12.0
100/30/14             3889   1.90      14.7
100/31/14             7888   0.86      14.5
100/30/15             8271   1.64      12.3
100/31/15            17091   0.85      12.6

So: this is a job of more than 10 but probably not as much as 15 GHz-years; enlarging the sieve space makes life slower at small=50M and faster at small=100M; and going from lp=30 to lp=31 doesn't seem, by these measurements, a good idea even at this 254-digit level with the current sieving software, though I notice that the Aoki 274-digit SNFS was done with lp=34, and M1039 with lp=36. I'm currently running special-Q in [10^8, 10^8+10^4] for small=50,60,70,80,90 and space=30,31; results after the weekend.

Has anyone got a good reference for techniques for minimising expensive-to-compute functions? I suppose this may be a simplex-method job.
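The GHz-years column can be reproduced from the yield model. A sketch of my reading of the post's cost model (my reconstruction, not fivemack's actual script): yield per special-Q falls off as (x/x0)^(-1/3), time per Q stays roughly constant at its measured value, and the integral is solved for N in closed form:

```python
# Reconstruction of the cost model described above (an assumption about
# the method, not fivemack's code): solve
#   integral_{x0}^{N} c * (x/x0)^(-1/3) dx = relations_needed
# for N, where c is the measured yield per unit of special-Q at x0,
# then convert (Q-range) * (time per Q) into GHz-years.

def ghz_years(yield_per_1e4_q, sec_per_rel, x0, relations_needed, clock_ghz=2.2):
    c = yield_per_1e4_q / 1e4              # relations per unit of special-Q at x0
    # closed form: 1.5 * c * x0^(1/3) * (N^(2/3) - x0^(2/3)) = relations_needed
    n_23 = relations_needed / (1.5 * c * x0 ** (1 / 3)) + x0 ** (2 / 3)
    n = n_23 ** 1.5                        # upper end of the special-Q range
    sec_per_q = c * sec_per_rel            # time per Q, measured at x0
    total_sec = (n - x0) * sec_per_q
    return total_sec * clock_ghz / (365 * 86400)

# The 50/30/14 row: 4247 relations per 10^4 Q at 1.26 s/rel, 85M needed for lp=2^30.
print(round(ghz_years(4247, 1.26, 5e7, 85e6), 1))   # ~11.2, vs 11.3 in the table
```

The same function applied to the 50/31/14 row (8276 relations, 0.66 s/rel, 170M needed) gives ~11.9, against 11.8 in the table, so the model matches the figures to within rounding.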
#40
Bamboozled!
"πΊππ·π·π"
May 2003
Down not across
47·229 Posts

Paul
#41
Nov 2003
2²·5·373 Posts
The answer depends on several things: (1) Is the constraint region convex? Is it linear? (2) The number of local extrema. (3) The smoothness of the objective function. (4) How well/easily the gradient of the objective can be approximated. Gradient-descent and conjugate-gradient methods can work well if grad F can be accurately and easily computed and if there are not a lot of local extrema.
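When the gradient is unavailable and each evaluation is expensive, as with a sieving test run, the derivative-free downhill-simplex method fivemack alludes to is the usual fallback. A minimal sketch with the textbook Nelder-Mead coefficients, applied here to a toy quadratic standing in for "GHz-years as a function of parameters":

```python
# Minimal Nelder-Mead (downhill simplex) sketch: derivative-free minimisation
# for expensive objectives. Standard reflect/expand/contract/shrink moves.

def nelder_mead(f, x0, step=0.5, tol=1e-8, max_iter=500):
    n = len(x0)
    # initial simplex: x0 plus one point perturbed along each coordinate
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    vals = [f(p) for p in simplex]
    for _ in range(max_iter):
        order = sorted(range(n + 1), key=lambda i: vals[i])
        simplex = [simplex[i] for i in order]
        vals = [vals[i] for i in order]
        if vals[-1] - vals[0] < tol:          # simplex has collapsed: done
            break
        centroid = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        xr = [centroid[j] + (centroid[j] - simplex[-1][j]) for j in range(n)]
        fr = f(xr)
        if fr < vals[0]:                      # best so far: try expanding
            xe = [centroid[j] + 2 * (centroid[j] - simplex[-1][j]) for j in range(n)]
            fe = f(xe)
            simplex[-1], vals[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < vals[-2]:                   # better than second-worst: accept
            simplex[-1], vals[-1] = xr, fr
        else:                                 # contract toward the centroid
            xc = [centroid[j] + 0.5 * (simplex[-1][j] - centroid[j]) for j in range(n)]
            fc = f(xc)
            if fc < vals[-1]:
                simplex[-1], vals[-1] = xc, fc
            else:                             # shrink everything toward the best point
                best = simplex[0]
                simplex = [best] + [
                    [best[j] + 0.5 * (p[j] - best[j]) for j in range(n)]
                    for p in simplex[1:]
                ]
                vals = [f(p) for p in simplex]
    i = min(range(n + 1), key=lambda i: vals[i])
    return simplex[i], vals[i]

# toy objective with minimum at (1, 2)
x, fx = nelder_mead(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2, [0.0, 0.0])
```

Nelder-Mead makes no convexity or smoothness assumptions, matching conditions (1)-(4) above poorly understood objectives; its weakness is that it can stall at non-optimal points when there are many local extrema.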
#42
Bamboozled!
"πΊππ·π·π"
May 2003
Down not across
47·229 Posts
#43
(loop (#_fork))
Feb 2006
Cambridge, England
2·13²·19 Posts
#44
Nov 2003
2²·5·373 Posts
It does for the lattice sieve what the above paper does for a line siever. I am stuck for the time being on a sub-problem. Given an initial lattice

| p r |
| 0 1 |

where r may be assumed to be a uniform random variable on [1, p-1], WHAT IS THE EXPECTED VALUE OF THE COEFFICIENTS of a completely reduced basis? This is a difficult problem. One may expect on rough heuristic grounds that the reduced coefficients should have a mean that is somewhere between sqrt(p) and k·sqrt(p) for some k.

One thing is clear. Since the yield decreases as the special-q increases (the cause of this is obvious), the SIZE of the sieve region for each special-q must decrease as it increases. Exactly how this should be done depends on an answer to the above question.
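The question above can at least be probed empirically with Lagrange-Gauss reduction. A quick experiment, not an answer: I read the columns of the matrix as the basis vectors (p, 0) and (r, 1), i.e. the special-q lattice {(a, b) : a ≡ rb (mod p)}, and measure the typical size of the reduced coefficients relative to sqrt(p).

```python
# Empirical probe of the expected reduced-coefficient size: Lagrange-Gauss
# reduction of the lattice with basis (p, 0), (r, 1) for random r.
# The median (robust against the rare very-unbalanced bases when r is
# close to a small rational multiple of p) comes out near sqrt(p).

import random

def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a rank-2 integer lattice basis."""
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]
    if norm2(u) < norm2(v):
        u, v = v, u
    while True:
        # subtract the nearest-integer multiple of v from u
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(v))
        u = (u[0] - m * v[0], u[1] - m * v[1])
        if norm2(u) >= norm2(v):
            return v, u        # shorter vector first
        u, v = v, u

random.seed(1)
p = 1_000_003                  # an arbitrary prime standing in for special-q
ratios = []
for _ in range(1000):
    r = random.randrange(1, p)
    b1, b2 = gauss_reduce((p, 0), (r, 1))
    ratios.append(max(abs(c) for c in (*b1, *b2)) / p ** 0.5)
median = sorted(ratios)[len(ratios) // 2]
print(f"median max|coeff| / sqrt(p) = {median:.2f}")
```

Consistent with the heuristic in the post, the typical reduced coefficient is a small constant multiple of sqrt(p); the interesting part, which this sketch does not settle, is the exact constant and the shape of the distribution's tail.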