[QUOTE=bdodson;118937]
> 3,508+ C188 (difficulty 242.38) or 3,512+ C193 (diff 244.29) > from the more wanted list? [/QUOTE] I don't think I'll take on a project this large for a little while. I can't use the machines at the Technische Universität München any more. I can access a lot of Opterons in the "Grid5000" network here in France now, but jobs in the idle queue often get kicked off the nodes, and putting the partial output files together and restarting jobs would take more of my time than I can spare at the moment.

I might do a smaller job, though; for example, 7,269- looks interesting. It's a prime-base, prime-exponent, minus-1 number, so the OPN folks might like it. Only the factor 2153 is known at the moment.

Alex
[QUOTE=bdodson;118961]M821 = 2,821- C208[/QUOTE]
What is the ECM status of this one? (I don't want to run ECM on it once somebody is sieving it, but before then I could do a few curves)
OK, the polynomial for 3,499+ is obvious, and with alim=rlim=5e7, lpa=lpr=30, sieving at q around 5e7 takes ~0.55 s/relation on a 2.2GHz K8 I have lying around, so the relation collection would have been around 12,000 CPU-hours, ~1,000 GHz-days.
7,269- with those alim/rlim parameters takes about 0.31 s/relation on the same machine, so it would be about 7,000 CPU-hours.

2,841- is a much more interestingly exotic prospect. You start running into yield issues with gnfs-lasieve4I14e (though gnfs-lasieve4I15e with small-prime size 100M has 'only' an 850M virt / 400M res memory footprint, and most fast-enough machines will by now have 1G/CPU); you probably have to use large-prime size 2^31, meaning you've got 150M relations to collect and manipulate; and the matrix will be a challenge. After all that, it wouldn't even be the second-largest Cunningham SNFS job done. But what's the point of projects that can easily be done?

I'm doing a little pre-sieving; 25 GHz-years feels like the right order of magnitude. Will post some figures in a couple of hours when the jobs are over.
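The cost arithmetic behind these estimates is simple enough to sketch in Python. The ~78M relation count is an assumption back-solved from the quoted 0.55 s/relation and ~12,000 CPU-hours, and `sieve_cost` is just an illustrative helper, not anyone's actual script:

```python
# Back-of-envelope sieving-cost arithmetic.  `sieve_cost` is an
# illustrative helper; the 78M relation count is an assumption
# back-solved from the figures quoted in the post.

def sieve_cost(sec_per_rel, relations, clock_ghz=2.2):
    """Return (CPU-hours, GHz-days) to collect `relations` relations
    at `sec_per_rel` seconds each on a `clock_ghz` GHz core."""
    cpu_hours = sec_per_rel * relations / 3600.0
    ghz_days = cpu_hours * clock_ghz / 24.0
    return cpu_hours, ghz_days

# 3,499+: 0.55 s/relation and ~78M relations -> ~12,000 CPU-hours
hours, ghz_days = sieve_cost(0.55, 78e6)
print(f"{hours:,.0f} CPU-hours, {ghz_days:,.0f} GHz-days")
```

At 0.31 s/relation the same relation count gives roughly 6,700 CPU-hours, which matches the ~7,000 quoted for 7,269-.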
[QUOTE=bdodson;118937]Any thoughts on either of
> 3,508+ C188 (difficulty 242.38) or 3,512+ C193 (diff 244.29) > from the more wanted list? I'd try to finish testing to p55 if/when there's confirmation that they're near-term sieving candidates. -Bruce[/QUOTE] I'm tempted to clear out more of the base-3 tables, but (a) I don't really have the time (my time, not CPU time) and (b) the cofactor sizes are rather small compared with the SNFS difficulty, and so not as attractive to my (perhaps unusual) value function. That said, 3,512+ really ought to be done some time reasonably soon.

Paul
I can contribute 15 - 25 GHz of sieving (GGNFS franke) if needed.
OK, the pre-sieving for 2,841- gives some rather odd results.
I assume that the yield per Q drops off as x^(-1/3), which is the best fit to the yield figures I obtained on 7,263-, and then solve

integral_{sieve_max}^{N} measured_yield_per_Q * x^(-1/3) dx = expected_relations_needed

for N. The yield-dropoff exponent is not straightforward to measure - I've needed to analyse most of a 75-million-relation sieving job to do the statistics to the point that I'm confident the first decimal place is a 3 - though this may just mean I'm doing the stats wrong. Fortunately, values between 0.3 and 0.4 don't alter the conclusions below.

For lp=2^30, expected_relations_needed is 85M; for lp=2^31 it's 170M. I then measured the yield of relations over 10000 special-Q starting at sieve_max, and the time per relation to get those, for various parameter choices of sieve_max / lpb / sieve_size. The columns below are that parameter triple, the yield, the time per relation in seconds, and the time for enough relations in GHz-years given the assumptions above. Hardware is a K8/2200; I was running one job on each core of a dual-core, but previous experience suggests this doesn't affect the timings significantly. Software is the Franke siever from the ggnfs build, with the makefile modified to build gnfs-lasieve4I15e as well as 12..14.

[CODE]
sieve_max/lpb/size   yield   s/rel   GHz-years
 50/30/14             4247    1.26     11.3
 50/31/14             8276    0.66     11.8
 50/30/15             8987    1.49     11.2
 50/31/15            16364    0.78     12.0
100/30/14             3889    1.90     14.7
100/31/14             7888    0.86     14.5
100/30/15             8271    1.64     12.3
100/31/15            17091    0.85     12.6
[/CODE]

So: this is a job of more than 10 but probably not as much as 15 GHz-years; enlarging the sieve space makes life slower at small=50M and faster at small=100M; and going from lp=30 to lp=31 doesn't look a good idea by these measurements, even at this 254-digit level with the current sieving software, though I notice that the Aoki 274-digit SNFS was done with lp=34 and M1039 with lp=36.

I'm currently running special-Q in [10^8, 10^8+10^4] for small=50,60,70,80,90 and space=14,15; results after the weekend.
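The projection above can be sketched numerically. This is my reading of the method, not the poster's actual script: calibrate the constant in yield ≈ c·x^(-1/3) from one measured sample, then solve the integral for the special-Q endpoint N that gives enough relations. The sample numbers (8987 relations from 10000 special-Q at Q=5e7, 85M relations for lp=2^30) are taken from the post; `projected_endpoint` is a name I made up.

```python
# Sketch of the yield projection described above: assume the per-Q
# yield falls off as x**(-1/3), calibrate the constant from a measured
# sample at the start of the sieving range, then solve for the endpoint
# N that yields the required number of relations.

def projected_endpoint(q0, sample_yield, sample_width, relations_needed,
                       exponent=-1.0 / 3.0):
    """Solve integral_{q0}^{N} c * x**exponent dx = relations_needed
    for N, with c calibrated so the per-Q density at q0 matches the
    measured sample."""
    # density at q0:  sample_yield / sample_width = c * q0**exponent
    c = (sample_yield / sample_width) / q0 ** exponent
    e1 = exponent + 1.0          # antiderivative exponent, here 2/3
    # (c / e1) * (N**e1 - q0**e1) = relations_needed  ->  solve for N
    return (relations_needed * e1 / c + q0 ** e1) ** (1.0 / e1)

# 50/30/15 row: 8987 relations per 10000 Q at Q = 5e7, target 85M
N = projected_endpoint(5e7, 8987, 1e4, 85e6)
print(f"sieve special-Q from 5e7 up to about {N:.3g}")
```

With these figures the projection puts the special-Q endpoint somewhere around 1.7e8; the GHz-years column then follows from multiplying the relation target by the measured seconds per relation.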
Has anyone got a good reference for techniques for minimising expensive-to-compute functions? I suppose this may be a simplex-method job.
[QUOTE=fivemack;119026]Has anyone got a good reference for techniques for minimising expensive-to-compute functions? I suppose this may be a simplex-method job.[/QUOTE]I don't even pretend to be an expert on this subject. However, I've always found [i]Numerical Recipes[/i] a good starting point. If your problem is simple enough, the NR code is probably good enough. If it isn't, NR contains useful pointers to begin a literature search.
Paul
[QUOTE=xilman;119097]I don't even pretend to be an expert on this subject. However, I've always found [i]Numerical Recipes[/i] a good starting point. If your problem is simple enough, the NR code is probably good enough. If it isn't, NR contains useful pointers to begin a literature search.
Paul[/QUOTE] The original question is a bit wide open. The answer depends on several things: (1) Is the constraint region convex? Is it linear? (2) How many local extrema are there? (3) How smooth is the objective function? (4) How well/easily can the gradient of the objective be approximated? Gradient-descent and conjugate-gradient methods can work well if grad F can be computed accurately and easily and if there are not a lot of local extrema.
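For concreteness, here is a minimal pure-Python sketch of the downhill-simplex (Nelder-Mead) method that the earlier post guessed at and that Numerical Recipes covers. It is a standard textbook variant, not tuned for this problem; its only requirement is that the objective can be evaluated, which fits the expensive, gradient-free setting where each evaluation would be a test-sieving run. The toy quadratic objective is purely illustrative.

```python
# Minimal Nelder-Mead (downhill simplex) minimiser, pure Python.
# A simplified textbook variant: reflect the worst vertex through the
# centroid, expand if that helps a lot, otherwise contract or shrink.

def nelder_mead(f, x0, step=0.1, tol=1e-8, max_iter=500):
    n = len(x0)
    # Initial simplex: x0 plus one point perturbed along each axis.
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    fvals = [f(p) for p in simplex]

    for _ in range(max_iter):
        # Sort vertices so the best comes first, the worst last.
        order = sorted(range(n + 1), key=lambda i: fvals[i])
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if abs(fvals[-1] - fvals[0]) < tol:
            break
        # Centroid of all vertices except the worst.
        cen = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        worst = simplex[-1]
        refl = [2 * cen[j] - worst[j] for j in range(n)]   # reflection
        fr = f(refl)
        if fr < fvals[0]:                                  # expansion
            expd = [3 * cen[j] - 2 * worst[j] for j in range(n)]
            fe = f(expd)
            if fe < fr:
                simplex[-1], fvals[-1] = expd, fe
            else:
                simplex[-1], fvals[-1] = refl, fr
        elif fr < fvals[-2]:                     # accept the reflection
            simplex[-1], fvals[-1] = refl, fr
        else:                           # contract towards the worst point
            con = [0.5 * (cen[j] + worst[j]) for j in range(n)]
            fc = f(con)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = con, fc
            else:                       # shrink everything towards the best
                best = simplex[0]
                simplex = [best] + [[0.5 * (p[j] + best[j])
                                     for j in range(n)]
                                    for p in simplex[1:]]
                fvals = [fvals[0]] + [f(p) for p in simplex[1:]]
    return simplex[0], fvals[0]

# Toy stand-in for "GHz-years as a function of sieving parameters".
x, fx = nelder_mead(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2,
                    [0.0, 0.0])
print(x, fx)
```

For an objective as noisy as measured sieving times, one would want to average repeated evaluations or loosen the convergence tolerance, since Nelder-Mead is easily misled by noise.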
[QUOTE=R.D. Silverman;119107]The original question is a bit wide open.[/QUOTE]Agreed. That is why, at least in part, I gave a reference to a work which covers a wide range of techniques!
Paul
[QUOTE=fivemack]I'm currently running special-Q in [10^8, 10^8+10^4] for small=50,60,70,80,90 and space=14,15, results after the weekend.
[/QUOTE] I made a mistake in the script, and there was a power-cut in the building on Friday evening. Results maybe-Wednesday.
[QUOTE=fivemack;119026]OK, the pre-sieving for 2,841- gives some rather odd results.
For lp=2^30, expected_relations_needed is 85M, for lp=2^31 it's 170M. <snip> Has anyone got a good reference for techniques for minimising expensive-to-compute functions? I suppose this may be a simplex-method job.[/QUOTE] I am working on a follow-on paper to "Optimal Parameterization of SNFS". It does for the lattice sieve what that paper does for a line siever. I am stuck for the time being on a sub-problem. Given an initial lattice

[CODE]
| p  r |
| 0  1 |
[/CODE]

where r may be assumed to be a uniform random variable on [1, p-1], then WHAT IS THE EXPECTED VALUE OF THE COEFFICIENTS of a completely reduced basis? This is a difficult problem. One may expect on rough heuristic grounds that the reduced coefficients should have a mean somewhere between sqrt(p) and k*sqrt(p) for some constant k.

One thing is clear. Since the yield decreases as the special-q increases (the cause of this is obvious), the SIZE of the sieve region for each special-q must decrease as the special-q increases. Exactly how this should be done depends on an answer to the above question.
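The sub-problem can at least be explored empirically. The sketch below is my own experiment, not anything from the paper: it applies Lagrange-Gauss reduction to the lattice spanned by (p, 0) and (r, 1) for many random r and compares the mean absolute coefficient of the reduced basis with sqrt(p). The prime p and the sample count are arbitrary illustrative choices, and `gauss_reduce` is a name I made up.

```python
# Empirical look at the expected coefficient size of a reduced
# special-q lattice basis: reduce the lattice spanned by (p, 0) and
# (r, 1) for random r in [1, p-1] and compare the mean absolute
# coefficient with sqrt(p).  Illustrative experiment only.

import math
import random

def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a 2D lattice basis; returns a
    reduced pair with the first vector no longer than the second."""
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]
    if norm2(u) < norm2(v):
        u, v = v, u
    while True:
        # Subtract the nearest-integer multiple of v from u.
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(v))
        u = (u[0] - m * v[0], u[1] - m * v[1])
        if norm2(u) >= norm2(v):
            return v, u
        u, v = v, u

random.seed(1)
p = 10**9 + 7
ratios = []
for _ in range(1000):
    r = random.randrange(1, p)
    b1, b2 = gauss_reduce((p, 0), (r, 1))
    mean_coeff = sum(abs(c) for c in b1 + b2) / 4
    ratios.append(mean_coeff / math.sqrt(p))
print(f"mean |coefficient| / sqrt(p) over 1000 samples: "
      f"{sum(ratios) / len(ratios):.3f}")
```

This gives a numerical estimate of the constant k for one p; it says nothing about the tails of the distribution (very skew lattices from small-denominator r), which is where the expected-value question gets difficult.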