3,499+ status and discussion
[QUOTE=akruppa;95258]Is anyone already working on 3,499+? If not, I'd like to reserve it, please.
Alex[/QUOTE]I made a small start late last year, doing survey sieving and the like, but haven't formally requested it either here or with SSW. I'd be happy to join with you on this one if you'd like a cow-orker. (Or is that core-searcher?) Paul |
[QUOTE=xilman;95263]I made a small start late last year, doing survey sieving and the like, but haven't formally requested it either here or with SSW.
I'd be happy to join with you on this one if you'd like a cow-orker. (Or is that core-searcher?) Paul[/QUOTE] I can help as well. Will you be doing line or lattice sieving? I only have a small number of machines, and prefer to use my own code. Its output conforms to the CWI format. I'd send the data on CD via snail mail. |
I'm doing lattice sieving, the only line siever I have is CWI's and it's not terribly fast. Some line sieving over the lattice siever's factor base would make sense, though. I've chosen fb primes < 20M on both sides, large primes <2^30 and will sieve special-q up to 60M - 70M on each side.
I'm doing sq in [20M, 30M] on the algebraic side atm. Feel free to take any range above that for lattice sieving. Please post which sq range you are doing, or if you're doing line sieving. Oh, and the polynomial is the obvious 3x^6+1. Alex |
[QUOTE=akruppa;95281]I'm doing lattice sieving, the only line siever I have is CWI's and it's not terribly fast. Some line sieving over the lattice siever's factor base would make sense, though. I've chosen fb primes < 20M on both sides, large primes <2^30 and will sieve special-q up to 60M - 70M on each side.
I'm doing sq in [20M, 30M] on the algebraic side atm. Feel free to take any range above that for lattice sieving. Please post which sq range you are doing, or if you're doing line sieving. Oh, and the polynomial is the obvious 3x^6+1. Alex[/QUOTE] I will do special q's that are *within* the factor base, i.e. [10K, 20M]. I will start with 5K. I don't know how many I will be able to do, but will start this weekend. |
[QUOTE=akruppa;95281]I'm doing lattice sieving, the only line siever I have is CWI's and it's not terribly fast. Some line sieving over the lattice siever's factor base would make sense, though. I've chosen fb primes < 20M on both sides, large primes <2^30 and will sieve special-q up to 60M - 70M on each side.
I'm doing sq in [20M, 30M] on the algebraic side atm. Feel free to take any range above that for lattice sieving. Please post which sq range you are doing, or if you're doing line sieving. Oh, and the polynomial is the obvious 3x^6+1. Alex[/QUOTE] Alex, A factor base bound of 20M is quite a bit too small. I would recommend a bound in the 30M to 35M range. |
[QUOTE=R.D. Silverman;95287]Alex,
A factor base bound of 20M is quite a bit too small. I would recommend a bound in the 30M to 35M range.[/QUOTE] Hi, Please tell me if you definitely want an fb bound of only 20M. |
When I sieve sq on the rational side, I'll use fb limit of 60M on the algebraic side, and increase the fb limit on the rational side along with the sq value. That should catch nearly all relations where the norms on both sides are 60M-smooth with up to two large primes.
Alex Edit: > Please tell me if you definitely want an fb bound of only 20M. 20M is a lower limit for the sq value I'll use on each side. At first, this will also be the factor base limit, but not throughout all the sieving. |
[QUOTE=akruppa;95291]When I sieve sq on the rational side, I'll use fb limit of 60M on the algebraic side, and increase the fb limit on the rational side along with the sq value. That should catch nearly all relations where the norms on both sides are 60M-smooth with up to two large primes.
Alex Edit: > Please tell me if you definitely want an fb bound of only 20M. 20M is a lower limit for the sq value I'll use on each side. At first, this will also be the factor base limit, but not throughout all the sieving.[/QUOTE] I set up 3,499+ to run, but am getting no relations. I suspect that some array isn't big enough and that I need to recompile. I don't have the time to investigate right now, so this will have to wait. |
[QUOTE=R.D. Silverman;95359]I set up 3,499+ to run, but am getting no relations. I suspect
that some array isn't big enough and that I need to recompile. I don't have the time to investigate right now, so this will have to wait.[/QUOTE] It wasn't the code. There was a non-printing control character in one of the input files. I have started sieving 3,499+ on 3 machines. |
I'm sieving algebraic special-q from 30M to 31M now and will take q up to 40M when some other machines come on line.
Relations seem to be coming in at something over 5 per second. When the other machines come on-line I hope to be able to increase that rate to about 40 per second. Paul |
[QUOTE=xilman;95432]I'm sieving algebraic special-q from 30M to 31M now and will take q up to 40M when some other machines come on line.
Relations seem to be coming in at something over 5 per second. When the other machines come on-line I hope to be able to increase that rate to about 40 per second. Paul[/QUOTE] I am getting a little more than 2 relations/second per machine, and I have 4 machines. I have been sieving since Friday on 3 machines and started another last night. I have a total of just under 1.3 million relations. Do we have an estimate of how many will be needed? My most recent results, with a factor base bound of about 30M and large prime bounds of 700M, required about 80 million total relations. |
[QUOTE=R.D. Silverman;95585]I am getting a little more than 2 relations/second per machine and I have 4
machines. I have been sieving since Friday on 3 machines and started another last night. I have a total of just under 1.3 million relations. Do we have an estimate of how many will be needed? My most recent results, with a factor base bound of about 30M and large prime bounds of 700M, required about 80 million total relations.[/QUOTE]The initial rate was abnormally high. It's settled to 3.5Hz on that machine. I won't get 40 but may manage 25 when fully powered up. Two more machines are now running, at about 1.3 Hz each. I have slightly under 1M relations. The standard estimate for the number of unique relations required is 0.8*(pi(LPB1) + pi(LPB2)). With LPB1 == LPB2 == 2^30 that expression evaluates to 87M. I generally add 20% or so to account for duplicates, so somewhere around 100-105M will be sufficient. I've been using 100M for my estimates, purely because it makes mental arithmetic easier. Paul |
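Paul's estimate is easy to reproduce. A minimal Python sketch; the x/(ln x - 1) approximation to pi(x) is my assumption (he may well have used exact prime counts), but it lands within rounding of his 87M figure:

```python
from math import log

def pi_approx(x):
    """Rough prime-counting approximation: pi(x) ~ x / (ln x - 1)."""
    return x / (log(x) - 1)

def relations_needed(lpb1_bits, lpb2_bits, dup_factor=1.2):
    """Rule of thumb from the thread: 0.8*(pi(LPB1) + pi(LPB2)) unique
    relations, plus ~20% raw relations to cover duplicates."""
    unique = 0.8 * (pi_approx(2 ** lpb1_bits) + pi_approx(2 ** lpb2_bits))
    return unique, unique * dup_factor

# LPB1 == LPB2 == 2^30, as in this project
unique, raw = relations_needed(30, 30)
```

With both large-prime bounds at 2^30 this gives roughly 87M unique and 104M raw relations, matching the 100-105M working estimate above.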
I currently have 3225056 relations. I'm not sure of the rate of relations found, but I guess it's something like not quite 10 Hz or so.
Alex |
[QUOTE=akruppa;95653]I currently have 3225056 relations. I'm not sure of the rate of relations found, but I guess it's something like not quite 10 Hz or so.
Alex[/QUOTE] Alex, I need a snail mail address so I can send my results via CD. How frequently shall I send them? Every 2 weeks or so, or should I wait until the end and send them all at once? |
Sending them once at the end should do. Alternatively, if you can send data out via TCP at all, we could use one of the public file up-/download services like rapidshare etc.
Alex |
[QUOTE=akruppa;95660]Sending them once at the end should do. Alternatively, if you can send data out via TCP at all, we could use one of the public file up-/download services like rapidshare etc.
Alex[/QUOTE] I can't send them out from work, owing to firewall restrictions, and can't see sending them via AOL email from home.......:grin: |
[QUOTE=R.D. Silverman;95661]and can't see sending them via AOL email from home.......:grin:[/QUOTE]Can't you use FTP on AOL?
|
3,499+
I am getting close to finishing my part of the sieving. I have
sieved all of the rational side q's up to 17M. I plan on doing all of them up to 20M. Where shall I send the data? I need to send it via snail mail on a CD. Bob |
You have a PM.
My own sieving has been stopped temporarily as the machines at the TU Muenchen I used to use were needed for other tasks. I hope I can access them again soon. Otherwise I can do the sieving on the Opterons here at LORIA. Alex |
[QUOTE=akruppa;98274]My own sieving has been stopped temporarily as the machines at the TU Muenchen I used to use were needed for other tasks.[/QUOTE] Mine likewise, principally for the NFSNET 5,313+ matrix. Now that's finished, I hope to pick up sieving again.
Paul |
3,499+
What's the status of 3,499+?
Alex: Did you get the data that I sent by snail mail? Bob |
[QUOTE=R.D. Silverman;100768]What's the status of 3,499+?
Alex: Did you get the data that I sent by snail mail? Bob[/QUOTE]That reminds me --- I need to send Alex some data and to start sieving again. Swamped at the moment :sad: Paul |
[QUOTE=R.D. Silverman;100768]What's the status of 3,499+?
Alex: Did you get the data that I sent by snail mail? Bob[/QUOTE] Yes, received it, thanks! Between your set and mine, there are currently 24095194 unique relations here. Edit: actually, it's 25123591 unique relations now. I just transferred a new batch of files from the TU München. Btw, Paul: Please remind me which interval you are sieving. I'd like to avoid duplication. Alex |
Current count on my end is 27543834 unique relations.
Edit: sorry, it's 30155284. I had a parameter (-lp) wrong during the filtering.
Update (2007.04.10): 33240022 unique relations.
Update (2007.04.16): 36526558 unique relations.
Update (2007.04.26): 40192316 unique relations.
Update (2007.05.15): 48477140 unique relations.
Update (2007.05.29): 52116454 unique relations.
Update (2007.07.02): 59803271 unique relations.
Update (2007.07.20): 64934927 unique relations.
Alex |
The matrix job is now running, estimated time to completion is 36 days at the moment. It is very early in the run, though, so that figure may change.
Alex |
[QUOTE=akruppa;113101]The matrix job is now running, estimated time to completion is 36 days at the moment. It is very early in the run, though, so that figure may change.
Alex[/QUOTE]How big and dense is the matrix? Paul |
It's "6889926 x 6893757, total weight 620937048." It probably is a bit too heavy, but I had some problems building the matrix (mostly due to poor planning on my part) and don't have the time/patience to start over. So I took the first matrix of valid dimension I got and ran with it...
Alex
Update (18.9.2007): The matrix is 50% done now.
Update (18.10.2007): Block Lanczos on the matrix is paused, as the machine is being used for other work. It is 77% done and will take a bit over a week to finish once I resume computation. |
[CODE]
Probable prime factor 1 has 113 digits:
50291482856324544404027093373360635121063081581216864654202148086500144195079163724148106275521673861082817129401
Probable prime factor 2 has 125 digits:
60249253834468029722846922340566907742203320456026527837840758742969239944910379299755028256308064416617431085523496168044367
[/CODE] Sqrt was a bit of a pain because we used an lp bound of 2^30 with Franke's lattice siever and I forgot to remove relations with primes >10^9, which CWI's sqrt uses as CRT primes. I found a long enough interval of primes that don't appear among the relations and let sqrt use those; the rest went pretty smoothly. The p113 replaces my p108 of 5,349- as the second largest penultimate factor found within the Cunningham project. Alex |
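Alex's workaround, finding a run of primes that never occur as large primes in the relations so sqrt can use them as CRT moduli, can be sketched as follows. This is a toy Python sketch with made-up "used" primes; the real job scans primes above 10^9 against tens of millions of relations:

```python
def is_prime(n):
    """Trial division; fine for a toy demonstration."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def crt_safe_primes(used_primes, start, count):
    """Return `count` consecutive primes >= start, none of which occurs
    as a large prime in any relation (usable as CRT moduli for sqrt)."""
    run = []
    p = start if start % 2 else start + 1
    while len(run) < count:
        if is_prime(p):
            if p in used_primes:
                run = []          # a used prime breaks the run; start over
            else:
                run.append(p)
        p += 2
    return run

# toy example: pretend these large primes appear among the relations
used = {1000003, 1000033}
safe = crt_safe_primes(used, 1000000, 3)
```

The real sqrt would of course need a run long enough that the product of the CRT primes exceeds the square being rooted.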
Congratulations! That's a spectacular factorisation, after an enormous amount of work.
How many relations did you end up using, and have you got an estimate for the sieving effort in CPU-hours? |
[QUOTE=akruppa;118888]
Probable prime factor 1 has 113 digits: ... Probable prime factor 2 has 125 digits: ... The p113 replaces my p108 of 5,349- as the second largest penultimate factor found within the Cunningham project. Alex[/QUOTE] Congratulations, indeed; especially great to see how to fix the crt overlap. I'm looking forward to more of these large numbers. Any thoughts on either of 3,508+ C188 (difficulty 242.38) or 3,512+ C193 (diff 244.29) from the more wanted list? With Greg's sieving contributions these are near NFSNET range, but not for a while yet; not sure whether joint or separate projects would be better. Some other next-next candidates would include MWN #9, 12,227+ C213, at difficulty 244.97; then a huge number at 12,229- C242, the largest cofactor on the wanted lists, at difficulty 247. I'd like to see all four factored. And there's 10,239+ (diff 239). I'd try to finish testing to p55 if/when there's confirmation that they're near-term sieving candidates. -Bruce |
[QUOTE=bdodson;118937]Congratulations, indeed; especially great to see how to fix
the crt overlap. I'm looking forward to more of these large numbers. Any thoughts on either of 3,508+ C188 (difficulty 242.38) or 3,512+ C193 (diff 244.29) from the more wanted list? With Greg's sieving contributions these are near NFSNET range, but not for a while yet; not sure whether joint or separate projects would be better. Some other next-next candidates would include MWN #9, 12,227+ C213, at difficulty 244.97; then a huge number at 12,229- C242, the largest cofactor on the wanted lists, at difficulty 247. I'd like to see all four factored. And there's 10,239+ (diff 239). I'd try to finish testing to p55 if/when there's confirmation that they're near-term sieving candidates. -Bruce[/QUOTE] CWI is doing 12,229-. |
[QUOTE=R.D. Silverman;118948]CWI is doing 12,229-.[/QUOTE]
Yes; I ought to have recalled that, thanks! The other two I was thinking of, once 2,787- and 2,787+ are nearer clearing, were M821 = 2,821- C208 (which will be a new first hole after we finish 787-), at difficulty 247; and, a bit further out yet, there's M841 = 2,841- C245, at difficulty 253. -Bruce |
[QUOTE=fivemack;118908]Congratulations! That's a spectacular factorisation, after an enormous amount of work.
How many relations did you end up using, and have you got an estimate for the sieving effort in CPU-hours?[/QUOTE] There were 75784708 unique non-free relations overall. After singleton removal (examining ideals of norm >1M), 37510063 non-free relations remained on 36816190 ideals, excess 2339180. I reduced excess to 350k and merged to get the matrix dimension listed above. I didn't keep track of the cpu time I spent sieving, and Bob and Paul contributed a lot of relations. Sorry, I have no good estimate of how much cpu time we spent. Alex |
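Singleton removal as Alex describes it can be sketched like this. A toy Python sketch over tiny made-up "relations"; the real filtering examines only ideals of norm > 1M across tens of millions of relations:

```python
from collections import Counter

def remove_singletons(relations):
    """Iteratively drop relations containing an ideal that appears in
    only one surviving relation; repeat until stable. Each relation is
    modelled as a frozenset of ideal labels."""
    rels = list(relations)
    while True:
        counts = Counter(i for r in rels for i in r)
        kept = [r for r in rels if all(counts[i] > 1 for i in r)]
        if len(kept) == len(rels):
            ideals = {i for r in rels for i in r}
            return rels, len(rels) - len(ideals)   # survivors, excess
        rels = kept

# toy data: ideal 'd' occurs once, so the last relation is removed
rels = [frozenset({'a', 'b'}), frozenset({'b', 'c'}),
        frozenset({'a', 'c'}), frozenset({'c', 'd'})]
survivors, excess = remove_singletons(rels)
```

The "excess" returned here is the same quantity Alex quotes (relations minus ideals), which the merge phase then trades away for a smaller, denser matrix.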
[QUOTE=bdodson;118937]
3,508+ C188 (difficulty 242.38) or 3,512+ C193 (diff 244.29) from the more wanted list? [/QUOTE] I think I won't take on a project this large for a little while. I can't use the machines at the Technische Universität München any more. I can access a lot of Opterons in the "Grid5000" network here in France now, but jobs in the idle queue often get kicked off the nodes, and putting together the partial output files and restarting jobs would take me a lot of time. I can't do it at the moment. I might do a smaller job, though; for example, 7,269- looks interesting. It's a prime-base, prime-exponent-minus-1 number, so the OPN folks might like it. Only the factor 2153 is known at the moment. Alex |
[QUOTE=bdodson;118961]M821 = 2,821- C208[/quote]
What is the ECM status of this one? (I don't want to run ECM on it once somebody is sieving it, but before then I could do a few curves) |
OK, the polynomial for 3,499+ is obvious, and with alim=rlim=5e7, lpa=lpr=30, q around 5e7 takes ~0.55 s/relation on a 2.2GHz K8 I have lying around, so the relation collection sounds as if it was around 12,000 CPU-hours, ~1,000 GHz-days.
7,269- with those alim/rlim parameters is taking about 0.31s/rel on the same machine and so would be about 7,000 CPU-hours.

2,841- is a much more interestingly exotic prospect: you start running into yield issues with gnfs-lasieve4I14e (though gnfs-lasieve4I15e with small-prime size 100M has 'only' 850M virt / 400M res memory usage, and most fast-enough machines will by now have 1G/CPU); you probably have to use large-prime size 2^31, meaning you've got 150M relations to collect and manipulate; the matrix will be a challenge; and after all that it wouldn't even be the second-largest Cunningham SNFS job done. But what's the point of projects that can easily be done?

I'm doing a little pre-sieving; 25 GHz-years feels like the right order of estimate. Will post some figures in a couple of hours when the jobs are over. |
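The back-of-envelope conversion above is just arithmetic; a trivial Python sketch (the 76M relation count is the final 3,499+ figure from this thread, used here as an example):

```python
def sieving_cost(sec_per_rel, relations, clock_ghz):
    """Convert a measured per-relation time into total CPU-hours and
    GHz-days for a given relation target."""
    cpu_hours = sec_per_rel * relations / 3600
    ghz_days = cpu_hours * clock_ghz / 24
    return cpu_hours, ghz_days

# ~76M unique relations at 0.55 s/relation on a 2.2 GHz K8
hours, ghz_days = sieving_cost(0.55, 76e6, 2.2)
```

This reproduces the "around 12,000 CPU-hours, ~1,000 GHz-days" estimate for the 3,499+ relation collection.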
[QUOTE=bdodson;118937]Any thoughts on either of
> 3,508+ C188 (difficulty 242.38) or 3,512+ C193 (diff 244.29) > from the more wanted list? I'd try to finish testing to p55 if/when there's confirmation that they're near-term sieving candidates. -Bruce[/QUOTE]I'm tempted to clear out more of the base-3 tables but (a) I don't really have the time (my time, not cpu time) and (b) the cofactor sizes are rather small compared with the SNFS difficulty and so not as attractive to my (perhaps unusual) value function. That said, 3,512+ really ought to be done some time reasonably soon. Paul |
I can contribute 15 - 25 GHz of sieving (GGNFS franke) if needed.
|
OK, the pre-sieving for 2,841- gives some rather odd results.
I'm assuming that the yield per Q drops off as x^(-1/3), which is the best fit to the yield figures I have obtained on 7,263-, then solving

integral_{sieve_max}^N measured_yield_per_Q * x^(-1/3) dx = expected_relations_needed.

The yield-dropoff exponent is not straightforward to measure - I've needed to analyse most of a 75-million-relation sieving job to do the statistics to the point that I'm confident that the first decimal place is a 3 - though this may just be that I'm doing the stats wrong. Fortunately, values between 0.3 and 0.4 don't alter the conclusion below. For lp=2^30, expected_relations_needed is 85M; for lp=2^31 it's 170M.

I then measured the yield of relations for 10000 Q starting at sieve_max, and the time per relation to get those, for various parameter choices sieve_max / lpb / sieve_size; figures are yield, time per relation in seconds, and time for enough relations in GHz-years given the assumptions above. Hardware is K8/2200; I was running one job on each core of a dual-core, but previous experience suggests this doesn't affect the timings significantly. Software is the Franke siever from the ggnfs build, with the makefile modified to build gnfs-lasieve4I15e as well as 12..14.

[code]
50/30/14    4247  1.26  11.3
50/31/14    8276  0.66  11.8
50/30/15    8987  1.49  11.2
50/31/15   16364  0.78  12.0
100/30/14   3889  1.90  14.7
100/31/14   7888  0.86  14.5
100/30/15   8271  1.64  12.3
100/31/15  17091  0.85  12.6
[/code]

So: this is a job of more than 10 but probably not as much as 15 GHz-years; enlarging the sieve space makes life slower at small=50M and faster at small=100M; going from lp=30 to lp=31 doesn't seem by these measurements a good idea even at this 254-digit level with the current sieving software, though I notice that the Aoki 274-digit SNFS was done with lp=34, and M1039 was done with lp=36.

I'm currently running special-Q in [10^8, 10^8+10^4] for small=50,60,70,80,90 and space=14,15; results after the weekend.
Has anyone got a good reference for techniques for minimising expensive-to-compute functions? I suppose this may be a simplex-method job. |
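The yield integral above can be solved for N in closed form. A hedged Python sketch of my own rearrangement, assuming yield(x) = y0*(x/q0)^(-alpha) exactly:

```python
def q_range_needed(q0, yield_per_q_at_q0, relations_needed, alpha=1/3):
    """Solve integral_{q0}^{N} y0*(x/q0)^(-alpha) dx = relations_needed
    for N, assuming yield per special-q falls off like x^(-alpha).

    The antiderivative is c/(1-alpha) * x^(1-alpha) with c = y0*q0^alpha,
    so N^(1-alpha) = relations_needed*(1-alpha)/c + q0^(1-alpha).
    """
    c = yield_per_q_at_q0 * q0 ** alpha
    k = 1 - alpha
    return (relations_needed * k / c + q0 ** k) ** (1 / k)

# hypothetical example values, not measurements from the thread:
# sieving starts at q0 = 10^8 with 0.5 relations per special-q
N = q_range_needed(1e8, 0.5, 85e6)
```

With alpha = 0 (constant yield) the formula degenerates to N = q0 + relations/yield, which is a handy sanity check.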
[QUOTE=fivemack;119026]Has anyone got a good reference for techniques for minimising expensive-to-compute functions? I suppose this may be a simplex-method job.[/QUOTE]I don't even pretend to be an expert on this subject. However, I've always found [i]Numerical Recipes[/i] a good starting point. If your problem is simple enough, the NR code is probably good enough. If it isn't, NR contains useful pointers to begin a literature search.
Paul |
[QUOTE=xilman;119097]I don't even pretend to be an expert on this subject. However, I've always found [i]Numerical Recipes[/i] a good starting point. If your problem is simple enough, the NR code is probably good enough. If it isn't, NR contains useful pointers to begin a literature search.
Paul[/QUOTE] The original question is a bit wide open. The answer depends on several things:
(1) Is the constraint region convex? Is it linear?
(2) The number of local extrema.
(3) Smoothness of the objective function.
(4) How well/easily the gradient of the objective can be approximated.
Gradient-descent and conjugate-gradient methods can work well if grad F can be accurately and easily computed and if there are not a lot of local extrema. |
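Not the simplex method fivemack mentions, but when the search is over a single parameter (say, the small-prime bound), something as simple as golden-section search already keeps the number of expensive evaluations logarithmic. A generic sketch, my own illustration rather than anything from the thread:

```python
def golden_section_min(f, a, b, tol=1e-6):
    """Minimise a unimodal function f on [a, b] by golden-section search.
    Uses only O(log((b-a)/tol)) evaluations of f, which matters when each
    evaluation is a multi-hour sieving test."""
    invphi = (5 ** 0.5 - 1) / 2                 # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                             # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                   # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

# toy objective standing in for "GHz-years as a function of a parameter"
x_best = golden_section_min(lambda t: (t - 3.0) ** 2 + 1.0, 0.0, 10.0)
```

For the multi-parameter case (sieve_max, lpb, sieve_size together), a derivative-free method such as Nelder-Mead is the usual next step, as the posts above suggest.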
[QUOTE=R.D. Silverman;119107]The original question is a bit wide open.[/QUOTE]Agreed. That is why, at least in part, I gave a reference to a work which covers a wide range of techniques!
Paul |
[QUOTE=fivemack]I'm currently running special-Q in [10^8, 10^8+10^4] for small=50,60,70,80,90 and space=14,15, results after the weekend.
[/QUOTE] I made a mistake in the script, and there was a power-cut in the building on Friday evening. Results maybe-Wednesday. |
[QUOTE=fivemack;119026]OK, the pre-sieving for 2,841- gives some rather odd results.
For lp=2^30, expected_relations_needed is 85M, for lp=2^31 it's 170M. <snip> Has anyone got a good reference for techniques for minimising expensive-to-compute functions? I suppose this may be a simplex-method job.[/QUOTE] I am working on a follow-on paper to "Optimal Parameterization of SNFS". It does for the lattice sieve what that paper does for a line siever. I am stuck for the time being on a sub-problem. Given an initial lattice

[code]
| p  r |
| 0  1 |
[/code]

where r may be assumed to be a uniform random variable on [1, p-1], what is the expected value of the coefficients of a completely reduced basis? This is a difficult problem. One may expect on rough heuristic grounds that the reduced coefficients should have a mean that is somewhere between sqrt(p) and k*sqrt(p) for some k. One thing is clear: since the yield decreases as the special-q increases (the cause of this is obvious), the *size* of the sieve region for each special-q must decrease as q increases. Exactly how this should be done depends on an answer to the above question. |
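The reduced-basis statistics are easy to play with numerically. A Python sketch of 2D (Lagrange/Gauss) lattice reduction applied to random [[p, r], [0, 1]] lattices; this is my own toy code, not anyone's siever:

```python
import random
from math import hypot, sqrt

def gauss_reduce(u, v):
    """Lagrange/Gauss reduction of a 2D lattice basis (u, v).
    Returns a reduced basis (shortest vector first)."""
    u, v = list(u), list(v)
    if hypot(*u) < hypot(*v):
        u, v = v, u
    while True:
        # subtract the nearest-integer multiple of v from u
        m = round((u[0]*v[0] + u[1]*v[1]) / (v[0]**2 + v[1]**2))
        u = [u[0] - m*v[0], u[1] - m*v[1]]
        if hypot(*u) >= hypot(*v):
            return v, u
        u, v = v, u

random.seed(1)
p = 1000003                    # a prime standing in for a special-q
ratios = []
for _ in range(200):
    r = random.randrange(1, p)
    a, b = gauss_reduce((p, 0), (r, 1))
    # largest reduced coefficient, measured in units of sqrt(p)
    ratios.append(max(map(abs, a + b)) / sqrt(p))
mean_ratio = sum(ratios) / len(ratios)
```

Unimodular reduction preserves the determinant, so each reduced basis still has |det| = p; the sample mean of the largest coefficient over sqrt(p) comes out near the 1.3 that fivemack reports below.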
Experimentally, the distribution of the maximum absolute value M of the entries of qflll(m), for m a matrix of your form, is interesting; there's a reasonably sharp spike and then a long tail, and the tail is fit very well by p(M>t) ~ t^-2.
The M1039 write-up says that they threw out special-Q where the reduced matrix had too-large coefficients, which makes sense given how common and how useless such matrices appear to be. A simple fit for x going from 10^8 to 2^13*10^8 in powers of two, looking at 10^5 matrices for each x, gives (I'm confident to the number of significant figures I give):

[code]
25th percentile = 0.81 * x^0.500
median          = 1.00 * x^0.500
75th percentile = 1.34 * x^0.501
90th percentile = 2.1  * x^0.501
99th percentile = 6    * x^0.50
mean            = 1.3  * x^0.500
[/code]

Reduction of 2x2 matrices is essentially representation by continued fractions, and I think it's well understood; I'd be tempted to look in volume 1 or 2 of the recent edition of Knuth's Art of Computer Programming, in the section where he writes about GCD algorithms. |
The section of this thread after #29 is no longer to do with 3,499+; is there any way that the moderators could be implored to move #39 and follow-ups to, say, a thread called 'Modelling lattice-sieving yields' in the Factoring forum?
|
[QUOTE=fivemack;119253]Experimentally, the distribution of the maximum absolute value M of the entries of qflll(m) for m a matrix of your form is interesting; there's a reasonably sharp spike and then a long tail, and the tail is fit very well by p(M<t) ~ t^-2.
The M1039 write-up says that they threw out special-Q where the reduced matrix had too-large coefficients, which makes sense given how common and how useless such matrices appear to be. [/QUOTE] Experimentally, yes. The reduced coefficients seem to fit an exponential/Erlang, or possibly a gamma distribution. My sieve code has always thrown out 'bad' reduced lattices, but I define bad as anything with a large *condition number*. It is not just that large coefficients give poor yield, but so do highly skewed lattices. Checking the condition number handles both problems. |
Ah yes, condition number is almost certainly the right way to recognise bad lattices.
I'm pretty sure that the distribution is too tail-heavy to be any of the exponential-based ones, though I can't get it to fit very clearly to any of the distributions in the 'heavy-tailed distributions' Wikipedia article either. The PDF dropping off for large x as almost exactly x^-3 must be telling me something, but I'm not sure what; it doesn't appear to match the distribution for random matrices of a given determinant. |
[QUOTE=fivemack;119263] The PDF dropping off for large x as almost exactly x^-3 must be telling me something, but I'm not sure what; it doesn't appear to match the distribution for random matrices of a given determinant.[/QUOTE]
Of course not. Not all matrices of determinant p are affine transforms of the original lattice. |
Well, that was odd. I'd really expect that, since I'm sieving over the same region at each stage, #relations would be an increasing function of SP - I'm sieving further, I'm leaving all the other parameters the same, I really ought to be getting more relations.
[code]
SP  SZ  Relations  Secs/relation
50  14   2956      1.51067
50  15   6123      1.86277
60  14   3209      1.55215
60  15   6742      1.79827
70  14   3417      1.6059
70  15   7085      1.80086
80  14   3610      1.66312
80  15   7045      1.87672
90  14   3755      1.73279
90  15   failed to start, giving an out-of-memory error
[/code] On further investigation, I am only [B]losing[/B] relations as I go from a small-prime bound of 70M to 80M in gnfs-lasieve4I15e. Maybe this is why gnfs-lasieve4I15e doesn't ship as standard with ggnfs :huh: |
[QUOTE=Andi47;118970]What is the ECM status of this one? (I don't want to run ECM on it once somebody is sieving it, but before then I could do a few curves)[/QUOTE]
M821 = 2,821- C208, along with all of the other Cunninghams with fewer than 234 digits, is either at 3*t50 or at 4*t50, depending upon whether it's come up yet in the queue for the 4th t50. As I was saying, if/when this one becomes a near-term sieving target, I'd expect to finish a version of t55, which in recent practice (the base-2s with 787) has meant 6*t50 --- so adding 2 or 3 t50's, depending upon how many have already been done. So far, I've been discounting curves with B1 = 43M (p50-optimal), and running either B1 = 110M or B1 = 260M for the new curves. I haven't heard of anyone that'd be sieving M821 anytime soon. On the remaining numbers below 768 bits, Alex's 7,269- in particular, I'm not clear whether (t55 - 4*t50) more curves are indicated. The numbers in c190-c233 are queued on the Opteron/quadcore beowulf, so scheduling an extra two t50's would be easy --- especially if that would make the sieving more likely to happen (sooner). Of course, in Alex's case, he could probably slip in the extra curves overnight on the grid he's using, if he were so inclined. -Bruce PS - I suppose that'd be "below 768 bits" or "degree 5 or 6 with difficulty below 220". Comparing Tom's "1110 CPU-hours" for sieving 10,309-, for example: c. 8 hrs on 230 cores of the quads for a t50 gives 2*8*200 for the extra two t50's (in t55 - 4*t50) = c. 3200 CPU-hours. That doesn't sound good; those curves would have been better spent on harder numbers. |
[QUOTE=bdodson;119340]M821 = 2,821- C208, along with all of the other Cunninghams with
fewer than 234 digits, is either at 3*t50 or at 4*t50, depending upon whether it's come up yet in the queue for the 4th t50. As I was saying, if/when this one becomes a near-term sieving target, I'd expect to finish a version of t55, which in recent practice (the base-2s with 787) has meant 6*t50 --- so adding 2 or 3 t50's, depending upon how many have already been done. So far, I've been discounting curves with B1 = 43M (p50-optimal), and running either B1 = 110M or B1 = 260M for the new curves. I haven't heard of anyone that'd be sieving M821 anytime soon. [/QUOTE] Thanks for the info. I have done 5 curves with B1 = 44M and scheduled 100 curves with B1 = 110M (85 of them already done). I can do more curves at either 110M or 260M if wanted. P.S.: How up to date is [URL="http://home.tele2.at/kennmich/cunningham/page2.html"]this page[/URL]? It says that 4968 curves are done on M821 at B1 = 44M - according to your post it must rather be ~21000 curves? And for M1061 AFAIK t55 is finished and a few thousand curves have been done at B1=260M, see [URL="http://www.mersenneforum.org/showthread.php?t=6148"]this thread[/URL]. |
[QUOTE=Andi47;119357]I can do more curves on either 110M or 260M if wanted
P.S.: How actual is [URL="http://home.tele2.at/kennmich/cunningham/page2.html"]this page[/URL]? It says that 4968 curves are done on M821 at B1 = 44M - according to your post it must be rather ~21000 curves? And for M1061 AFIK t55 is finished and a few thousand curves have been done at B1=260M, see [URL="http://www.mersenneforum.org/showthread.php?t=6148"]this thread[/URL].[/QUOTE] I had an exchange with Mischa in one of the base-2 discussions; and believe I recall that the counts on his page are personal curves, not the sum of curves over what other people did. Same for my counts. I checked M821, and found that my 4th t50 finished Wednesday (by coincidence, in order, from an old input file). So that's 620 curves with B1 = 110M (my initial t45), then 5835 curves with B1 = 260M. That's evalf(620/17900+5835/8000) = 0.764*t55, same as in what I reported for the new NFSNET number M787 (which started sieving yesterday). Two more c. t50's will raise the B1 = 260M's to 8835 curves, which gives the 1.139*t55 from the thread, loc.cit. On M1061, I'm still saying what I've been saying since finding that the first kilobit snfs was M1039; namely that so many curves have been run that additional curves aren't really part of ecm-factoring; but rather ecm-pretesting, as in the description of the M1039 sieving. If you're having trouble with the distinction, observe Aoki being _dissapointed_ at having found the 2nd largest (at the time) ecm factor of p64 --- [QUOTE] ...Unfortunately, after trying 11784 curves for Step 1 and 11214 curves for Step 2 a factor was found. [/QUOTE] since it meant that the work done on setting-up R311 = 11...1 (that's 311) = (10^311-1)/(10-1) for sieving had been wasted. Spending the curves on other large numbers has a much better chance of finding a factor --- the fewer curves run previously, the better. Of course, they're your cycles; and we have George's description of the forum objective of "having fun". 
I may even take my own advice and take a 2nd pass through the c251-c366's; perhaps after New Years. -Bruce |
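Bruce's t-level bookkeeping is just a sum of fractions; a small Python sketch reproducing his numbers (the 17900 and 8000 curves-needed figures are taken from his post above):

```python
def t_level_fraction(curve_counts, curves_for_t):
    """Fraction of a t-level completed from curves run at several B1
    values: each batch contributes (curves run) / (curves needed for
    the target t-level at that B1)."""
    return sum(done / need for done, need in zip(curve_counts, curves_for_t))

# M821: 620 curves at B1=110M (17900 needed for a t55)
# plus 5835 curves at B1=260M (8000 needed for a t55)
frac_now = t_level_fraction([620, 5835], [17900, 8000])

# after two more c. t50's raise the 260M count to 8835 curves
frac_later = t_level_fraction([620, 8835], [17900, 8000])
```

This reproduces the 0.764*t55 current state and the projected 1.139*t55.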
M821
[QUOTE=Andi47;119357]Thanks for the info. I have done 5 curves with B1 = 44M and sheduled 100 curves with B1 = 110M (85 of them already done). [/QUOTE]
100 curves at B1 = 110M as mentioned above are now ready. I will do a few more curves at B1 = 260M. |