What's Happening
We haven't heard any news in a long time. What's happening
vis-à-vis 7,319+ and 7,304+?
[QUOTE=R.D. Silverman;178577]We haven't heard any news in a long time. What's happening
vis-à-vis 7,319+ and 7,304+?[/QUOTE] The matrix for 319+ took two weeks, but we lost both the dependencies and the checkpoint file. We're re-running. The other one hasn't been started yet (and we're no longer distributing files with the line siever). -bd (with apologies to Richard and Greg for my presumption(s))
Update
[QUOTE=R.D. Silverman;178577]We haven't heard any news in a long time. What's happening
vis-à-vis 7,319+ and 7,304+?[/QUOTE] Greg reported 7,319+ C224 = p70*p154. Richard seems to have been working on 7,304+ C244 for some time. For my part, I expect to contribute a good chunk of sieving (on 304+). For other current bdodson@lehigh projects, sieving for 5,398+ C274, with Greg's new binary of the 16e siever, is down to the last few hundred tasks, and should finish today with 400M+ unique relations. Sieving on the Batalov+Dodson number 10,393+ C253 started earlier this morning. Tom reports that 12+256 C228 had a final count of 433466098 unique relations, for a monstrous 24.5M^2 matrix that's now past 40%. -Bruce PS - Sam's page 111 just finished (with the 319+ factorization), and the Wanted lists have been updated to include the Dodson/ECMNET cofactors C149 and C153 as Smaller-but-Needed from page 111. Both numbers are reserved.
[QUOTE=bdodson;181368]Greg reported 7,319+ C224 = p70*p154. Richard seems to
have been working on 7,304+ C244 for some time. For my part, I expect to contribute a good chunk of sieving (on 304+). -Bruce [/QUOTE] Looks like sieving for 7,304+ C244 may finish soon; we're reserving 10,269- C233, a Most Wanted first hole, and also a repunit, (10^269-1)/9 = 111...111 (269 ones). Several factors are already known, but this is SNFS, with difficulty 268. -Bruce
Difficulty will be 270 with the upped sextic, right?
(insert :splitting hairs: smilie) [SIZE=1]Incidentally, only recently I mentally fixed my own similar math error, the c268 2,2086M has difficulty 26[B][I]9[/I][/B].1 (and "could have been" a c270, but the algebraic 2,14L factor eats up two digits)[/SIZE]
[QUOTE=Batalov;187529]Difficulty will be 270 with the upped sextic, right?
(insert :splitting hairs: smilie) [SIZE=1]Incidentally, only recently I mentally fixed my own similar math error, the c268 2,2086M has difficulty 26[B][I]9[/I][/B].1 (and "could have been" a c270, but the algebraic 2,14L factor eats up two digits)[/SIZE][/QUOTE] Looks like I mis-remembered and/or mis-recorded, from [code] 233 10 269 - 269 0.866171 243 10 268 + 268 0.906716 [/code] both of which were on my to-do list (not necessarily to be done by me; I was making sure plausible targets were sufficiently ECM'd). If you're saying that you've updated the entry in your table to 270, I guess that means sieving will be a bit more difficult; not enough to worry about, though. -Bruce
[QUOTE=bdodson;187521]Looks like sieving for 7,304+ C244 may finish soon; we're reserving
10,269- C233, a Most Wanted first hole, and also a repunit, (10^269-1)/9 = 111...111 (269 ones). Several factors are already known, but this is SNFS, with difficulty 268. -Bruce[/QUOTE] What's the status of 7,304+? It seems to be taking forever. Has NFSNET been put out of business by NFS@Home? 10,269- is difficult. Good luck.
[QUOTE=R.D. Silverman;191918]What's the status of 7,304+??? It seems to be taking forever.[/QUOTE]
A 14.5M square matrix was created, and the LA is about 20% done on my slow computer. At the current rate, it's got about 2.5 months left. Greg
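As a rough sanity check on that timeline (my own arithmetic, not from the post): with 20% done and about 2.5 months remaining at a constant rate, the full linear-algebra run takes a bit over three months.

```python
# Rough arithmetic check (my own illustration): linear algebra 20% done,
# ~2.5 months remaining at a constant rate.
done = 0.20
remaining_months = 2.5

total_months = remaining_months / (1 - done)   # full job: 3.125 months
elapsed_months = total_months * done           # already spent: 0.625 months

print(total_months, elapsed_months)  # 3.125 0.625
```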
[QUOTE=frmky;191943]A 14.5M square matrix was created, and the LA is about 20% done on my slow computer. At the current rate, it's got about 2.5 months left.
Greg[/QUOTE] The Wiki page still lists it as being sieved, which is one reason why NFSNET is fading away: it provides no feedback to participants.
It has no participants because it has no lattice siever code which can operate within its control structure. It is not worthwhile even attempting to use the CWI line siever any more. It is far too inefficient for the class of numbers that we have been processing.
[QUOTE=Wacky;192211]It has no participants because it has no lattice siever code which can operate within its control structure. It is not worthwhile even attempting to use the CWI line siever any more. It is far too inefficient for the class of numbers that we have been processing.[/QUOTE]
I can provide lattice siever code. GGNFS is available. You once used my (line) siever code before switching to CWI. Allow me to ask: how will you do 10,269- without participants? This number is quite difficult.
[QUOTE=R.D. Silverman;192227]I can provide lattice siever code. GGNFS is available.
You once used my (line) siever code before switching to CWI. Allow me to ask: how will you do 10,269- without participants? This number is quite difficult.[/QUOTE] Sieving is in progress here, with the 16e siever. The region has width 200M-20M, of which 50M or so is complete. I'm keeping the number of cores between 200 and 300, with up to 100 small-memory cores (1 GB) running Batalov-Dodson numbers (3,521+ due tomorrow). -Bruce PS - page 112 looks to be full, at 30 entries; Serge reports that the first-five are already updated. Sam's also updated the progress on the wanted lists from page 111.
[QUOTE=bdodson;192229]Sieving is in progress here, with the 16e siever. Region has width
200M-20M, of which 50M or so is complete. I'm keeping the number of cores between 200-300, with up to 100 small memory cores (1Gb) running Batalov-Dodson numbers (3,521+ due tomorrow). -Bruce PS - page 112 looks to be full, at 30 entries; Serge reports that the first-five are already updated. Sam's also updated the progress on the wanted lists from page 111.[/QUOTE] I find it curious that over time the number of entries per page has shrunk; there used to be nearly 60 entries/page.
[QUOTE=R.D. Silverman;192231]I find it curious that over time the number of entries per page has shrunk...
There used to be near 60 entries/page.[/QUOTE] Maybe due to smaller numbers? It's just the most recent 2 or 3 pages that seem to have stopped near 30 entries; back at page 90 there were 40 entries. There has already been a lot of activity on wanted and/or first-fives; so perhaps I'm premature on the page closing, due to wishful thinking. -Bruce
[QUOTE=bdodson;192239]Maybe due to smaller numbers? It's just the most recent
2 or 3 pages that seem to have stopped near 30 entries; back at page 90 there were 40 entries. There has already been a lot of activity on wanted and/or first-fives; so perhaps I'm premature on the page closing, due to wishful thinking. -Bruce[/QUOTE] I think it's driven by the need to have everything fit on an 8.5" x 11" *printed* page. Many of the numbers have to be broken into 2 lines.
[QUOTE=R.D. Silverman;192227]I can provide lattice siever code. GGNFS is available.
You once used my (line) siever code before switching to CWI. [/QUOTE] As I stated over in the NFS@Home thread, the issue is not access to source code for a lattice siever. The issue is access to development platforms and programmers who have the time, tools, and ability to debug the porting of a common protocol to the various platforms. I have a "NFSNet" version of the siever running on Mac OS X. However, my largest contributor does not have a "compatible" version on 64-bit Linux. Nor do we have a version for any form of Windows. [QUOTE] Allow me to ask: how will you do 10,269- without participants? This number is quite difficult. [/QUOTE] The same way that Tom does things -- a very non-NFSNet collaboration of a few contributors (with access to many machines). I find it interesting that NFSNet is continually "faulted" because of shortcomings when competing efforts are applauded even though they have those same shortcomings.
I think the way that I do things gets a rather different set of collaborators than NFSnet or NFS@home can manage: in particular, I imagine that the administrators of large clusters with idle time and with batch-submission interfaces are generally much happier with users running scripts which call executables to do a fairly well-defined job than with users running clients that collect their own work over the Internet.
Certainly I would not be happy to run NFS@home on the machines here at the office on which I sometimes run gnfs-lasieve4I16e. I know that my approach is much inferior in terms of getting really large amounts of compute time to fully-automated systems running on many home PCs, but the activation energy to doing it my way is much lower, and something like the way I do it is necessary to exploit the set of machines that I get to use.
Tom,
I understand your comments about "effort", and the "constraints" on comfortable participation. My only regret is that we cannot all come together and produce a protocol that provides a common "format" for the allocation and reporting of results. This protocol would provide a uniform method of problem description, and a uniform format for reporting the results. This reporting should be done in a manner that allows the easy extraction of a summary of the sieving without transmitting ALL of the details of the relations found. NFS@Home has effectively replaced NFSNet because Greg (at least thinks that he) has the resources to handle thousands of participants on a single central server. NFSNet did not utilize that approach because we lacked the resources and also wished to have a "fall back" protocol that would compensate for a failure at any server node within the system.
Calling Don Leclair
Does anyone know how to reach Don?
It has been some time since I have been in contact with him. -Richard
Hi Richard,
Very nice to hear from you. I'll send you my current e-mail address in a PM. -Don