#45
fivemack
(loop (#_fork))
Feb 2006
Cambridge, England
1916₁₆ Posts
6,341- is quite a hard number - harder than I'd want to do on my own. With the right parameters it's probably three months using the 64-bit sievers on a Q6600, or six to eight months using 32-bit ones.
#48
fivemack
(loop (#_fork))
Feb 2006
Cambridge, England
6422₁₀ Posts
Ah yes, 6,762M may well be the easiest Cunningham-table number at the moment - a quartic, but the difficulty's actually under 200, so it's not a hard SNFS at all.
Nor a terribly hard GNFS, of course; I'm having surprising success (under 4500 CPU-hours for a C159, estimate under 9000 for a C163 after very little polynomial-search effort) using rather low large-prime bounds, rather large sieve regions, and the very skewed polynomials produced by msieve with small a5. See write-up for the C159 which (fingers crossed) will be on Gratuitous Factors by Sunday night.
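For context (my illustration, not from the post): the "very skewed" polynomials msieve produces with a tiny leading coefficient a5 get their skew from balancing coefficient sizes. A common first-order heuristic is skew ≈ (|a0|/|a5|)^(1/d); real tools optimize a norm integral instead, and the coefficients below are made up, not from the actual C159 job.

```python
# Rough skew estimate for a degree-d NFS polynomial.
# First-order heuristic only: skew ~ (|a0|/|a_d|)^(1/d); msieve and
# friends optimize a norm integral rather than using this formula.
def rough_skew(coeffs):
    """coeffs = [a0, a1, ..., ad], constant term first."""
    a0, ad = abs(coeffs[0]), abs(coeffs[-1])
    d = len(coeffs) - 1
    return (a0 / ad) ** (1.0 / d)

# A tiny a5 against a huge a0 gives a very skewed polynomial:
print(rough_skew([10**25, 1, 1, 1, 1, 12]))
```

The point is only that small a5 drives the skew up; the sieve region is then stretched by roughly this factor in one direction.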
#49
Batalov
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2⁴·593 Posts
...and 5,370+ and 5,745L to round up the easy list.
...6,340+, 10,530L/M... Many neglected quartics.
#50
Noodles
"Mr. Tuch"
Dec 2007
Chennai, India
3×419 Posts
To everyone: is it still difficult to tackle 2,1061- by SNFS? It is already 2009 - don't we have enough resources? Even if the sieving took the whole forum a year, couldn't someone at least handle the linear algebra in parallel? M1039 was done by Kleinjung et al. in May 2007 - can't we use the same software for the linear algebra, or supply the relations to them? Mr. Bruce Dodson seems to have a lot of computers at his university. If it is feasible, a cluster of about 100 closely coupled computers should be enough to do the LA in at most three months. No small ECM factors have been found for 2,1237- and 2,1277- so far - they have both been tested to at least t60, as have the subsequent Mersenne numbers with no known factors.
#51
fivemack
(loop (#_fork))
Feb 2006
Cambridge, England
2×13²×19 Posts
Parameters for 2,860+ were: alim=12M rlim=60M, skew=1 (possibly not optimal), rlambda=alambda=2.6, lpbr=30 lpba=29, mfbr=60 mfba=58, sieved 42M-60M with gnfs-lasieve4I15e. Use these as a starting point rather than taking them as proclaimed truth.
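Not from the post, but for anyone wanting to reuse these: in a ggnfs/lasieve-style job file the same parameters would be written roughly as below (this is a hypothetical fragment; n and the polynomial are omitted since they belong to 2,860+ specifically, and exact keyword spellings vary between tool versions):

```
# hypothetical ggnfs-style .job fragment; values from the post above
skew: 1.0
rlim: 60000000
alim: 12000000
lpbr: 30
lpba: 29
mfbr: 60
mfba: 58
rlambda: 2.6
alambda: 2.6
# special-q sieved over 42M-60M with gnfs-lasieve4I15e
```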
2^1061-1 is still hard, even with Batalov's mods to make lasieve4I16e work; it's too big for a 65536x32768 per-lattice search region to be optimal, so you need a 131072x65536 search region, and the current siever code really wants the coordinates for hits to fit in two bytes, which doesn't work in that case. It could be fixed, but there are two or three numbers accessible with Batalov's tweaked 16e siever that I'd want to do first, at twenty CPU-years apiece plus three months per number on my i7 machine.

The Kleinjung team (probably better called the Lenstra/Aoki team for M1039) is, I suspect, still rather tied up with RSA768, and I doubt they have a hundred spare computers for two months to push the SNFS record twenty bits further.

Last fiddled with by fivemack on 2009-04-24 at 19:23
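To make the two-byte point concrete (my illustration, not fivemack's code): with a 65536-wide region the largest coordinate is 65535, which just fits in an unsigned 16-bit integer; a 131072-wide region needs 17 bits per coordinate, so every hit record in the siever would have to grow.

```python
# Illustration: why a 131072-wide search region breaks a siever that
# stores each per-hit lattice coordinate as a 16-bit integer.
UINT16_MAX = 2**16 - 1  # 65535

def fits_uint16(width):
    """Largest coordinate in a region of this width is width-1;
    return whether it fits in two bytes."""
    return width - 1 <= UINT16_MAX

print(fits_uint16(65536))   # → True  (max coordinate 65535)
print(fits_uint16(131072))  # → False (131071 needs 17 bits)
```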
#52
jasonp
Tribal Bullet
Oct 2004
3543₁₀ Posts
There is no way that msieve in its current form can handle the postprocessing for M1061. Just about everything would have to change: the number of relations cannot exceed 4G, large primes must be <= 32 bits in size, and even Bruce's and Greg's machines would be hard-pressed to fit the resulting matrix into memory. Modifying the linear algebra to work on a cluster is left as an exercise for the reader.
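A rough back-of-envelope (mine, not jasonp's) on the memory point: if a job this size produced a matrix on the order of 100M rows with ~100 nonzero entries per row, and each entry is a 4-byte column index, the sparse part alone is tens of gigabytes before dense rows or the scratch vectors the solver needs. All the numbers here are illustrative assumptions, not measurements.

```python
# Back-of-envelope memory estimate for an NFS linear-algebra matrix.
# rows, nonzeros/row, and bytes/index are assumptions for scale only.
def matrix_bytes(rows, avg_nonzeros_per_row, bytes_per_index=4):
    """Bytes needed just to store the column indices of the
    sparse matrix, ignoring dense rows and solver workspace."""
    return rows * avg_nonzeros_per_row * bytes_per_index

gib = matrix_bytes(100_000_000, 100) / 2**30
print(f"~{gib:.0f} GiB just for column indices")  # → ~37 GiB
```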
#53
bdodson
Jun 2005
lehigh.edu
2¹⁰ Posts
Quote:
Quote:
matrix; 66.7M^2, we're not anywhere near that. The PCs had 4 GB/core, and the network isn't like our generic "whitebox" cluster. I'm sieving: no data exchange between processors during the computation. For a better comparison on the linear algebra, the number of "threads" was 110 at NTT, or 36 (or is that 36×2?) at EPFL. With the hardware Tom and Serge are using, we're currently at 4 threads (a totally bogus comparison - the BW isn't using threads ...). Compare also this current SNFS record with the GNFS record RSA200, for which they had
Quote:
same range as the gnfs matrix --- both big; I was expecting to see the snfs1024 matrix being harder (it was, yes? ...). Too large to be done at a single site (sustained access to fast network), so it was distributed across two sites. Anyway, Tom/mersenne forum hit gnfs180, as compared with gnfs200; and we're around SNFS difficulty 274, as compared with c. 310. Actually, we're making better relative progress than I'm seeing in rsa200 --> snfs1024. Anyone expect that we'll see RSA768 this year? Then we'd have a more current data point to see where matrix and sieving stand ... sieving was done some time ago already, and the matrix has been grinding away for a year or so? -bd
#54
frmky
Jul 2003
So Cal
2,111 Posts
Is it too early to put in a Christmas wish? I'd love a distributed (MPI would be fine) block Wiedemann implementation this year. Patrick Stach was working on implementations for both Linux x86-64 and nVidia CUDA, but he seems to have vanished around January. The ability to read msieve mat files would be a plus, but I'm willing to create a conversion program if necessary. The hardware I have access to here includes an nVidia Tesla S1070, an 8-way quad-core Opteron shared-memory system, and a mini-cluster of 10 quad-core Core2s connected by gigabit ethernet. Pretty please?
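One reason block Wiedemann/Lanczos maps so well onto this kind of hardware (my aside, not part of the wish): over GF(2), 64 vectors ride in one machine word, so the core sparse-matrix-times-block product is nothing but XORs of words. A toy sketch of that single idea, not a real implementation:

```python
# Toy GF(2) sparse-matrix * 64-column-block product.
# block[j] is one integer used as a 64-bit word holding row j of the
# block of 64 vectors; multiplying a sparse row by the block means
# XOR-accumulating the words picked out by its nonzero columns.
def spmv_block(rows, block):
    """rows: list of lists of nonzero column indices (GF(2) matrix).
    block: list of ints, one word per matrix column.
    Returns the product, one word per matrix row."""
    out = []
    for cols in rows:
        acc = 0
        for j in cols:
            acc ^= block[j]  # GF(2) addition of 64 entries at once
        out.append(acc)
    return out

# 2x3 matrix [[1,0,1],[0,1,1]] times a block of width-2 test vectors:
print(spmv_block([[0, 2], [1, 2]], [0b01, 0b10, 0b11]))  # → [2, 1]
```

A distributed version splits the rows across MPI ranks and all-reduces partial products; the per-word XOR inner loop is exactly what GPUs and SIMD units are good at.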
#55
Batalov
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2⁴·593 Posts
"As I walk through the valley where I harvest my relns
I take a look at my matrix and realize she's very plain
But that's just perfect for an Amish like me
You know I shun fancy things like B-Wee..."