#45
Mar 2004
Ukraine, Kiev
568 Posts
I'm in! Remember ecc2-109... that was fun!

http://www.rsattack576.com:8080/ - check this out... you mean rsaattack640.com? And hey, Paul, why don't you buy a Mac? A dual G5 screams!
#46
Bamboozled!
"πΊππ·π·π"
May 2003
Down not across
10,753 Posts
Quote:

If I wanted a 64-bit dual-proc BSD box, I could get one with the same performance at a markedly lower price by buying an AMD system and installing FreeBSD. Alternatively, I could get better performance for the same price. That pretty much summarizes Apple for the last twenty years, ever since the Mac first appeared.

Paul
#47
Bamboozled!
"πΊππ·π·π"
May 2003
Down not across
10,753 Posts
Quote:

Trial division is even worse at factoring 640-bit hard integers than their 576-bit counterparts.

Paul
#48
Mar 2004
Ukraine, Kiev
2·23 Posts
Well, as I understand it, it was you guys who spoiled such a nice competition? The same thing happened with http://md5crk.com. Can you estimate the time needed to write a client?
#49
Jul 2004
Potsdam, Germany
3·277 Posts
Quote:
But no matter how long it takes to write a client, if it is very ineffective (like trial division here), it's no real competition for an effective approach. GNFS is definitely more effective, although even it would take long enough.

GIMPS does trial division to 66-68 bits right now. Each new bit (more or less) doubles the effort, I guess. Finding a 288-bit factor would take 2^220 times longer. Even if we get down to 2^50 due to the smaller composite (and omitting the advantage that Mersenne factors have to be of a very special form), and assume a computation time of 5 hours for a GIMPS TF, that's 5,629,499,534,213,120 hours, or 642,197,072,121 years!

2^20 would take some time, but is basically doable. Everything above... better look for a better approach...

Last fiddled with by Mystwalker on 2004-10-05 at 21:39
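Mystwalker's arithmetic above is easy to check with a few lines of Python. This is just a sketch of his back-of-the-envelope estimate; the 5-hour TF time and the 2^50 "optimistic" factor are his assumptions, not measured values:

```python
# Trial-division cost roughly doubles with each extra bit of search depth.
current_depth_bits = 68        # GIMPS TF depth cited in the post
target_factor_bits = 288       # a half-size factor of a 576-bit composite
doublings = target_factor_bits - current_depth_bits
print(doublings)               # 220 doublings, i.e. 2^220 times the effort

# Optimistic scenario from the post: only 2^50 times one TF assignment.
hours_per_tf = 5
total_hours = hours_per_tf * 2**50
print(total_hours)             # 5629499534213120 hours

hours_per_year = 24 * 365.25
print(round(total_hours / hours_per_year))  # 642197072121 years
```

The last figure matches the ~642 billion years quoted above, which is why everyone in the thread keeps pointing contributors toward NFS instead.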
#50
Aug 2004
New Zealand
DF16 Posts
I've completed 127 GNFS factorizations and 422 SNFS factorizations (excluding my sieving contributions to NFSNET). In my experience the sieving stage is by far the easiest. Worse, the larger the number, the less significant the sieving step becomes: it takes more time, but there are no new technical hurdles.

However, because to the outside world NFSNET exhibits only the sieving part of the task, it is perhaps natural that many contributors underestimate the difficulty of the other stages involved in the overall task.

From what I have read here, we need to find ways to free up the experts' time. We need the experts concentrating on the math and algorithms needed for distributed Lanczos, for example. That means finding other people to help with the webpages, stats, and perhaps data management. (I'm saying this from outside the NFSNET team, so it's not an official position and might not reflect the immediate issues.)

I notice that the existing FAQ (http://www.nfsnet.org/faq.html) is targeted more at the client and could be improved by adding answers to a few more "dumb" questions in the "About the Number Field Sieve" section. I'm willing to have a go at that.

I would suggest that those who really want an appreciation of the difficulties a 640-bit GNFS would present should first attempt an end-to-end GNFS factorization themselves. There are plenty of wanted factorizations in the 110-130 digit range suitable for such an individual effort. Be aware that an NFS factorization involves running a number of programs in the correct order and with careful parameter selection. Don't harass the experts about that either, but feel free to harass me. If you are looking for an implementation, try Googling for "GGNFS" or get Franke's from his ftp site.

I think prize money is a red herring. The prize does not affect the difficulty of the factorization and would scarcely cover the cost of electricity for the computation. Giving it to charity (as has been done in the past) sounds fine to me. Perhaps contributors could vote on the charity or charities if it really worries people. Alternatively, offering the $20K as an "NFSNET prize" for a good open-source distributed matrix reduction implementation meeting certain performance criteria might be workable.

my 2c

S.
#51
Jul 2003
So Cal
2·34·13 Posts
Quote:

Greg
#52
Nov 2003
2²×5×373 Posts
Quote:
I often find myself starting with the initial data, doing a preliminary filtering pass, then deciding on parameters for the next pass based on the information I get from the first. I sometimes have to make judgment calls and compromises: should I squeeze the matrix a little more at the risk of increasing its density? How many heavy relations can I drop? And so on. Choosing the parameters requires judgment and sometimes finicky twiddling. Sometimes I have to back up and redo an earlier filter pass.
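The filtering passes being discussed can be illustrated with a toy version of their first and simplest step, singleton removal. This is only a sketch: real filters also do clique removal and merging, and the relation format here (a relation as a set of prime ideals) is invented for illustration:

```python
from collections import Counter

def remove_singletons(relations):
    """Repeatedly drop relations containing a prime ideal that occurs in
    only one relation; such relations can never pair up to contribute to
    a square, so keeping them only bloats the matrix. Removal cascades:
    dropping one relation can create new singletons, hence the loop."""
    relations = list(relations)
    while True:
        counts = Counter(p for rel in relations for p in rel)
        kept = [rel for rel in relations
                if all(counts[p] > 1 for p in rel)]
        if len(kept) == len(relations):  # fixed point: no singletons left
            return kept
        relations = kept

# Toy example: 11 appears only once, so its relation is dropped;
# the four remaining relations survive because every ideal occurs twice.
rels = [frozenset({2, 3}), frozenset({2, 3}),
        frozenset({5, 7}), frozenset({5, 7}),
        frozenset({3, 11})]
print(remove_singletons(rels))
```

The cascading behavior is exactly why the passes above interact: a decision to drop heavy relations in one pass changes which ideals become singletons in the next.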
#53
Bamboozled!
"πΊππ·π·π"
May 2003
Down not across
10,753 Posts
Quote:
You have a lot of experience with NFS, more than I have, but I suggest you may have less experience with managing large-scale collaborations. In my experience, solving the non-computational problems is much harder than solving the computational ones.

Quote:
On occasion, I've had to include criteria such as how to avoid tickling a known but uncharacterized bug, and how to work around architectural limitations such as a maximum file size or maximum virtual or physical memory limits.

Paul
#54
"Sander"
Oct 2002
52.345322,5.52471
29×41 Posts
Quote:
The largest SNFS I did was 158 digits. The software would quite easily do larger ones, but my fastest PC doesn't have enough memory. For GNFS, I did two c101's; now trying a c106/c107.

A mailing list for GGNFS can be found here.
#55
Jul 2003
So Cal
2·34·13 Posts
Quote:
Greg