mersenneforum.org  

mersenneforum.org > Other Stuff > Archived Projects > NFSNET Discussion

 
 
Old 2004-10-05, 09:13   #45
Death
 
 
Mar 2004
Ukraine, Kiev

2×23 Posts
Default

I'm in! Remember ECC2-109... that was fun!

http://www.rsattack576.com:8080/ - check this out...

You mean rsaattack640.com?

And hey, Paul, why don't you buy a Mac? A dual G5 screams!
Old 2004-10-05, 10:46   #46
xilman
Bamboozled!
 
 
"π’‰Ίπ’ŒŒπ’‡·π’†·π’€­"
May 2003
Down not across

10,753 Posts
Default

Quote:
Originally Posted by Death
And hey, Paul, why don't you buy a Mac? A dual G5 screams!
Nice system but grossly overpriced --- in the UK at least.

If I wanted a 64-bit dual-proc BSD box I could get one with the same performance at a markedly lower price by buying an AMD system and installing FreeBSD. Alternatively, I could get better performance for the same price.

That pretty much summarizes Apple for the last twenty years, ever since the Mac first appeared.

Paul
Old 2004-10-05, 10:56   #47
xilman
Bamboozled!
 
 
"π’‰Ίπ’ŒŒπ’‡·π’†·π’€­"
May 2003
Down not across

10,753 Posts
Default

Quote:
Originally Posted by Death
I'm in! Remember ECC2-109... that was fun!

http://www.rsattack576.com:8080/ - check this out...
Trial division is not a good way to factor 576-bit hard integers. To be a little more accurate, it is an amazingly bad way. The particular implementation of trial division used there is an even more amazingly bad way of factoring rsa-576. Why I make that latter statement is left as an exercise but this quote from the site's FAQ provides a hint: "These prime numbers have a size variation: from 270 to 288 bits."

Trial division is even worse at factoring 640-bit hard integers than their 576-bit counterparts.
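To put a rough number on just how bad: the sketch below is my own back-of-envelope, with an invented and wildly optimistic rate of 10^12 trial divisions per second, counting the primes below the smallest possible factor.

```python
import math

# RSA-576's prime factors are, per the site's FAQ, between 270 and 288 bits,
# so trial division must step through every prime below 2^270 before it can
# possibly succeed. By the prime number theorem, pi(x) ~ x / ln(x).
smallest_factor_bits = 270
x = 2 ** smallest_factor_bits
primes_to_try = x / (smallest_factor_bits * math.log(2))  # ~ pi(2^270)

rate = 1e12                        # assumed: 10^12 trial divisions per second
seconds_per_year = 3600 * 24 * 365.25
years = primes_to_try / rate / seconds_per_year
print(f"{years:.2e} years")        # on the order of 10^59 years
```

Even granting that absurd throughput, the wait dwarfs the age of the universe by some fifty orders of magnitude.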

Paul
Old 2004-10-05, 11:16   #48
Death
 
 
Mar 2004
Ukraine, Kiev

2·23 Posts
Default

Well, as I understand it, it was you guys who spoiled such a nice competition?

The same thing happened with http://md5crk.com

Can you estimate the time needed to write a client?
Old 2004-10-05, 21:39   #49
Mystwalker
 
 
Jul 2004
Potsdam, Germany

33F₁₆ Posts
Post

Quote:
Originally Posted by Death
Can you estimate the time needed to write a client?
I guess he knows...

But no matter how long it takes to write a client, if it is very inefficient (like the trial division here), it's no real competition for an efficient approach (GNFS is definitely more efficient, although even it would take long enough).

GIMPS does trial division to 66-68 bits right now. Each new bit (more or less) doubles the effort, I guess.
Finding a 288-bit factor would take 2^220 times longer. Even if we get down to 2^50 due to the smaller composite (and omitting the advantage that Mersenne factors have to be of a very special form), and assume a computation time of 5 hours for a GIMPS TF run, that's 5,629,499,534,213,120 hours, or 642,197,072,121 years!

2^20 would take some time, but is basically doable. Anything above that... better look for a better approach...
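In numbers (assuming that 5-hour TF time and an 8766-hour, 365.25-day year):

```python
# Back-of-envelope check of the figures above.
hours_per_tf = 5                      # assumed time for one GIMPS TF run
total_hours = hours_per_tf * 2 ** 50  # the optimistic 2^50 scaling factor
print(total_hours)                    # 5629499534213120

hours_per_year = 8766                 # 365.25-day year
print(total_hours // hours_per_year)  # 642197072121
```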

Last fiddled with by Mystwalker on 2004-10-05 at 21:39
Old 2004-10-12, 02:48   #50
sean
 
 
Aug 2004
New Zealand

223 Posts
Default GNFS factorization

I've completed 127 GNFS factorizations and 422 SNFS factorizations (excluding my sieving contributions to NFSNET). In my experience, the sieving stage is by far the easiest. Worse, the larger the number, the less significant the sieving step becomes (sure, it takes more time, but there are no new technical hurdles).

However, because to the outside world NFSNET exhibits only the sieving part of the task, it is perhaps natural that many contributors underestimate the difficulty of the other stages involved in the overall task.

From what I have read here, we need to find ways to free up the time of the experts. We need the experts to be concentrating on the math and algorithms needed for distributed Lanczos for example. That means finding other people to help with the webpages, stats, and perhaps data management. (I'm saying this from outside the NFSNET team, so it's not an official position and might not reflect the immediate issues.)

I notice that the existing FAQ (http://www.nfsnet.org/faq.html) is targeted more at the client, and could be improved by adding answers to a few more "dumb" questions in the "About the Number Field Sieve" section. I'm willing to have a go at that.

I would suggest that those who really want to get an appreciation of the difficulties a 640-bit GNFS would present should first attempt an end-to-end GNFS factorization for themselves. There are plenty of wanted factorizations in the 110-130 digit range suitable for such individual effort. Be aware that an NFS factorization involves running a number of programs in the correct order and with careful parameter selection. Don't harass the experts about that either, but feel free to harass me. If you are looking for an implementation, try Googling for "GGNFS" or get Franke's from his ftp site.
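The ordering might be sketched like this; the stage names below are illustrative placeholders, not the actual binaries of GGNFS, Franke's code, or any other package:

```python
import subprocess

# Illustrative stage names only -- real NFS packages ship their own
# binaries and parameter files; what matters is the fixed ordering.
STAGES = [
    "polyselect",   # choose the polynomial pair (GNFS only)
    "makefb",       # build the factor bases
    "sieve",        # collect relations (the easily distributed part)
    "filter",       # remove duplicates and singletons, merge relations
    "linalg",       # solve the sparse matrix (e.g. block Lanczos)
    "sqrt",         # algebraic square root, yielding the factors
]

def run_pipeline(dry_run=True):
    """Run the stages in order; each one's parameters need per-number tuning."""
    for stage in STAGES:
        if dry_run:
            print(f"would run: {stage}")
        else:
            subprocess.run([stage], check=True)  # hypothetical command names

run_pipeline()
```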

I think prize money is a red herring. The prize does not affect the difficulty of the factorization and would scarcely cover the cost of electricity for the computation. Giving it to charity (as has been done in the past) sounds fine to me. Perhaps contributors could vote on the charity or charities if it really worries people. Alternatively, offering the $20K as an "NFSNET prize" for a good open-source distributed matrix reduction implementation meeting certain performance criteria might be workable.

my 2c
S.
Old 2004-10-12, 18:12   #51
frmky
 
 
Jul 2003
So Cal

2·3⁴·13 Posts
Default

Quote:
Originally Posted by sean
I would suggest that those who really want to get an appreciation of the difficulties a 640-bit GNFS would present should first attempt an end-to-end GNFS factorization for themselves. There are plenty of wanted factorizations in the 110-130 digit range suitable for such individual effort. Be aware that an NFS factorization involves running a number of programs in the correct order and with careful parameter selection. Don't harass the experts about that either, but feel free to harass me. If you are looking for an implementation, try Googling for "GGNFS" or get Franke's from his ftp site.
Actually, using GGNFS with the script that Chris Monico has provided is extremely easy. I've done over 100 SNFS (130-150 digits range) and a handful of GNFS (100-115 digits) factorizations with it. It takes a bit of time experimenting to select intelligent parameters and get a good polynomial (for GNFS), but once that's done it's simply a matter of running the factLat.sh script and waiting. The factor base creation, sieving, filtering, matrix solving, and square root phases are all automated from within that script. The drawback is that it's currently not set up to run any phase on more than one machine. A Perl version of the script has also been created and is available in the Files section of the ggnfs Yahoo group, and given Perl's network capabilities I'm sure a guru could extend the script and hack up a client/server interface for the sieving in a few hours.

Greg
Old 2004-10-12, 18:52   #52
R.D. Silverman
 
 
Nov 2003

2²·5·373 Posts
Default

Quote:
Originally Posted by frmky
Actually, using GGNFS with the script that Chris Monico has provided is extremely easy. I've done over 100 SNFS (130-150 digits range) and a handful of GNFS (100-115 digits) factorizations with it. It takes a bit of time experimenting to select intelligent parameters and get a good polynomial (for GNFS), but once that's done it's simply a matter of running the factLat.sh script and waiting. The factor base creation, sieving, filtering, matrix solving, and square root phases are all automated from within that script. The drawback is that it's currently not set up to run any phase on more than one machine. A Perl version of the script has also been created and is available in the Files section of the ggnfs Yahoo group, and given Perl's network capabilities I'm sure a guru could extend the script and hack up a client/server interface for the sieving in a few hours.

Greg
Automation isn't hard except for one step: filtering.

I often find myself starting with the initial data, doing a preliminary filtering pass, then deciding on parameters for the next pass based on the information I get from the first pass. I sometimes make judgmental compromises: should I squeeze the matrix a little more at the risk of increasing density? How many heavy relations can I drop? Etc. I find that it requires judgment, and sometimes finicky twiddling, to choose the parameters. Sometimes I have to back up and redo an earlier filter pass.
Old 2004-10-12, 19:16   #53
xilman
Bamboozled!
 
 
"π’‰Ίπ’ŒŒπ’‡·π’†·π’€­"
May 2003
Down not across

10,753 Posts
Default

Quote:
Originally Posted by Bob Silverman
Automation isn't hard except for one step: filtering.
Automation isn't hard except for error recovery (and filtering). The NFSNET people often have to spend a lot of time fixing things up when they go wrong. Fixing things up includes reassuring and educating people, mending feedback mechanisms like web sites and stats displays, repairing corrupt files, extracting useful material from the garbage, recycling unsieved lines/special-q, ensuring that enough storage is available in the correct locations and that sufficient communication capacity is available to the correct systems, maintaining data integrity, providing an effective disaster recovery mechanism, ...

You have a lot of experience with NFS, more than I, but I suggest that you may have less experience with managing large-scale collaborations. In my experience, solving the non-computational problems is much harder than solving the computational ones.

Quote:
Originally Posted by Bob Silverman
I often find myself starting with the initial data, doing a preliminary filtering pass, then deciding on parameters for the next pass based on the information I get from the first pass. I sometimes make judgmental compromises: should I squeeze the matrix a little more at the risk of increasing density? How many heavy relations can I drop? Etc. I find that it requires judgment, and sometimes finicky twiddling, to choose the parameters. Sometimes I have to back up and redo an earlier filter pass.
Yup, so do we. Filtering is still more an art, or perhaps a skill, than an algorithm.
On occasion, I've had to include criteria such as how to avoid tickling a known but uncharacterized bug and how to work around architectural limitations such as a maximum file size or maximum virtual or physical memory limits.

Paul
Old 2004-10-12, 19:36   #54
smh
 
 
"Sander"
Oct 2002
52.345322,5.52471

2245₈ Posts
Default

Quote:
Originally Posted by sean
I would suggest that those who really want to get an appreciation of the difficulties a 640-bit GNFS would present should first attempt an end-to-end GNFS factorization for themselves. There are plenty of wanted factorizations in the 110-130 digit range suitable for such individual effort. Be aware that an NFS factorization involves running a number of programs in the correct order and with careful parameter selection. Don't harass the experts about that either, but feel free to harass me. If you are looking for an implementation, try Googling for "GGNFS" or get Franke's from his ftp site.
To save you the trouble of Googling for GGNFS, it can be found here. This implementation uses Franke's lattice siever and also runs under Cygwin.

The largest SNFS I did was 158 digits. It would quite easily handle larger ones, but my fastest PC doesn't have enough memory.

For GNFS, I did two c101s. Now trying a c106/7.

A mailing list for GGNFS can be found here.
Old 2004-10-12, 21:09   #55
frmky
 
 
Jul 2003
So Cal

2·3⁴·13 Posts
Default

Quote:
Originally Posted by Bob Silverman
Automation isn't hard except for one step: filtering
Interestingly, GGNFS has no user-adjustable parameters for filtering. The only thing that can be changed is the excess number of relations collected in the sieving step, so that heavier relations can be thrown away during filtering.
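As a toy picture of what that one knob buys (all numbers here are made up; real relation counts run into the millions):

```python
import random

random.seed(1)
needed = 100            # relations the matrix step requires (toy value)
excess_fraction = 0.10  # oversieve by 10%

# (id, weight) pairs; weight stands in for nonzero matrix entries per relation
collected = [(i, random.randint(5, 50))
             for i in range(round(needed * (1 + excess_fraction)))]

# filtering keeps the lightest `needed` relations -> a sparser, easier matrix
kept = sorted(collected, key=lambda r: r[1])[:needed]
print(len(kept))        # 100
```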

Greg
 




Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.

This forum has received and complied with 0 (zero) government requests for information.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ.