NFSNET Discussion (mersenneforum.org, Archived Projects)
Old 2004-08-16, 12:28   #12
scottsaxman
 
 
Aug 2004
way out west

26₁₀ Posts

Quote:
Originally Posted by xilman
Now that we no longer have access to the cluster at Microsoft Research to run the linear algebra, I chose parameters for 11,199- which will make the matrix much smaller than would normally be the case but at the cost of requiring more sieving effort. There is no point in sieving rapidly if as a result we would have a matrix that could not be processed with the resources available.
How does this affect the project as a whole? I assume there is some lesser system available, but is this a permanent setback or is there some other alternative in the works?
Old 2004-08-16, 16:21   #13
xilman
Bamboozled!
 
 
"π’‰Ίπ’ŒŒπ’‡·π’†·π’€­"
May 2003
Down not across

2B11₁₆ Posts

Quote:
Originally Posted by scottsaxman
How does this affect the project as a whole? I assume there is some lesser system available, but is this a permanent setback or is there some other alternative in the works?
It means we rebalance our workload.

We have several systems around that can perform the linear algebra on a 5M matrix in around a month. Wacky has his dual-G5 box, which solved the 10,227+ matrix a month or so back. Another person (whom I won't name because I don't yet know whether he wants publicity) can also run a matrix of that size in around that amount of time. What's required is a reasonably modern CPU (2GHz or more) and a fair amount of memory. At this level, 1GB isn't quite enough, 1.5GB is sufficient, and 2GB is plenty.

Let us say that in the good old days we could do a 5M matrix in two weeks on the cluster, and that these days we need six weeks on a single machine. To maintain the same throughput, then, we need three chunky single machines. This is easily obtainable. In practice, we would like more than three machines, to allow for some flexibility and for non-uniform workflows.
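
A rough back-of-the-envelope sketch of that arithmetic (the two-week and six-week figures are the illustrative ones above, and the ~60 nonzero entries per row is an assumed figure, not a measurement):

Code:
import math

# Machines needed to match the old cluster's matrix throughput,
# using the illustrative figures above.
cluster_weeks = 2.0         # one 5M matrix every two weeks on the cluster
single_machine_weeks = 6.0  # the same matrix on one chunky machine
print(math.ceil(single_machine_weeks / cluster_weeks))  # -> 3

# Crude memory estimate for a 5M-row sparse GF(2) matrix, assuming
# (hypothetically) ~60 nonzero entries per row stored as 4-byte column
# indices. It suggests why ~1GB is borderline once overheads are added.
rows, entries_per_row, bytes_per_entry = 5_000_000, 60, 4
print(round(rows * entries_per_row * bytes_per_entry / 2**30, 2), "GiB")  # -> 1.12 GiB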

Anyone who has such a machine and is prepared to devote a CPU-month or more to a single computation is encouraged to volunteer. Unfortunately, the license restrictions on the code (it's neither our code nor our license) mean we can only provide binaries, but we can do so for almost all common architectures and operating systems.

If a really big matrix is produced as a consequence of an unusually large computation (remember M811?) there is an excellent chance that some of our friends with access to large clusters will be able to help out in a one-off situation.

Paul
Old 2004-08-16, 16:34   #14
R.D. Silverman
 
 
Nov 2003

2²×5×373 Posts

Quote:
Originally Posted by xilman
We have several systems around that can perform the linear algebra on a 5M matrix in around a month. [...] What's required is a reasonably modern CPU (2GHz or more) and a fair amount of memory. At this level, 1GB isn't quite enough, 1.5GB is sufficient, and 2GB is plenty.
It took me 440 hours on a 3.2GHz Pentium 4 to do a matrix with 5.1 million rows. This was for 2,661+. It required just over 1GB of memory.

The time to solve the matrix grows a little worse than quadratically in the number of rows. The actual run time is O(N^2 d), where d is the average number of lit bits (nonzero entries) per row. d grows very slowly (theoretically as loglog(C)), where C is the composite being factored. N (the number of rows) grows with the square root of the sieve time.
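
A minimal sketch of what that scaling implies, anchored at the 440-hour/5.1-million-row data point above and assuming d stays roughly constant over this range (in reality it creeps up as loglog(C)):

Code:
# Quadratic-in-N estimate of matrix solve time, anchored at
# 440 hours for 5.1 million rows on a 3.2GHz Pentium 4.
BASE_ROWS, BASE_HOURS = 5.1e6, 440.0

def est_hours(rows: float) -> float:
    """O(N^2)-style extrapolation, in hours."""
    return BASE_HOURS * (rows / BASE_ROWS) ** 2

for n in (3e6, 5e6, 8e6, 10e6):
    print(f"{n / 1e6:.0f}M rows: ~{est_hours(n):.0f} hours")
# 10M rows comes out near 1700 hours, i.e. about ten weeks.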

Bob
Old 2004-08-17, 11:01   #15
junky
 
 
Jan 2004

7·19 Posts

Is there any reason why we can't use the cluster at Microsoft Cambridge any more?
What type of machine would be required for a matrix similar to the one we got for M811?

Richard, how much RAM does your dual-G5 have, by the way? 2GB?

Would a 64-bit architecture help with your post-sieving linear algebra? I wonder whether an Opteron could really help with the post-processing.

Last fiddled with by junky on 2004-08-17 at 11:02
Old 2004-08-17, 11:48   #16
Wacky
 
 
Jun 2003
The Texas Hill Country

441₁₆ Posts

We no longer have access to the cluster at Microsoft Research Cambridge because the individual who was providing that access is no longer employed there.

My Dual-G5 has 2GB of RAM (except on Thursdays when it has 2.5GB)

The 64-bit registers certainly don't hurt when it comes to doing bit-vector operations. However, I suspect that the FSB memory architecture is equally important.
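
To make that concrete, here is a toy illustration of the word-wide GF(2) work (plain Python, not the NFSNET solver): adding one matrix row to another is a single XOR per machine word, so 64-bit words need half as many operations as 32-bit words.

Code:
# Toy illustration of the GF(2) bit-vector work in the matrix step:
# adding one row to another is a word-wide XOR, so 64-bit words halve
# the instruction count relative to 32-bit words. Not the NFSNET code.
def xor_row(dst: list[int], src: list[int]) -> None:
    """dst += src over GF(2), one XOR per machine word rather than per bit."""
    for i in range(len(dst)):
        dst[i] ^= src[i]

# Two 256-bit rows packed into four 64-bit words each:
a = [0xDEADBEEFDEADBEEF, 0x0, 0xFFFFFFFFFFFFFFFF, 0x1]
b = [0xDEADBEEFDEADBEEF, 0x1, 0x0, 0x1]
xor_row(a, b)
print([hex(w) for w in a])  # ['0x0', '0x1', '0xffffffffffffffff', '0x0']
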
Old 2004-08-17, 15:31   #17
wpolly
 
 
Sep 2002
Vienna, Austria

333₈ Posts

Quote:
Originally Posted by Wacky
Actually, Paul reserved that number for NFSNET. It is going to be our next number.
That would be 11,199-.c173

11^199 - 1 = 2 · 5 · 797 · 140893 · 18242336369 · 4645373755026923 · C173
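
A quick sanity check of that factorization, using nothing beyond Python's built-in big integers and the factors quoted above:

Code:
# Divide out the known factors of 11^199 - 1 and confirm the remaining
# cofactor really is a 173-digit number (the C173).
n = 11**199 - 1
for p in (2, 5, 797, 140893, 18242336369, 4645373755026923):
    assert n % p == 0
    n //= p
print(len(str(n)))  # -> 173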

Last fiddled with by wpolly on 2004-08-17 at 15:34
Old 2004-08-18, 00:19   #18
sean
 
 
Aug 2004
New Zealand

225₁₀ Posts
machines

Quote:
Originally Posted by scottsaxman
How does this affect the project as a whole? I assume there is some lesser system available, but is this a permanent setback or is there some other alternative in the works?
I have access to a dual 1.8GHz Opteron with 16GB RAM (running Linux). While I tend to use faster machines with less memory for the matrix step, I find the 16GB machine handy for some of the filtering step (at least when working with the Franke code base).
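
For anyone wondering why the filtering step is so memory-hungry, here is a toy sketch of its first pass, singleton removal (illustrative only, not the Franke code): the whole relation set has to be visible at once so that prime occurrences can be counted across all of it.

Code:
# Toy singleton removal, the first pass of NFS filtering: repeatedly
# drop relations containing a prime that appears in no other relation.
# The whole relation set must be in memory to count prime occurrences.
from collections import Counter

relations = [{2, 3}, {2, 3}, {5, 7}, {7, 11}]
changed = True
while changed:
    counts = Counter(p for rel in relations for p in rel)
    kept = [r for r in relations if all(counts[p] > 1 for p in r)]
    changed = len(kept) != len(relations)
    relations = kept
print(relations)  # -> [{2, 3}, {2, 3}]; once-only primes cascade away
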
Old 2004-08-23, 02:37   #19
junky
 
 
Jan 2004

7·19 Posts

Since the new project (11_199M) has started, can we know the estimated time for that project?
What type of machine do you plan to use to complete the post-processing of 3_491P? Richard's dual-G5?

thanks.
Old 2004-08-26, 19:45   #20
digitalAmit
 

163F₁₆ Posts

Looking at the last couple of days' stats ... I hope it should be finished within 10 to 15 days.
 
Old 2004-08-26, 22:50   #21
scottsaxman
 
 
Aug 2004
way out west

2×13 Posts

Yeah, we are cruising through this project. So, to bring things full circle...

What's next?
Old 2004-09-09, 03:40   #22
scottsaxman
 
 
Aug 2004
way out west

2·13 Posts

^bump

Additionally, is there any chance of updating the NFSNET home page to reflect current activities? [Best South Park - Jimmy voice]I mean, come on[/voice], it was last updated in June...
 
