Great job :smile:
[QUOTE=fivemack;367854]725 million relations collected so far. I've sieved 20-114, 140-173, 190-200. Probably another month and a half to go, depending entirely on whether the machines crash mysteriously the day after I go to Mexico.
It turns out that I do not have the required temperament just to leave all my computers working on a problem and go and do something else for six months, so the estimates (30 hours per MQ on machine one, 72 hours per MQ on machine two, 60 hours per MQ on machine three) don't correspond well with the actual elapsed time.[/QUOTE] Excellent Work! You did all this with only 3 machines?!? Very impressive. :tu: How many cores/threads did you have sieving? How did you farm out the work to the different machines? Was it all by hand, or did you automate it somehow? |
AFAIK, for automation, he has his own set of scripts, developed over time.
I once looked into coercing SaltStack into the kind of work-distribution / orchestration system that we need for NFS sieving. At that time, there was nothing to make it fulfill the "here's a set of N tasks, distribute them onto this set of computers on a first-come-first-served basis, and gather the results on one computer" usage pattern (akin to a "poor man's BOINC" without BOINC-ifying the executables).
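For concreteness, the FCFS pattern described above can be sketched in a few lines of plain Python: a shared queue of tasks, a set of workers (stand-ins for machines) each pulling the next task as soon as it finishes the last, and results gathered in one place. This is an illustrative sketch, not anything SaltStack or an existing tool provides; all names here are made up.

```python
# Minimal sketch of the "N tasks, M machines, first-come-first-served,
# gather results centrally" pattern. Threads stand in for remote machines.
import queue
import threading

def fcfs_distribute(tasks, n_workers, run_task):
    """Hand out tasks FCFS to n_workers and collect all results."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)

    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                t = q.get_nowait()   # grab the next unclaimed task
            except queue.Empty:
                return               # no work left: this "machine" is done
            r = run_task(t)
            with lock:
                results.append(r)    # gather on the central node

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results
```

A fast worker naturally ends up doing more tasks than a slow one, which is exactly the property you want when the three machines sieve at 30, 60, and 72 hours per MQ.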
Only three machines, but one of them is a four-socket 48-CPU 64GB Opteron that I bought when a special offer made it about the most cost-effective, and certainly the most convenient, way to acquire that many computrons (the others are a 16GB i7-4770 and a 32GB i7-4930K; the post-processing is done on that last one).
I basically just run jobs with make
[code]
G=$(shell seq 0 12999)
S=/home/nfsworld/gnfs-batalov/gnfs-lasieve4I16e

all: $(patsubst %,%.t1,$G)

%.t1:
	$(S) snfs -r -f $(shell echo $*\*10000+20000000 | bc) -c 10000 2> $*.t
	wc -c $*.t > $*.t1
[/code]
and manually write makefiles that run for about a month on however-many CPUs.
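For anyone puzzling over the `bc` call in that makefile: target N.t1 sieves a 10000-wide block of special-q starting at 20000000 + 10000*N. A hypothetical restatement of that arithmetic in Python (not part of the makefile itself):

```python
# Illustrative restatement of the makefile's `echo $*\*10000+20000000 | bc`:
# task i covers the special-q half-open range [base + i*chunk, base + (i+1)*chunk).
def q_range(i, chunk=10_000, base=20_000_000):
    """Return the (start, end) special-q range for make target i.t1."""
    start = base + i * chunk
    return (start, start + chunk)
```

With G running from 0 to 12999, the whole makefile covers special-q from 20M up to 150M, and `make -jK` keeps K sievers busy, restarting cleanly after a crash because finished ranges already have their `.t1` marker files.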
[QUOTE=fivemack;372632]I basically just run jobs with make
and manually write makefiles that run for about a month on however-many CPUs.[/QUOTE] That's a very creative use of make! :cool: I don't know if I would have thought of something like that. I wonder what other kinds of Unix tools could be used in a creative way like this to do large batches of sieving? Personally, I rewrote parts of factmsieve.py to connect to my web server and get work, gzip the relations, and then FTP the gz file to my home computer. The web server is running a simple PHP script that keeps track of the next unit of work to hand out. I'll be writing a more detailed description of all of this if I ever finish the large factoring project I'm working on! :max: :smile:
[QUOTE=WraithX;372673]That's a very creative use of make! :cool: I don't know if I would have thought of something like that. I wonder what other kinds of unix tools could be used in a creative way like this to do large batches of sieving?
Personally, I rewrote parts of factmsieve.py to connect to my web server and get work, gzip the relations, and then ftp the gz file to my home computer. The web server is running a simple php script that keeps track of the next unit of work to hand out. I'll be writing a more detailed description of all of this if I ever finish the large factoring project I'm working on! :max: :smile:[/QUOTE] Many years ago I wrote cabal[cd].c (client and daemon respectively) to co-ordinate NFS factorizations. If you search on "The Cabal" and NFS you should find the reason for the name and just how many years ago. It was last used in anger at several sites for RSA-768. One of the RSA-768 papers describes its use. Finding that paper is also left as an exercise for the reader The code may well be out there but anyone who wants it for adaption to their projects is welcome to a copy from me if it can't be found otherwise. |
I spun out the push for the next factorization in a [URL="http://mersenneforum.org/showthread.php?t=19334"]separate thread[/URL].