mersenneforum.org (https://www.mersenneforum.org/index.php)
-   NFS@Home (https://www.mersenneforum.org/forumdisplay.php?f=98)
-   -   BOINC NFS sieving - NFS@Home (https://www.mersenneforum.org/showthread.php?t=12388)

frmky 2009-09-05 05:53

BOINC NFS sieving - NFS@Home
 
Taking a cue from a certain group of calculator enthusiasts, I decided to test how well BOINC works for NFS sieving. I couldn't resist the oh-so-fashionable @Home name. :smile: I've been playing with it for a few days now, and it seems to work fine. Of course, real problems don't crop up until it's in the wild, so if you have the time and the inclination, I'd appreciate it if you would give it a try. The project is hosted at [URL="http://escatter11.fullerton.edu/nfs/"]http://escatter11.fullerton.edu/nfs/[/URL]. If you already have BOINC installed, you can use this address to attach to the project. If not, you can download BOINC from the link on that page.

Included are Serge's latest optimized gnfs-lasieve binaries (BOINCified, of course) for 32-bit Windows, 32-bit Linux, and 64-bit Linux, though I suspect the Linux binaries require a 2.6 kernel to work. It's currently loaded with work for sieving the Cunningham number 2,2214L. As problems crop up, there will probably be brief periods of downtime, so if you have trouble connecting, try again a little later.

Thanks!

Batalov 2009-09-05 06:09

That's a nice number.
Let's hope that all the hacked TI-84+'s will be sieving as well.
You [I]did[/I] provide a Z80 16-bit siever binary, didn't you? :smile:

henryzz 2009-09-05 07:23

I have heard of the BOINC client using lots of extra CPU,
using 1 core out of 4.
Does it still do that at all?
If it doesn't, I might join and do a few workunits.

wreck 2009-09-05 11:16

It seems that each work unit takes about 40 minutes and the result data is about 1 MB.

Nice job.

frmky 2009-09-05 16:33

On a fast 64-bit linux computer, a work unit can take as little as 15-20 minutes. I tried to balance the time so that it's not too short on 64-bit linux but not too long on an older, 32-bit machine. On my aging laptop, they take a bit under two hours.

I was a bit worried about the result size, but a little over 1MB seems ok. There have been no upload errors so far. As we go to larger projects, the upload size will drop.

After an initial minor hiccup, everything seems to be running fine (knock on wood). Based on submitted and uploaded-but-not-yet-submitted results, 2,2214L is already about 14% sieved! I'll make the next one a bit harder. :smile:

mdettweiler 2009-09-05 16:47

[quote=henryzz;188694]I have heard of the BOINC client using lots of extra CPU,
using 1 core out of 4.
Does it still do that at all?
If it doesn't, I might join and do a few workunits.[/quote]
No, nothing like that. Its overhead is a bit more than some "lighter" client/server setups like LLRnet, PRPnet, and ECMnet, but I recall that back when I used BOINC regularly, it would only rack up about 5-10 minutes of CPU time in Task Manager over the course of a number of days.

At any rate, since NFS already carries a fair amount of overhead, BOINC is a perfect fit for it; BOINC's overhead should be significantly less than the time normally lost idling between ranges and whatnot.

wblipp 2009-09-05 18:08

Bravo!

Successfully tapping the broad BOINC community seems to require some combination of responsiveness and support for points and teams. Those projects that figure out these things often discover they have unleashed a firehose of processing power that threatens to drown their project. If you haven't already, you might look at yoyo@home - yoyo has successfully translated several mathematical distributed computing projects, including ECMNET, to the BOINC community. If you reach the level of success where you are suffering from lack of siever-ready numbers, ElevenSmooth and OddPerfect can provide suitable composites at any size from tiny to record size.

Good luck!

William

yoyo 2009-09-05 18:31

Hello,
it's really nice to now have GNFS in BOINC. This was also on my to-do list. I will spread the project's URL a bit so that you get some more beta testers. Once the URL is known, I think it will spread fast in the BOINC community and you will get dozens of testers ;)
yoyo

Btw: ECM in Boinc runs here [url]http://www.rechenkraft.net/yoyo[/url]

yoyo 2009-09-05 19:20

Hello,
some hints for the Boinc project:
- the forum on the BOINC project should be enabled, or linked to an existing forum somewhere; BOINC users will have questions that they will want to post somewhere.
- the estimated runtime is too small; I think it should be doubled or tripled.
yoyo

frmky 2009-09-05 21:38

[QUOTE=yoyo;188744]Hello,
some hints for the Boinc project:
- the forum on the BOINC project should be enabled, or linked to an existing forum somewhere; BOINC users will have questions that they will want to post somewhere.
- the estimated runtime is too small; I think it should be doubled or tripled.
yoyo[/QUOTE]

Done, and I've adjusted the template file for the next project.

Batalov 2009-09-06 00:36

I tried and liked the thing! Greg, this is the sliced bread, right here. I wonder how the Cunningham page 112 will look in two months.

I looked at the output files and noticed that
1) they still use lower- and uppercase letters (if you compile to v106 to use only one case, you will save on the compressed transfer and your staging disk)
2) they are not immediately gzipped (which is probably ok, because the BOINC manager will gzip-or-whatever on the fly, won't it?)
3+) do you validate the received data? (both for the correctness and being consistent with the task: the q0 to be in the task range; prevent sending the same data, spoofing, filling up your disk with junk...)

Arigato gozaimasu! (Thank you very much!)

frmky 2009-09-06 01:39

1. If it becomes an issue, I'll deal with it. (Interpret as: I'm not ready to go through another round of application compiling, signing, copying, etc.)
2. Yes, if I had remembered to enable it. It will be in the next project.
3. I really don't want to waste computer time getting a quorum of identical results, so I only issue each work unit one time. On receipt, I do a few quick checks to make sure it looks ok (big enough, not too big, has about the expected number of relations, contains q in the right range, etc.). I hope to be able to keep it this way but if it becomes necessary I'll move to the bitwise validator and require a quorum of 2 results.
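
For the curious, the receipt checks are nothing elaborate. In spirit they look like the sketch below; the thresholds and parsing here are invented for illustration, not lifted from the actual NFS@Home validator.

[code]
# Rough sketch (Python) of the per-result sanity checks; all numbers
# are made-up placeholders, not the real project's thresholds.
import os

def result_looks_ok(path, min_bytes=100_000, max_bytes=5_000_000,
                    min_rels=5_000, max_rels=500_000):
    size = os.path.getsize(path)
    if not (min_bytes <= size <= max_bytes):
        return False                       # too small, or suspiciously big
    rels = 0
    with open(path) as f:
        for line in f:
            if not line.startswith('#'):   # skip siever comment lines
                rels += 1
    # The real checks also verify that the special q of each relation lies
    # inside the assigned range; that needs format-specific parsing and is
    # omitted here.
    return min_rels <= rels <= max_rels
[/code]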

Batalov 2009-09-07 05:08

No, no, no, a quorum of 2 will probably not be worth it.
It is not like protein modeling or other exhaustive projects (where every part must be exactly right); ah, GIMPS is another handy example of a project with a quorum of 2. :rolleyes:
Here, if a sieved part is wrong then it will be discarded by filtering, and good riddance! No range is better than any other (well, within reason) and need not be redone; of course, that holds only as long as not too many workunits fail and drive the sieving range much higher than desired.

Related question: do you get the [I]abort[/I] results and return them to the pool? When I was looking at the client behaviour (and at its ins and outs; hey, the poly is good, btw), the client reserved a hundred or so units. I finished a dozen, set "request no more work" at the same time, and then aborted all not-yet-started chunks. Can you check that the server processed those aborts right? (My ID is the same as here.)

_________
[SIZE=1][I]5,353+? Goodness![/I][/SIZE]

wreck 2009-09-07 07:17

Project is down?

[CODE]
2009-9-7 13:52:24|NFS@Home|Started upload of file L221440260_0_0
2009-9-7 13:53:07|NFS@Home|Finished upload of file L221440260_0_0
2009-9-7 13:53:07|NFS@Home|Throughput 39312 bytes/sec
2009-9-7 14:08:38|NFS@Home|Sending scheduler request to http://escatter11.fullerton.edu/nfs_cgi/cgi
2009-9-7 14:08:38|NFS@Home|Reason: To fetch work
2009-9-7 14:08:38|NFS@Home|Requesting 2634 seconds of new work, and reporting 1 completed tasks
2009-9-7 14:08:44|NFS@Home|Scheduler request succeeded
2009-9-7 14:08:46|NFS@Home|Started download of file lasievee_1.05_windows_intelx86.exe
2009-9-7 14:08:46|NFS@Home|Started download of file S5p353.poly
2009-9-7 14:08:48|NFS@Home|Finished download of file S5p353.poly
2009-9-7 14:08:48|NFS@Home|Throughput 930 bytes/sec
2009-9-7 14:08:49||Allowing work fetch again.
2009-9-7 14:08:51|NFS@Home|Finished download of file lasievee_1.05_windows_intelx86.exe
2009-9-7 14:08:51|NFS@Home|Throughput 133934 bytes/sec
2009-9-7 14:08:52||Rescheduling CPU: files downloaded
2009-9-7 14:08:52||Using earliest-deadline-first scheduling because computer is overcommitted.
2009-9-7 14:08:52|NFS@Home|Pausing task L221441030_0 (removed from memory)
2009-9-7 14:08:52|NFS@Home|Starting task S5p353_50062_0 using lasievee version 105
2009-9-7 14:08:55||Suspending work fetch because computer is overcommitted.
2009-9-7 14:15:54||Rescheduling CPU: application exited
2009-9-7 14:15:54|NFS@Home|Computation for task L221441440_0 finished
2009-9-7 14:15:54|NFS@Home|Restarting task L221441030_0 using lasieved version 105
2009-9-7 14:15:56|NFS@Home|Started upload of file L221441440_0_0
2009-9-7 14:16:32|NFS@Home|Finished upload of file L221441440_0_0
2009-9-7 14:16:32|NFS@Home|Throughput 46396 bytes/sec
2009-9-7 14:50:25|NFS@Home|Sending scheduler request to http://escatter11.fullerton.edu/nfs_cgi/cgi
2009-9-7 14:50:25|NFS@Home|Reason: To fetch work
2009-9-7 14:50:25|NFS@Home|Requesting 436 seconds of new work, and reporting 1 completed tasks
2009-9-7 14:50:31|NFS@Home|Scheduler request succeeded
2009-9-7 14:50:31|NFS@Home|Message from server: Server can't open log file (../log_escatter11/scheduler.log)
[COLOR="Red"]2009-9-7 14:50:31|NFS@Home|Project is down[/COLOR]
2009-9-7 14:54:15||Rescheduling CPU: application exited
2009-9-7 14:54:15|NFS@Home|Computation for task L221441030_0 finished
2009-9-7 14:54:15||Resuming round-robin CPU scheduling.
2009-9-7 14:54:17|NFS@Home|Started upload of file L221441030_0_0
[COLOR="red"]2009-9-7 14:54:18|NFS@Home|Error on file upload: can't open log file '../log_escatter11/file_upload_handler.log' (errno: 9)[/COLOR]
2009-9-7 14:54:18|NFS@Home|Temporarily failed upload of L221441030_0_0: transient upload error
2009-9-7 14:54:18|NFS@Home|Backing off 1 minutes and 0 seconds on upload of file L221441030_0_0
2009-9-7 14:55:18|NFS@Home|Started upload of file L221441030_0_0
2009-9-7 14:55:20|NFS@Home|Error on file upload: can't open log file '../log_escatter11/file_upload_handler.log' (errno: 9)
2009-9-7 14:55:20|NFS@Home|Temporarily failed upload of L221441030_0_0: transient upload error
2009-9-7 14:55:20|NFS@Home|Backing off 1 minutes and 0 seconds on upload of file L221441030_0_0
2009-9-7 14:56:20|NFS@Home|Started upload of file L221441030_0_0
2009-9-7 14:56:21|NFS@Home|Error on file upload: can't open log file '../log_escatter11/file_upload_handler.log' (errno: 9)
2009-9-7 14:56:21|NFS@Home|Temporarily failed upload of L221441030_0_0: transient upload error
2009-9-7 14:56:21|NFS@Home|Backing off 1 minutes and 0 seconds on upload of file L221441030_0_0
2009-9-7 14:57:21|NFS@Home|Started upload of file L221441030_0_0
2009-9-7 14:57:22|NFS@Home|Error on file upload: can't open log file '../log_escatter11/file_upload_handler.log' (errno: 9)
2009-9-7 14:57:22|NFS@Home|Temporarily failed upload of L221441030_0_0: transient upload error
2009-9-7 14:57:22|NFS@Home|Backing off 1 minutes and 0 seconds on upload of file L221441030_0_0
2009-9-7 14:58:22|NFS@Home|Started upload of file L221441030_0_0
2009-9-7 14:58:24|NFS@Home|Error on file upload: can't open log file '../log_escatter11/file_upload_handler.log' (errno: 9)
2009-9-7 14:58:24|NFS@Home|Temporarily failed upload of L221441030_0_0: transient upload error
2009-9-7 14:58:24|NFS@Home|Backing off 1 minutes and 24 seconds on upload of file L221441030_0_0
2009-9-7 14:59:42||Rescheduling CPU: application exited
2009-9-7 14:59:42|NFS@Home|Computation for task S5p353_50062_0 finished
2009-9-7 14:59:44|NFS@Home|Started upload of file S5p353_50062_0_0
2009-9-7 14:59:45|NFS@Home|Error on file upload: can't open log file '../log_escatter11/file_upload_handler.log' (errno: 9)
2009-9-7 14:59:45|NFS@Home|Temporarily failed upload of S5p353_50062_0_0: transient upload error
2009-9-7 14:59:45|NFS@Home|Backing off 1 minutes and 0 seconds on upload of file S5p353_50062_0_0
2009-9-7 14:59:48|NFS@Home|Started upload of file L221441030_0_0
2009-9-7 14:59:50|NFS@Home|Error on file upload: can't open log file '../log_escatter11/file_upload_handler.log' (errno: 9)
2009-9-7 14:59:50|NFS@Home|Temporarily failed upload of L221441030_0_0: transient upload error
2009-9-7 14:59:50|NFS@Home|Backing off 1 minutes and 3 seconds on upload of file L221441030_0_0
2009-9-7 15:00:45|NFS@Home|Started upload of file S5p353_50062_0_0
2009-9-7 15:00:46|NFS@Home|Error on file upload: can't open log file '../log_escatter11/file_upload_handler.log' (errno: 9)
2009-9-7 15:00:46|NFS@Home|Temporarily failed upload of S5p353_50062_0_0: transient upload error
2009-9-7 15:00:46|NFS@Home|Backing off 1 minutes and 0 seconds on upload of file S5p353_50062_0_0
2009-9-7 15:00:54|NFS@Home|Started upload of file L221441030_0_0
2009-9-7 15:00:56|NFS@Home|Error on file upload: can't open log file '../log_escatter11/file_upload_handler.log' (errno: 9)
2009-9-7 15:00:56|NFS@Home|Temporarily failed upload of L221441030_0_0: transient upload error
2009-9-7 15:00:56|NFS@Home|Backing off 13 minutes and 36 seconds on upload of file L221441030_0_0
2009-9-7 15:01:46|NFS@Home|Started upload of file S5p353_50062_0_0
2009-9-7 15:01:47|NFS@Home|Error on file upload: can't open log file '../log_escatter11/file_upload_handler.log' (errno: 9)
2009-9-7 15:01:47|NFS@Home|Temporarily failed upload of S5p353_50062_0_0: transient upload error
2009-9-7 15:01:47|NFS@Home|Backing off 1 minutes and 0 seconds on upload of file S5p353_50062_0_0
2009-9-7 15:02:47|NFS@Home|Started upload of file S5p353_50062_0_0
2009-9-7 15:02:49|NFS@Home|Error on file upload: can't open log file '../log_escatter11/file_upload_handler.log' (errno: 9)
2009-9-7 15:02:49|NFS@Home|Temporarily failed upload of S5p353_50062_0_0: transient upload error
2009-9-7 15:02:49|NFS@Home|Backing off 1 minutes and 0 seconds on upload of file S5p353_50062_0_0
2009-9-7 15:03:49|NFS@Home|Started upload of file S5p353_50062_0_0
2009-9-7 15:03:50|NFS@Home|Error on file upload: can't open log file '../log_escatter11/file_upload_handler.log' (errno: 9)
2009-9-7 15:03:50|NFS@Home|Temporarily failed upload of S5p353_50062_0_0: transient upload error
2009-9-7 15:03:50|NFS@Home|Backing off 1 minutes and 45 seconds on upload of file S5p353_50062_0_0
2009-9-7 15:05:24|NFS@Home|Sending scheduler request to http://escatter11.fullerton.edu/nfs_cgi/cgi
2009-9-7 15:05:24|NFS@Home|Reason: Requested by user
2009-9-7 15:05:24|NFS@Home|Requesting 17280 seconds of new work, and reporting 1 completed tasks
2009-9-7 15:05:30|NFS@Home|Scheduler request succeeded
2009-9-7 15:05:30|NFS@Home|Message from server: Server can't open log file (../log_escatter11/scheduler.log)
2009-9-7 15:05:30|NFS@Home|Project is down
2009-9-7 15:05:37|NFS@Home|Started upload of file S5p353_50062_0_0
2009-9-7 15:05:39|NFS@Home|Error on file upload: can't open log file '../log_escatter11/file_upload_handler.log' (errno: 9)
2009-9-7 15:05:39|NFS@Home|Temporarily failed upload of S5p353_50062_0_0: transient upload error
2009-9-7 15:05:39|NFS@Home|Backing off 4 minutes and 39 seconds on upload of file S5p353_50062_0_0
2009-9-7 15:10:19|NFS@Home|Started upload of file S5p353_50062_0_0
2009-9-7 15:10:20|NFS@Home|Error on file upload: can't open log file '../log_escatter11/file_upload_handler.log' (errno: 9)
2009-9-7 15:10:20|NFS@Home|Temporarily failed upload of S5p353_50062_0_0: transient upload error
2009-9-7 15:10:20|NFS@Home|Backing off 15 minutes and 40 seconds on upload of file S5p353_50062_0_0

[/CODE]

frmky 2009-09-07 10:10

[QUOTE=Batalov;188869]do you get the [I]abort[/I] results and return them to the pool? [/QUOTE]
Yes, aborted work units are reissued.

frmky 2009-09-07 10:12

[QUOTE=wreck;188878]Project is down?
[/QUOTE]
It was for a little while. Back up now. All part of the beta warning... Hopefully everything is in place and working now. :smile:

wreck 2009-09-07 12:45

Could I ask why NFS@Home is not on BOINC's project list?
Reference url is [url]http://boincstats.com/index.php?list=full&or=0[/url]

debrouxl 2009-09-09 13:58

[quote]I decided to test how well BOINC works for NFS sieving[/quote]
If you manage to get enough people willing to contribute, it works extremely well, because BOINC makes for a much more user-friendly setup and usage.

bdodson 2009-09-09 21:28

[QUOTE=debrouxl;189161]If you manage to get enough people willing to contribute, it works extremely well, because BOINC makes for a much more user-friendly setup and usage.[/QUOTE]

Thanks for this explanation. Now, could you tell us how to
manage to get enough people to contribute? For example, does
BOINC make user/team stats easy? That was one of the things that
brought NFSNet down, as far as keeping a broad pool of contributors. -bd

fivemack 2009-09-09 22:50

Yes: go to

[url]http://escatter11.fullerton.edu/nfs/stats.php[/url]

(assuming that it's up; it's not responding as of 1am 10/9, but nor should I be worrying about it at 1am on a work night ...)

and there are what look like quite decent stats; top users, top computers, top teams. I don't know how much of this has been specially coded up by Greg and how much of it comes with the BOINC infrastructure.

I've started BOINC on half of my i7 (I may stop it again if it makes the 2-877 linalg slow; it's by definition rather hard to use the idle time on a hyperthreaded processor ...) and it seems to be basically silent while giving lots of interesting info.

frmky 2009-09-10 01:45

[QUOTE=fivemack;189224]
(assuming that it's up; it's not responding as of 1am 10/9, but nor should I be worrying about it at 1am on a work night ...)
[/QUOTE]

You would try to access it in the 15 minutes that I took it down to do a quick backup of the latest settings! :smile: Everything's back up now.

All of the stats are coded by BOINC. I've probably written only about 30 lines of new code for the website. Most of the time has been spent figuring out their system of scripts.

debrouxl 2009-09-10 07:07

bdodson: we "certain group of calculator enthusiasts" (as mentioned by frmky in the first post) managed to get hundreds of computers, totaling more than 1K cores, crunching away on sieving 12 to 13 512-bit numbers in less than one month (even though we used a WU quota of 2 and the bitwise validator!) :smile:
Needless to say, the raw CPU power has been growing throughout the project.
As frmky mentioned, BOINC packs a lot of things built in, is fairly little hassle to set up, and the modifications to ggnfs-lasieve turned out to be small.

When we were about to run out of WUs, we started writing something about our BOINC version of gnfs-lasieve4I14e, what we achieved with it, and how we thought about improving it (with the help of a wider community): [URL]http://www.yaronet.com/posts.php?sl=&s=123860&p=26&h=755#755[/URL] .
We planned on presenting the project to the wider factoring community, right in this section of the mersenneforum board, as advised by jasonp... but the BOINC project creator is on holiday, and we didn't get around to finishing the post before frmky, who was aware of our project, beat us to it with his own BOINC project :razz:

See:
* [URL]http://boinc.unsads.com/rsals/[/URL] : our BOINC project, in which I've pushed WUs corresponding to sieving 1M-50M on the C151_105_101 number of [URL]http://xyyxf.at.tut.by/wanted.txt[/URL] . About 24h in, roughly half of those WUs had been distributed to clients. I've performed some polynomial selection for C157_105_74 and C152_107_66.
* [URL]http://www.unitedti.org/index.php?showtopic=8888[/URL], [URL]http://www.unitedti.org/index.php?showtopic=8899[/URL] : the topics which started it all (the first one is the announcement of the solo desktop factoring of a 512-bit number). Both jasonp (who suggested posting on this board) and frmky posted there.
* [URL]http://www.yaronet.com/sujets.php?s=&f=15[/URL] : the news/misc section of the main TI-68k French-speaking board (posts written in English are tolerated), the main place of coordination for the BOINC project (its topic currently has more than 800 posts)

Batalov 2009-09-10 08:08

[quote=debrouxl;189258] I've performed some polynomial selection for C157_105_74 and C152_107_66...[/quote]

Sorry, dude, but what for? They have very simple polynomials.

Both difficulty 197, easily done on a single home computer in a fairly short time. You have to pick up some theory. Not every wall deserves to be knocked down with one's forehead.

frmky 2009-09-10 09:08

Oops! Sorry for stealing your thunder! Over the past year or so, I've been growing more frustrated with the state of NFSNet and had been contemplating trying a BOINC project for a while. Your success with the RSA keys, along with the time afforded by my sabbatical this semester, spurred me to action. The applications that I'm using are based on your code modified to support the other sievers, catch more exit conditions, pass sieve ranges on the command line rather than in the input file, and support 64-bit linux. I started with a ~220 digit SNFS, but that proved too small. The current contributors are doing a ~246 digit SNFS in less than a week, and it's still growing.
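
For those who haven't run the lasieve binaries by hand: they accept the special-q range via the -f (first q) and -c (count) switches, so the BOINC wrapper can launch each task with something like

[code]
./gnfs-lasieve4I14e -a S5p353.poly -f 50062000 -c 2000 -o S5p353_50062.out -v
[/code]

using one fixed polynomial file plus a per-task range, rather than rewriting q0/qintsize in the input file for every work unit. (The switches are from memory, and the numbers are made up to match the task names in this thread; check the source before relying on them.)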

I'll be happy to share the modified code with you, and I certainly don't mind you going ahead with your plans. I plan to mostly do Cunningham composites. You could concentrate on others, such as XYYXF, OddPerfect, and ElevenSmooth composites.

R.D. Silverman 2009-09-10 09:44

[QUOTE=frmky;189268]Oops! Sorry for stealing your thunder! Over the past year or so, I've been growing more frustrated with the state of NFSNet and had been contemplating trying a BOINC project for a while. Your success with the RSA keys, along with the time afforded by my sabbatical this semester, spurred me to action. The applications that I'm using are based on your code modified to support the other sievers, catch more exit conditions, pass sieve ranges on the command line rather than in the input file, and support 64-bit linux. I started with a ~220 digit SNFS, but that proved too small. The current contributors are doing a ~246 digit SNFS in less than a week, and it's still growing.

I'll be happy to share the modified code with you, and I certainly don't mind you going ahead with your plans. I plan to mostly do Cunningham composites. You could concentrate on others, such as XYYXF, OddPerfect, and ElevenSmooth composites.[/QUOTE]


The Fibonacci/Lucas numbers have a much longer history and tradition
than these last 3..........

fivemack 2009-09-10 09:48

polynomial selection for XYYX is really easy
 
Hello debrouxl

[code]
n: 17075208162502696769894374083900022560666476082993604775871225203067880924979815162772202343023227879807349366915642555719257659222044063449483070644063
skew: 0.5
c6: 66
c0: 1
Y0: 21048519522998348950643
Y1: 564664961438246926567398233604096
[/code]

is a reasonable polynomial for C152_107_66;

[code]
n: 1958662481641326248094427703301790313367601031324000981342287598641007898602778576888614085103799825026482689632839088276269469580340645292584464747639935853
skew: 0.4
c0: 105
c5: 1
Y1: 1794180426060713946801538628555399757824
Y0: -2078928179411367257720947265625
[/code]

should be OK for C157_105_74.

Basically you've got, say, 103^55 + 55^103, and you want to write a small multiple of it as a sum of small multiples of fifth or sixth powers.

So it would be 103*(103^9)^6 + 55*(55^17)^6,
which you do as c0 = 103, c6 = 55, and either Y0 = 103^9, Y1 = 55^17 or the other way round, depending on a convention that I don't quite know: I try both, and for one of them the siever gives an error message. If you're using fifth powers, you also need to fiddle with the choice of sign until the siever stops complaining. Scientific method, don't you just love it?
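
If you want to check a candidate pair before burning sieve time on it, it takes a couple of lines to verify that the algebraic and rational polynomials share a root mod n. Here's a quick Python check of the C152_107_66 pair above (the sign convention doesn't matter at even degree):

[code]
# f(x) = 66*x^6 + 1 and g(x) = Y1*x + Y0 must share a root mod n,
# which after clearing denominators means 66*Y0^6 + Y1^6 == 0 (mod n).
n  = 17075208162502696769894374083900022560666476082993604775871225203067880924979815162772202343023227879807349366915642555719257659222044063449483070644063
Y0 = 21048519522998348950643             # = 107^11
Y1 = 564664961438246926567398233604096   # = 66^18
assert Y0 == 107**11 and Y1 == 66**18
# 66*Y0^6 + Y1^6 = 66*107^66 + 66^108 = 66*(107^66 + 66^107), and n
# divides 107^66 + 66^107 by construction.
assert (66 * Y0**6 + Y1**6) % n == 0
[/code]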

Batalov 2009-09-10 09:51

...And there are some Cullen and Woodall's.
W951 is particularly appealing.

debrouxl 2009-09-10 09:59

Batalov: yeah, I'll readily admit that I haven't picked up a lot of theory :redface:
Someone suggested a 149-digit xyyxf number at
[url]http://www.unitedti.org/index.php?showtopic=8888&view=findpost&p=135938[/url] (and later posts), so I just tried several other 150-160 digit numbers from the "wanted" list of that project, and wanted to feed them to the rsals grid so as to prevent starvation (and therefore people detaching, weakening the grid).

fivemack: thanks for the explanation :smile:

frmky: don't worry, I wholeheartedly agree that there's room for multiple BOINC projects, each of them dealing with a kind of number, with collaboration between these similar projects :wink:
You did the right thing going forward with improving squalyl's and FloppusMaximus' initial work. You seem to have time on your hands, and you're putting it to good use :smile:

everybody: from tonight until next Wednesday or Thursday, I'll likely be without any Internet access... so don't worry if I don't reply.

fivemack 2009-09-10 10:02

[QUOTE=Batalov;189275]...And there are some Cullen and Woodall's.
W951 is particularly appealing.[/QUOTE]

You spelled 'appalling' wrong :smile:

That one's probably over the 32-33 boundary; I don't know whether the issue with >32-bit large primes in msieve is that they don't fit in 32 bits, in which case a simple encoding like

(p<210)?p:210+48*(p/210)+lut[p%210]

gets you to 34 bits, or something a whole lot more subtle.
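
For concreteness, here is that encoding spelled out (a toy Python rendering of the same idea; whether it would actually be enough for msieve is exactly the open question above):

[code]
# Primes > 7 are coprime to 210 = 2*3*5*7, and only phi(210) = 48
# residues mod 210 can occur, so a ~34-bit prime packs into 32 bits.
from math import gcd

residues = [r for r in range(210) if gcd(r, 210) == 1]   # the 48 residues
lut = {r: i for i, r in enumerate(residues)}             # residue -> index

def encode(p):
    return p if p < 210 else 210 + 48 * (p // 210) + lut[p % 210]

def decode(e):
    if e < 210:
        return e
    q, i = divmod(e - 210, 48)
    return 210 * q + residues[i]

for p in (199, 1000003, 2**34 - 3):     # sample values coprime to 210
    assert decode(encode(p)) == p and encode(p) < 2**32
[/code]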

I don't need a 4x4 compute cluster with infiniband interconnect
I don't need a 4x4 compute cluster with infiniband interconnect
I don't need a 4x4 compute cluster with infiniband interconnect
A 4x4 compute cluster with infiniband interconnect only uses a kilowatt, and it's probably not very much louder than a quite loud vacuum cleaner running 24/7, and it costs under £3000, and think of the matrices you could do, and now that Lynnfield is out it's a lot faster than the one I specced with Shanghais
I don't need a 4x4 compute cluster with infiniband interconnect
I don't need a 4x4 compute cluster with infiniband interconnect
I don't need a 4x4 compute cluster with infiniband interconnect

jasonp 2009-09-10 12:18

[QUOTE=fivemack;189278]
That one's probably over the 32-33 boundary; I don't know whether the issue with >32-bit large primes in msieve is that they don't fit in 32 bits, in which case a simple encoding like

(p<210)?p:210+48*(p/210)+lut[p%210]

gets you to 34 bits, or something a whole lot more subtle.
[/QUOTE]
It just involves a lot of small changes to the initial filtering and linear algebra stages, modifications of data structures, etc. Actually, the big reason I haven't gotten to it is that I wanted to use the changes as an excuse to implement run-length compression of lists of primes or lists of ideals. The hardest part is extending the code that finds roots of polynomials mod p to p > 2^32.

@debrouxl: welcome to the forum :) As fivemack demonstrated, there are tricks that make NFS dramatically easier when your input is not an RSA key; perhaps you should let us know how you'd like to pitch in to any of the ongoing projects here, once you decide.

R.D. Silverman 2009-09-10 12:53

[QUOTE=jasonp;189291]The hardest part is extending the code that finds roots of polynomials mod p to p > 2^32.[/QUOTE]

When do you use this? AFAIK we are still not close to using a factor
base with elements > 2^32...?

R.D. Silverman 2009-09-10 12:54

[QUOTE=frmky;189268]Over the past year or so, I've been growing more frustrated with the state of NFSNet.[/QUOTE]

Join the club!!!!!

R.D. Silverman 2009-09-10 12:58

[QUOTE=Batalov;189275]...And there are some Cullen and Woodall's.
W951 is particularly appealing.[/QUOTE]

How about doing the first two holes in the base 12 tables?

Among all of the first five holes in all the tables, these have been
there the longest.

jasonp 2009-09-10 13:26

[QUOTE=R.D. Silverman;189293]When do you use this? AFAIK we are still not close to using a factor
base with elements > 2^32...?[/QUOTE]
I thought there was a point in the filtering where poly roots were needed, but that turns out only to be needed for free relations, which can safely be limited to be < 2^32 in size.

Mea culpa.

fivemack 2009-09-10 14:16

[QUOTE=R.D. Silverman;189295]How about doing the first two holes in the base 12 tables?

Among all of the first five holes in all the tables, these have been
there the longest.[/QUOTE]

I think that's simply that for a long time they were the largest; even now they're third-largest, after some lucky ECM results on the 7 tables and mersenneforum's hammering away at 2-. It looks as if the BOINC project is pretty much exactly the right scale for finishing off the Cunningham most-wanted lists; if it sieves faster than Greg can process alone, I'm happy to contribute a quad-core.

bdodson 2009-09-10 14:19

[QUOTE=R.D. Silverman;189295]How about doing the first two holes in the base 12 tables?

Among all of the first five holes in all the tables, these have been
there the longest.[/QUOTE]

Yes, the only numbers still left from the wanted lists issued with
page 106 that aren't yet reserved. (We recently finished the last
one from page 107.) Last I heard, Greg's looking at two more
numbers under difficulty 250. -Bruce

bdodson 2009-09-10 15:09

[QUOTE=debrouxl;189258]bdodson: we "certain group of calculator enthusiasts" (as mentioned by frmky in the first post) managed to get hundreds of computers, totaling more than 1K cores, crunching away on sieving 12 to 13 512-bit numbers in less than one month (even though we used a WU quota of 2 and the bitwise validator!) :smile:
Needless to say, the raw CPU power has been growing throughout the project.
As frmky mentioned, BOINC packs a lot of things built in, is fairly little hassle to set up, and the modifications to ggnfs-lasieve turned out to be small.
...[/QUOTE]

Thanks for signing in, and for the links. There was an early presentation
at one of the RSA conferences (in California) -- from the SETI people, if I
recall correctly, sometime in 1999-2001 -- ridiculing distributed factoring
projects for not managing to produce clients that could be run by people
who don't already have advanced degrees in computing, math or physics.
There was an early attempt at such a client in the project that factored
RSA-130 in 1995, as for example at
[url]http://www.npac.syr.edu/factoring.html[/url]

I ran a hand distribution project for a small slice of the first snfs factorization
above 768-bits. For a laugh, you could check
[url]http://www.lehigh.edu/~bad0/cabal773.html[/url]

from March 2000. Unfortunately, the files from the NFSNet project no
longer seem to be available, but the discussion here on the forum dates
from 2003. An early project from the era of that RSA presentation
(without much wide participation, but with the first parallel matrix use) is
extracted at
[url]http://www.lehigh.edu/~bad0/msg06332.html[/url]

One last reference: if I'm recalling correctly, the group at EPFL (with
some of the leading factoring research) had BOINC clients for some
portion of their work on MD5/SHA-1. But still, no BOINC wrapper for
NFS factoring until your group. Can you account for your success
at an objective that's been so elusive for such a long time? We all
knew
[QUOTE]
BOINC makes for a much more user-friendly setup and usage. [/QUOTE]
so what did your group know that everyone else was missing?

-Bruce

debrouxl 2009-09-10 19:07

My Internet access at home came back today without any intervention on our part, after having gone away for two days, equally without intervention on our part... so if things stay that way, I'll be able to reply in the topics, launch some WUs, and do other administrative stuff on the rsals BOINC server.


Well, I'd be fairly surprised if nobody before us had ever tried making a BOINC version of some NFS implementation... so I have no convincing explanation for our success at an objective that's been elusive for a long time (we didn't know that; we learned it from jasonp's post) :confused:

In the TI-Z80 & TI-68k communities, nobody but "FloppusMaximus" Benjamin Moody ever figured out that in 2009, factoring 512-bit integers is, after all, rather easy (~73 days on his single, fairly ancient, dual-core Athlon 64). "Godzil" suggested making a BOINC client, and "squalyl" implemented a small set of modifications that did the job well enough, with contributions from "FloppusMaximus".
Now, we can all work together (well, those who aren't on holiday or otherwise busy - squalyl is) on gnfs-lasieve* and other items of the common wish/todo list :smile:

I don't know what kind of integers rsals is going to tackle next, but it's clear that we need to learn a few tricks so as to become more efficient. This forum is an excellent place for that :smile:

fivemack 2009-09-11 08:35

[QUOTE=fivemack;189224]I've started BOINC on half of my i7 (I may stop it again if it makes the 2-877 linalg slow; it's by definition rather hard to use the idle time on a hyperthreaded processor ...) and it seems to be basically silent while giving lots of interesting info.[/QUOTE]

Hyperthreaded processors, indeed, don't have idle time; the ETA for the linear algebra was moving backwards at about two hours per elapsed hour, so I've stopped the client.

I am sure the BOINC basic infrastructure is very efficient, but if you leave the graphical front-end boincmgr running 24/7 it eats about a quarter of a CPU.

R.D. Silverman 2009-09-11 12:28

[QUOTE=bdodson;189317]I'm thinking 1999-2001, ridiculing distributed factoring projects
for not managing to produce clients that could be run by people
who don't already have advanced degrees in computing, math or physics. -Bruce[/QUOTE]


It depends upon one's [b]objectives[/b]. If one merely wants to
produce factorizations, then NFS@HOME is a [b]fantastic[/b] idea.
However, I don't see NFS@HOME as being a leading edge research tool.
Does anyone envision it attempting (say) the RSA-768 effort that is
now underway?

However, factorizations by themselves don't have all that much value.
It is certainly [b]fun[/b], but I still think that O. Atkin's comment is
applicable.

If one wants to do research into improving factoring algorithms (and the
code used to implement them), then advanced degrees become a
requirement.

mdettweiler 2009-09-11 15:36

[quote=fivemack;189438]I am sure the BOINC basic infrastructure is very efficient, but if you leave the graphical front-end boincmgr running 24/7 it eats about a quarter of a CPU.[/quote]
Really? You sure on that one? Because I don't remember it being that bad back when I ran BOINC full-time a few years ago. Or do you only mean that it eats a quarter of the CPU time if you leave boincmgr actually up on the screen, rather than minimized to the system tray?

fivemack 2009-09-11 16:08

[QUOTE=mdettweiler;189488]Really? You sure on that one? Because I don't remember it being that bad back when I ran BOINC full-time a few years ago. Or do you only mean that it eats a quarter of the CPU time if you leave boincmgr actually up on the screen, rather than minimized to the system tray?[/QUOTE]

boincmgr was actually up on the screen, and updating the extremely boring one-red-line statistics window - I don't think Linux has a concept of 'minimized to the system tray'.

Jeff Gilchrist 2009-09-11 16:08

[QUOTE=bdodson;189317]I ran a hand distribution project for a small slice of the first snfs factorization
above 768-bits. For a laugh, you could check
[url]http://www.lehigh.edu/~bad0/cabal773.html[/url]
[/QUOTE]

That brings back memories; I had almost forgotten about that.

Jeff.

mdettweiler 2009-09-11 16:27

[quote=fivemack;189492]boincmgr was actually up on the screen, and updating the extremely boring one-red-line statistics window - I don't think Linux has a concept of 'minimized to the system tray'.[/quote]
Ah, I thought you were talking about Windows. Okay, yes, that makes sense. I think Linux has boincmgr as a separate executable, right? Windows has it the same way, though in that case, the standard procedure is not to run boinc.exe, but rather run boincmgr.exe and have it minimized to the system tray. (You can of course still run them independently for dedicated crunchers and the like.)

As I recall, boincmgr didn't use tons of CPU time on Windows; but then again, I had it minimized to the system tray most of the time. I think boinc.exe itself actually accumulated more CPU time (on the order of a couple of minutes a week) than boincmgr.exe did for the same period of time (about 15-20 seconds at max).

frmky 2009-09-11 19:04

[QUOTE=R.D. Silverman;189463]
Does anyone envision it attempting (say) the RSA-768 effort that is
now underway?
[/QUOTE]

The big restriction with BOINC is that the application needs to run on commonly deployed hardware. Today, that [URL="http://infoworld.com/windows-pulse"]appears to be[/URL] a dual-core computer with 2-3 GB of memory. I can specify that a work unit not be sent to a computer with less than X MB of memory, so in principle it [B]is[/B] feasible, as long as the memory use of the siever can be kept within the Win32 limit of 3 GB per process. Of course, I want to exercise the infrastructure and grow a userbase with more reasonable targets first. :smile:
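
The mechanism, for anyone curious, is the work unit template: its rsc_memory_bound field (in bytes) tells the scheduler not to send a task to hosts that can't meet it. A trimmed example with made-up numbers, not the ones NFS@Home actually uses:

[code]
<workunit>
    <file_ref>
        <file_number>0</file_number>
        <open_name>worktodo</open_name>
    </file_ref>
    <rsc_fpops_est>2e13</rsc_fpops_est>
    <rsc_fpops_bound>2e14</rsc_fpops_bound>
    <rsc_memory_bound>2500000000</rsc_memory_bound> <!-- ~2.5 GB -->
    <rsc_disk_bound>500000000</rsc_disk_bound>
</workunit>
[/code]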

frmky 2009-09-11 20:57

The first factors are in. 2,2214L factors as

87-digit prime factor:
650129030757448838848987009049036364296974617097033424890396170500076127717454884707697

104-digit prime factor:
11946827680341235646112762882603877501599655320760206302298918228497279186673314456312477897500132634593
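
For anyone who wants to double-check: 2214 = 4*553 + 2, so the Aurifeuillian L-part is 2^1107 - 2^554 + 1 (assuming I have the usual L/M naming the right way around; if not, flip the middle sign), and every factor of 2,2214L divides it. The verification is two lines:

[code]
# Check that both reported primes divide the Aurifeuillian L-part
p87  = 650129030757448838848987009049036364296974617097033424890396170500076127717454884707697
p104 = 11946827680341235646112762882603877501599655320760206302298918228497279186673314456312477897500132634593
L = 2**1107 - 2**554 + 1
assert L % p87 == 0 and L % p104 == 0
[/code]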

Greg

debrouxl 2009-09-11 21:05

FWIW, for the 512-bit RSA keys of rsals, each WU (gnfs-lasieve4I14e, 32-bit) usually took less than 150 MB of RAM.

frmky: nice :smile:
How much raw CPU time for gnfs and for msieve?

frmky 2009-09-11 21:23

[QUOTE=debrouxl;189535]How much raw CPU time for gnfs and for msieve?[/QUOTE]
The community contributed about 140 days of CPU time to the sieving. msieve took 2 days to do the filtering, linear algebra, and square roots. For reference, this was about as difficult as 492-bit RSA would be, so a bit easier than RSA-512.

The sieving for 5,353+ is nearly done. For that one, the community has contributed about 2 years of CPU time. For reference, that is about as difficult as 547-bit RSA would be.

fivemack 2009-09-11 23:38

[QUOTE=R.D. Silverman;189463]
However, I don't see NFS@HOME as being a leading edge research tool.
Does anyone envision it attempting (say) the RSA-768 effort that is
now underway?
[/QUOTE]

As I presume you're intending to point out, there are two hard parts of the RSA-768 effort: software development for the matrix-production step ([url]http://cado.gforge.inria.fr/workshop/slides/montgomery.pdf[/url] gives a decent idea of the problems involved, though with perhaps a bit much concentration on the square-root step which is likely to be practical using direct methods on the large shared-memory supercomputer that you'd need anyway for the matrix), and grantsmanship to get access to a machine big enough and fast enough (the slides mention a 256GB 64 x Power5 box at SARA and mention that that might not be enough) to solve the ~300M^2 matrix in reasonable time. If anyone can do that, the Lenstra-CADO-Montgomery-Aoki team can - it's pretty much all the people who've worked on serious-scale GNFS lined up in the same direction.

The sieving is however (as I've heard attributed, but not citably, to Churchill) a matter of applying increasingly great resources to a well-understood problem. [url]http://ludwig-sun1.unil.ch/~hstockin/crypto/[/url] has information about one of the grant requests for doing the sieving, on a grid;

[quote]
Resources required per job:

* x86 or x86-64 processor (preferred)
* 1GB RAM, 600MB swap, 450MB disk
* Operating system: Linux (preferred), FreeBSD, Windows
* Ownership: Jens Franke, Thorsten Kleinjung, Free Software Foundation, GPL v2, LGPL

Expected timelines: The expected timeline depends on the resources that would be available. We can start any moment. We expect that we need between 1500 and 2500 CPU years (depending on the type of processor used).
[/quote]

2500 CPU-years is eminently reasonable for a popular BOINC project; it would take a month or so for PrimeGrid as currently constituted.

Batalov 2009-09-12 00:11

[quote=frmky;189531]The first factors are in. 2,2214L [/quote]
Too easy, Greg!
Congratulations, great stuff!

R.D. Silverman 2009-09-12 02:49

[QUOTE=fivemack;189542]As I presume you're intending to point out, there are two hard parts of the RSA-768 effort: software development for the matrix-production step ([url]http://cado.gforge.inria.fr/workshop/slides/montgomery.pdf[/url] gives a decent idea of the problems involved, though with perhaps a bit much concentration on the square-root step which is likely to be practical using direct methods on the large shared-memory supercomputer that you'd need anyway for the matrix), and grantsmanship to get access to a machine big enough and fast enough...[/QUOTE]


The U.S. government gives access grants to large-scale compute projects.
The grants are for the 80,000+ node (yes, 80,000 CPUs) supercomputer
at Oak Ridge. I suspect that this machine is big enough/fast enough.

frmky 2009-09-12 04:10

[QUOTE=R.D. Silverman;189559]The U.S. government gives access grants to large scale compute projects.
[/QUOTE]
I have had one small but successful NSF TeraGrid grant. I certainly wouldn't be opposed to going for another. :smile:

10metreh 2009-09-12 06:40

Could you quickly (a few hours) put an end to the c141 that is blocking aliquot sequence 4788? No-one seems to want to find a poly.

henryzz 2009-09-12 06:44

[quote=10metreh;189570]Could you quickly (a few hours) put an end to the c141 that is blocking aliquot sequence 4788? No-one seems to want to find a poly.[/quote]
I was planning to do the poly search, but I got distracted by my birthday.
TBH, it would take me a while to do a large enough poly search,
so if anyone fancies doing it quickly, feel free.

XYYXF 2009-09-12 14:21

[QUOTE=debrouxl;189277]Batalov: yeah, I'll readily admit that I haven't picked up a lot of theory :redface:
Someone suggested a 149-digit xyyxf number at
[url]http://www.unitedti.org/index.php?showtopic=8888&view=findpost&p=135938[/url] (and later posts), so I just tried several other 150-160 digit numbers from the "wanted" list of that project, and wanted to feed them to the rsals grid so as to prevent starvation (and therefore people detaching, weakening the grid).[/QUOTE]Hey people,

since I'm the coordinator of this project, please let me know if any new results appear :-)

squalyl 2009-09-15 13:36

Be sure we won't forget you :)

squalyl

wreck 2009-09-15 13:47

It seems that the server is not stable; the site often can't be opened.

[CODE]
2009-9-15 21:41:11|NFS@Home|Started upload of file S6p316_128138_0_0
2009-9-15 21:41:13||Project communication failed: attempting access to reference site
2009-9-15 21:41:13|NFS@Home|Temporarily failed upload of S6p316_128138_0_0: http error
2009-9-15 21:41:13|NFS@Home|Backing off 1 minutes and 53 seconds on upload of file S6p316_128138_0_0
2009-9-15 21:41:14||Access to reference site succeeded - project servers may be temporarily down.
2009-9-15 21:43:08|NFS@Home|Started upload of file S6p316_128138_0_0
2009-9-15 21:43:10||Project communication failed: attempting access to reference site
2009-9-15 21:43:10|NFS@Home|Temporarily failed upload of S6p316_128138_0_0: http error
2009-9-15 21:43:10|NFS@Home|Backing off 1 minutes and 47 seconds on upload of file S6p316_128138_0_0
2009-9-15 21:43:11||Access to reference site succeeded - project servers may be temporarily down.

[/CODE]

R.D. Silverman 2009-09-15 14:21

[QUOTE=wreck;189844]It seems that the server is not stable; the site often can't be opened.

[CODE]
2009-9-15 21:41:11|NFS@Home|Started upload of file S6p316_128138_0_0
2009-9-15 21:41:13||Project communication failed: attempting access to reference site
2009-9-15 21:41:13|NFS@Home|Temporarily failed upload of S6p316_128138_0_0: http error
2009-9-15 21:41:13|NFS@Home|Backing off 1 minutes and 53 seconds on upload of file S6p316_128138_0_0
2009-9-15 21:41:14||Access to reference site succeeded - project servers may be temporarily down.
2009-9-15 21:43:08|NFS@Home|Started upload of file S6p316_128138_0_0
2009-9-15 21:43:10||Project communication failed: attempting access to reference site
2009-9-15 21:43:10|NFS@Home|Temporarily failed upload of S6p316_128138_0_0: http error
2009-9-15 21:43:10|NFS@Home|Backing off 1 minutes and 47 seconds on upload of file S6p316_128138_0_0
2009-9-15 21:43:11||Access to reference site succeeded - project servers may be temporarily down.

[/CODE][/QUOTE]

I suspect access problems to the server. When I try to access the
website via IE, it sits forever before connecting. Sometimes it does
not connect.

frmky 2009-09-15 17:20

[QUOTE=R.D. Silverman;189847]I suspect access problems to the server. When I try to access the
website via IE, it sits forever before connecting. Sometimes it does
not connect.[/QUOTE]

Yes, the site is experiencing growing pains. I see the connection issues as well, even when top says that the server is 90% idle. I'll tweak the Apache settings to see if that makes a difference. It doesn't help, either, that the campus network went down last night, kicking the server offline. :sad:

frmky 2009-09-16 02:52

I moved the project over to a beefier server, so hopefully the issues are resolved. Please let me know if you are still having connection problems. Thanks!

R.D. Silverman 2009-09-17 11:11

[QUOTE=frmky;189917]I moved the project over to a beefier server, so hopefully the issues are resolved. Please let me know if you are still having connection problems. Thanks![/QUOTE]

Can the LA keep up with the sieving? It seems that sieving is finishing
much faster than the LA. Do you have more than one platform doing
the LA?

BTW, I applaud this effort. This is what NFSNET should have been.

Wacky 2009-09-17 12:43

[QUOTE=R.D. Silverman;190050]
BTW, I applaud this effort. This is what NFSNET should have been.[/QUOTE]

Bob,

NFSNet never had the resources to operate in this mode. BOINC requires a single server; as Greg has seen, even with his superior resources, it can be problematic to handle the traffic. NFSNet did not have this kind of server resource. Instead, we established a distributed server scheme that included redundancy and provisions for load balancing.

NFSNet also addressed handling multiple sievings simultaneously, allowing us to stop sieving one number, perform some post-processing, and resume, all without halting the overall sieving effort or requiring manual intervention in the flow of the incoming relations.

NFSNet was hampered by having to deal with the Windows OS, because I never had anyone willing to help port a single-source model to that platform. All that we had was a "hacked" version of the CWI line siever, and line sievers are no longer useful, since there are now many potential client machines capable of running much more efficient lattice sievers.

Then Tom, Bruce, and Greg got involved and basically destroyed any chance of NFSNet functioning as designed, because they were swamping the sieving efforts using clients that did not provide any management feedback.
Having been "cut out of the loop", I have no way to provide any useful function other than just "participating" in their other projects.

In my opinion, it is lack of resources, both computer and developer time, that has killed the NFSNet effort.

I would still like for NFSNet to function. But I cannot do it by myself and I haven't found others willing to help with the development.

Richard

fivemack 2009-09-17 15:41

I am fairly sure that NFS@Home will let several sieving projects run simultaneously; at least, the news messages on the Web site are certainly consistent with several projects running at once.

NFSNet is a fantastic and substantial piece of infrastructure, and made it possible to do a lot of factorizations which were way beyond reasonable manual organization at the time.

At least partly because of the huge gap of time before 64-bit Windows became sensible, I don't know how many people interested in NFSNet development even have Windows machines. I know I don't. So I wasn't in a position to help add a lattice siever to NFSNet under Windows, and that was the bottleneck.

And by that point, with the Kleinjung siever, msieve, and the number of idle 64-bit institutional Linux clusters around, any factorization for which the post-processing was practical on a single machine had the sieving within reasonable manual organization. NFS@Home will be able to do huge numbers of factorizations, but I don't see it being able to break the factorization records unless CADO become interested.

R.D. Silverman 2009-09-17 17:31

[QUOTE=fivemack;190082]
At least partly because of the huge gap of time before 64-bit Windows became sensible, I don't know how many people interested in NFSNet development even have Windows machines. I know I don't. So I wasn't in a position to help add a lattice siever to NFSNet under Windows, and that was the bottleneck.
.[/QUOTE]

I *did* offer my 32-bit Windows lattice siever to NFSNET a long time ago....

frmky 2009-09-17 17:33

[QUOTE=R.D. Silverman;190050]Can the LA keep up with the sieving? It seems that sieving is finishing
much faster than the LA. Do you have more than one platform doing
the LA?
[/QUOTE]
I can do the LA on about 18 numbers at once with the resources that I have access to now. I'm actually running 5 LA jobs now, two of which are NFS@Home. I have had offers of a couple more already, and if necessary I suspect I can recruit a few more here. So as long as the ratio of LA time to sieving time is less than about 20 or so, all will be well. Right now, it's a bit over 5.

I do see the project moving to the 16e siever at some point. The BOINC framework allows me to specify that 16e tasks be sent only to computers with at least 2 GB or 2.5GB of memory. A 15e factorization can be run in parallel for computers with less memory. Once I get more proficient with MySQL, I'll figure out how to determine what fraction of the computers attached to the project have that much memory.
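
In case a MySQL-literate reader wants to beat me to it: as far as I can tell, the BOINC host table stores each host's RAM in an m_nbytes column and its last contact in rpc_time, so something along these lines ought to do it (untested, so treat the column names as a guess):

[code]
-- Fraction of hosts active in the last 30 days with at least 2 GB of RAM
SELECT SUM(m_nbytes >= 2e9) / COUNT(*) AS frac_2gb
FROM host
WHERE rpc_time > UNIX_TIMESTAMP() - 30*86400;
[/code]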

Wacky 2009-09-17 18:26

[QUOTE=R.D. Silverman;190090]I *did* offer my 32-bit Windows lattice siever to NFSNET a long time ago....[/QUOTE]

It wasn't the lack of source for a reasonable lattice siever, per se, that was the bottleneck. The problem was that there was no-one who had an appropriate Windows development environment AND the time to track down the problems.

em99010pepe 2009-09-26 16:01

Greg,

What about a progress bar for sieving under the [URL="http://escatter11.fullerton.edu/nfs/numbers.html"]Status of Numbers[/URL] link? I would prefer one on the main page.

Thanks in advance,

Carlos

frmky 2009-09-26 16:12

As you saw with 12,233-, the amount of sieving necessary isn't always easy to determine ahead of time, so the progress isn't always known. In any case, the sieving can now be done in less than a week on each of these, so I'm not sure it's really necessary.

em99010pepe 2009-09-28 20:36

[quote=frmky;191172]As you saw with 12,233-, the amount of sieving necessary isn't always easy to determine ahead of time, so the progress isn't always known. In any case, the sieving can now be done in less than a week on each of these, so I'm not sure it's really necessary.[/quote]

Sieving in less than a week: is that good? I'm asking because I don't have a feel for how much sieving a number needs. Can you enlighten us with some benchmarks?

Thanks in advance,

Carlos

FactorEyes 2009-09-28 21:27

You'd need around 70 2GHz 64-bit AMD Barcelona cores to sieve 12,233- in one week.

Variations in clock speed, cache architecture, and core design could drop the number of cores required by quite a bit.

R.D. Silverman 2009-10-06 17:48

SNFS/GNFS
 
[QUOTE=FactorEyes;191378]<snip>[/QUOTE]

I see that 3,538+ has been reserved. Will you do it with GNFS or SNFS?
The ratio is ~0.69, which makes it a toss-up.

frmky 2009-10-06 18:23

Since it's a toss-up, I think the CPU time required for poly selection is better spent sieving. We're going to do it with SNFS.

bdodson 2009-10-20 14:50

[QUOTE=R.D. Silverman;192025]I see that 3,538+ has been reserved. Will you do it with GNFS or SNFS?
The ratio is ~0.69, which makes it a toss-up.[/QUOTE]

A more recent reservation, made earlier this week, included 6,334+ C239,
at SNFS difficulty 259. I found an early ECM factor during 2t50 --> 2t55
testing,

p59 =
37597376323754357344197406664995834047249702145969970498293

the 4th largest on this year's Top 10; it seems likely to remain on the 2009
list (this is October already, and a p59 this large has never been
bumped from the annual list -- only 2007 was a close call).
The cofactor is a C181, so the number is still an SNFS of difficulty 259.

I can also report that the BOINC stats are far more robust than the
late lamented NFSNet stats; more entertaining even than the ECDL
project that supplied the cgi still used by ECMNET. Speaking of which,
it's good to hear from Jeff; Entrust-Cabalist alliance forever! -Bruce

em99010pepe 2009-10-20 17:23

12,233- is factored. Results sent to Greg.

Jeff Gilchrist 2009-10-20 17:27

[QUOTE=bdodson;193362]the 4th largest on this year's Top 10; it seems likely to remain on the 2009 list (this is October already, and a p59 this large has never been bumped from the annual list -- only 2007 was a close call).

Speaking of which, it's good to hear from Jeff; Entrust-Cabalist alliance forever! -Bruce[/QUOTE]

Congrats on bumping me from 4th to 5th on the list. :smile: I have been around on the forums for years now, but I left Entrust about 8 years ago. I never really heard from the Cabalists after that 768-bit job; have they been in hiding, working on other secret projects?

[B]frmky[/B]: Are you doing all the post-processing yourself or do you have others on a volunteer list in case you get overwhelmed with work?

frmky 2009-10-21 06:02

[QUOTE=Jeff Gilchrist;193375]
[B]frmky[/B]: Are you doing all the post-processing yourself or do you have others on a volunteer list in case you get overwhelmed with work?[/QUOTE]

I have a volunteer list. I have four fast computers (eleven once a memory upgrade eventually arrives), so I've done all but one so far myself. However, as the project has been ramping up and the expected upgrade hasn't yet happened, I'm starting to use those volunteers. The minimum requirements are a fast (quad-core) computer with at least 6 GB of fast memory (8 GB if you plan to run anything else on your computer at the same time). Core 2's and Core i7's are great for the task. Shall I add you to the list?

fivemack 2009-10-21 12:33

I think I should probably be on this list: 1 i7/12G, 1 k10/8G

Jeff Gilchrist 2009-10-21 13:16

[QUOTE=frmky;193408]Shall I add you to the list?[/QUOTE]

Sure, I have some 8 core Core2 Xeons with 16GB RAM at my disposal.

jasonp 2009-10-21 14:14

Greg, it sounds like NFS@Home needs a separate BOINC server for postprocessing. Perhaps a David Hasselhoff version of the regular server :)

More seriously, now that there seem to be lots of resources for 'small huge' NFS problems, perhaps we need to start thinking about what to do for 'big huge' ones.

siew 2009-10-21 17:04

Please tell me, is it possible to set up my own BOINC server with NFS?

I installed the VMware BOINC server image from the official site, but is the server part available publicly, or do I have to write it myself?


Thanks!

frmky 2009-10-21 17:46

[QUOTE=jasonp;193450]Perhaps a David Hasselhoff version of the regular server :)

More seriously, now that there seem to be lots of resources for 'small huge' NFS problems, perhaps we need to start thinking about what to do for 'big huge' ones.[/QUOTE]

If only! :smile:

The 15e siever on numbers of this size uses about 400 MB of actual memory per core. The BOINC community views this as large but acceptable; they describe NFS@Home as one of the more "resource-intensive" projects. I don't think that they would find the memory use of the 16e siever acceptable. This is the only limitation preventing the BOINC community (currently 1650 active computers, and growing) from sieving record-size numbers.

yoyo 2009-10-21 18:21

If you assign the estimated memory usage to each workunit, then only hosts with that amount of free RAM get work. I sometimes have WUs which require up to 1800 MB of RAM; hosts with less free RAM do not get them. Honestly, some of the users consider it too much; I think they just disable this subproject on yoyo@home.

yoyo

frmky 2009-10-21 18:22

[QUOTE=siew;193467]Please tell, is it possible to set up my own BOINC server with NFS?

I installed the VMware BOINC server image from the official site, but is the server-side part available to the public, or do I have to write it myself?
[/QUOTE]

Setting up and maintaining a BOINC project is a lot of work! The volunteers are interested only if the project is smoothly run and well maintained, has plenty of work available, and is expected to last a long time. Meeting these criteria takes a lot of time; it is definitely not fire-and-forget! NFS is more difficult still, in that attracting a lot of volunteer sievers means you have a lot of postprocessing to do. You need to have computers available or, as I've found necessary while waiting for computers to be upgraded, find volunteers to help with that as well. If you want to devote the time and effort to set up and maintain another NFS BOINC project, I'll be happy to share the modifications to the GNFS lasieve source code.

em99010pepe 2009-10-21 20:52

Greg,

Can you increase the task limit from 120 to 200?

Thanks in advance.

frmky 2009-10-22 03:48

[QUOTE=yoyo;193481]If you assign the estimated memory usage to each workunit, then only hosts with that amount of free RAM get the work. I sometimes have WUs which require up to 1800 MB of RAM; hosts with less free RAM do not get them. Honestly, some of the users consider that too much; I think they just disable this sub-project on yoyo@home.[/QUOTE]
One thing I'm still not sure about: assume I have some tasks set to require 1800 MB. If someone has a 2 GB quad-core, does the computer not get the 1800 MB tasks at all, or does it get them and leave 3 cores idle while processing them? If the latter, I imagine that many participants would not be happy...

frmky 2009-10-22 03:49

[QUOTE=em99010pepe;193503]
Can you increase the task limit from 120 to 200?
[/QUOTE]
Yes, I can. But why should I? :smile:

bdodson 2009-10-22 05:09

[QUOTE=frmky;193536]Yes, I can. But why should I? :smile:[/QUOTE]

Our old xeon server has been getting 240 tasks. Perhaps this is a function of the number of processors (32), or of the rate at which tasks are being completed? For computers continuously online, there's some reason to keep the number of tasks somewhere within sight of when the project is completed (c. 1 week recently); no more than a day's worth, for example. One doesn't want completed tasks drifting in after postprocessing has started.

So far, I've only been running NFS@Home on computers that aren't on the main x86-64 condor server. Most of the fast hosts listed in the stats seem to be core i7's. A whole bunch of them. But there's a steep drop-off after the top 60 or so. I'm considering giving the new AMD a week off from heavy lifting; maybe a weekend for the new xeons, just to see how things look. -Bruce

frmky 2009-10-22 05:18

The limit is set to 30 WUs per core right now, so he's requesting that it be increased to 50 per core. BOINC typically doesn't download that much work, but the user can change the options to request more. 30 WUs per core is at least a half-day's work for fast computers. The deadline is fairly short at 3.5 days, so my concern is that at 50, if a user has asked for a lot of work, even moderate computers that are shut down at night might not be able to get through them all before the deadline. Do you think that this will not be an issue?
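
For reference, both limits live on the server. A hedged sketch, assuming the stock BOINC scheduler: the per-core cap is max_wus_in_progress in the project's config.xml, and the deadline is each workunit's delay bound in seconds (3.5 days = 302400 s), set when the workunit is created.

[code]<!-- config.xml: allow up to 50 tasks in progress per CPU core -->
<max_wus_in_progress> 50 </max_wus_in_progress>[/code]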

em99010pepe 2009-10-22 05:33

So how do I cache work for two days?

frmky 2009-10-22 05:38

[QUOTE=em99010pepe;193540]So how do I cache work for two days?[/QUOTE]

You need to? Don't all computers have always-on connections these days? :smile:

I'll try it. If I see a large increase in timed-out WUs, I'll dial it back. You should now be able to download 50 per core.
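
For the record, the cache size is a client-side preference. A minimal sketch of a global_prefs_override.xml, assuming the 6.x-era field names (the same settings appear in the client GUI as the connection interval and additional work buffer options):

[code]<!-- global_prefs_override.xml in the BOINC data directory -->
<global_preferences>
    <!-- aim to keep about two days of work queued -->
    <work_buf_min_days>2</work_buf_min_days>
    <work_buf_additional_days>0.5</work_buf_additional_days>
</global_preferences>[/code]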

em99010pepe 2009-10-22 06:08

I like to run for 2-3 days in a row and then dump the results.

yoyo 2009-10-22 17:59

[QUOTE=frmky;193535]One thing I'm still not sure about. Assume I have some tasks set to require 1800MB. If someone has a 2GB quad-core, does the computer not get the 1800MB tasks, or does is get them and leave 3 cores idle while processing them? If the latter, I imagine that many participants would not be happy...[/QUOTE]

It downloads tasks as long as there is 1800 MB of physical RAM free, and it starts tasks only while 1800 MB is free. So it can happen that only one of these tasks is running and the others are waiting. But if the user has other tasks too (from other projects, or with smaller RAM requirements), it runs those.
This problem is not only related to such big tasks; with a quad-core, 2 GB of RAM, and tasks that require 500 MB each, you have the same problem.
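
To make that concrete with the numbers in this thread (my arithmetic, and assuming the client may use 90% of RAM when the machine is idle, which is just one possible setting): 2048 MB x 0.9 is about 1843 MB usable, so a single 1800 MB task exhausts the budget and the other three cores wait unless smaller tasks are queued. That 90% knob is itself a client preference; a sketch with the 6.x-era field names:

[code]<!-- global_prefs_override.xml: fraction of total RAM BOINC may use -->
<ram_max_used_busy_frac>0.5</ram_max_used_busy_frac>
<ram_max_used_idle_frac>0.9</ram_max_used_idle_frac>[/code]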
yoyo

frmky 2009-10-22 18:24

[QUOTE=yoyo;193581]It downloads tasks as long as there is 1800 MB of physical RAM free, and it starts tasks only while 1800 MB is free. So it can happen that only one of these tasks is running and the others are waiting. But if the user has other tasks too (from other projects, or with smaller RAM requirements), it runs those.
This problem is not only related to such big tasks; with a quad-core, 2 GB of RAM, and tasks that require 500 MB each, you have the same problem.
yoyo[/QUOTE]

Actually the modified 16e sieve uses less memory than I remembered. It seems to top out around 1 GB. I'll just try it and see how it goes.

em99010pepe 2009-10-22 18:46

New option for the 16e sieve? Cool..lol

[code]Run only the selected applications
lasieved - only used for small numbers
lasievee - usual application, uses up to 0.5 GB memory
lasievef - for large factorizations, uses up to 1 GB memory [/code]But "22-10-2009 19:45:01 NFS@Home Message from server: No work is available for 16e Lattice Sieve." I'll wait.

frmky 2009-10-22 18:48

Yep, so they can be disabled if users don't want them. Hmmm... everything we do is "large," so I should change that to "huge."

em99010pepe 2009-10-22 18:51

But you haven't fed the server those new tasks yet, right?

EDIT: What about increasing the length of the tasks to one hour?

bdodson 2009-10-22 23:23

[QUOTE=em99010pepe;193591]New option for the 16e sieve? Cool..lol

[code]Run only the selected applications
lasieved - only used for small numbers
lasievee - usual application, uses up to 0.5 GB memory
lasievef - for large factorizations, uses up to 1 GB memory [/code]...[/QUOTE]

Ah; so the school computers with 1 GB/cpu (workstations) get labeled "school", with lasievef disabled, and the school computers with c. 2 GB/cpu (clusters) aren't given a location. Thanks. -bd

[B]Irony Department[/B]: So I was saying that I'd give the Opteron 8384 (Shanghai) and Xeon X5570 (Nehalem) a "week off from heavy lifting" by seeing how they'd run NFS@Home. The heavy lifting in question was a Dodson 80M-(a+r) reservation from 110M-190M for M941, and the break they're getting is running the NFS@Home M941 reservation! But they get credit in the stats; looks like _lots_ of credit. This is not a zero-sum for M941 though, as I'm compensating the Dodson reservation by shifting the small-memory Opterons over to 16e. They have 2 GB/node, with two cpus/node, but nobody's sitting at a console, and they're actually quicker than the tasks running 8 jobs/dual_quad.

Ah. That's except for the four xeons running the matrix for Batalov+Dodson's 11m233, which should start in the morning as the last of the small-memory Opteron jobs clear.

