mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Conjectures 'R Us (https://www.mersenneforum.org/forumdisplay.php?f=81)
-   -   Sieving drive Riesel base 6 n=1M-2M (https://www.mersenneforum.org/showthread.php?t=13567)

Lennart 2010-06-30 20:38

Reserving 45T-46T Lennart

Note: In ~3 hours I will upload a new file sieved to 40T.

Lennart 2010-06-30 22:51

Reserving 46T-70T Lennart

Lennart 2010-06-30 23:39

1 Attachment(s)
Here is the new sieve file, sieved to 40T.


Lennart

gd_barnes 2010-07-01 04:20

[quote=mdettweiler;220331]Ah, I think I see where the misunderstanding is coming in. The -t switch does not specify [I]which[/I] core that instance of sr*sieve should use, but rather [I]how many[/I] cores that instance should use. That is, if you use -t 2, then one instance of sr*sieve will fill up two cores (just like two instances without the -t switch).

On either Windows or Linux, one can still use the "old fashioned" method of dividing up p-ranges over multiple cores manually and running separate instances. However, this offers an alternative that automates that process somewhat.[/quote]

I'm still confused. So if you specify -t8 and there are no other processes running on the machine, one instance of either sr1sieve or sr2sieve will run 8 times as fast because it FILLS UP all 8 cores? That seems to be what you are implying but it seems incorrect to me.

I still don't get it. Don't you have to run 8 instances of srxsieve? Or can you just run one instance and have it run 8 times as fast by using the -t8 switch? I would test this myself except my i7 is busy with 3 things that I don't want to stop right now.

(Well, technically, it would run ~5-6 times as fast, since with hyperthreading the 8 logical cores are only equivalent to about 5-6 physical cores.)

mdettweiler 2010-07-01 04:46

[quote=gd_barnes;220361]I'm still confused. So if you specify -t8 and there are no other processes running on the machine, one instance of either sr1sieve or sr2sieve will run 8 times as fast because it FILLS UP all 8 cores? That seems to be what you are implying but it seems incorrect to me.

I still don't get it. Don't you have to run 8 instances of srxsieve? Or can you just run one instance and have it run 8 times as fast by using the -t8 switch? I would test this myself except my i7 is busy with 3 things that I don't want to stop right now.

(Well, technically, it would run ~5-6 times as fast, since with hyperthreading the 8 logical cores are only equivalent to about 5-6 physical cores.)[/quote]
You are correct--running with -t 8 fills up all 8 cores and utilizes them to run (theoretically--in real life there's a slight performance hit) 8 times as fast.

Normally, when you want to run a range of size x on y cores, you divide it into y chunks of size x/y and run one instance of sr*sieve on each core, each instance working on one of the smaller chunks. Each sr*sieve process can only use one core at a time, which is referred to as "single-threaded" in programming parlance.

The -t flag switches it into "multi-threaded" mode. With -t y, the program splits into y+1 threads: one coordinating thread and y workers. The coordinator takes a small chunk of work (say, a p-range of 32000) and divides it into y pieces of size 32000/y apiece. Each worker thread is given one of these; when it completes, it reports back to the coordinator with the factors it found. Once every worker has finished its piece, the coordinator divides up the next p=32000 chunk the same way and begins again, and so on until the overall p-range is done.

The end effect is that (for instance) when sr*sieve is run with -t 4 on a quad, it keeps all four cores busy, and runs about four times as fast as a "regular" single instance would. The small performance hit I mentioned before is the trade-off to be considered against the extra effort of dividing up p-ranges manually and running 4 separate single-threaded instances of sr*sieve.
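The coordinator/worker scheme described above can be sketched in a few lines of Python. This is only a toy illustration of the idea, not sr*sieve's actual code; sieve_chunk is a hypothetical stand-in for the real sieving work, and CHUNK mirrors the "p-range of 32000" example from the explanation:

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK = 32000  # size of each coordinator-level chunk of the overall p-range


def sieve_chunk(p_lo, p_hi):
    """Stand-in for the real sieve work on [p_lo, p_hi).

    A real sieve would return the factors it found; this toy version
    just reports the sub-range it was asked to cover.
    """
    return [(p_lo, p_hi)]


def sieve_range(p_min, p_max, workers):
    """Coordinator: hand out CHUNK-sized pieces, splitting each across workers."""
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for lo in range(p_min, p_max, CHUNK):
            hi = min(lo + CHUNK, p_max)
            # Divide this chunk into `workers` pieces (ceiling division).
            step = (hi - lo + workers - 1) // workers
            futures = [pool.submit(sieve_chunk, s, min(s + step, hi))
                       for s in range(lo, hi, step)]
            # Wait for every worker to report back before starting the next chunk.
            for f in futures:
                results.extend(f.result())
    return results
```

The pool's worker threads play the role of the y workers, and the loop body is the coordinator repeatedly carving up the next p=32000 chunk.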

Does that make sense now? :smile:

gd_barnes 2010-07-01 04:49

[quote=mdettweiler;220363]Does that make sense now? :smile:[/quote]

No, not at all. Please reword it completely.

:missingteeth::missingteeth:

Thanks for the detailed explanation. I will utilize it in the future if the performance hit isn't too large.

mdettweiler 2010-07-01 05:06

[quote=gd_barnes;220364]No, not at all. Please reword it completely.

:missingteeth::missingteeth:

Thanks for the detailed explanation. I will utilize it in the future if the performance hit isn't too large.[/quote]
Note that this only works under Linux, because Geoff (the guy who wrote sr*sieve) hasn't yet been able to get multithreading to work on Windows. Unfortunately, IIRC he's essentially put sr*sieve development on hold lately for lack of time--so this may not be remedied any time soon.

Of late, a possible replacement for srsieve has emerged in the form of ppsieve, which is based on tpsieve (a twin-prime sieve used over at TPS that's the current state of the art, and which is in turn based on sr1sieve). Currently it only supports k*2^n+-1 (so only base 2 and power-of-2 bases), but I believe it supports multithreading on both Windows and Linux. I'm not sure how it compares to sr2sieve speed-wise, but I believe it's at least about as fast. There's also a GPU version in beta that PrimeGrid has used to great effect with their Proth Prime Search recently. (I mentioned this to you before in an email when we were discussing GPUs--this would be useful at NPLB but of limited use for CRUS.)

Lennart 2010-07-01 20:53

1 Attachment(s)
Here are all factors for 45T-70T

Lennart

gd_barnes 2010-07-14 10:39

Chris,

Can you post your factors for P=40T-45T? I'll then remove all factors to P=70T from the file. Thanks.

Flatlander 2010-07-14 11:26

1 Attachment(s)
Sorry, I thought I had.

" Just post the file here in this thread or if it is too big..." lol

gd_barnes 2010-07-15 05:47

All factors to P=70T have now been removed from the sieve file in the 1st post.

