mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   PrimeNet (https://www.mersenneforum.org/forumdisplay.php?f=11)
-   -   P-1 factoring anyone? (https://www.mersenneforum.org/showthread.php?t=11101)

lycorn 2011-06-14 22:53

[QUOTE=lorgix;263609]Stage 2 can be done in parallel, but it requires the save file from stage 1.[/QUOTE]
Not quite right.
What is actually parallelized is the FFT algorithm. Both Stage 1 and 2 use FFTs to perform the calculations, so both Stages might benefit from the FFT parallelization (multi-threading). What can't be parallelized, IIRC, are the GCD computations performed at the end of each Stage.

xilman 2011-06-15 08:18

[QUOTE=lycorn;263805]Not quite right.
What is actually parallelized is the FFT algorithm. Both Stage 1 and 2 use FFTs to perform the calculations, so both Stages might benefit from the FFT parallelization (multi-threading). What can't be parallelized, IIRC, are the GCD computations performed at the end of each Stage.[/QUOTE]Not quite right. :wink:

Stage 2 could, [i]in principle,[/i] be parallelized by running disjoint ranges of B2 between B1 and B2_max on separate threads, each thread performing its own GCD at the end of the range.

Whether anyone has coded for this approach is another matter entirely.

Paul

Mr. P-1 2011-06-15 13:08

[QUOTE=xilman;263827]Stage 2 could, [i]in principle,[/i] be parallelized by running disjoint ranges of B2 between B1 and B2_max on separate threads, each thread performing its own GCD at the end of the range.[/QUOTE]

Or - again in principle - the result of each of the parallel computations could be multiplied together, allowing just a single GCD at the end.
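The combining trick is easy to see in a toy implementation. Below is a minimal, textbook stage 1 sketch (nothing like Prime95's FFT-based code; the function names and toy bound are mine): all the prime-power exponentiations accumulate into one residue, so a single GCD at the very end suffices, and residues from parallel workers could likewise be multiplied together mod N before that one GCD. (Real implementations for M_p also fold p itself into the exponent, since every factor is ≡ 1 mod 2p.)

```python
from math import gcd

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i, flag in enumerate(sieve) if flag]

def pminus1_stage1(N, B1, base=2):
    """Textbook P-1 stage 1: raise `base` to every prime power <= B1
    modulo N, then take one GCD at the very end."""
    a = base
    for q in primes_up_to(B1):
        e = q
        while e * q <= B1:   # highest power of q not exceeding B1
            e *= q
        a = pow(a, e, N)
    return gcd(a - 1, N)

# Toy example: N = 91 = 7 * 13, and 7 - 1 = 6 is 3-smooth,
# so B1 = 3 already reveals the factor 7.
print(pminus1_stage1(91, 3))  # 7
```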

lorgix 2011-06-15 17:19

From whatsnew.txt;

[CODE]2) Starting in build 2, P-1 work will display the chance of finding a factor.
The worktodo.txt line must include how_far_factored using the new syntax:
Pminus1=k,b,n,c,B1,[COLOR=Red][B]B2[/B][/COLOR][,how_far_factored][COLOR=Red][B][,B2_start][/B][/COLOR][,"factors"][/CODE]That's what I was thinking about.
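Read literally, that new B2_start field is exactly what would let stage 2 be split by range, as suggested above: two workers covering disjoint [B1, B2] intervals of the same exponent. A purely illustrative worktodo.txt sketch (all values made up for illustration; I haven't verified that Prime95 accepts such a split):

[CODE]Pminus1=1,2,50412209,-1,590000,7670000,69
Pminus1=1,2,50412209,-1,590000,15340000,69,7670000[/CODE]The first line would cover stage 2 primes from B1 up to 7,670,000, the second from 7,670,000 to 15,340,000, each ending with its own GCD.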

lycorn 2011-06-16 13:23

[QUOTE=xilman;263827]Not quite right. :wink:
[/QUOTE]

Fair enough :bow:.
I took lorgix's post to mean [U]only[/U] Stage 2 could be parallelized.

James Heinrich 2011-06-24 23:05

Any idea what Axon is doing, with 100% success on 2208 P-1s?
[url]http://v5www.mersenne.org/report_top_500_P-1/[/url]

lycorn 2011-06-24 23:30

Yep.

[URL]http://www.mersenneforum.org/showthread.php?t=15690[/URL]

Christenson 2011-06-25 03:48

Big case of the same problem I have with TF: the server assumes factors found are from P-1!

petrw1 2011-07-14 22:30

Here I come...anyone wanna join me???
 
I have started to move PCs over to P-1 with the goal of having at least 20 cores and sustaining a rate of 10 completions per day by early August.
:batalov:

I will have 3 quads with all 4 cores doing P-1
Several Dual Cores - the newer ones with both cores assigned and the older slower ones with 1 core P-1 and 1 core TF
and 1 PIV.

I believe with "Memory=" now working at the Worker level
AND
the ability to set HighMemWorkers by time of day

I should be able to balance the work so even the quads can have decent throughput with all 4 cores doing P-1, limiting them to 2 cores in Stage 2 most of the time, with 3 cores only as needed to keep up.

I'll let the masses know how well this plan works, but as long as the throughput is decent I plan to keep it this way at least for the remainder of 2011.

Christenson 2011-07-15 00:15

Petrw1, you are pulling a judo throw on me...following me, then going further...since I'm doing LL and CUDA TF, and it sounds like you may not be, but I'm doing as much P-1 as my memory will support.

Check out my P-1 numbers....most of those factors are P-1 factors, just a few TF...

davieddy 2011-07-15 22:20

I would like to see a list of P-1 factors found for exponents 40-60M,
or more concisely, the frequency f(x) of them exceeding the TF
bit limit by x bits.

Any ideas?

David

Mr. P-1 2011-07-15 22:54

[QUOTE=davieddy;266542]I would like to see a list of P-1 factors found for exponents 40-60M,
or more concisely, the frequency f(x) of them exceeding the TF
bit limit by x bits.

Any ideas?

David[/QUOTE]

The [url=http://mersenne.org/report_factors/]known factors[/url] report has an option to specify a minimum factor depth. Given a list of factors exceeding the TF bit limit, you could calculate the smoothness of k for each one, to determine whether it could plausibly have been found with P-1.
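To make "plausibly found with P-1" concrete: every factor f of M_p has the form 2kp+1, so P-1 finds f exactly when k is B1-smooth, give or take one extra prime up to B2 caught in stage 2. A rough self-contained sketch (function names and bounds are my own; trial division is plenty fast for k-sized numbers):

```python
def factorize(n):
    """Trial-division factorization; adequate for k-sized numbers."""
    fs = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            fs[d] = fs.get(d, 0) + 1
            n //= d
        d += 1 if d == 2 else 2
    if n > 1:
        fs[n] = fs.get(n, 0) + 1
    return fs

def p1_could_find(f, p, B1, B2):
    """Could P-1 with bounds (B1, B2) plausibly have found factor f of M_p?
    Since f - 1 = 2*k*p, k must be B1-smooth except for at most one
    prime in (B1, B2], and that prime must appear to the first power."""
    k, r = divmod(f - 1, 2 * p)
    if r != 0:
        return False  # not of the 2*k*p + 1 form, so not a factor of M_p
    fs = factorize(k)
    big = [q for q in fs if q > B1]
    if not big:
        return True   # stage 1 alone would do it
    return len(big) == 1 and big[0] <= B2 and fs[big[0]] == 1

# Real example: 193707721 divides M67 (Cole's famous factorization),
# and k = 1445580 = 2^2 * 3^3 * 5 * 2677.
print(p1_could_find(193707721, 67, 1000, 5000))  # True  (2677 caught in stage 2)
print(p1_could_find(193707721, 67, 1000, 2000))  # False (2677 > B2)
```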

James Heinrich 2011-07-15 23:00

1 Attachment(s)
This is rough, but first column shows exponent range (millions), second column is number of known factors small enough to have been found by TF, third column is number of known factors too big to have been found by normal TF limits:[code]40 43736 868
41 43048 911
42 43283 909
43 42964 924
44 43417 976
45 43332 1011
46 42886 1012
47 42966 982
48 42640 997
49 43012 903
50 43234 1082
51 43536 1258
52 42829 676
53 42707 1547
54 42758 396
55 42702 378
56 42378 247
57 42184 286
58 42003 275
59 41965 163[/code]It's rough because the cutoff changes from 2^68 to 2^69 around M49M, and TF limits have changed over the years, but it should give you some idea.

edit: attached is a more detailed breakdown of the number of factors of each bit depth for each of the 20 exponent ranges.

davieddy 2011-07-16 02:31

Many thanks for the data:

[url=http://mersenneforum.org/showthread.php?p=266558#post266558]Here are my deductions therefrom[/url]

David

petrw1 2011-07-21 05:17

[QUOTE=petrw1;261083]Before George decided to add another TF bit in the 53-59M range the P-1'ers had almost 18,000 exponents ready for LL in the 53M range. We should see a similar number back in the LL Available column in a few days once the 53M TF is done that extra bit.

Then we can watch if the LL available number grows or drops in the coming months as an indication of whether P-1 is keeping up or not.[/QUOTE]

Well, 2 months and 10 days later, the extra (now 2) bits of TF are past 53M ... and in the 53-55M range where most P-1 are being assigned currently we are down to ONLY about 9,700 exponents ready for LL .... DEFINITELY NOT KEEPING UP.

As I suggested a couple weeks ago I am a few weeks away from attempting to run enough cores on P-1 to sustain a rate of 10 completions per day for the remainder of 2011 ... ANYONE ELSE?

davieddy 2011-07-21 08:38

[QUOTE=petrw1;267075]Well, 2 months and 10 days later, the extra (now 2) bits of TF are past 53M ... and in the 53-55M range where most P-1 are being assigned currently we are down to ONLY about 9,700 exponents ready for LL .... DEFINITELY NOT KEEPING UP.

As I suggested a couple weeks ago I am a few weeks away from attempting to run enough cores on P-1 to sustain a rate of 10 completions per day for the remainder of 2011 ... ANYONE ELSE?[/QUOTE]

You, I and others (eg Jacob and those magnificent gerbils on
their flying machine) are on the same wavelength here.

We hope people embark on [B]and complete[/B] LL tests as frequently as possible.
Having to start with P-1 may constitute a deterrent: nuisance value,
memory, not appreciating its value, etc., but at least the CPU LL tester
can profitably do it. This doesn't apply to TFing to the "Mad Max" anymore
since GPUs have made TFing their sole preserve.

I requested that a mod move a few (at the time) posts to this thread.
Instead, George (I guess) started a new thread entitled
"TF on GPUs and its impact on P-1", and placed it in the lounge
despite my opinion that the subject was a bit deeper than "coffee talk".
If you haven't done so already, I think that if you peruse it, you will
spot the relevance to this post and this thread in general.

David

Christenson 2011-07-21 11:15

I did tell another core to do P-1 this week...but it's not going to do even 1 per day.

Is it better to do 2 cores of P-1 with relatively limited (500M) memory per core, or 1 core with less limited (1000M) memory per core? If the latter is the case, I can squeeze a bit more P-1 out of my 7 machines.

From my individual searcher perspective, P-1 has been quite profitable...15 or 20 factors found, that's more than I'd have been able to do on LL tests in the same time.

davieddy 2011-07-21 11:53

[QUOTE=Christenson;267107]I did tell another core to do P-1 this week...but it's not going to do even 1 per day.

Is it better to do 2 cores of P-1 with relatively limited (500M) memory per core, or 1 core with less limited (1000M) memory per core? If the latter is the case, I can squeeze a bit more P-1 out of my 7 machines.

From my individual searcher perspective, P-1 has been quite profitable...15 or 20 factors found, that's more than I'd have been able to do on LL tests in the same time.[/QUOTE]

Unlike TF, I think one needs to get P-1 bounds right (i.e. optimized
with LL in mind) first time. Otherwise, you either have to live with it
or start from scratch.

May I politely remind you that LL tests never find a factor?*

:smile:

David

*Thanks to Sod's Law, they invariably take a long time
to confirm what we already knew - it's composite.
Can't think why we bother really.

James Heinrich 2011-07-21 12:57

I'm running P-1 on 7 of 8 cores (leaving 1 for mfaktc) across my two machines.
However, my focus is on the "missed" exponents, which could be anything from 50M exponents that have had 1 L-L but no P-1 at all, to [url=http://v5www.mersenne.org/report_exponent/?exp_lo=100000049]large[/url] or [url=http://v5www.mersenne.org/report_exponent/?exp_lo=10106741]small[/url] exponents that are flagged as having P-1 done, but done very poorly (e.g. B1=B2=50000 on a 100M exponent).

[QUOTE=Christenson;267107]Is it better to do 2 cores of P-1 with relatively limited (500M) memory per core or 1 core with less limited (1000M) memory per core?[/QUOTE]More memory will (slightly) increase the selected bounds, which will (very slightly) increase the chance of finding a factor, and will also (slightly) increase the runtime for P-1 on that exponent. Adding a second core basically doubles your throughput. And under normal circumstances the two workers should be able to alternate the 1000MB much of the time (if one is in stage 1 and the other in stage 2, the stage 2 one can grab all 1000MB; it's only when they're both in stage 2 that they'll get 500MB each). Making up some numbers (for illustrative purposes only):
2 cores fixed at 500MB each: 2x 5.0% chance over 3.2 GHz-days
2 cores sharing 1000MB: 2x 5.1% chance over 3.4 GHz-days
1 core fixed at 1000MB: 1x 5.1% chance over 1.7 GHz-days

Christenson 2011-07-22 01:03

So I will ask another core here to do P-1...hope it helps....

Mr. P-1 2011-07-26 21:19

[QUOTE=James Heinrich;267118]And under normal circumstances the two workers should be able to alternate the 1000MB much of the time (if one is in stage1, and the other in stage2, the stage2 one can grab all 1000MB; it's only when they're both in stage2 that they'll get 500MB/ea).[/QUOTE]

Use MaxHighMemWorkers=1 to ensure that both cores aren't ever in stage 2 at the same time. If you find yourself accumulating a backlog of uncompleted stage 2 work, either take out this directive until your backlog is clear, or run a second instance of the client. What I do in practice is use the second instance method whenever I know I'm going to be away from the computer for a while. I don't touch the (already optimised) memory settings of the primary instance, but shut down the graphics subsystem (I have a Linux box; Windows users can't do this) to increase the free memory available for the second instance.
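For anyone following along, the relevant settings live in local.txt and might look something like this (values illustrative, not recommendations; I'm recalling the time-of-day syntax from Prime95's documentation, so double-check it against your version):

[CODE]Memory=800 during 7:30-23:30 else 1600
MaxHighMemWorkers=1[/CODE]The first line caps memory during the day and raises it overnight; the second enforces the one-stage-2-at-a-time advice above.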

Provided both cores are getting an adequate amount (finger-in-the-air guesstimate, about 300MB each), contention for memory space is not the major issue when running multiple stage 2s in parallel; contention for memory bandwidth is*. When I do the above, even without changing the primary instance's memory settings, I find it runs significantly slower. So I have another trick to reduce the need for parallel stage 2s: when running with just one core in stage 2, I run it at high priority. This works for me because I use my computer for other things that consume a fair amount of processing (mostly watching streaming TV), but not so much that the experience is degraded significantly by restricting these processes to just the one core. If your computer usage is different, this trick may be infeasible or ineffective.

*At least, this is true on my venerable Core 2 Duo box. This may or may not be an issue with other processor/chipset/memory setups.

S34960zz 2011-08-02 23:35

[QUOTE=James Heinrich;153813]As an illustration, P-1 on 100,000,000-digit numbers (e.g. M332203901 that I'm working on) takes 838MB, plus 163MB for each relative prime it processes in each batch. So, to process 10 relative primes you need >=2468MB available.

Fortunately, only crazy people like myself are working on P-1 this far up the available-work spectrum right now; by the time it's mainstream these hardware requirements will seem trivial :smile:[/QUOTE]

[QUOTE=James Heinrich;154297]After a couple more P-1 observations, it seems that you need approximately:
* FFTsize * 50 base memory, plus:
* FFTsize * 8 memory per relative prime processed

So (for examples) to process 20 relative primes you'd need, depending on the size of the exponent tested:[code]
* M 5,000,000 [256K FFT] = (0.25 * 50) + (0.25 * 8 * 20) = 53MB
* M 25,000,000 [1.5M FFT] = (1.5 * 50) + (1.5 * 8 * 20) = 315MB
* M 50,000,000 [ 3M FFT] = (3 * 50) + (3 * 8 * 20) = 630MB
* M100,000,000 [ 6M FFT] = (6 * 50) + (6 * 8 * 20) = 1260MB
* M333,000,000 [ 20M FFT] = (20 * 50) + (20 * 8 * 20) = 4200MB
* M500,000,000 [ 28M FFT] = (28 * 50) + (28 * 8 * 20) = 5880MB
[/code]I make no claim to the exactitude of these numbers, nor is there anything magical about processing 20 relative primes; I simply provide the above as a very rough guide to the approximate magnitude of RAM that is (or possibly should be) involved in P-1'ing a particular size of exponent.[/QUOTE]
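Expressed as a tiny helper (same made-up constants as the quoted rule of thumb, rounded up like the table above, so treat the results as rough orders of magnitude only):

```python
from math import ceil

def p1_stage2_ram_mb(fft_size_mb, rel_primes):
    """Rough stage-2 RAM estimate per the quoted rule of thumb:
    FFTsize * 50 base memory, plus FFTsize * 8 per relative prime."""
    return ceil(fft_size_mb * (50 + 8 * rel_primes))

# Reproduces the quoted table for 20 relative primes:
for fft in (0.25, 1.5, 3, 6, 20, 28):
    print(fft, p1_stage2_ram_mb(fft, 20))  # 53, 315, 630, 1260, 4200, 5880
```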

So, what is the current upper limit for P-1 exponents, and corresponding RAM requirements? I think I stepped above it with an M667,xxx,xxx exponent (similar error for both Prime95 x64 version 26.5b5 and 26.6b3).

PFactor=xxxxxxxxxxx,1,2,667xxxxxx,-1,80,2

Win7-x64 Pro, i7-840QM, 16GB RAM, Prime95 x64 Version 26.6b3

[code]
[Main thread Jul 31 22:36] Mersenne number primality test program version 26.6
[Main thread Jul 31 22:36:55] Optimizing for CPU architecture: Core i3/i5/i7, L2 cache size: 256 KB, L3 cache size: 8 MB
[Main thread Jul 31 22:36:56] Logical CPUs 1,2 form one physical CPU.
[Main thread Jul 31 22:36:56] Logical CPUs 3,4 form one physical CPU.
[Main thread Jul 31 22:36:56] Logical CPUs 5,6 form one physical CPU.
[Main thread Jul 31 22:36:56] Logical CPUs 7,8 form one physical CPU.
[Main thread Jul 31 22:36:56] Using AffinityScramble2 settings to set affinity mask.
[Main thread Jul 31 22:36:56] Starting workers.

[Jul 31 22:36:56] Worker starting
[Jul 31 22:36:56] Setting affinity to run worker on logical CPU #1
[Jul 31 22:36:56] Optimal P-1 factoring of M667xxxxxx using up to 14336MB of memory.
[Jul 31 22:36:56] Assuming no factors below 2^80 and 2 primality tests saved if a factor is found.
[Jul 31 22:37:04] Optimal bounds are B1=6255000, B2=190777500
[Jul 31 22:37:04] Chance of finding a factor is an estimated 5.76%
[Jul 31 22:37:04] Cannot initialize FFT code, errcode=1002
[Jul 31 22:37:04] Worker stopped.
[/code]

James Heinrich 2011-08-03 02:41

[QUOTE=S34960zz;268146]So, what is the current upper limit for P-1 exponents, and corresponding RAM requirements? I think I stepped above it with an M667,xxx,xxx exponent[/QUOTE]You did. The upper limit is an exponent <596,000,000, which is the upper limit for the 32M FFT, the largest currently in Prime95 code. That means that [url=http://v5www.mersenne.org/report_exponent/?exp_lo=595999000&exp_hi=596000000]M595999993[/url] is the largest that can be P-1'd or L-L'd (or ECM'd, I suppose). Trial factoring is not constrained by this limit, and mfaktc (I'm not sure about Prime95) will do TF on exponents up to 2^32 (i.e. Mersenne numbers up to about 2^2^32), but PrimeNet doesn't track any exponents above M999,999,999.

But if it hypothetically did work, it would look something like this:[quote]M667,000,000, factored to 80 bits, with B1=4,280,643 and B2=102,735,432
Probability = 5.000000%
Should take about 385.749074 GHz-days (using FFT size 36,672K)
Recommended RAM allocation: min=5,928MB; good=17,663MB; max=144,400MB; insane=707,674MB;[/quote]

petrw1 2011-08-19 03:09

[QUOTE=petrw1;266424]I have started to move PCs over to P-1 with the goal of having at least 20 cores and sustaining a rate of 10 completions per day by early August.

I'll let the masses know how well this plan works but as long as the thruput is decent I plan to keep it this way at least for the remainder of 2011.[/QUOTE]

First week update: Full complement of cores in place as of evening of August 11:

76 P-1 completions (6 factors) in 7 days on 24 cores: 10.86 per day

However, sadly with today being my official last day of work it will only be a matter of time before I lose at least 1 dual-core and 2 more that I borged.

Christenson 2011-08-19 03:49

[QUOTE=petrw1;269476]First week update: Full complement of cores in place as of evening of August 11:

76 P-1 completions (6 factors) in 7 days on 24 cores: 10.86 per day

However, sadly with today being my official last day of work it will only be a matter of time before I lose at least 1 dual-core and 2 more that I borged.[/QUOTE]
It's still a dozen LL tests saved in that time....nice job !

petrw1 2011-08-26 03:53

Two week update:

168 P-1 completions (10 factors) in 14 days on 24 cores: 12 per day.

With a max of 17 on August 19th

KingKurly 2011-08-26 04:03

[QUOTE=petrw1;270123]Two week update:

168 P-1 completions (10 factors) in 14 days on 24 cores: 12 per day.

With a max of 17 on August 19th[/QUOTE]
Congrats, sounds like a great machine. What size are the exponents you are testing? How many GHz-days per day does the machine crank out?


On a side note, if you are interested in some Ruby code that will magically find the k-values for your factors, see [url]https://github.com/gkubaryk/mersenne/[/url]

Right now, you have to manually scrape results from mersenne.org/results and paste them into full.tsv. My factors are in the pre-existing full.tsv; many are P-1, but some are TF or ECM.

I hope somebody finds the code useful for something. Feel free to improve upon it or suggest ideas on how it could be improved.
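For the curious, recovering k from a reported factor is one line of arithmetic, since every factor f of M_p has the form 2kp+1; here's a Python equivalent of what I'd guess the Ruby script is doing:

```python
def k_value(f, p):
    """Every factor f of M_p satisfies f = 2*k*p + 1; recover k."""
    k, r = divmod(f - 1, 2 * p)
    if r:
        raise ValueError(f"{f} is not of the form 2*k*p + 1 for p = {p}")
    return k

# M11 = 2047 = 23 * 89:
print(k_value(23, 11), k_value(89, 11))  # 1 4
```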

James Heinrich 2011-08-26 11:43

[QUOTE=KingKurly;270125]I hope somebody finds the code useful for something.[/QUOTE]It will be tied in with my site, as soon as I can get my server recompiled with a newer version of PHP and supporting gmp (arbitrary-precision math).

ET_ 2011-08-26 12:16

[QUOTE=James Heinrich;270136]It will be tied in with my site, as soon as I can get my server recompiled with a newer version of PHP and supporting gmp (arbitrary-precision math).[/QUOTE]

If you need it, I wrote a bare-bones Mersenne factoring applet in PHP using GMP.

Have a look [URL="http://www.moregimps.it/mersenne-test"]here[/URL]. :smile:

Luigi

James Heinrich 2011-08-26 13:18

[QUOTE=ET_;270140]If you need it, I wrote a bare-bones Mersenne factoring applet in PHP using GMP.[/QUOTE]I have also written some code that works with either gmp or bcmath (whichever is available). It works fine running locally, but my production server doesn't support it. :no:

[b]edit:[/b] they lied to me! It doesn't support bcmath, but it [i]does[/i] support gmp! :w00t:
(new code, coming right up...)

Rodrigo 2011-08-26 16:07

[QUOTE=ET_;270140]If you need it, I wrote a bare-bones Mersenne factoring applet in PHP using GMP.

Have a look [URL="http://www.moregimps.it/mersenne-test"]here[/URL]. :smile:

Luigi[/QUOTE]
Luigi,

How does one read that table? It says that the average factor bit depth is 46.2003, but all of the exponents listed seem to have been factored to at least 61 bits.

Also, the ordinal series skips numbers (1,3,4,8,11,...). I take it that the skipped numbers are the ones for which factors have been found?

Maybe the answer to the first question has to do with the answer to the second question (that factors were found at low bit levels)?

Rodrigo

cheesehead 2011-08-26 19:37

[QUOTE=Rodrigo;270162]Luigi,

How does one read that table? It says that the average factor bit depth is 46.2003,[/QUOTE]As I understand it, this would less confusingly (to the uninitiated) be labeled "Average bit-length (base 2 logarithm) of factors found so far".

Thus, this average refers to factors that have been found.

[quote]but all of the exponents listed seem to have been factored to at least 61 bits.[/quote]"to have been factored to at least 61 bits" is our common slang for "to have been searched (so far, unsuccessfully) for factors of length up to at least 61 bits".

Thus, this average refers to the extent to which factor-searches have so far proceeded [I]unsuccessfully[/I].

Comparing the first average to the second means nothing except that we tend to stop searching for factors of a particular Mersenne number once we find one. Given this tendency, one naturally expects the first average to be smaller than the second, but not much else can be derived from the comparison.

[quote]Also, the ordinal series skips numbers (1,3,4,8,11,...). I take it that the skipped numbers are the ones for which factors have been found?[/quote]Yes. Cryptic, isn't it? :-)

[quote]Maybe the answer to the first question has to do with the answer to the second question (that factors were found at low bit levels)?[/quote]... only because we search at low bit levels before we search at high bit levels. If we, instead, started all our searches at the high bit levels and proceeded downwards, then not only would we be less efficient and have much slower success, but also the first average would exceed the second one. :-)
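Put concretely (Python, using a factor reported earlier in this thread): the "bit depth" of a factor is just its base-2 logarithm.

```python
from math import log2

f = 1401953857640293957991  # P-1 factor of M57670663, posted above
print(f.bit_length())       # 71: the factor lies between 2^70 and 2^71
print(round(log2(f), 2))    # 70.25
```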

Rodrigo 2011-08-26 20:57

Thanks, cheesehead, that's basically what I thought. What you wrote tells me that I'm [B]starting[/B] to "get" this. :smile:

A little more expansive header for "Average factor bit depth" as you suggest would do the trick, although maybe the part about base 2 logarithm wouldn't be needed. On my end, though, it would also help not to sneak a rushed peek at these things when I'm up against a deadline... :blush:

Rodrigo

Xyzzy 2011-08-26 21:50

[QUOTE]168 P-1 completions (10 factors) in 14 days on 24 cores: 12 per day.[/QUOTE]We cannot match those results, but we are having fun with our single dedicated P-1 quad. 30 P-1 completions (3 factors) in ~22 days on 4 cores: 1.36 per day.

[CODE]50412209 2011-08-26 21:12 B1=590000, B2=15340000 3.7019
51113527 2011-08-26 07:26 B1=595000, B2=15618750 3.7548
58287167 2011-08-25 12:28 B1=695000, B2=20850000 5.5105
56513189 2011-08-24 16:06 B1=670000, B2=18425000 5.0319
57924211 2011-08-23 23:05 B1=695000, B2=20676250 5.4815
57766439 2011-08-23 04:49 B1=690000, B2=20700000 5.4709
57670663 2011-08-22 10:08 1401953857640293957991 2.1205
56668487 2011-08-22 04:04 B1=675000, B2=19912500 5.2955
57320369 2011-08-21 10:08 B1=685000, B2=20378750 5.4026
57260837 2011-08-20 15:38 B1=685000, B2=20378750 5.4026
52411727 2011-08-19 09:50 B1=615000, B2=16605000 4.1285
56843989 2011-08-18 16:26 B1=670000, B2=18592500 5.0599
57231017 2011-08-17 22:37 69911626448940121794289287223 5.4026
52197337 2011-08-17 03:48 B1=610000, B2=16470000 4.0949
56226403 2011-08-16 10:17 B1=665000, B2=18287500 4.9943
56759347 2011-08-15 16:38 B1=670000, B2=18592500 5.0599
50604727 2011-08-14 21:01 B1=590000, B2=15340000 3.7019
55051891 2011-08-14 06:53 B1=655000, B2=17685000 4.5345
56280281 2011-08-13 12:58 B1=665000, B2=18287500 4.9943
50106383 2011-08-12 19:11 B1=585000, B2=15063750 3.6494
52065053 2011-08-12 04:46 B1=610000, B2=16470000 4.0949
54782447 2011-08-11 11:28 B1=655000, B2=17521250 4.5089
56668663 2011-08-10 17:56 B1=675000, B2=19912500 5.2955
56325371 2011-08-09 23:25 B1=670000, B2=19765000 5.2562
52075477 2011-08-09 05:19 B1=610000, B2=16470000 4.0949
56226479 2011-08-08 11:48 B1=665000, B2=18287500 4.9943
50205761 2011-08-07 18:42 B1=585000, B2=15063750 3.6494
52085237 2011-08-07 04:29 B1=610000, B2=16470000 4.0949
54673813 2011-08-06 09:51 B1=645000, B2=17576250 4.4904
50241299 2011-08-05 15:51 2801407650254375710199 3.6705[/CODE]

drh 2011-08-27 00:19

[QUOTE=petrw1;270123]
168 P-1 completions (10 factors) in 14 days on 24 cores: 12 per day.

[/QUOTE]

I'm not quite there yet either ... but

57303131 2011-08-26 12:06 B1=685000, B2=20378750 5.4026
57268621 2011-08-26 10:24 B1=685000, B2=20378750 5.4026
52411783 2011-08-25 18:14 B1=615000, B2=16605000 4.1285
55760819 2011-08-25 17:35 B1=665000, B2=19617500 5.2170
57322553 2011-08-25 12:58 B1=685000, B2=20378750 5.4026
55620469 2011-08-24 20:28 B1=665000, B2=19617500 5.2170
57302939 2011-08-24 13:47 B1=685000, B2=20378750 5.4026
57231001 2011-08-24 00:37 B1=685000, B2=20378750 5.4026
55356113 2011-08-23 22:04 B1=655000, B2=17357500 4.8095
56552473 2011-08-23 20:32 B1=675000, B2=19406250 5.2107
56816653 2011-08-23 20:06 B1=680000, B2=20060000 5.3347
56244593 2011-08-23 14:39 B1=665000, B2=18287500 4.9943
54410173 2011-08-23 08:37 B1=640000, B2=17440000 4.4556
56838739 2011-08-22 17:05 B1=670000, B2=18592500 5.0599
50684063 2011-08-22 10:12 B1=590000, B2=15340000 3.7019
55447831 2011-08-21 19:03 B1=655000, B2=17848750 4.8918
56355113 2011-08-21 15:04 B1=670000, B2=18257500 5.0038
50132491 2011-08-21 07:18 B1=585000, B2=15063750 3.6494
50619223 2011-08-21 06:32 B1=590000, B2=15340000 3.7019
56794643 2011-08-20 21:53 B1=670000, B2=18592500 5.0599
52341683 2011-08-20 17:52 B1=615000, B2=16451250 4.1052
56226553 2011-08-20 14:51 134624114590567994209661373751147664039 4.8830
50447123 2011-08-20 07:05 B1=585000, B2=14917500 3.6282
54571843 2011-08-20 00:05 B1=655000, B2=19158750 4.7645
55708327 2011-08-19 14:29 B1=660000, B2=17820000 4.9015
55391317 2011-08-19 12:23 B1=660000, B2=19470000 5.1778
55748309 2011-08-19 08:42 B1=660000, B2=17985000 4.9291
51873079 2011-08-19 02:14 B1=610000, B2=16470000 4.0949
50051549 2011-08-18 18:43 B1=580000, B2=15080000 3.6392
52892963 2011-08-18 06:23 B1=620000, B2=16740000 4.1620
50260013 2011-08-17 20:21 B1=585000, B2=14917500 3.6282
55826467 2011-08-17 10:28 B1=665000, B2=18121250 4.9665
56658809 2011-08-17 08:12 B1=675000, B2=19912500 5.2955
50262343 2011-08-17 07:29 B1=585000, B2=14917500 3.6282
56410129 2011-08-17 03:10 B1=675000, B2=19912500 5.2955
56368709 2011-08-17 02:09 B1=670000, B2=19765000 5.2562
56902543 2011-08-17 00:43 B1=680000, B2=20060000 5.3347
50506613 2011-08-16 13:04 B1=590000, B2=15340000 3.7019
50330909 2011-08-15 19:49 B1=585000, B2=15210000 3.6705
50557181 2011-08-15 11:28 B1=585000, B2=15063750 3.6494
56764087 2011-08-15 02:41 B1=675000, B2=19912500 5.2955
50402239 2011-08-14 23:13 B1=585000, B2=14917500 3.6282
50447263 2011-08-14 21:41 B1=590000, B2=15340000 3.7019
51070507 2011-08-14 17:54 B1=595000, B2=15618750 3.7548
52055723 2011-08-14 13:50 B1=610000, B2=16470000 4.0949
56179787 2011-08-14 11:51 191038474920409465670953 4.9943
56325989 2011-08-14 03:54 B1=670000, B2=19765000 5.2562
54174737 2011-08-13 22:33 B1=575000, B2=13081250 3.5992
55724819 2011-08-13 04:37 B1=660000, B2=17985000 4.9291
55086649 2011-08-13 04:24 B1=655000, B2=17685000 4.8644
55103911 2011-08-13 02:22 B1=660000, B2=18810000 5.0673
52895473 2011-08-13 00:37 B1=300000, B2=6450000 1.7642
52006937 2011-08-12 14:50 B1=605000, B2=15578750 3.9469
55979857 2011-08-12 14:50 B1=670000, B2=19765000 5.2562
60098641 2011-08-12 07:19 B1=710000, B2=20057500 5.6826
56111983 2011-08-12 04:10 392841669521534965682663 2.0590
54673789 2011-08-12 03:42 B1=645000, B2=17576250 4.4904
55761931 2011-08-11 07:48 B1=660000, B2=17985000 4.9291
55129429 2011-08-11 06:20 B1=655000, B2=17685000 4.8644
50497289 2011-08-10 16:47 B1=590000, B2=15340000 3.7019
54675199 2011-08-10 09:09 B1=645000, B2=17576250 4.4904
50489339 2011-08-10 03:40 B1=590000, B2=15340000 3.7019
52804519 2011-08-10 02:37 B1=620000, B2=16740000 4.1620
52129423 2011-08-09 21:45 B1=610000, B2=15707500 3.9795
57813799 2011-08-09 15:33 B1=690000, B2=20010000 5.3554
50488883 2011-08-09 11:38 B1=590000, B2=15340000 3.7019
52009549 2011-08-08 23:09 B1=610000, B2=16470000 4.0949
51796301 2011-08-08 21:01 B1=610000, B2=16470000 4.0949
52038449 2011-08-08 18:16 78858056319245685398671 1.6945
55804477 2011-08-08 10:45 B1=660000, B2=17985000 4.9291
55583029 2011-08-08 09:42 B1=660000, B2=17820000 4.9015
56827163 2011-08-08 04:55 B1=680000, B2=20060000 5.3347
54194053 2011-08-07 12:13 B1=635000, B2=17303750 4.4208
54970687 2011-08-07 04:43 B1=650000, B2=17225000 4.4491
56577097 2011-08-07 00:49 B1=675000, B2=19912500 5.2955
52761883 2011-08-06 22:52 B1=620000, B2=16740000 4.1620
55331167 2011-08-06 15:40 2925486621346516540119237367 4.8918
51991897 2011-08-06 03:21 B1=605000, B2=15578750 3.9469
57231227 2011-08-06 01:43 B1=685000, B2=20378750 5.4026
55029347 2011-08-05 19:42 B1=655000, B2=17685000 4.5345
55832807 2011-08-05 18:13 B1=665000, B2=18121250 4.9665
51876499 2011-08-05 00:42 B1=610000, B2=16470000 4.0949
56853421 2011-08-04 20:28 B1=670000, B2=18592500 5.0599
50289137 2011-08-04 13:02 B1=585000, B2=15210000 3.6705
53438657 2011-08-04 05:35 B1=635000, B2=18097500 4.4069
55029157 2011-08-03 22:21 B1=655000, B2=17685000 4.5345
50387027 2011-08-03 14:46 B1=595000, B2=15321250 3.7117
52778303 2011-08-03 13:25 B1=620000, B2=16740000 4.1620
54497011 2011-08-03 12:49 B1=655000, B2=19158750 4.7645
56756803 2011-08-03 10:08 B1=675000, B2=19575000 5.2389
54499339 2011-08-03 02:43 B1=655000, B2=19158750 4.7645
50070653 2011-08-03 00:39 B1=580000, B2=15080000 3.6392
53387443 2011-08-02 19:15 B1=635000, B2=18573750 4.4790
51828901 2011-08-02 07:30 B1=610000, B2=16470000 4.0949
54496969 2011-08-01 21:48 B1=655000, B2=19158750 4.7645
54499759 2011-08-01 11:34 B1=655000, B2=19158750 4.7645
52056857 2011-08-01 07:07 1606218329759823059143 1.6944
52048327 2011-08-01 02:42 B1=610000, B2=16470000 4.0949
52018843 2011-07-31 21:35 B1=610000, B2=16470000 4.0949
54494591 2011-07-31 13:27 B1=655000, B2=19158750 4.7645
50023543 2011-07-31 13:18 B1=580000, B2=15080000 3.6392
50205007 2011-07-31 04:01 B1=585000, B2=14917500 3.6282
50204851 2011-07-30 23:00 B1=585000, B2=14917500 3.6282
52061381 2011-07-30 16:55 B1=610000, B2=16470000 4.0949
53900617 2011-07-30 15:31 B1=630000, B2=17167500 4.3860
52719371 2011-07-30 06:39 B1=620000, B2=16430000 4.1151
56289403 2011-07-30 04:31 B1=670000, B2=19765000 5.2562
56730197 2011-07-30 00:28 B1=675000, B2=19912500 5.2955
55075387 2011-07-29 19:39 B1=655000, B2=17685000 4.8644
55011421 2011-07-29 18:46 11464331472925546285409 4.5345
52019819 2011-07-28 19:30 B1=610000, B2=16470000 4.0949
54688199 2011-07-28 19:27 B1=645000, B2=17576250 4.4904
54372293 2011-07-28 16:25 B1=640000, B2=17120000 4.4056
60100717 2011-07-28 12:19 B1=710000, B2=19347500 5.5580
52014709 2011-07-27 22:28 B1=610000, B2=16470000 4.0949
52126493 2011-07-27 15:44 B1=610000, B2=16470000 4.0949
52727921 2011-07-27 12:03 B1=620000, B2=16740000 4.1620
54203507 2011-07-27 11:39 B1=635000, B2=17303750 4.4208

7 Cores, 118 Tests, 31 Days, 7 Factors ... 3.8/Day

Doug

petrw1 2011-09-01 04:43

[QUOTE=petrw1;270123]Two week update:

168 P-1 completions (10 factors) in 14 days on 24 cores: 12 per day.

With a max of 17 on August 19th[/QUOTE]

322 P-1 completions in August ... 19 factors

ckdo 2011-09-01 05:30

1055 completions. 93 factors. 8.8152% success rate, roughly.

James Heinrich 2011-09-01 11:24

1396 completions, 54 factors (3.87%), about 458GHz-days of work.
The quantity is so high and the success rate is so low because I'm mostly cleaning up the ranges that have little-to-no P-1 (currently around 10M and 45M).

Broken down a little further:
9.7M-10.4M = 1324 completions, 51 factors (3.85%) // really-old
44.6M-52.6M = 59 completions, 2 factors (3.39%) // between LL and DC
54.3M-71.7M = 13 completions, 1 factor (7.69%) // "normal" pre-LL P-1

petrw1 2011-10-11 16:39

Last 60 days
 
630 P-1 completions in the current LL range. (10.5 per day).

39 Factors found = 6.2%

As a side bar it appears that (at least for now) P-1 is keeping up with LL.
I have been watching the Primenet Summary for a few weeks now and there are consistently more than 3,000 LL tests available in the 5xM range.

davieddy 2011-10-11 23:15

[QUOTE=petrw1;274130]630 P-1 completions in the current LL range. (10.5 per day).

39 Factors found = 6.2%

As a side bar it appears that (at least for now) P-1 is keeping up with LL.
I have been watching the Primenet Summary for a few weeks now and there are consistently more than 3,000 LL tests available in the 5xM range.[/QUOTE]

Good.
But a CPU willing (20% likely) to go through a LL to completion
can probably do the P-1 if necessary.
What it shouldn't get out of bed for is TFing when a GPU does
it 100x faster than it could.

V4 used to proudly proclaim: "TF is comfortably (understatement)
in front of the LL wavefront".

[B]MY ARSE[/B]

Christenson 2011-10-12 03:37

[QUOTE=davieddy;274160]Good.
But a CPU willing (20% likely) to go through a LL to completion
can probably do the P-1 if necessary.
What it shouldn't get out of bed for is TFing when a GPU does
it 100x faster than it could.

V4 used to proudly proclaim: "TF is comfortably (understatement)
in front of the LL wavefront".

[B]MY ARSE[/B][/QUOTE]

Good lord, calm down! Peterw can probably tell you that 20% or so of that P-1 he is reporting in the last 60 days is from me.....all of 8 computers, one just recently on-line, a GT440 GPU and a GT480 GPU....

GIMPS received this great gift of GPUs, which nicely speed up TF and make CPUs obsolete at it, and you complain because, a year or two later, the full optimum (which has us doing only 10-20% fewer LL tests anyway) hasn't *quite* been reached, and the programming is still not really complete?

I'd better crack my number theory books, so as to find a new, faster, and more reliable method of finding factors....back in a year or three..... :smile:

petrw1 2011-10-12 04:06

[QUOTE=davieddy;274160]Good.

V4 used to proudly proclaim: "TF is comfortably (understatement)
in front of the LL wavefront".

[B]MY ARSE[/B][/QUOTE]

Isn't that what George is suggesting again here?
[url]http://www.mersenneforum.org/showpost.php?p=267323&postcount=83[/url]

A few months ago the TF wavefront was in the 80M range; YEARS ahead of LL so he opened 1 and then another and then another bit level just ahead of the current LL wavefront (5xMillions)

Or am I missing the sarcasm? Or are you agreeing with George and suggesting TF is TOO far ahead?

petrw1 2011-10-12 04:09

[QUOTE=Christenson;274190]Peterw can probably tell you that 20% or so of that P-1 he is reporting in the last 60 days is from me[/QUOTE]

Just a minor point of clarification. I was only reporting that I had 630 P-1 completions in 60 days. The project as a whole has done significantly more than that.

davieddy 2011-10-12 05:35

@Eric and Petrw1
 
I think George knows what I'm trying to say.
Dubslow just wondered why he was being asked to TF
a 57M exponent to 71 bits when it wasn't close to getting an LL test,
while currently LL tests are routinely being dished out under-TFed.
I blame ckdo and such for hoarding TF assignments and not doing the
f*ckers.
[url=http://www.youtube.com/watch?v=nlk9Sj4Ns2k][B]Stir it up[/B][/url]

David

ckdo 2011-10-12 07:37

[QUOTE=davieddy;274204]I blame ckdo and such for hoarding TF assignments and not doing the f*ckers.[/QUOTE]

Just to get this straight: The highest exponent I have assigned is 47,447,837. It's been LL tested before, but not P-1 tested (surprise, an on topic post).

So, if you're looking for someone who's holding up progress at whatever wavefront, look elsewhere.

If you're looking for someone to throw some smokin' silicon "at the coalface", also look elsewhere. I ain't interested.

And, lest we forget, you should consider the option of shelling out a few bucks and getting some work done yourself, rather than trying to evangelize those pursuing different goals.

davieddy 2011-10-12 08:54

I came here for an argument
 
and am confronted vith zee legendary German SOH.


[QUOTE=ckdo;274214]Just to get this straight: The highest exponent I have assigned is 47,447,837. It's been LL tested before, but not P-1 tested (surprise, an on topic post).

So, if you're looking for someone who's holding up progress at whatever wavefront, look elsewhere.

If you're looking for someone to throw some smokin' silicon "at the coalface", also look elsewhere. I ain't interested.

And, lest we forget, you should consider the option of shelling out a few bucks and getting some work done yourself, rather than trying to evangelize those pursuing different goals.[/QUOTE]

For what it's worth, I run my adequate Celeron 440 24/7, first time LLs,
P-1 when necessary, and ask the willing Eric to TF a few more bits on
his GPU.
I "spend" quite a bit of time monitoring the project's progress, and
if I think some aspect could be improved, I say so.

David Fawlty

LaurV 2011-10-12 09:36

[QUOTE=davieddy;274219]and am confronted vith zee legendary German SOH.[/QUOTE]

Acronym Definition
State Of Health
Start of Heading
Sydney Opera House (Sydney, Australia)
Save Our Homes (Amendment)
Section Overhead (SONET)
[B]Society of Homeopaths[/B]
Sense Of Humor (used in personal ads)
Sound of Hope (radio network)
Start Of Header
Safety and Occupational Health (USACE)
Stock On Hand
Soldiers of Honor (gaming)
Straits of Hormuz
[B]Sons of Hodir [/B](World of Warcraft)
Secretariat of Health
Snyder's of Hanover (bakery)
Siege of Hate (band)
Soldiers of Heaven
Survivors of Homicide
Secretary of Housing (US)
Stoughton Opera House (Stoughton, WI)
Signal Overhead (SONET)
[B]Scientific Observation Hole[/B]
Start of Overhaul
Second-Order-Hold
Saviors of Humor (gaming clan)
Significant Other Half
Space on Hire (India)
[B]Saints of Hell[/B]
Sine is Opposite divided by Hypotenuse (trigonometry)
So Over Him/Her
Sepia Officinalis Hemocyanin

(bold face is my own work)
(just to add some salt and pepper... flame war ahead!)

davieddy 2011-10-12 10:05

As I thought.
 
No[QUOTE=LaurV;274221]Sense Of Humor[/QUOTE]

Doncha just love these TLAs

David

davieddy 2011-10-12 10:44

@ckdo
 
There are ~50K people engaged in GIMPS.
One alone can't do much constructively.
OTOH One awkward c*** can put a f****** great spanner in the works.

Put that in your pipe and smoke it.

David

Christenson 2011-10-12 22:16

[QUOTE=petrw1;274194]Just a minor point of clarification. I was only reporting that I had 630 P-1 completions in 60 days. The project as a whole has done significantly more than that.[/QUOTE]

You mean I'm only doing 20% as much P-1 as YOU? :smile: Oh well...clean my clock again...but my numbers are similar, you probably have more cores doing it...

Davie:

As noted before, we are only a few bits off the precise mathematical optimum...and ckdo isn't sitting on his TF assignments, he's farming them out to the likes of me and dubslow and chuck...and we are, at this point, finding factors more quickly than we could eliminate exponents by completing an LL test. But, given the equipment and algorithms available today, we will only eliminate 10-20% of exponents by TF, leaving the other 80% still needing LLs.

You ought to know that the global optimum for the project over even a single year's time may not be the same as the precise mathematical optimum accomplishment of proving Mersenne numbers not to be prime. The reason is that there is cross-coupling between the types of work done and how it is done and the total number of participants -- and with the distance from the mathematical optimum being relatively small, it is probably more important to increase the number of participants than it is to reach the precise optimum...

Case in point being the (I assume) large number of casual users that don't have enough patience to complete a full LL test....

Dubslow 2011-10-13 00:01

Are you kidding? I barely have the patience for them. The speed of TF is what keeps me alive! :smile:

petrw1 2011-10-13 03:21

[QUOTE=Christenson;274270]You mean I'm only doing 20% as much P-1 as YOU? :smile: Oh well...clean my clock again...but my numbers are similar, you probably have more cores doing it.[/QUOTE]

I set a goal of 10 P-1 per day a couple months ago and threw everything I could at it that is capable of doing large P-1.

Note that with the quads it only really became feasible to assign 4 cores to P-1 by using MaxHighMemWorkers (2 or 3) and Memory= for each worker in local.txt

Presently that consists of 23 cores:
(Not all run full time - you can guess by the throughput)
i5-750: 4 cores = 3.05 per day (OC'd to 3.2)
i7-860: 4 cores = 2.9 per day
Q9550: 4 cores = 1.65 per day
E8400: 2 cores = 1.2 per day
E6550: 2 cores = 0.4 per day
E8500: 2 cores = 0.4 per day
P4-3.4: 1 core = 0.45 per day
T6500: 1 core = 0.4 per day (I find 2 cores of a LapTop on P-1 overworks/overheats it)
i3-M330: 1 core = 0.15 per day (ditto)
========================
TOTAL: 10.6 per day

That only leaves me the following on TF:
T6500: 1 core
i3-M330: 1 core
P6000: 2 cores (owner complaining it was running too hard even with 1 core P-1)
P4-2.4: 1 core

Christenson 2011-10-13 03:35

I'm surprised you're not doing LLs or ECM on those extra cores...or do I need to sell you a GPU so you can find out why for yourself?

petrw1 2011-10-13 05:07

[QUOTE=Christenson;274304]I'm surprised you're not doing LLs or ECM on those extra cores...or do I need to sell you a GPU so you can find out why for yourself?[/QUOTE]

IMHO....
1. P-1 is where the most help is needed
2. On the P-1 list I can feel I am a top contributor. I tried allocating almost everything to LL for a good part of a year and couldn't crack top-100.
3. ECM does NOT contribute to finding Mersenne Primes (I'd like to help)
4. My next PC (when I am working again) will probably have a good GPU ... not necessarily for TF (there are lots there) but it appears LL on a GPU is becoming a reality.

Dubslow 2011-10-13 05:25

I think he meant the cores you currently use for TF

davieddy 2011-10-13 07:32

Apologies to all for any pottymouthing
 
Although I'm not averse to getting my hands dirty, be it
programming, running software or owning hardware,
I am a theorist at heart.

At the end of my second year at Oxford (physics) a graduate had to
peruse my file with the details of the experiments I had done.
(We had the choice of continuing with "practical" or taking the theoretical option)
After five seconds he said "You are going to do the theory option aren't you?"

I would like to think this was because I (gratuitously) noted the
theory which justified the experiment, but it was probably my scrawly handwriting.

BTW there are two large threads in the Lounge discussing the effect
of GPUs on TF.

David

davieddy 2011-10-13 08:17

[QUOTE=petrw1;274311]IMHO....
1. P-1 is where the most help is needed
[/QUOTE]
Yes and no.
When George started this thread, most 1st time
LLs were being dished out without P-1, and he
(correctly in my view) thought that the LL wavefront
could do with some speeding up.

Since then, GPUs have entered the scenario.
As one who has been on a continuous diet of LLs for a year or so,
doing the occasional P-1 comes as welcome relief.
It is the TF on GPUs that needs to get its act together.
72 bits is a modest and easily achievable target.
Hardly any 1st time tests are getting dished out
TFed to more than 70.

This is the source of my (evident!) frustration.

OK 3% increase in the chance of an LL test finding a prime
is no big deal, but as an incentive to [B]finish [/B]the test, its effect
is greatly enhanced. I regard the 80% dropout rate (and Dubslow's
impatience) as lamentable.

David

petrw1 2011-10-13 16:52

[QUOTE=davieddy;274320]It is the TF on GPUs that needs to get its act together. 72 bits is a modest and easily achievable target.[/QUOTE]

Agreed....I only wish there was a TF-GPU option so my poor little PC would not get assigned 70 or 71 bit TF's that can take more than a week.

[QUOTE]I regard the 80% dropout rate <snip> lamentable.
David[/QUOTE]

I might suggest to George and Scott that they invest in disk and then consider ways to upload the "save" files (even once a day) so that these tests that are dropped don't have to start over from the beginning. Granted some are dropped very early but I have also seen tests dropped near completion.

Dubslow 2011-10-13 19:06

[QUOTE=davieddy;274320]I regard the 80% dropout rate (and Dubslow's
impatience) lamentable.
[/QUOTE]

Uhhh... I was joking. Aside from one core of mfaktc, I keep all my cores on LL. I have 3 54M exponents that have been going for the last three weeks or so. (I have finished all assigned exponents.)

Although, I had a much slower Athlon at home (I'm in college, physics, as chance would have it) that was running all P-1, but it's been down a while and I currently have no way to get it back up.

davieddy 2011-10-13 21:28

[QUOTE=Dubslow;274384]Uhhh... I was joking. Aside from one core of mfaktc, I keep all my cores on LL. I have 3 54M exponents that have been going for the last three weeks or so. (I have finished all assigned exponents.)

Although, I had a much slower Athlon at home (I'm in college, physics, as chance would have it) that was running all P-1, but it's been down a while and I currently have no way to get it back up.[/QUOTE]

No offence was intended, I can assure you.
Welcome to the House of Fun:smile:
Is physics getting any simpler these days?

David

PS Where you from? (You sexy thing)

davieddy 2011-10-13 22:19

[QUOTE=petrw1;274364]Agreed....I only wish there was a TF-GPU option so my poor little PC would not get assigned 70 or 71 bit TF's that can take more than a week.

I might suggest to George and Scott that they invest in disk and then consider ways to upload the "save" files (even once a day) so that these tests that are dropped don't have to start over from the beginning. Granted some are dropped very early but I have also seen tests dropped near completion.[/QUOTE]

Interfacing GPU/mfaktc with Primenet is on my mate Eric's "to do" list.

As for storing the intermediate residues, this has been suggested
innumerable times. I've yet to hear an answer.
Since the returned expos get dished out immediately the next day,
I can't see why storage ability should constrain this operation.
However, some geezer did raise the point about "torture testing"
and wondered how worthwhile it would be to rely on the result thereof.

I thought several GPU operators might [B]each [/B]TF a few 53/54M exponents to 72 bits.
Instead we encounter what I am now going to dub the "ckdo" effect:
Bag 6000 exponents at once, hog them, farm them out and pretend you
are Lord God Almighty.

(Note my restrained lingo)


David

Chuck 2011-10-14 00:28

[QUOTE=davieddy;274403]

I thought several GPU operators might [B]each [/B]TF a few 53/54M exponents to 72 bits.
Instead we encounter what I am now going to dub the "ckdo" effect:
Bag 6000 exponents at once, hog them, farm them out and pretend you
are Lord God Almighty.

(Note my restrained lingo)


David[/QUOTE]

[URL]http://www.cyberhymnal.org/htm/h/o/holyholy.htm[/URL]

(Sorry, couldn't resist)

Chuck

Christenson 2011-10-14 00:38

[QUOTE=petrw1;274311]IMHO....
1. P-1 is where the most help is needed
2. On the P-1 list I can feel I am a top contributor. I tried allocating almost everything to LL for a good part of a year and couldn't crack top-100.
3. ECM does NOT contribute to finding Mersenne Primes (I'd like to help)
4. My next PC (when I am working again) will probably have a good GPU ... not necessarily for TF (there are lots there) but it appears LL on a GPU is becomming a reality.[/QUOTE]

:innocent:
Most of my CPUs are also doing P-1...I was joking about the cores that aren't doing P-1...thought LL-D would perhaps be more solidly productive for you.

And ckdo is not god, and is doing the right thing, in the long term...he and his helpers are knocking out factors as quickly as we can find them...in our little corner of the world of doing a bit more TF before LL-Ds are done. Others are going to do the entire 53M range to 72 bits before long...again on the plan of finding the most factors with the least effort...

Back to Euler's totient function in my number theory book....

Chuck 2011-10-14 01:08

OK OK I took one
 
Factor=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,54116749,68,72

OK I took a break from my other work for the next hour and a half and accepted this assignment from PrimeNet (upped it from 69 -> 72).

Chuck

Dubslow 2011-10-14 02:32

I usually do the same thing, up whatever assignment to 71 bits for 53M. That's what George is aiming for.

davieddy 2011-10-14 05:11

[QUOTE=Christenson;274413]:innocent:
Back to Euler's totient function in my number theory book....[/QUOTE]
Have you checked out his angles yet?

Chuck 2011-10-14 05:25

[QUOTE=Chuck;274421]Factor=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,54116749,68,72

OK I took a break from my other work for the next hour and a half and accepted this assignment from PrimeNet (upped it from 69—>72).

Chuck[/QUOTE]

Nuts, didn't find a factor...

davieddy 2011-10-14 05:39

[QUOTE=Chuck;274412][URL]http://www.cyberhymnal.org/htm/h/o/holyholy.htm[/URL]

(Sorry, couldn't resist)

Chuck[/QUOTE]

Think [URL="http://www.youtube.com/watch?v=J3gWi9bBkHQ"]I'm beginning to see the light[/URL]

THX:smile:

God

Jwb52z 2011-10-15 14:02

I don't remember who it was, but I came here hoping to let the person in question know that I'm taking his or her advice and have started doing only P-1 on my i7 quad core. I was going to ask whether, by my calculations with the setup I use for P95, being able to do 2 P-1s a day is a good number, but after reading some of this thread I think it's not nearly as good as I hoped, in terms of processing and the project itself. Then again, I could be wrong as I'm stupid with math.

garo 2011-10-15 15:50

My 3.3GHz OC i5-750 can do one P-1 per core per day in the 52M range. So with four cores you should be able to do four P-1 tests a day.

petrw1 2011-10-15 15:53

[QUOTE=Jwb52z;274614]I don't remember who it was, but I was coming here to hopefully let the person in question know I was taking his or her advice and started doing only P-1 on my i7 quad core now. I was going to ask if, due to my calculations with the setup I use for P95, being able to do 2 P-1s a day is a good number, but I think after reading some of this thread, it's not nearly as good as I hoped in terms of processing and the project itself. Then again, I could be wrong as I'm stupid with math.[/QUOTE]

Based on my recent experiences you probably should be able to complete 3 a day on that PC assuming:

- You are NOT over-clocking. If you do OC you should do even better.
- You are letting the server assign them in the default range (50M-60M) ... that is, you aren't manually assigning much higher exponents.
- You have at least 4GB of RAM on your PC
- You are allocating at least 2GB (2.5GB is probably better) under Options... CPU
- You are running near 24 hours a day
- I would recommend NOT using Hyper-Threading --- that is, only 4 workers
- I would recommend using MaxHighMemWorkers in local.txt. Stage 2 of P-1 is considered a High Memory task so you may want to limit how many Stage 2s run at any one time. I think your i7 could handle 3 but if there are times of the day that you are doing other memory intensive tasks then I would suggest a line in local.txt something like:
[CODE]MaxHighMemWorkers=2 during 17:00-23:00 else 3[/CODE]
which says: have 2 High Memory Workers between 5:00 PM and 11:00 PM, when you might be doing intensive gaming or audio-visual stuff etc.; and 3 High Memory Workers outside of those hours, when you are less likely to be doing other memory-intensive stuff.
- Note you should have corresponding memory setting in Options... CPU. i.e.
[CODE]Daytime P-1/ECM Stage 2 memory (in MB): 1600
Nighttime P-1/ECM Stage 2 memory (in MB): 2400
Daytime begins: 5:00 PM Ends: 11:00 PM[/CODE]

Then your 2 daytime High Memory workers could use up to 800 MB each, or all 1600 MB if only one is running; similarly, your 3 nighttime High Memory workers could use up to 800 MB each, or 1200 MB each if only 2 are running, or all 2400 if only 1 is running. The more memory, the quicker it will run.

- Optionally you could add a line like this for each of the 4 workers to local.txt to ensure any one worker (i.e. core) never exceeds 800MB. But unless you have a compelling reason to do so, I probably would NOT do this, since as I noted in the previous paragraph there is the potential for a worker to use a lot more memory and complete an assignment quicker.
[CODE][Worker #1]
Memory=800[/CODE]

Based on these settings each assignment:
- should have B1/B2 parms in the range of: 600000 / 15000000
- should give you about 5 GhzDays (points) unless it finds a factor in Stage 1
- should complete in about 32 hours
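
For reference, settings along the lines above would end up in local.txt looking something like this (times and megabytes purely illustrative, mirroring the example numbers in this post, and assuming the Options... CPU dialog writes the equivalent Memory= line with the same during/else syntax as MaxHighMemWorkers):

[CODE]MaxHighMemWorkers=2 during 17:00-23:00 else 3
Memory=1600 during 17:00-23:00 else 2400[/CODE]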

Dubslow 2011-10-16 01:06

[CODE]Memory=8000 during 9:00-1:30 else 11000[/CODE]:w00t:
I like LL, so is it possible to only get LL's that require P-1?

Christenson 2011-10-16 02:39

I think if you take an exponent assigned for p-1, then you can probably ask Primenet for the LL assignment on the manual assignments page, since Primenet likes to see P-1 complete before assigning LL tests....this probably will also work in the DC ranges, though it sounds like that's not to your taste...
Of course, if you can hardly wait for TFs, then first-time LL's are going to be *really* hard to wait for! :smile:

Dubslow 2011-10-16 06:44

I typically run the other three (mfaktc) cores as LL, and occasionally throw in a DC for good measure. Usually I get curious as to what I get assigned if I manually assign an LL or DC (or in the most recent case accidentally got LL instead of TF... it's a good thing PrimeNet limited my request for 60 down to 2 :razz:) so I hardly ever actually am automatically assigned something. Therefore, good idea! (On the other hand, as I indicated, I already have a few tests waiting for each core, so not yet :smile:)

Edit: One test finished about 15 minutes ago! (Not prime of course) First three hexits of the Res64: 3EE. </boring trivialities>

Jwb52z 2011-10-16 16:54

[QUOTE=garo;274626]My 3.3GHz OC i5-750 can do one P-1 per core per day in the 52M range. So with four cores you should be able to do four P-1 test a day.[/QUOTE]I got my numbers backward. I can do 1 per core, but in slightly longer than one day; then again, I only do 1 at a time for the moment. I use 4 cores (of the 8 that P95 counts on my quad core) on 1 P-1 with one worker thread. I only let it use 512 MB of memory, simply because my machine is a laptop (Qosmio line) and I don't want to take a chance on overtaxing it. Yes, I run it 24 hours a day.

diamonddave 2011-10-18 21:01

Just a quick question about PrimeNet assignment of P-1 work units.

If I request a P-1 work unit on the manual page, I get an assignment in the 57M range. Same thing if I request new work with the Prime95 client.

Yet, if I look at the PrimeNet Summary report, I see that the following ranges have many P-1 work units available:

44M: 407
45M: 1356
46M: 971
47M: 1191
48M: 1795
49M: 437

Why are they not assigned?

Dubslow 2011-10-18 22:27

It [I]might[/I] be that PrimeNet reserves those for people who have a proven return rate, i.e. finish all their assignments. On the other hand, this is just a shot in the dark, so better wait for someone who actually knows what they're talking about.

Christenson 2011-10-18 23:09

A couple of things to keep in mind:
1) That report has a lag of at least an hour...a lot of server CPU file I/O is required to develop it...so a few available may have already been assigned by the time you read the report.
2) Why don't you try explicitly asking for something in the range of 44M to 50M?

petrw1 2011-10-18 23:53

[QUOTE=diamonddave;275027]Just a quick question about PrimeNet assignment of P-1 work unit.

If I request P-1 work unit with the manual page, I get assignment in the 57M range. Same thing if I request new work with the Prime95 client.

Yet, if I look at the PrimeNet Summary report, I see that the following range have many available P-1 work unit available:

44M: 407
45M: 1356
46M: 971
47M: 1191
48M: 1795
49M: 437

Why are they not assigned?[/QUOTE]

My guess would be these are assignments that already have had the LL done so this P-1 is less necessary....and/or because George prefers to hand out P-1 tests just ahead of the current LL wavefront.

KingKurly 2011-10-19 01:04

[QUOTE=petrw1;275041]My guess would be these are assignments that already have had the LL done so this P-1 is less necessary....and/or because George prefers to hand out P-1 tests just ahead of the current LL wavefront.[/QUOTE]
You are correct; most (if not all) have had at least one LL test. I've been working through them, grabbing assignments manually (through worktodo.txt, with help from mersenne-aries.sili.net/p1small.php) and manually changing from 2 LL tests saved to 1.1 tests saved.

Some day, I'll catch up to the LL wavefront. Maybe. :wink:

kladner 2011-10-19 03:59

Referring back to the start of this thread, George said, "I'll be watching over the next few months how many people sign up for P-1 factoring. We have tons of people doing LL and TF, we need more people signing up for P-1."

I have already reset my worker preferences so that two are P-1. I had been reserving 2 of 6 cores for mfaktc, but I'm finding that for daytime, that keeps the GTX460 just too busy for interactive use of the computer.

Now I'm working on another plan. At night I'll run two mfaktc. That will be in 64bit, because they seem to do better there. With the four remaining CPU cores I'll have two "What makes sense" and the aforementioned two P-1's. When I want to use the machine, I'll run one mfaktc and let the fifth CPU core do P-1. I have enough RAM, at least in 64 bit space, to be generous to P95 for P-1. Right now I'm trying that with 2048 MB allowed in Win7. With the memory limits in 32 bit I'll have to cut that back to something like 1.3 GB or so.

Will P-1 do OK in that latter case with ~400MB per worker if they all happened to be doing stage 2 at the same time?

Dubslow 2011-10-19 05:21

readme.txt suggests 125MB for a 50M exponent, or 250MB to be generous.

Brain 2011-10-19 05:39

[QUOTE=Dubslow;275057]readme.txt suggests 125MB for 50M exponent, or 250MB to be generous.[/QUOTE]
256MB is not enough for current P-1 assignments; PrimeNet will assign other work instead. Give 512MB per core a try. I'm happy with that.

delta_t 2011-10-19 07:58

[QUOTE=Brain;275058]256MB are not enough for current P-1 assignments. PrimeNet will assign other work then. Give 512MB per core a try. I'm happy with that.[/QUOTE]

Yeah, the more memory for P-1 the better, I say. P-1 likes RAM and runs faster with more.

kladner 2011-10-19 14:28

[QUOTE=delta_t;275072]Yeah, more memory for P-1 the better I say. P-1 likes RAM and runs faster with more.[/QUOTE]

It certainly seems that P-1 will gulp down as much as it is allowed.

Thanks to all who responded. Brain - I'll see if I can budget that much. I'm kind of counting on not having all three workers doing Stage 2 at the same time.

Mr. P-1 2011-10-19 14:40

[QUOTE=kladner;275081]I'm kind of counting on not having all three workers doing Stage 2 at the same time.[/QUOTE]

You can make sure this never happens by putting

MaxHighMemWorkers=2

In local.txt.

Mr. P-1 2011-10-19 14:44

[QUOTE=delta_t;275072]Yeah, more memory for P-1 the better I say. P-1 likes RAM and runs faster with more.[/QUOTE]

It will also search deeper, and consequently find more factors. Very generous memory will allow it to use the Brent-Suyama extension, which means it will find even more factors.

That said, the returns on extra memory do diminish rather rapidly.

kladner 2011-10-19 18:22

[QUOTE=Mr. P-1;275083]You can make sure this never happens by putting

MaxHighMemWorkers=2

In local.txt.[/QUOTE]

Thanks Mr. P-1. I put that in for the 32 bit version. I'm not as concerned in 64 bit, where all 8GB are available.

"Very generous memory will allow it to use the Brent-Suyama extension, which means it will find even more factors."

What qualifies as 'very generous'?

lorgix 2011-10-19 18:27

[QUOTE=kladner;275095]Thanks Mr. P-1. I put that in for the 32 bit version. I'm not as concerned in 64 bit, where all 8GB are available.

"Very generous memory will allow it to use the Brent-Suyama extension, which means it will find even more factors."

What qualifies as 'very generous'?[/QUOTE]

Anything that causes stg2 to be run in multiple passes [I]does not[/I] qualify. I don't know just how much more is required.

Christenson 2011-10-19 19:03

I think "very generous" is 5-10 gigs

kladner 2011-10-19 19:59

[QUOTE=Christenson;275102]I think "very generous" is 5-10 gigs[/QUOTE]

OUCH! Is that facetious, or for real?

delta_t 2011-10-19 20:24

You don't really need to devote that much, but some of us actually do give that much. I usually give P-1 6GB during the day and 9GB overnight. I only allow 2 instances of P-1 to run during the day, and 3 during the night, but each simultaneous run can get up to 3GB each, or the whole allotment if only one instance is running.

[CODE]Memory=6144 during 7:00-23:00 else 9126
MaxHighMemWorkers=2 during 7:30-22:30 else 3[/CODE]

I do have to stop the high memory P-1 instances when I have to do some big GIS projects, but otherwise, when the RAM isn't being used, I let it have its generous share. There are diminishing returns and I don't know where the optimum amount is, but if you have and can spare the memory, give it a little more I say. I think Brain's suggestion is a good one: give at least 512 per P-1.

Chuck 2011-10-19 20:24

[QUOTE=kladner;275105]OUCH! Is that facetious, or for real?[/QUOTE]

[CODE][Oct 19 10:12] M56324143 stage 1 complete. 1918474 transforms. Time: 46271.881 sec.
[Oct 19 10:12] Starting stage 1 GCD - please be patient.
[Oct 19 10:13] Stage 1 GCD complete. Time: 71.033 sec.
[Oct 19 10:13] Available memory is 7916MB.
[Oct 19 10:13] Using 7388MB of memory. Processing 288 relative primes (0 of 288 already processed).
[Oct 19 10:28] M56324143 stage 2 is 0.00% complete. Time: 847.733 sec.
[/CODE]

It's for real on my machine. It does the Brent-Suyama extension to the 12th.

Chuck

S34960zz 2011-10-19 21:45

[QUOTE=kladner;275105]OUCH! Is that facetious, or for real?[/QUOTE]

[code]
...
[Oct 17 07:35:34] Optimal P-1 factoring of M57013367 using up to 14336MB of memory.
[Oct 17 07:35:34] Assuming no factors below 2^68 and 2 primality tests saved if a factor is found.
[Oct 17 07:35:35] Optimal bounds are B1=680000, B2=20400000
[Oct 17 07:35:35] Chance of finding a factor is an estimated 7.11%
[Oct 17 07:35:36] Using Core2 type-3 FFT length 3M, Pass1=1K, Pass2=3K
[Oct 17 07:45:01] M57013367 stage 1 is 1.019% complete. Time: 564848.944 ms.
...
[Oct 18 02:42:43] M57013367 stage 1 is 99.908% complete. Time: 730762.949 ms.
[Oct 18 02:43:48] M57013367 stage 1 complete. 1961800 transforms. Time: 68887372.793 ms.
[Oct 18 02:43:48] Starting stage 1 GCD - please be patient.
[Oct 18 02:45:34] Stage 1 GCD complete. Time: 105972.309 ms.
[Oct 18 02:45:34] Available memory is 14284MB.
[Oct 18 02:45:34] Using 12062MB of memory. Processing 480 relative primes (0 of 480 already processed).
[Oct 18 03:11:11] M57013367 stage 2 is 0.000% complete. Time: 1536191.236 ms.
...
[Oct 19 03:18:30] M57013367 stage 2 is 99.261% complete. Time: 841249.614 ms.
[Oct 19 03:29:37] M57013367 stage 2 complete. 2192838 transforms. Time: 89035107.422 ms.
[Oct 19 03:29:37] Starting stage 2 GCD - please be patient.
[Oct 19 03:31:24] Stage 2 GCD complete. Time: 107675.838 ms.
[Oct 19 03:31:24] M57013367 completed P-1, B1=680000, B2=20400000, E=12, We4: C3BD3AD0
...
[/code]

kladner 2011-10-19 21:53

Wow! I had no clue. Thanks to all of you for educating me. My 8GB seems paltry by that standard. I thought I was being generous with 2GB for overnight runs of 2-3 P-1's. I guess I'll reset the night allocation in 64bit.

Mr. P-1 2011-10-19 22:42

[QUOTE=kladner;275095]What qualifies as 'very generous'?[/QUOTE]

I don't know. All I know is that it varies depending upon the size of the exponent, and that I don't have enough to get the B-S extension on the 45M exponents I'm testing.

The B-S extension finds more factors, but it also increases the running time, so the benefits are actually quite marginal. The main benefit of increased memory is reducing the number of passes, which reduces the running time (to which P95 responds by increasing the limits, thus finding more factors in about the same time), but again the benefit is marginal once you go past about 1GB.

To summarize: More memory is better, but not much better. If you have it, by all means use it. But if you're planning to build a new computer and intend it to be a P-1 engine, spend your money on a fast processor and fast memory, rather than lots of memory.
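
To make the stage 1 / stage 2 split concrete, here is a toy sketch of the P-1 method in plain Python. This is nothing like Prime95's actual implementation (no FFT arithmetic, no prime pairing, no Brent-Suyama); the small semiprime in the example is constructed for illustration, not anything from GIMPS:

```python
from math import gcd

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

def pminus1(N, B1, B2, a=3):
    """Toy P-1: stage 1 to bound B1, naive stage 2 to B2, one GCD per stage.

    Finds a prime factor p of N whenever p-1 is B1-smooth apart from
    at most one prime in the range (B1, B2].
    """
    # Stage 1: x = a^E mod N, where E is the product of all
    # prime powers q^k <= B1.
    x = a
    for q in primes_up_to(B1):
        qk = q
        while qk * q <= B1:
            qk *= q
        x = pow(x, qk, N)
    g = gcd(x - 1, N)
    if 1 < g < N:
        return g  # factor found in stage 1
    # Stage 2: multiply (x^p - 1) mod N over the primes B1 < p <= B2,
    # then take a single GCD at the very end.
    acc = 1
    for p in primes_up_to(B2):
        if p > B1:
            acc = acc * (pow(x, p, N) - 1) % N
    g = gcd(acc, N)
    return g if 1 < g < N else None

# 2750183 = 1489 * 1847, and 1489 - 1 = 2^4 * 3 * 31 is B1=16-smooth
# except for the single stage 2 prime 31:
print(pminus1(2750183, 16, 31))  # 1489, found in stage 2
```

Real savings in Prime95 come from doing stage 2 with far fewer modular exponentiations than this naive loop, which is also why stage 2 is the memory-hungry stage: it keeps tables of precomputed powers ("relative primes" in the log excerpts above) in RAM.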

kladner 2011-10-19 23:15

[QUOTE=Mr. P-1;275124]But if you're planning to build a new computer and intend it to be a P-1 engine, spend your money on a fast processor and fast memory, rather than lots of memory.[/QUOTE]

That's good to know, but it's way down the road for me. I am running on my "new computer". I do have two more RAM slots, though. <G>



Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.