mersenneforum.org (https://www.mersenneforum.org/index.php)
-   PrimeNet (https://www.mersenneforum.org/forumdisplay.php?f=11)
-   -   P-1 factoring anyone? (https://www.mersenneforum.org/showthread.php?t=11101)

davieddy 2011-11-05 14:36

[QUOTE=Dubslow;277150]I remember reading somewhere that you aren't supposed to report no factor results from mfakto?[/QUOTE]

[QUOTE=James Heinrich;277154]That would be kind of pointless... where did you read that?[/QUOTE]

[QUOTE=delta_t;277177]That has since been resolved by bdot I believe. Check [URL]http://mersenneforum.org/showpost.php?p=272797&postcount=132[/URL] and [URL]http://mersenneforum.org/showpost.php?p=272868&postcount=136[/URL]

So mfakto 0.09 should be the latest version and fixed.[/QUOTE]

The "point" that comes to mind is that mfakto might
have been missing some factors, and we would like to be
as confident as possible in the assertion "M(y) has no factors below 2^x".

David

Mr. P-1 2011-11-10 12:43

[QUOTE]I understand your arguments, but if I have 4 cores running LL, I can complete 8 LL's in approx 80 days, where if I use the LL/TF/LL/TF in the same 80 days I can complete 6 LL's and ~160 TF (with the estimated 1% factor found this saves 1.6 LL and 1.6 DC). It just seems to be a more efficient use of the CPU's.[/QUOTE]

It may seem to be more efficient, but it actually isn't. GIMPS has an excess of TF capacity. Those 1.6 LLs and 1.6 DCs will be saved anyway, possibly by a GPU. The only difference you're making is to save them less efficiently than they might be saved otherwise.

By contrast, GIMPS has a shortage of P-1 and LL capacity. LL is simply a bottleneck - the more machines we have doing this kind of work, the faster the project proceeds. P-1 is even more valuable. About half of all LL machines do not have sufficient memory to do stage 2 of P-1. Many stage 2 factors are factors which would not otherwise be found, and thus represent LLs and DCs really saved. Even if the factors you find are factors which would otherwise be found, or if you don't find factors, the project benefits from having these computations completed more efficiently by a machine with plentiful memory.
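As an aside, the way stage 1 of P-1 "saves" an LL can be seen in a toy sketch. This is illustrative only - real clients like Prime95 use bounds in the hundreds of thousands or millions, and stage 2 is a far more elaborate, memory-hungry computation - and the function names and bounds below are made up for the example:

```python
from math import gcd

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, limit + 1, i):
                sieve[j] = False
    return [i for i in range(2, limit + 1) if sieve[i]]

def pm1_stage1(p, b1):
    """Stage 1 of P-1 on the Mersenne number M_p = 2^p - 1.

    A prime factor q of M_p is revealed whenever q - 1 divides the
    accumulated exponent E, because then 3^E == 1 (mod q).  Factors
    of M_p are always 1 (mod 2p), so 2p is folded into E for free;
    the rest of E collects prime powers up to the bound b1."""
    n = (1 << p) - 1
    a = pow(3, 2 * p, n)          # start with exponent 2p
    for q in primes_up_to(b1):
        qe = q
        while qe * q <= b1:       # raise each small prime to its max power <= b1
            qe *= q
        a = pow(a, qe, n)
    return gcd(a - 1, n)          # 1 means no factor found; n means bad luck
```

For example, `pm1_stage1(29, 10)` finds 486737 = 233 x 2089, because 233 - 1 = 2^3 * 29 and 2089 - 1 = 2^3 * 3^2 * 29 are smooth enough to be captured, while the third factor 1103 is missed (1102 = 2 * 19 * 29 contains the prime 19, which is above the bound).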

chalsall 2011-11-10 13:53

[QUOTE=Mr. P-1;277784]Even if the factors you find are factors which would otherwise be found, or if you don't find factors, the project benefits from having these computations completed more efficiently by a machine with plentiful memory.[/QUOTE]

Please forgive me for this "plug", but I'd like to bring to the attention of all P-1 workers that the "GPU to 72" tool is now making available low exponents (with no LL work yet done) which have been TFed to high levels (72 bits) by GPUs and now need a P-1 run.

Please see [URL="http://gpu.mersenne.info/account/getassignments/p-1/"]http://gpu.mersenne.info/account/getassignments/p-1/[/URL]. (If you don't have an account at the site, you'll have to create one before being able to be assigned work.)

KyleAskine 2011-11-10 14:40

[QUOTE=chalsall;277789]Please forgive me for this "plug", but I'd like to bring to the attention of all P-1 workers that the "GPU to 72" tool is now making available low exponents (with no LL work yet done) which have been TFed to high levels (72 bits) by GPUs and now need a P-1 run.

Please see [URL="http://gpu.mersenne.info/account/getassignments/p-1/"]http://gpu.mersenne.info/account/getassignments/p-1/[/URL]. (If you don't have an account at the site, you'll have to create one before being able to be assigned work.)[/QUOTE]

Nice! I just started doing TF there a few days ago and love it. I just grabbed 15 P-1's too for my three boxes that do that. Hopefully you will get the first P-1's in from me in a couple of days!

bcp19 2011-11-10 16:12

[QUOTE=Mr. P-1;277784]It may seem to be more efficient, but it actually isn't. GIMPS has an excess of TF capacity. Those 1.6 LLs and 1.6 DCs will be saved anyway, possibly by a GPU. The only difference you're making is to save them less efficiently than they might be saved otherwise.

By contrast, GIMPS has a shortage of P-1 and LL capacity. LL is simply a bottleneck - the more machines we have doing this kind of work, the faster the project proceeds. P-1 is even more valuable. About half of all LL machines do not have sufficient memory to do stage 2 of P-1. Many stage 2 factors are factors which would not otherwise be found, and thus represent LLs and DCs really saved. Even if the factors you find are factors which would otherwise be found, or if you don't find factors, the project benefits from having these computations completed more efficiently by a machine with plentiful memory.[/QUOTE]

Your idea of efficiency and mine are a bit different. The Core 2 Quad has some sort of bottleneck compared to the i7, in that the i7 can run 4 LL's with only a minor slowdown (0.066 to 0.070 sec per iteration) whereas the Quad bogs down badly (0.060 to 0.090). Someone said the Quad is actually a dual dual-core, whatever that means, but I am guessing that each 'dual' shares either L1 or L2 cache, which causes this bottleneck. With an LL/TF pair per 'dual' the cores are not fighting and seem to run more efficiently. So when I said 'more efficient' I also meant that the CPU's were running at their nominal optimum speed on the tasks given them, rather than fighting for resources. In my case, running P-1 affects the 'shared' dual the same as running 2 LL's, so 'efficiency' suffers. So for me, a single core being able to complete 3 LL's in an LL/TF 'dual' is more efficient than that same core completing 2 LL's in an LL/LL or LL/P-1 'dual'.

I have recently upgraded my gaming system GPU and installed the old one in the Quad, so I can now continue running cores 1 and 3 on LL while devoting cores 2 and 4 to the GPU running Mfaktc which will keep the 'efficiency' I was referring to while helping with the GPU to 72 project at the same time.

The beauty of this project is that any effort is a step forward. There are many compelling arguments for each aspect of it:

-   LL - Pro: 100% proof of primality; a very small possibility of finding a prime. Con: very slow.
-   P-1 - Pro: best method, %-wise, of finding a factor. Con: memory intensive; multiple instances running S2 concurrently reduce efficiency; high impact on other cores on 'dual' CPU systems; 0% chance of finding a prime.
-   TF - Pro: low impact on 'dual' CPU systems; fairly fast (esp. on GPUs). Con: low % of factors found; inefficient on CPUs at higher bit levels; 0% chance of finding a prime.
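The cheapness of TF mentioned above rests on the special form of Mersenne factors. A minimal sketch (the function name is made up; real tools like mfaktc/mfakto sieve candidates and test huge batches of k in parallel on the GPU):

```python
def tf_mersenne(p, max_bits):
    """Trial-factor M_p = 2^p - 1 up to 2^max_bits.

    Every prime factor q of M_p has the form q = 2*k*p + 1 and
    satisfies q == +/-1 (mod 8), so TF only needs to test a thin
    slice of candidates -- which is what makes it so cheap, and
    why GPUs (testing many k at once) are so well suited to it."""
    k = 1
    while True:
        q = 2 * k * p + 1
        if q.bit_length() > max_bits:
            return None               # "no factor below 2^max_bits"
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q                  # q divides 2^p - 1
        k += 1
```

For example, `tf_mersenne(11, 8)` returns 23 (M11 = 2047 = 23 x 89), while `tf_mersenne(13, 12)` returns None, since M13 = 8191 is prime.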

Rodrigo 2011-11-10 16:12

[QUOTE=Mr. P-1;277784]By contrast, GIMPS has a shortage of P-1 and LL capacity. LL is simply a bottleneck - the more machines we have doing this kind of work, the faster the project proceeds.[/QUOTE]
Is there reason to be concerned about DC's, where the factors currently being assigned in that area are only half the size of the ones that LL crunchers are getting? Or, not really?

Rodrigo

Mr. P-1 2011-11-10 16:59

[QUOTE=Rodrigo;277811]Is there reason to be concerned about DC's, where the factors currently being assigned in that area are only half the size of the ones that LL crunchers are getting? Or, not really?[/QUOTE]

I assume that by "factors" you mean exponents. Otherwise I don't understand the question.

I see no reason to be "concerned" about anything in the project. Whether you would view it as desirable to do more DC, or less, relative to first-time LL depends on whether you think it more, or less, important to verify the status of Mersenne numbers than to find new Mersenne primes. There's no objective answer to that question.

Mr. P-1 2011-11-10 17:05

[QUOTE=Mr. P-1;277129]There are two things happening here.

First, without changing the bounds, the algorithm runs faster with more memory. Secondly, both P-1 and ECM are memory-bandwidth-hungry algorithms. If both are running at the same time, they will compete for the available memory bandwidth, slowing down both.[/QUOTE]

As bcp19 has just reminded me, there is more going on here than just memory bandwidth contention. There is also cache contention.

Rodrigo 2011-11-10 17:35

[QUOTE=Mr. P-1;277816]I assume that by "factors" you mean exponents. Otherwise I don't understand the question.[/QUOTE]

Yes, I meant exponents. That's what I get for typing at lunchtime.

[QUOTE=Mr. P-1;277816]I see no reason to be "concerned" about anything about the project. Whether you would view it to be desirable to do more DC, or less, relative to first time LL depends upon whether you think it more, or less, important to verify the status of Mersenne Numbers, than it is to find new Mersenne primes. There's no objective answer to that question.[/QUOTE]
The reason I asked is that you'd said that GIMPS has a "shortage" of LL (and P-1) capacity. Noting that the exponents currently being assigned to DC are much smaller than the LLs currently being assigned, I was curious as to whether, by the same token, we could also say that there is a shortage of DC capacity.

Yes? No? Seeking to understand better...

Rodrigo

davieddy 2011-11-10 18:40

[QUOTE=Rodrigo;277819]Yes, I meant exponents. That's what I get for typing at lunchtime.


The reason I asked is that you'd said that GIMPS has a "shortage" of LL (and P-1) capacity. Noting that the exponents currently being assigned to DC are much smaller than the LLs currently being assigned, I was curious as to whether, by the same token, we could also say that there is a shortage of DC capacity.

Yes? No? Seeking to understand better...

Rodrigo[/QUOTE]
You mean "liquid lunch" I assume.

LL is what this project is all about.
If there is a "shortage" we need to seduce some more
participants. Like India or China.

David
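For reference, the LL test at the heart of all this is remarkably simple to state - the cost is entirely in squaring multi-million-digit numbers, which Prime95 does with FFTs; the naive version below is only usable for tiny exponents:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer test: for an odd prime p, M_p = 2^p - 1 is prime
    iff s_(p-2) == 0, where s_0 = 4 and s_i = s_(i-1)^2 - 2 (mod M_p)."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m   # one "iteration" in the timings quoted above
    return s == 0
```

For example, `lucas_lehmer(13)` returns True (8191 is prime) and `lucas_lehmer(11)` returns False (2047 = 23 x 89).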

Dubslow 2011-11-10 18:43

I think a shortage of LL is about as subjective as whether or not DC is too slow. I personally don't see a shortage in LL, but rather think of the LL work being done as setting how important everything else is. The P-1 to LL ratio of work being completed is lower than is optimal, which is why we say there's a shortage of P-1. It really is to each his own here.

