[QUOTE=Chuck;275185]When I receive a P-1 assignment, sometimes I do additional trial factoring with the GPU from 68 to 71 bits before the P-1 work begins. Is it important to change the worktodo file to reflect this increase before the P-1 process starts?
Chuck[/QUOTE] Sort of. The level of TF done does affect the optimal P-1 bounds.
[QUOTE=Chuck;275185]When I receive a P-1 assignment, sometimes I do additional trial factoring with the GPU from 68 to 71 bits before the P-1 work begins. Is it important to change the worktodo file to reflect this increase before the P-1 process starts?[/QUOTE]
It's not critically important. The client uses this information to compute the optimal bounds. If the client thinks the exponent has been factored less deeply than it actually has been (or will be; the order of factoring makes no difference to the factors you will actually find), then it will choose somewhat higher bounds than are optimal.
[QUOTE=fivemack;275181]Isn't it only bad if the increased runtime doesn't give a proportionate increase in probability of factor?[/QUOTE]
The problem, as AXN points out, is that we don't know that it does. In the case where the assignment is a P-1 (rather than an LL assignment getting an initial P-1), there is another issue: the more time spent on each assignment, the fewer the client can complete in a given period of time, and the more assignments pass through to LL testing without having been P-1ed first. Of these, about half never get a stage 2. This means that, in exchange for a slightly increased chance of finding a factor with the exponents we do test, we're losing even more with the exponents we don't.
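The trade-off described here can be made concrete with some back-of-the-envelope arithmetic. All figures below (days per test, probability of finding a factor) are hypothetical illustrations, not measured GIMPS numbers:

```python
# Hypothetical throughput comparison: a fixed budget of CPU days spent
# on P-1 with default bounds vs. deeper bounds. The numbers are made up
# purely to illustrate the argument, not taken from real GIMPS data.
days_available = 100.0

# (label, days per P-1 test, chance each test finds a factor)
scenarios = [("default bounds", 2.0, 0.04),
             ("deeper bounds ", 3.0, 0.05)]

for label, days_per_test, p_factor in scenarios:
    tests = days_available / days_per_test
    print(f"{label}: {tests:.0f} exponents P-1ed, "
          f"~{tests * p_factor:.1f} factors expected")
```

With these illustrative numbers the deeper bounds find more factors per test, yet fewer factors overall, because fewer exponents get any P-1 at all before LL testing.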
The correct optimality criterion, for the vast majority of Mersenne exponents, is how to prove the most of them composite for the least amount of effort. Factors found per GHz-day is the correct metric.
Mr P-1 points out that by doing relatively deep P-1, we have many exponents not getting any stage 2 P-1, which has a significantly higher return of factors found per time spent. Thus, exponents that could have had a factor found relatively easily are getting LL tested instead. This is also happening with TF, though in this case, the change is due to an increase in the ease of doing TF on GPUs.
B-S extension
How does P95/64 indicate that B-S has kicked in? Also, what is Stage 1 GCD?
[QUOTE=kladner;275214]How does P95/64 indicate that B-S has kicked in?[/QUOTE]
You'll see an "E=6" (or higher) in your results.txt file if it fails to find a factor. For some reason it doesn't say when it finds one. [QUOTE]Also, what is Stage 1 GCD?[/QUOTE] The client performs a GCD ([URL=http://en.wikipedia.org/wiki/Greatest_common_divisor]Greatest Common Divisor[/URL]) calculation at the end of each stage. The GCD extracts the factor(s) found (if any) from the result of the computation in each stage.
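The stage-end GCD can be sketched in a few lines. This is only a toy illustration of textbook P-1 stage 1, not how prime95 actually implements it, and the moduli (211 × 2027) are small constructed examples rather than Mersenne numbers:

```python
from math import gcd

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0], sieve[1] = False, False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def pminus1_stage1(N, B1):
    """Toy P-1 stage 1: compute a = 3^E mod N, where E is the product
    of all prime powers <= B1. The stage-end GCD then extracts any
    prime factor f of N for which f - 1 is B1-smooth."""
    a = 3
    for q in primes_up_to(B1):
        qe = q
        while qe * q <= B1:      # largest power of q not exceeding B1
            qe *= q
        a = pow(a, qe, N)
    return gcd(a - 1, N)         # this is the "Stage 1 GCD"

# 211 * 2027 = 427697.  Here 211 - 1 = 2*3*5*7 is 50-smooth, while
# 2027 - 1 = 2*1013 is not, so stage 1 with B1 = 50 extracts 211:
print(pminus1_stage1(211 * 2027, 50))   # 211
```

The GCD is what turns the accumulated exponentiation into an actual factor: a ≡ 1 modulo any prime f with (f − 1) | E, so gcd(a − 1, N) collects exactly those primes.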
Thanks, Mr. P-1!
[QUOTE=Christenson;275207]Mr P-1 points out that by doing relatively deep P-1, we have many exponents not getting any stage 2 P-1, which has a significantly higher return of factors found per time spent. Thus, exponents that could have had a factor found relatively easily are getting LL tested instead[/QUOTE]
It's too bad this is happening. I know a lot of us have been trying to get P-1 done before the first LL, but I guess too few P-1 tests are still getting done before PrimeNet hands exponents out for first-time LL? I've turned practically all my computers that have enough memory over to doing P-1. So is it that there are just too few of us doing P-1?
[QUOTE=delta_t;275220]
I've turned practically all my computers that have enough memory over to doing P-1. So is it that there are just too few of us doing P-1?[/QUOTE] Basically yes. Turning every computer that has enough memory over to P-1 is probably the best thing you could be doing for GIMPS. The only exception is if you have TF-capable GPUs. Currently the GPU factoring programs also need a great deal of CPU time (typically an entire core or two) to support the GPU. Depending on the specific work you do, this may be even more beneficial to GIMPS than devoting those cores to P-1.
[QUOTE=Christenson;275207]The correct optimality criterion is, for the vast majority of mersenne exponents, how to prove the most of them composite for the least amount of effort. Factors found per GHz-Day is the correct metric.[/QUOTE]
In fact a dedicated P-1er's contribution is optimal if he maximises the number of factors he finds [i]that would otherwise not be found[/i] (and minimises the number of factors he fails to find that would otherwise be found). It's easy to see that P-1ers "should" choose lower bounds than LL testers doing preliminary P-1s, but quantifying how much lower is extraordinarily difficult. Despite the logic, it "feels" wrong to deliberately reduce the bounds in any way, so I don't do this. A dedicated P-1er with a reasonable amount of memory who uses prime95's default bounds calculation is making a contribution to GIMPS that is significantly greater than if he devoted his cores to LL testing. And that is good enough for me.
[QUOTE=Mr. P-1]You'll see an "E=6" (or higher) in your results.txt file, if it fails to find a factor.[/QUOTE]
Ah! Like this:
[Wed Oct 19 21:45:54 2011] UID: kladner/pod64, M52315441 completed P-1, B1=610000, B2=15555000, E=6, We4: 498F4FED, AID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[Wed Oct 19 22:32:00 2011] UID: kladner/pod64, M52310233 completed P-1, B1=610000, B2=15555000, E=6, We4: 49964FA4, AID: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
So given the discussion just previous, perhaps I don't need to allocate quite so much RAM; perhaps I should instead dedicate another worker to P-1 and spread the benefits further.
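For anyone wanting to scan a whole results.txt for lines where B-S kicked in, a small parser can be sketched from the line format shown above. The regex and field names are my own assumptions based on these sample lines, not a documented format:

```python
import re

# Pattern assumed from the sample results.txt lines above; B2 and E
# may be absent on some lines, so both are optional.
LINE_RE = re.compile(
    r"M(?P<exponent>\d+) completed P-1, "
    r"B1=(?P<B1>\d+)(?:, B2=(?P<B2>\d+))?"
    r"(?:, E=(?P<E>\d+))?"
)

def brent_suyama_lines(lines):
    """Yield (exponent, B1, B2, E) for lines where B-S kicked in (E >= 6)."""
    for line in lines:
        m = LINE_RE.search(line)
        if m and m.group("E") and int(m.group("E")) >= 6:
            yield (int(m.group("exponent")), int(m.group("B1")),
                   int(m.group("B2") or 0), int(m.group("E")))

sample = [
    "UID: kladner/pod64, M52315441 completed P-1, "
    "B1=610000, B2=15555000, E=6, We4: 498F4FED",
]
print(list(brent_suyama_lines(sample)))   # [(52315441, 610000, 15555000, 6)]
```

In practice you would pass `open("results.txt")` directly as the iterable of lines.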