mersenneforum.org  

Old 2013-12-06, 14:06   #1
otutusaus
 
Nov 2010
Ann Arbor, MI

2·47 Posts
P-1 & LL wavefront slowed down?

I recently decided to shift to P-1 tests, and most of my assignments are for exponents around 68M-69M. However, every now and then I am assigned an exponent in the 62M-65M region, so I decided to check the work distribution map to see where the TF, P-1 and LL wavefronts are.

I noticed there is a long list of exponents starting at 62913187 currently assigned for TF to the user "GPU Factoring" on the computer "ll_work", even though they have already been trial-factored to 2^74 and are therefore being held back from P-1 and LL testing. See links:
http://www.mersenne.org/assignments/...et+Assignments
http://www.mersenne.org/report_expon...&B1=Get+status

Additionally, that same user/computer is carrying a lot more work: it currently has another long list of exponents in the 30M and 40M regions (which have already been LL'd) assigned for TF.

I also noticed that most of the first-time LL assignments are in that same range (62M to 64M). Who is that user? These reservations are also affecting first-time LL and P-1 assignments by pushing other users' efforts to higher exponents. Is there anything to be done about it?
Old 2013-12-06, 14:25   #2
R.D. Silverman
 
 
Nov 2003

3×13×191 Posts

Quote:
Originally Posted by otutusaus View Post
I recently decided to shift to P-1 tests, and most of my assignments are for exponents around 68M-69M. [...] Is there anything to be done about it?
A more important question:

Why on Earth does it matter??? There is no deadline for finding the next Mersenne prime. It will be found in due course.

This isn't a race. The computations move forward.

As for those who argue constantly over the 'optimal' TF levels and the TF vs. P-1 tradeoffs, I say to you: You are all seriously deluded if you think it makes any difference. It really doesn't matter whether you do TF to 71, 72, 73, ....... bits.
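
For a rough sense of scale, here is a minimal sketch based on the common heuristic that a candidate with no factor below 2^b has a factor between 2^b and 2^(b+1) with probability of roughly 1/b; the heuristic and the bit levels chosen are assumptions for illustration only.

Code:
# Python sketch: diminishing returns from each extra bit of trial factoring.
# Assumes the rough heuristic P(factor in [2^b, 2^(b+1))) ~ 1/b for a
# candidate already trial-factored below 2^b.

def incremental_factor_chance(bit_level):
    """Approximate chance that TF from 2^bit_level to 2^(bit_level + 1)
    eliminates the candidate, given no factor was found below 2^bit_level."""
    return 1.0 / bit_level

for b in range(71, 76):
    print(f"TF {b} -> {b + 1} bits: ~{incremental_factor_chance(b):.2%} chance of a factor")

Each additional bit level costs roughly twice as much TF work as the previous one, yet the chance of success per level stays in the 1.3-1.4% range, which is the sense in which the exact stopping point hardly matters.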
Old 2013-12-06, 14:27   #3
lycorn
 
 
Sep 2002
Oeiras, Portugal

2²·3·5·23 Posts

@OP,
Have you heard of the GPUto72 subproject?
See http://www.mersenneforum.org/forumdisplay.php?f=95

Also check the GPUto72 site.

You are most welcome to participate!

Old 2013-12-06, 18:21   #4
TheMawn
 
 
May 2013
East. Always East.

11·157 Posts

GPU to 72 was created back when 72 bits was the optimal TF'ing level. PrimeNet at this point in time only hands out work for CPUs. GPU72.com is a resource that offers work for GPUs, which now includes the full range of TF, LL and P-1.

PrimeNet assigns GPU to 72 a massive amount of work, which it delegates among its users. Results are submitted to PrimeNet manually by the users themselves and are given the appropriate credit. The two servers take care of the rest.

The computer/user you pointed out is probably the central computer for the whole subproject. Check out position two in top teams overall:

http://mersenne.org/report_top_teams/

EDIT: and position five in first-time LLs

http://mersenne.org/report_top_teams_LL/

Old 2013-12-06, 19:16   #5
otutusaus
 
Nov 2010
Ann Arbor, MI

1011110₂ Posts

Quote:
Originally Posted by R.D. Silverman View Post
A more important question:
Why on Earth does it matter??? There is no deadline for finding the next Mersenne prime. It will be found in due course.
This isn't a race. The computations move forward.
I am with you, we'll eventually get there. However, work assignment rules were set up to optimize the progress of GIMPS and reduce the amount of time required per test. And that's important because the longer a test is run, the more chances an error may occur. So why LLing 69M exponents now if we can LL 62M; that only increases the probablity of an error.
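
To put rough numbers on both effects, a minimal sketch; the per-iteration error probability below is an invented figure purely for illustration, and the cost model is the usual approximation that an LL test of 2^p-1 takes about p iterations, each costing on the order of p*log p for the FFT squaring.

Code:
import math

# Python sketch: comparing a first-time LL test at p = 62M vs p = 69M.
# Assumptions (illustrative only): ~p iterations per test, each costing
# ~p*log2(p) work, and an independent per-iteration error probability EPS.
EPS = 1e-10  # invented per-iteration error probability

def relative_cost(p):
    """Work for one LL test of 2^p - 1, up to a constant factor."""
    return p * p * math.log2(p)

def prob_any_error(p):
    """Chance of at least one error somewhere in the ~p iterations."""
    return 1.0 - (1.0 - EPS) ** p

for p in (62_000_000, 69_000_000):
    print(f"p = {p}: cost ~ {relative_cost(p):.3e}, P(error) ~ {prob_any_error(p):.2%}")

Under these assumptions the 69M test is roughly 25% more expensive than the 62M one, and correspondingly more likely to pick up at least one error, whatever the true per-iteration error rate is.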
Quote:
Originally Posted by lycorn View Post
@OP,
Have you heard of the GPUto72 subproject?
See http://www.mersenneforum.org/forumdisplay.php?f=95
Quote:
Originally Posted by TheMawn View Post
GPU to 72 was created back when 72 bits was the optimal TF'ing level. PrimeNet at this point in time only hands out work for CPUs. GPU72.com is a resource that offers work for GPUs, which now includes the full range of TF, LL and P-1. [...] The computer/user you pointed out is probably the central computer for the whole subproject.
I knew of it, but I didn't know it was running P-1 and LL tests. I think bringing GPUs to the quest is great. Still, massive exponent reservation for TF work delays the availability of those exponents for P-1 and LL tests, without adding much to the success rate:
Quote:
Originally Posted by R.D. Silverman View Post
It really doesn't matter whether you do TF to 71, 72, 73, ....... bits.
Old 2013-12-06, 19:31   #6
lycorn
 
 
Sep 2002
Oeiras, Portugal

2²·3·5·23 Posts

Quote:
Originally Posted by otutusaus View Post
Still, massive exponent reservation for TF work delays the availability of those exponents for P-1 and LL tests, without adding much to the success rate:
Read this thread, for info on that subject:

http://www.mersenneforum.org/showthread.php?t=18975

GPUto72 is succeeding in keeping ahead of the wavefront of new first-time LL tests, which means that, except on some relatively rare occasions, exponents are being handed out for testing already TFed to a high bit level and P-1ed, so the testers may proceed straight to the actual LL test. In this process, several additional candidates are eliminated (due to the higher-than-default level of factoring done by the GPUs). There's a lot to read on the subject in this forum.
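
For a sense of how many extra candidates that higher-than-default factoring removes, a minimal sketch using the same rough 1/b heuristic as above; the bit levels (71 as the notional default, 74 for the GPU work) are assumptions for illustration, not the actual assignment rules.

Code:
# Python sketch: expected extra eliminations from TF beyond the default depth.
# Uses the rough heuristic P(factor in [2^b, 2^(b+1))) ~ 1/b.

def fraction_eliminated(from_bits, to_bits):
    """Approximate fraction of candidates removed by TF from 2^from_bits
    up to 2^to_bits, assuming none had a factor below 2^from_bits."""
    survive = 1.0
    for b in range(from_bits, to_bits):
        survive *= 1.0 - 1.0 / b
    return 1.0 - survive

print(f"TF 71 -> 74 bits removes ~{fraction_eliminated(71, 74):.1%} of candidates")

So on the order of 4% of the exponents handed out this way never need an LL test at all, on top of those already removed at the default depth.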
Old 2013-12-06, 20:14   #7
otutusaus
 
Nov 2010
Ann Arbor, MI

2·47 Posts

Quote:
Originally Posted by lycorn View Post
Read this thread, for info on that subject:

http://www.mersenneforum.org/showthread.php?t=18975

GPUto72 is succeeding in keeping ahead of the wavefront of new first-time LL tests [...]
Oh, well! It looks like TF vs P-1 on GPU vs CPU is a hot topic these days.
Old 2013-12-06, 20:19   #8
otutusaus
 
Nov 2010
Ann Arbor, MI

1011110₂ Posts

Quote:
Originally Posted by lycorn View Post
Read this thread, for info on that subject:

http://www.mersenneforum.org/showthread.php?t=18975

GPUto72 is succeeding in keeping ahead of the wavefront of new first-time LL tests [...]
Still, if you check the link I posted, most of the assignments are almost a year old. They feel more like placeholders than actual assignments.
Old 2013-12-06, 20:26   #9
chalsall
If I May
 
 
"Chris Halsall"
Sep 2002
Barbados

3×31×97 Posts

Quote:
Originally Posted by otutusaus View Post
Still, if you check the link I posted, most of the assignments are almost a year old. They feel more like placeholders than actual assignments.
Deal with it, David.
Old 2013-12-06, 21:01   #10
R.D. Silverman
 
 
Nov 2003

1110100011001₂ Posts

Quote:
Originally Posted by otutusaus View Post
I am with you, we'll eventually get there. However, work assignment rules were set up to optimize the progress of GIMPS
Thank you for making this claim. Please give a precise specification of the optimization that is being done. Such an optimization must take into account:

- The different algorithms: LL, P-1, TF
- A very accurate specification of the computational complexity of each algorithm. This means an accurate determination of the implied constants in the big-O estimate.
- A very accurate measurement of how fast each of them runs on each of the various (many different!) machine types.
- An accurate measurement of how many machines of each type are being used.
- An accurate determination of the percentage of time each one spends running the algorithms.
- Accurate estimates of the probability that a given computation succeeds based upon the input parameters.

To put it plainly: the required data is not available. Nor has an objective function been specified. Note also that it will be a chance-constrained optimization model.

Conclusion: the notion of 'optimize' is poorly conceived at best. No one (me included) knows how to specify this optimization problem. I could do it as a research project, but have neither the time nor the inclination. Does anyone else here have the necessary skills?

Quote:
and reduce the amount of time required per test.
Time per test is meaningless by itself. You must also know the probability of success/failure for P-1 and TF. And minimizing 'time per test' fails to take into account the allocation of machine resources to those tests. Running method A on machine 1 and method B on machine 2 might be better with the methods swapped, or even run on a totally different machine. I.e., you might find that running methods A and B works better entirely on machine 3, and that you should run method C on 1 and 2.

Does anyone know whether the allocation of the many different machines to the set of tests has been done correctly? You also need to take into account tradeoffs that no one seems to consider, e.g. TF and P-1 tests are NOT independent.
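
To make that concrete, here is a minimal sketch of the break-even comparison such a model would have to make for a single exponent; every number in it (the GHz-day costs, the success probability, the assumption that a found factor saves two LL tests) is an invented placeholder, not measured GIMPS data.

Code:
# Python sketch: toy break-even test for doing one more unit of factoring
# work (TF to the next bit level, or a P-1 run) on a single exponent.
# All figures are invented placeholders, not GIMPS measurements.

def factoring_worthwhile(cost_factoring, prob_find_factor, cost_ll, ll_tests_saved=2.0):
    """True if the expected LL work saved by a found factor exceeds the
    cost of the factoring attempt itself."""
    expected_saving = prob_find_factor * ll_tests_saved * cost_ll
    return expected_saving > cost_factoring

cost_tf_next_bit = 2.5       # assumed cost of TF from 2^73 to 2^74 (GHz-days)
prob_factor_here = 1.0 / 73  # rough heuristic chance of a factor in that range
cost_one_ll_test = 250.0     # assumed cost of one LL test (GHz-days)

print(factoring_worthwhile(cost_tf_next_bit, prob_factor_here, cost_one_ll_test))

Even this toy version already needs exactly the per-machine costs and success probabilities argued above to be unavailable, and it treats TF and P-1 as independent, which they are not.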

Quote:
And that's important because the longer a test is run, the more chances there are for an error to occur.
True only for LL. And it would be less of a problem if people would stop worrying about overclocking.

And there is another part of the optimization problem: You might have fewer errors and less total time spent by actually REDUCING clock speeds.

In a highly heterogeneous environment, it seems impossible to even approach optimizing the calculations.
Old 2013-12-06, 21:11   #11
R.D. Silverman
 
 
Nov 2003

1110100011001₂ Posts

Quote:
Originally Posted by R.D. Silverman View Post
Thank you for making this claim. Please give a precise specification of the optimization that is being done. [...] In a highly heterogeneous environment, it seems impossible to even approach optimizing the calculations.
Let me add that even if you had all of the data and a properly formulated model, implementing it would be nigh on impossible. Can you imagine a central organizer trying to tell (say) a given user:

"You can't run LL. You should run TF to 70 bits"

Can you imagine users' reactions to someone telling them what to run and how to run it?

Finally, all you can hope to accomplish is to reduce the EXPECTED time to find the next prime. And you will never know whether some other allocation of resources would have done it faster. Any time savings that you might achieve are UNMEASURABLE and lost in the noise of the process. You will not be able to observe any savings of time that you might achieve.
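
A minimal sketch of why any such saving would be lost in the noise, under the invented simplification that every first-time test has the same small independent chance of turning up a prime:

Code:
import math

# Python sketch: spread in the number of first-time tests needed to hit
# the next prime, under a toy geometric model.  Q is an invented value.
Q = 1.0 / 150_000  # assumed chance per candidate (illustration only)

mean_tests = 1.0 / Q                # expected tests until the next prime
sd_tests = math.sqrt(1.0 - Q) / Q   # standard deviation of a geometric wait

print(f"expected tests to next prime: {mean_tests:,.0f}")
print(f"standard deviation:           {sd_tests:,.0f}")
print(f"a 2% scheduling gain saves:   {0.02 * mean_tests:,.0f} tests")

The 2% figure is itself arbitrary; the point is only that the relative spread of a geometric waiting time is close to 100%, so a scheduling gain of a few percent, whatever its true size, cannot be observed against it.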