mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Software (https://www.mersenneforum.org/forumdisplay.php?f=10)
-   -   Prime95 version 29.4 (https://www.mersenneforum.org/showthread.php?t=22683)

Prime95 2017-11-03 02:01

Prime95 version 29.4
 
Prime95 version 29.4 build 7 is available.

From whatsnew.txt:

[code]1) GIMPS has a new sub-project -- finding (probable) prime Mersenne cofactors.
This sub-project has two parts: 1) Running PRP tests, and 2) Finding
additional factors. To support this new sub-project there are three
new work preferences: PRP on Mersenne cofactors, PRP double-checking on
Mersenne cofactors, ECM on Mersenne cofactors.
2) Like LL tests, PRP tests now support shift counts to aid in running double-checks.
Shift counts are only supported for Mersenne numbers and Mersenne cofactors.
3) PRP tests now support a type of low overhead error checking that almost guarantees
correct results even on flaky hardware. We call this Gerbicz error-checking,
after Robert Gerbicz, who proposed it at mersenneforum.org. This error-check only
works for base-2 numbers.
4) Because PRP tests are highly reliable, we now offer the option to do PRP tests
instead of Lucas-Lehmer primality tests. There are 4 new work preferences
similar to LL work preferences: first-time PRP tests, world record PRP tests,
PRP tests on 100 million digit numbers, and PRP double-checking.
If you are looking for a 100 million digit prime, PRP testing is recommended
rather than LL testing.
5) For non-base-2 PRP tests, there is a new option to run each iteration twice and
rollback if a mismatch occurs. Useful only on flaky hardware due to the obvious
high overhead.
6) Minor performance tweaks were made to stage 1 of P-1. Save files are incompatible
in stage 1. Wait for your P-1 test to reach stage 2 before upgrading.
[/code]
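
For the curious, a rough sketch of the idea behind item 3: for a PRP test with base 3 we compute u = 3^(2^n) mod N by repeated squaring, and a running product d of the residue at every L-th iteration satisfies d_new = 3 * d_old^(2^L) mod N, so the whole chain since the last verified state can be re-verified with just L extra squarings. The toy Python below is illustrative only, not Prime95's code; it ignores shift counts and the adaptive compare interval discussed later in this thread:

[CODE]def prp3_gerbicz(N, n, L):
    """Compute 3^(2^n) mod N with Gerbicz error checking (sketch)."""
    u, d, i = 3, 3, 0                # PRP residue and running checksum
    u_ok, d_ok, i_ok = 3, 3, 0       # last verified state (a save file in Prime95)
    blocks = 0
    while i + L <= n:
        for _ in range(L):           # one block: L squarings of the PRP residue
            u = u * u % N
        i += L
        blocks += 1
        d_prev, d = d, d * u % N     # checksum update: one extra mult per block
        if blocks % L == 0:          # every L blocks, verify 3*d_prev^(2^L) == d;
            chk = d_prev             # an error anywhere since the last verified
            for _ in range(L):       # state breaks this identity
                chk = chk * chk % N
            if 3 * chk % N == d:
                u_ok, d_ok, i_ok = u, d, i   # commit verified state
            else:
                u, d, i = u_ok, d_ok, i_ok   # mismatch detected: roll back
                blocks = i // L
    for _ in range(n - i):           # tail iterations, unchecked in this sketch
        u = u * u % N
    return u[/CODE]

With L near sqrt(n) the overhead is roughly 2/sqrt(n) of the total work (one multiply per block plus L squarings per verification). For N = 2^p - 1, the number is a base-3 probable prime exactly when prp3_gerbicz(N, p, L) returns 9.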

This version is not heavily tested - but I don't expect many problems. A few users have been testing the new PRP code for a while. If you are doing any PRP work you should upgrade to this version.

Also, some bugs were fixed: [url]http://www.mersenneforum.org/showpost.php?p=467365&postcount=2[/url] If you have a flaky machine, please upgrade so that you don't lose a lot of work when an LL error occurs and a rollback to the .bu3 or .bu4 file is required.


Download links:
Windows 64-bit: [URL]ftp://mersenne.org/gimps/p95v294b7.win64.zip[/URL]
Linux 64-bit: [URL]ftp://mersenne.org/gimps/p95v294b7.linux64.tar.gz[/URL]
Mac OS X: [URL]ftp://mersenne.org/gimps/p95v294b7.MacOSX.zip[/URL]
Windows 32-bit: [URL]ftp://mersenne.org/gimps/p95v294b7.win32.zip[/URL]
Linux 32-bit: [URL]ftp://mersenne.org/gimps/p95v294b7.linux32.tar.gz[/URL]
FreeBSD11 64-bit: [URL]ftp://mersenne.org/gimps/p95v294b7.FreeBSD11-64.tar.gz[/URL]
Source: [URL]ftp://mersenne.org/gimps/p95v294b7.source.zip[/URL]
Windows 64-bit service: [URL]ftp://mersenne.org/gimps/p95v294b7.win64.service.zip[/URL]
Windows 32-bit service: [URL]ftp://mersenne.org/gimps/p95v294b7.win32.service.zip[/URL]

Prime95 2017-11-03 02:01

1) Linux mprime menu rejects work preferences above 150. Workaround is to manually edit prime.txt. Fixed in 29.4 build 4.
2) A bug caused extraneous (and stale) data to be output to the screen and results.txt. Fixed in 29.4 build 4.
3) A bug in allocating cores for low-worker throughput benchmarks on Xeon (and Threadripper?) systems caused too many cores to be allocated to some workers, and thus affinity assignment errors. Fixed in 29.4 build 4.
4) Many Mac users do not know how or where to install libgmp. Prime95 now references a libgmp contained within the bundle. Fixed in build 6.

GP2 2017-11-03 02:44

It might be useful to post SHA256 sums.

I get:

[CODE]
Get-FileHash ~\Downloads\p95v294b3.win64.zip

Algorithm Hash Path
--------- ---- ----
SHA256 AD0576CA2E63BB433A2A2D0974EF3D481ACAEA300E7C52466D5DE882D7C82B17
[/CODE]

[CODE]
sha256sum p95v294b3.linux64.tar.gz
efc2b3edb47b5625be446101f14b832dd0d13fcd3b51b738d1aac24c36585108 p95v294b3.linux64.tar.gz
[/CODE]

Dubslow 2017-11-03 04:57

For those of us who've only been partially lurking, does this indicate a mass shift for GIMPS away from LL tests? That is, how much effort does PRP take on a given exponent (say the current LL or DC wavefronts) relative to the LL? If the PRP takes only a few percent more effort in exchange for being nearly 100% reliable (as opposed to the current 96% LL reliability), should that not mean that GIMPS should primarily use PRP tests over the LL?

Prime95 2017-11-03 05:08

[QUOTE=Dubslow;470871]For those of us who've only been partially lurking, does this indicate a mass shift for GIMPS away from LL tests? That is, how much effort does PRP take on a given exponent (say the current LL or DC wavefronts) relative to the LL? If the PRP takes only a few percent more effort in exchange for being nearly 100% reliable (as opposed to the current 96% LL reliability), should that not mean that GIMPS should primarily use PRP tests over the LL?[/QUOTE]

I have not done any comparisons - Gerbicz adds only 0.2% runtime. LL's Jacobi testing adds about 0.1% overhead. So the runtimes should be very, very close.

The biggest problem with PRP instead of LL is that the server is not ready. Yes, we've shoe-horned in support, but it is not ready for everyone to convert. Maybe we'll start a thread here where people can volunteer to do first-time PRP testing and double-checking to work out the inevitable issues.

Mark Rose 2017-11-03 05:16

I've switched my flaky machine to this version and PRP-D.

I've eliminated the memory channels and memory sticks as issues; the power supply is next.

Dubslow 2017-11-03 05:44

[QUOTE=Prime95;470872]I have not done any comparisons - Gerbicz adds only 0.2% runtime. LL's Jacobi testing adds about 0.1% overhead. So the runtimes should be very, very close.

The biggest problem with PRP instead of LL is that the server is not ready. Yes, we've shoe-horned in support, but it is not ready for everyone to convert. Maybe we'll start a thread here where people can volunteer to do first-time PRP testing and double-checking to work out the inevitable issues.[/QUOTE]

So PRP-Gerbicz and LL-Jacobi error variants are within less than a percent of each other in total runtime? What's the relative reliability then? The 96% I cited is of course without the Jacobi error check.

axn 2017-11-03 06:00

[QUOTE=Dubslow;470874]So PRP-Gerbicz and LL-Jacobi error variants are within less than a percent of each other in total runtime? What's the relative reliability then? The 96% I cited is of course without the Jacobi error check.[/QUOTE]

Jacobi error check will catch 50% of the errors. I guess that makes the reliability about 98%.

Prime95 2017-11-03 06:17

[QUOTE=Mark Rose;470873]I've switched my flaky machine to this version and PRP-D.

I've eliminated the memory channels and memory sticks as issues; the power supply is next.[/QUOTE]

There are very few PRP-D assignments available. If you run out of assignments, you can switch to first-time PRP.

Dubslow 2017-11-03 06:30

[QUOTE=axn;470875]Jacobi error check will catch 50% of the errors. I guess, that makes the reliability about 98%.[/QUOTE]

That's still substantially less than the PRP-gerbicz reliability though, correct?

That suggests that GIMPS should mostly eliminate LL, though I suppose the benefits are marginal.

srow7 2017-11-03 07:52

[QUOTE=Prime95;470866]Placeholder for reported bugs and fixes.[/QUOTE]
In mprime, menu 2 (type of work) will not let me enter 160 or 161 (PRP on Mersenne cofactors). It says:
please enter a value between 0 and 150

I can manually edit prime.txt; the server then gives me the expected assignments.

rudi_m 2017-11-03 07:54

I see that 29.4 now comes with libgmp.so. But it is still using the globally installed one, see

[CODE]
$ ps aux | grep mprime
rudi 28456 0.2 0.0 144196 5036 pts/10 SNl+ 08:38 0:00 ./mprime -m
$ lsof -p 28456 | grep libgmp
mprime 28456 rudi mem REG 254,0 551496 176632 /usr/lib64/libgmp.so.10.1.2
[/CODE]

That's not a problem, but if you really want the user to use the local one by default then you should add an rpath link, like
[QUOTE]gcc -Wl,-rpath,'$ORIGIN' ...[/QUOTE]
(In a Makefile you would need to write $$ORIGIN).

R. Gerbicz 2017-11-03 10:19

It is an interesting question whether it is worth keeping or skipping the Jacobi test.

Check my analysis:
Suppose there is a 3% error rate for an LL/PRP test (without error checking) and that a test takes time t for a given exponent p.
Do only the strong error check. With it, a single (detected!) error costs t/80*1.002 of extra time to roll back to a good iteration, if we are at the main wavefront p~8e7 with a (very) traditional save point every 1e6 iterations (I don't know what is currently used in p95). [The extra 0.002 is due to the strong error check.]

Assume exactly a 3% probability of a single error; then the expected rollback overhead is
[CODE]
0.03*t/80*1.002=0.00037575*t
[/CODE]
but with the Jacobi check you would save only half of this, because it detects errors
with 50% probability. Since the Jacobi check costs more than 5 times this amount, it is simply not worth doing in addition to the strong error check.

An advantage of the Jacobi check is that it gives a more reliable result, but alongside a strong error check this has no value: with a ~1/mp error rate we would see less than 2^(-1e6) summed error probability for all p>1e6.


About the 0.2% cost in time of the strong error check:
You simply can't do better than 2/sqrt(p) (where the whole test is 1) if you want to see at least one strong error check.
So you can achieve the 0.2% (total) overhead in time for p>1e6.
What would/could happen with much larger p, and with an even better error rate (better than 3%):
with L=H=sqrtint(p)/10 we would see at least 100 error checks,
and the overhead would be only 20/sqrt(p), which can be made arbitrarily small if p is "large". But this is still not a recommended setup, because we don't know what the error rate of future memory will be, or what algorithm/method will be used for integer multiplication.
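
A quick sanity check of the arithmetic above, using the assumptions stated in this post (3% error rate, save point every 1e6 iterations at p~8e7, 0.2% strong-check and 0.1% Jacobi overhead):

[CODE]p, save_interval = 8e7, 1e6
err_rate, gerbicz_oh, jacobi_oh = 0.03, 0.002, 0.001
rollback = save_interval / p * (1 + gerbicz_oh)  # time redone per detected error, in units of t
exp_rollback = err_rate * rollback               # expected rollback cost: 0.00037575*t
jacobi_saving = exp_rollback / 2                 # Jacobi detects only ~50% of errors
print(exp_rollback, jacobi_saving, jacobi_oh)    # 0.00037575 0.000187875 0.001[/CODE]

The ~0.000188*t a Jacobi check could save is indeed more than five times smaller than the ~0.001*t it costs.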

R. Gerbicz 2017-11-03 10:49

Ouch, forget the first part of my post, for the PRP test there is no Jacobi check!

James Heinrich 2017-11-03 14:35

Feature request: Can there be a defined value for "low memory" as used in "LowMemWhileRunning" and "MaxHighMemWorkers"? For example, I have 64GB RAM and let my P-1 workers use 11GB each, but not while running Photoshop. Rather than locking out stage2 entirely, could there be an option to use the "low" memory amount (e.g. 1GB instead of 11GB)? This option already exists by time of day (e.g. "Memory=5000 during 7:30-23:30 else 50000") but I would like it to kick in for "LowMemWhileRunning".

GP2 2017-11-03 16:30

Something very weird happened yesterday in one of my work directories.

I was running the prerelease 29.4b2, and the only unusual thing I can think of was that I was using InterimResidues=10000 in prime.txt

This might be a rare old problem though, rather than a new one caused by 29.4, and it might explain why exponents occasionally get abandoned after a large percentage of the LL test has already been done.


The worktodo.txt file looked like this before:

[CODE]
DoubleCheck=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,82082339,75,1
AdvancedTest=32985569
[/CODE]

(I had queued up a triple check of an old exponent, but that's not really relevant)

Now it looks like this:

[CODE]
DoubleCheck=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx,46843957,73,1
[/CODE]

In other words, the worktodo.txt file, with exponent 82082339 at 68.3% completed, was simply deleted, and recreated from scratch.

The exponent still shows up assigned to me in my Assignment Details page (for each person's account it's at [url]https://mersenne.org/workload/[/url] ), but it's gone from worktodo.txt, and if I wasn't monitoring the progress of my exponents and this one in particular, it would no doubt expire after 40 days or so.

prime.log looks like this:

[CODE]
[Thu Nov 2 16:01:34 2017 - ver 29.4]
Registering assignment: LL M32985569
PrimeNet error 40: No assignment
ra: redundant LL effort, exponent: 32985569
Registering assignment: LL M32985569
PrimeNet error 40: No assignment
ra: redundant LL effort, exponent: 32985569
[Thu Nov 2 16:17:59 2017 - ver 29.4]
Registering assignment: LL M32985569
PrimeNet error 40: No assignment
ra: redundant LL effort, exponent: 32985569
Registering assignment: LL M32985569
PrimeNet error 40: No assignment
ra: redundant LL effort, exponent: 32985569
[Thu Nov 2 16:30:31 2017 - ver 29.4]
Registering assignment: LL M32985569
PrimeNet error 40: No assignment
ra: redundant LL effort, exponent: 32985569
Registering assignment: LL M32985569
PrimeNet error 40: No assignment
ra: redundant LL effort, exponent: 32985569
[Thu Nov 2 16:45:58 2017 - ver 29.4]
Registering assignment: LL M32985569
PrimeNet error 40: No assignment
ra: redundant LL effort, exponent: 32985569
[Thu Nov 2 16:49:17 2017 - ver 29.4]
Getting assignment from server
PrimeNet success code with additional info:
Server assigned Lucas Lehmer primality double-check work.
Got assignment xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: Double check M46843957
Sending expected completion date for M46843957: Nov 7 2017
[Thu Nov 2 18:11:33 2017 - ver 29.4]
Sending expected completion date for M46843957: Nov 7 2017
[/CODE]

The results.txt file looks like this:

[CODE]
[Thu Nov 2 16:38:42 2017]
M82082339 interim Wg8 residue 04EEEDF862DE5EF2 at iteration 56020000
M82082339 interim Wg8 residue 959127764538F882 at iteration 56020001
M82082339 interim Wg8 residue A0B5015145E9626F at iteration 56020002
[Thu Nov 2 16:59:39 2017]
M46843957 interim Wg8 residue 7F425D08EC6C04EC at iteration 10000
M46843957 interim Wg8 residue EBE26DB177C74266 at iteration 10001
M46843957 interim Wg8 residue 86524632C0753142 at iteration 10002
[/CODE]

The timestamp of the savefile is:

[CODE]
-rw-r--r-- 1 root ec2-user 10260364 Nov 2 16:40 p82082339
[/CODE]

The launch time of the current instance is November 2, 2017 at 18:28:22 (all times are UTC).

There were a lot of predictable complaints in prime.log about the unassigned triple check, which are irrelevant; I've seen those before. There may have been a lot of restarts in a short period of time, because I run spot instances on the AWS cloud, and when the spot market price is close to the limit price it is not uncommon for instances to launch and then get terminated after only a few minutes.

The point is, in monitoring expiring exponents for strategic doublechecks, it's not that uncommon to see exponents abandoned by other users after a large percentage of the LL test has been completed. That's been a mystery, and we always assumed that this was because the user quit GIMPS, or their computer died, or some other reason. But now it looks like exponents can get wiped from worktodo.txt without any user intention.

This is NOT the same as the "unreserving" bug. In that one, prime.log records "Unreserving Mxxxxxxxx" lines, the assignment also disappears from the Assignment Details page, and exponents get deleted from the bottom of the worktodo.txt file, but exponents that already have more than 0.1% progress are immune from being unreserved.

Here it looks like the existing worktodo.txt file simply got deleted somehow, and a new one got created from scratch. We know that the program rewrites the worktodo.txt file after each DiskWrite interval, because if you manually edit it, then that edit gets overwritten at the next DiskWrite and the program restores what it thinks the worktodo.txt file should contain.

So is it possible that this periodic worktodo.txt rewrite is done non-atomically? Maybe the existing file gets deleted and then the new version is written immediately after? If it's done non-atomically, then if the system gets shut down or rebooted at precisely the right moment in between deletion of the old worktodo.txt and recreation of the new worktodo.txt, then when the system boots up again, there is no worktodo.txt and a new one gets recreated from scratch, and the assignments in the old just end up expiring some weeks later and the existing work progress is lost.

Could that be what happened here, and is it happening regularly elsewhere? Even if the odds are one in 10,000 of a reboot happening at exactly the right (wrong) moment, we are dealing with many millions of LL tests being done, after all.

Prime95 2017-11-03 18:17

[QUOTE=rudi_m;470894]I see that 29.4 now comes with libgmp.so. But it is still using the globally installed one.[/QUOTE]

I included the library since one linux user reported difficulty getting libgmp. I think he was using an older distro. I thought including the library would be more convenient than telling users to go build it from scratch.

I have no idea what the "correct" solution is. I would think using the global one is best in case the user has gone to the trouble of making a version optimized for his machine or a newer libgmp has been released with bug fixes and better algorithms.

I'm more than happy to do whatever the linux experts here say is best.

Prime95 2017-11-03 18:28

[QUOTE=GP2;470949]So is it possible that this periodic worktodo.txt rewrite is done non-atomically? Maybe the existing file gets deleted and then the new version is written immediately after? [/QUOTE]

Yes, that is exactly how it is coded.

Save files are written using the more tedious process of creating x.write, deleting x, then renaming x.write to x; on reading, the program looks for x and, if it does not exist, looks for x.write.

It looks like I need to do the same for worktodo.txt, prime.txt, and local.txt.
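
For readers following along, the save-file scheme described above looks roughly like this (a Python sketch, not Prime95's C code; os.replace is an atomic rename, which even avoids the brief window in the create/delete/rename sequence where only x.write exists):

[CODE]import os

def safe_rewrite(path, contents):
    tmp = path + ".write"
    with open(tmp, "w") as f:       # 1) write the new contents to x.write
        f.write(contents)
        f.flush()
        os.fsync(f.fileno())        # 2) make sure the data is on disk
    os.replace(tmp, path)           # 3) atomically swap it into place

def safe_read(path):
    for candidate in (path, path + ".write"):
        try:
            with open(candidate) as f:
                return f.read()     # fall back to x.write if x is missing
        except FileNotFoundError:
            continue
    return None[/CODE]

A crash at any instant then leaves either the old file or a complete new one, never nothing.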

Mark Rose 2017-11-03 19:06

[QUOTE=Prime95;470878]There are very few PRP-D assignments available. If you run out of assignments, you can switch to first-time PRP.[/QUOTE]

I may switch back to doing DCLL if I can figure out the stability issues. I kind of want to finish all the DCLL.

GP2 2017-11-03 20:04

[QUOTE=Mark Rose;470873]I've switched my flaky machine to this version and PRP-D.

I've eliminated the memory channels and memory sticks as issues; the power supply is next.[/QUOTE]

Why not have your flaky machine do Gerbicz PRP first time checks instead? No better way to put the vaunted error detection code to a thorough test. The flakier the better.

I am running PRP-DC on ten machines at the moment, they're being assigned gpuOwL exponents so far, so I could queue up your exponents after they're finished.

ATH 2017-11-03 20:09

I had another issue with 29.4b2 on an Amazon instance. I was doing a PRP CF test, and after a break for the automatic benchmark it suddenly could not read any of the save files and I had to restart the work.

@GP2: Maybe the "Elastic File System" is not 100% reliable on Amazon instances. Any limit to how many instances can write to the same EFS?

[QUOTE]
Iteration 11000000 / 28035701
M28035701/known_factors interim Wg8 residue E1E2C59AACEA8CE6 at iteration 11000000
[Thu Nov 2 04:04:05 2017]
Iteration 12000000 / 28035701
M28035701/known_factors interim Wg8 residue 6358034B610F739C at iteration 12000000
[Thu Nov 2 05:54:28 2017]
Iteration 12947159 / 28035701
FFTlen=1440K, Type=3, Arch=4, Pass1=320, Pass2=4608, clm=4 (1 core, 1 worker): 7.03 ms. Throughput: 142.32 iter/sec.
.
.
.
FFTlen=1728K, Type=3, Arch=4, Pass1=768, Pass2=2304, clm=1 (1 core, 1 worker): 8.73 ms. Throughput: 114.59 iter/sec.
Error reading intermediate file: pS035701
Renaming pS035701 to pS035701.bad1
Trying backup intermediate file: pS035701.bu
Error reading intermediate file: pS035701.bu
Renaming pS035701.bu to pS035701.bad2
All intermediate files bad. Temporarily abandoning work unit.
[/QUOTE]

GP2 2017-11-03 21:07

[QUOTE=ATH;470969]I had another issue with 29.4b2 on an Amazon instance. I was doing a PRP CF test, and after a break for the automatic benchmark it suddenly could not read any of the save files and I had to restart the work.

@GP2: Maybe the "Elastic File System" is not 100% reliable on Amazon instances. Any limit to how many instances can write to the same EFS?[/QUOTE]

I had the same issue, but that wasn't a filesystem issue; it was a one-time backwards incompatibility introduced deliberately by the mprime program.

In the final 29.4b2 pre-release version, George introduced Gerbicz-like error testing for PRP-CF. You can see that the results started appearing as "Unverified (Reliable)" on the exponent status page. As a result, all the old save files could not be resumed. The program renamed them to bad, and restarted those exponents. It was a one-time issue, and not really an issue since we are still doing small exponents and lost only minutes of work.


As for the Elastic File System, it does have the same issues as any other network file system, with latency and replication across availability zones. But not file corruption.

One major issue with EFS is that I/O throughput is throttled. You are charged according to how much disk space your EFS filesystem uses, and your I/O rate is proportional to how much disk space you use. It's documented, but it caught me by surprise when I first encountered it.

If you keep the default DiskWriteTime of 30 minutes, and only do LL testing, which creates relatively small save files, then you should be OK. But if you do stuff that creates big save files, like P−1, ECM, Fermat with large B2, or you reduce your DiskWriteTime to smaller values, then you start having problems. Simple Linux commands take a long time to complete, or you see 100M .write files taking several minutes to finish writing.

The long term solution for using mprime on the cloud would be to have it read and write directly to S3 storage instead of to files. The throttling doesn't happen with I/O to S3, or to an instance's EBS storage ("local disk space").

preda 2017-11-04 04:17

[QUOTE=Dubslow;470879]That's still substantially less than the PRP-gerbicz reliability though, correct?

That suggests that GIMPS should mostly eliminate LL, though I suppose the benefits are marginal.[/QUOTE]

I would say the benefits of PRP-first-time vs. LL-first-time are major, not marginal; because *every* LL result is double-checked, even if the LL error rate is only 4%.

If the LL wavefront is at N, and let's say the LL double-check wavefront is at N/2, then (since the cost of a test grows roughly quadratically with the exponent) a double-check costs about (1/2)^2 = 1/4 as much as a first-time test, which would imply that roughly 25% of total LL compute is used for double checks.

The gain from PRP would be that 25% being replaced by only a small percentage.

retina 2017-11-04 04:20

[QUOTE=preda;471004]I would say the benefits of PRP-first-time vs. LL-first-time are major, not marginal; because *every* LL result is double-checked, even if the LL error rate is only 4%.

If the LL wavefront is at N, and let's say the LL double-check wavefront is at N/2, then (since the cost of a test grows roughly quadratically with the exponent) a double-check costs about (1/2)^2 = 1/4 as much as a first-time test, which would imply that roughly 25% of total LL compute is used for double checks.

The gain from PRP would be that 25% being replaced by only a small percentage.[/QUOTE]You will still need double checks for every exponent even with a test regime that has known 100% perfect results. Because not everyone is honest, you can't simply accept a result and trust it.

preda 2017-11-04 04:26

[QUOTE=Dubslow;470879]That's still substantially less than the PRP-gerbicz reliability though, correct?

That suggests that GIMPS should mostly eliminate LL, though I suppose the benefits are marginal.[/QUOTE]

[QUOTE=retina;471005]You will still need double checks for every exponent even with a test regime that has known 100% perfect results. Because not everyone is honest, you can't simply accept a result and trust it.[/QUOTE]

Yes I agree and understand this necessity. That's why I mention "the few percents" still needed for some kind of double-check to be established.

But my point was that the potential gain is 25%, not 4% (minus the PRP double check policy).

In the worst case, I would see the PRP-double-check as being: complete double check up to N/4 (vs. N/2 for LL now), and that'd be 25% (1/4) being replaced by 6% (1/16) "check tax".
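
The scaling assumption behind these percentages, made explicit: the cost of a test grows roughly quadratically with the exponent (about p iterations of an FFT whose size also grows with p), so a double-check at wavefront N*f costs about f^2 of a first-time test at wavefront N:

[CODE]for f, label in ((1/2, "LL double-check at N/2"), (1/4, "PRP double-check at N/4")):
    print(f"{label}: {f**2:.0%} of a first-time test")  # 25% and 6%[/CODE]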

retina 2017-11-04 05:03

[QUOTE=preda;471006]In the worst case, I would see the PRP-double-check as being: complete double check up to N/4 (vs. N/2 for LL now) ...[/QUOTE]I think we need double checks up to N/1 regardless of LL or PRP or any other test type.

preda 2017-11-04 07:51

[QUOTE=retina;471008]I think we need double checks up to N/1 regardless of LL or PRP or any other test type.[/QUOTE]

But there's a trade-off between using the compute for finding new primes vs. verifying the existing results. We could even say that the goal is to find a new mersenne prime as fast as possible, and then the double-check arises as a natural outcome of efficient search.

Now, if you plug in some error rate of the existing results in the above formula, that would produce the "optimally efficient" rate of double-check. LL and PRP having different error rates, they'd require different rates of double-check.

We could say that a secondary goal is being able to say "there's no mersenne prime up to this value", and this would also play into the double check rate.

Also, maybe people will come up with ideas for a "smart" double-check of PRP results, one that would mostly guard against malicious user action rather than verify the computation.

GP2 2017-11-04 09:02

I think the answer is straightforward:

1. Double checking will continue to exist, even after all old LL tests have been verified.

2. There will be multiple statuses reflecting increasing degrees of certainty: never tested; unverified; unverified but reliable; verified; factored. Just like today, except with that one additional status.

3. The project administrators will continue to decide "what makes the most sense" and might adjust the default ratio of double checks to first time tests. It might go from one in ten to, say, one in a hundred.

4. The "one double check a year" default might be turned off by default instead.

5. Each individual user will have the option of specializing in double checks, but considerably fewer will choose to do so. "Strategic" double checking will probably go extinct.

6. The backlog of double checks will increase from the current ten years behind first-time tests to twenty years or more.

7. Moore's Law will ensure that at least some double checking will be done no matter what. For example, LL testing all exponents up to one million was a milestone twenty years ago when the project was first starting, but now a single user with sufficient resources can do this over the course of a weekend.

8. LL tests will still be needed to prove a Mersenne prime, since strictly speaking, PRP tests can only prove that a Mersenne number is composite.

9. We will receive a transmission from extraterrestrials with a complete list of the first hundred Mersenne primes. Primenet disbands because we only managed to discover sixty of them ourselves so far and there's no point continuing. There will be a scandal because we should have discovered sixty-one, and the Russians were responsible.

retina 2017-11-04 10:31

[QUOTE=GP2;471018]9. We will receive a transmission from extraterrestrials with a complete list of the first hundred Mersenne primes. Primenet disbands because we only managed to discover sixty of them ourselves so far and there's no point continuing.[/QUOTE]But we will still need to verify the aliens' reported results. So DC will go into overdrive. :showoff:

preda 2017-11-04 22:55

If a new mersenne prime is found by PRP followed by LL verification, who is credited with finding the prime? I think an official statement on that is needed.

Prime95 2017-11-05 04:30

[QUOTE=preda;471069]If a new mersenne prime is found by PRP followed by LL verification, who is credited with finding the prime? I think an official statement on that is needed.[/QUOTE]

Seems pretty obvious to me it is the person that did the PRP.

S485122 2017-11-05 10:29

output of the worker window
 
It seems that some lines are output to the worker window a second time with a different time stamp.
At the same time there is communication with the server: only the expected completion date of the current work unit is sent to the server (according to prime.log), and the iteration count from the previous worker screen output is written to results.txt. This happens approximately every 160 minutes and 210000 iterations.
I checked: the program is not repeating the iterations, meaning the iteration count of the lines on the screen and in results.txt is wrong.
This behaviour is observed with prime95 and mprime 29.4.3.

Jacob

Gordon 2017-11-05 22:36

[QUOTE=GP2;471018]

8. LL tests will still be needed to prove a Mersenne prime, since strictly speaking, PRP tests can only prove that a Mersenne number is composite.

[/QUOTE]

This! - let's not get carried away by doing *probable* testing. The goal of the project is to find Mersenne Primes, and the only way to guarantee that is to do the full LL test, followed by a verification run.

You can argue all you like, but no matter how much you say ah yes, well, PRP is 99.9999% (or whatever number you care to plug in) sure it's a prime, you don't KNOW that for a fact... the chances of it being wrong are not zero.

Gordon 2017-11-05 22:37

[QUOTE=Prime95;471080]Seems pretty obvious to me it is the person that did the PRP.[/QUOTE]

Really? The person who *proved* it prime is the one who ran the LL test, isn't it?

James Heinrich 2017-11-05 22:41

[QUOTE=Gordon;471123]Really? The person who *proved* it prime is the one who ran the LL test isn't it?[/QUOTE]Or is it the second person who ran the LL test, proving that the first LL test was accurate? :smile:


My opinion is the first person to make some reasonable claim, by whatever method, that a specific Mersenne number is prime, which is later proved to be true, should be credited as the discoverer.

Prime95 2017-11-05 23:06

[QUOTE=James Heinrich;471125]My opinion is the first person to make some reasonable claim, by whatever method, that a specific Mersenne number is prime, which is later proved to be true, should be credited as the discoverer.[/QUOTE]

The key word is "discoverer" not "prover".

retina 2017-11-05 23:18

[QUOTE=Gordon;471122]This! - let's not get carried away by doing *probable* testing. The goal of the project is to find Mersenne Primes, and the only way to guarantee that is to do the full LL test, followed by a verification run.

You can argue all you like, but no matter how much you say ah yes, well, PRP is 99.9999% (or whatever number you care to plug in) sure it's a prime, you don't KNOW that for a fact... the chances of it being wrong are not zero.[/QUOTE]I think everyone already knows that.

preda 2017-11-05 23:58

[QUOTE=Gordon;471122]This! - let's not get carried away by doing *probable* testing. The goal of the project is to find Mersenne Primes, and the only way to guarantee that is to do the full LL test, followed by a verification run.

You can argue all you like, but no matter how much you say ah yes, well, PRP is 99.9999% (or whatever number you care to plug in) sure it's a prime, you don't KNOW that for a fact... the chances of it being wrong are not zero.[/QUOTE]

Well, comparing an LL proved-prime result with 96% reliability against a PRP probable-prime result with 99.99% probability and 99.99% reliability, I'd take the PRP.

Prime95 2017-11-06 01:20

[QUOTE=S485122;471087]It seems that some lines are output to the worker window a second time with a different time stamp.
At the same time there is communication with the server: only the expected completion date of the current work unit is sent to the server (according to prime.log), and the iteration count from the previous worker screen output is written to results.txt. This happens approximately every 160 minutes and 210000 iterations.
I checked: the program is not repeating the iterations, meaning the iteration count of the lines on the screen and in results.txt is wrong.
This behaviour is observed with prime95 and mprime 29.4.3.[/QUOTE]

Could you provide more details? Work type? Screen shot?

Note to all: 29.4 sends interim residues to the server for the 500,000th iteration and every multiple of 5,000,000. This was Madpoo's idea, to give us the ability to do a quick(ish) partial check should we want to.

ATH 2017-11-06 06:05

[QUOTE=James Heinrich;471125]Or is it the second person who ran the LL test, proving that the first LL test was accurate? :smile:


My opinion is the first person to make some reasonable claim, by whatever method, that a specific Mersenne number is prime, which is later proved to be true, should be credited as the discoverer.[/QUOTE]

If a PRP test says probably prime, it might be more likely to be prime than a positive LL test result: with the Gerbicz error check the risk of hardware error is lower, and the extra risk of a composite slipping through as a 3-SPRP is infinitesimal at these sizes.

S485122 2017-11-06 06:12

1 Attachment(s)
[QUOTE=Prime95;471134]Could you provide more details? Work type? Screen shot?

Note to all: 29.4 sends interim residues to the server for the 500,000th iteration and every multiple of 5,000,000. This was Madpoo's idea, to give us the ability to do a quick(ish) partial check should we want to.[/QUOTE]OK, it makes sense then. But it means there is a bug in the screen and results.txt output: instead of the real iteration count, it is the last iteration count output to the screen that is shown. Also, I see no trace of output for the 500,000th iteration.

Another new feature that was not documented in what's new is that at the start of an LL test there is now a line "Starting primality test of Mxxx using FMA3 FFT length 2304K, Pass1=384, Pass2=6K, clm=2, 6 threads".

Attached: screen output and excerpts of results.txt and prime.log. The "new" lines are marked with "+ ".

Anyway that output belongs in the Communication window and prime.log only. It should be something like "sending interim residue for iteration 9000000 of M99999999 to server"...

I will end with two wishes:
- The possibility to have the time stamps output to the screen, prime.log and results.txt in ISO format. It could be a setting in one of the configuration files.
- The other is to have each result in results.txt preceded by a time stamp: at the moment there is no time stamp if two results come within a few minutes of each other (very short assignments or multiple workers).

Jacob

Prime95 2017-11-06 15:44

[QUOTE=S485122;471145]OK, it makes sense then. But it means there is a bug in the screen and results.txt output: instead of the real iteration count, it is the last iteration count output to the screen that is shown. Also, I see no trace of output for the 500,000th iteration.[/QUOTE]

Ah, I see the bug now. Don't know why I missed seeing it here. There's not supposed to be any screen output or results.txt output when sending an interim residue to the server. A "left over" line from a cut/paste operation is reprinting the last line output.

I've improved the text message output to prime.log.

Madpoo 2017-11-06 20:43

[QUOTE=GP2;470868]It might be useful to post SHA256 sums.
...
[/QUOTE]

[CODE]p95v294b3.FreeBSD11-64.tar.gz - 0F3A089E7AAB1A38F6FCF241A80D2B3D2EF95A3B758B87C6718012972207AB31
p95v294b3.linux32.tar.gz - CE944D628FB1CB0FA165BF7423957981F575A20961AE9C96D4691BCC4B0DE549
p95v294b3.linux64.tar.gz - EFC2B3EDB47B5625BE446101F14B832DD0D13FCD3B51B738D1AAC24C36585108
p95v294b3.MacOSX.zip - 180F7EAC59316ED298D4BCB479F4952B27059F2BD7D7011118A5A26A17E24FDC
p95v294b3.source.zip - C7B21388342A43AA4E50F9B2394DE0A422A56DB6BEACD0E17B05944854321664
p95v294b3.win32.zip - 5436C674230A040EE68B420872061AEE65E136BBC2460B3C285D4B0885D514AD
p95v294b3.win64.zip - AD0576CA2E63BB433A2A2D0974EF3D481ACAEA300E7C52466D5DE882D7C82B17[/CODE]

Madpoo 2017-11-06 20:57

[QUOTE=Prime95;471134]Could you provide more details? Work type? Screen shot?

Note to all: 29.4 sends interim residues to the server for the 500,000th iteration and every multiple of 5,000,000. This was Madpoo's idea, to give us the ability to do a quick(ish) partial check should we want to.[/QUOTE]

Oh cool... that will be helpful. I've been off the forum for the past week so I guess I missed this news. :smile:

James Heinrich 2017-11-06 21:48

[QUOTE=GP2;470868]It might be useful to post SHA256 sums.[/QUOTE]The mersenne.ca download mirror now includes options to display MD5 / SHA1 / SHA256 hashes (previously it only displayed MD5)
e.g. [url]http://download.mersenne.ca/gimps[/url]

Of course, this is a [i]mirror[/i] so it's not impossible that something could have gotten corrupted between the original and my mirror (but hopefully not). If you download from mersenne.org and compare the hash to the one on mersenne.ca it should give you some assurance.

Dubslow 2017-11-06 23:34

[QUOTE=Prime95;471134]

Note to all: 29.4 sends interim residues to the server for the 500,000th iteration and every multiple of 5,000,000. This was Madpoo's idea, to give us the ability to do a quick(ish) partial check should we want to.[/QUOTE]

Is this not several MB per instance?

If so, doing this without notification to the user would be a pretty big violation of trust IMO. It should be an easily-configurable option with the default set to "off".

Prime95 2017-11-06 23:46

[QUOTE=Dubslow;471202]Is this not several MB per instance?[/QUOTE]

A residue is 16 bytes.

Dubslow 2017-11-07 06:55

[QUOTE=Prime95;471203]A residue is 16 bytes.[/QUOTE]

Ah, my bad. "Interim residues" and "partial check" had me going there. I thought you meant something like "redo the last 10,000 iterations", as is done for a prime report, and that kind of check does require the full interim residue. Oops :smile:

ATH 2017-11-07 17:05

29.4b3 adds this line to prime.txt including all the spaces:

[CODE]PRPGerbiczCompareIntervalAdj= 1[/CODE]

It should be 1000000 by default.

Prime95 2017-11-07 17:36

[QUOTE=ATH;471259]29.4b3 adds this line to prime.txt including all the spaces:

[CODE]PRPGerbiczCompareIntervalAdj= 1[/CODE]

It should be 1000000 by default.[/QUOTE]


This is OK. This setting adjusts the interval downward if you do run into an error. It then slowly drifts back upward as you complete Gerbicz intervals without error. The theory is: why roll back a million iterations on a flaky machine? Let's increase the overhead a little bit and roll back only 100,000 iterations on each error.
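
In pseudocode, the behaviour described is roughly the following (the factors and bounds are illustrative guesses, not Prime95's actual constants):

[CODE]interval, adj = 1_000_000, 1.0      # nominal compare interval and its adjustment

def on_gerbicz_error():
    global adj
    adj = max(adj / 10, 0.01)       # shrink sharply: roll back ~100K iterations
                                    # instead of a million on the next error

def on_interval_ok():
    global adj
    adj = min(adj * 1.1, 1.0)       # slowly drift back up after clean intervals

def current_interval():
    return int(interval * adj)[/CODE]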

Mark Rose 2017-11-07 18:24

I think I found another bug:

Main Menu

1. Test/Primenet
2. Test/Worker threads
3. Test/Status
4. Test/Continue
5. Test/Exit
6. Advanced/Test
7. Advanced/Time
8. Advanced/P-1
9. Advanced/ECM
10. Advanced/Manual Communication
11. Advanced/Unreserve Exponent
12. Advanced/Quit Gimps
13. Options/CPU
14. Options/Preferences
15. Options/Torture Test
16. Options/Benchmark
17. Help/About
18. Help/About PrimeNet Server
Your choice: 16

Benchmark type (0 = Throughput, 1 = FFT timings, 2 = Trial factoring) (0):

FFTs to benchmark
Minimum FFT size (in K) (2048):
Maximum FFT size (in K) (8192):
Benchmark with round-off checking enabled (N):
Benchmark all-complex FFTs (for LLR,PFGW,PRP users) (N):
Limit FFT sizes (mimic older benchmarking code) (N):

CPU cores to benchmark
Number of CPU cores (comma separated list of ranges) (36):
Benchmark hyperthreading (Y):

Throughput benchmark options
Number of workers (comma separated list of ranges) (1,2,10,36):
Benchmark all FFT implementations to find best one for your machine (N):
Time to run each benchmark (in seconds) (15):

Accept the answers above? (Y):
Main Menu

1. Test/Primenet
2. Test/Worker threads
3. Test/Status
4. Test/Stop
5. Test/Exit
6. Advanced/Test
7. Advanced/Time
8. Advanced/P-1
9. Advanced/ECM
10. Advanced/Manual Communication
11. Advanced/Unreserve Exponent
12. Advanced/Quit Gimps
13. Options/CPU
14. Options/Preferences
15. Options/Torture Test
16. Options/Benchmark
[Main thread Nov 7 18:22] Starting worker.
17. Help/About
18. Help/About PrimeNet Server
Your choice: [Worker #1 Nov 7 18:22] Worker starting
[Worker #1 Nov 7 18:22] Your timings will be written to the results.txt file.
[Worker #1 Nov 7 18:22] Compare your results to other computers at [url]http://www.mersenne.org/report_benchmarks[/url]

[Worker #1 Nov 7 18:22] Benchmarking multiple workers to measure the impact of memory bandwidth
[Worker #1 Nov 7 18:22] Timing 2048K FFT, 36 cores, 1 worker. Average times: 1.69 ms. Total throughput: 592.82 iter/sec.
[Worker #1 Nov 7 18:22] Timing 2048K FFT, 36 cores, 2 workers. [Nov 7 18:22] Error setting affinity to core #37. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #38. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #39. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #40. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #41. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #42. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #43. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #44. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #45. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #46. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #47. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #48. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #49. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #50. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #51. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #52. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #53. There are 36 cores.
[Worker #1 Nov 7 18:22] Error setting affinity to core #54. There are 36 cores.
[Worker #1] Average times: 1.27, 1.78 ms. Total throughput: 1349.93 iter/sec.

Prime95 2017-11-08 19:29

[QUOTE=Mark Rose;471265]I think I found another bug:[/QUOTE]

Indeed you did. The 2 worker case (and maybe the 3 and 4 worker cases - I'd have to see the hwloc output) will be wrong. The two worker case is running 18 cores on worker 1 and 36 cores on worker 2.

This bug will only affect Xeon systems (and maybe Threadripper). The bug is in the allocation of cores on systems that contain multiple L3 caches or NUMA-like memory.

Prime95 2017-11-08 20:55

Build 4 now ready. Let me know what I screwed up - the release builds are not a "push button" process.

rainchill 2017-11-09 04:19

When posting new builds, perhaps the title of the thread could be edited to reflect the newest build number, so it is easier to see that a new build is available without checking the thread?

Mark Rose 2017-11-09 16:21

[quote]
[Work thread Nov 9 09:16] Iteration: 33260000 / 75572820 [44.01%], ms/iter: 6.942, ETA: 3d 09:35
[Work thread Nov 9 09:16] Hardware errors have occurred during the test!
[Work thread Nov 9 09:16] 1 Gerbicz/double-check errors.
[Work thread Nov 9 09:16] Confidence in final result is excellent.
[Work thread Nov 9 09:17] ERROR: Comparing Gerbicz checksum values failed. Rolling back to iteration 32603177.
[Work thread Nov 9 09:17] Continuing from last save file.
[Work thread Nov 9 09:17] Setting affinity to run helper thread 1 on CPU core #2
[Work thread Nov 9 09:17] Setting affinity to run helper thread 2 on CPU core #3
[Work thread Nov 9 09:17] Setting affinity to run helper thread 3 on CPU core #4
[Work thread Nov 9 09:17] Trying backup intermediate file: p57P2821.bu
[Work thread Nov 9 09:17] Trying backup intermediate file: p57P2821.bu2
[Work thread Nov 9 09:17] Trying backup intermediate file: p57P2821.bu3
[Work thread Nov 9 09:17] Resuming Gerbicz error-checking PRP test of M75572821 using FMA3 FFT length 4032K, Pass1=448, Pass2=9K, clm=4, 4 threads
[Work thread Nov 9 09:17] Iteration: 32603178 / 75572820 [43.14%].
[Work thread Nov 9 09:17] Hardware errors have occurred during the test!
[Work thread Nov 9 09:17] 2 Gerbicz/double-check errors.
[Work thread Nov 9 09:17] Confidence in final result is excellent
[/quote]

Excellent :smile:

kladner 2017-11-10 14:23

1 Attachment(s)
Running P95 v29.4, Win 7 Pro.
Lately I have started seeing P95 print multiple status lines for the same assignment. See screen grab.

Prime95 2017-11-10 17:40

[QUOTE=kladner;471478]Running P95 v29.4, Win 7 Pro.
Lately I have started seeing P95 print multiple status lines for the same assignment. See screen grab.[/QUOTE]

Build 4?

I suspect it is a collection of interim residues sent to the server every 5,000,000 iterations.

kladner 2017-11-10 17:56

[QUOTE=Prime95;471491]Build 4?

I suspect it is a collection of interim residues sent to the server every 5,000,000 iterations.[/QUOTE]
OK. Cool. Thanks!

Madpoo 2017-11-10 19:28

[QUOTE=Prime95;471373]Build 4 now ready. Let me know what I screwed up - the release builds are not a "push button" process.[/QUOTE]

For reference, SHA256 on the new builds:
[CODE]
p95v294b4.FreeBSD11-64.tar.gz - 7AAF9447F8683F53B482BBAA67C9D2016CE4D449BCBCCF6936A8DE3A4E1795B5
p95v294b4.linux32.tar.gz - 1C3092861FB2B47E7DBAFCDD1D176EC38AEBB05E743104DCA0BEA1B94EA73571
p95v294b4.linux64.tar.gz - 5F5EF2A268C8683347ED4E7F1CF7C774C7EC1B295E6DE56AFC1A43DD7750979B
p95v294b4.MacOSX.zip - D50D6B5DFB1A208449B96A41F0639E15F9BBFFAAA00F2FD0AAA9FF84C6031F05
p95v294b4.source.zip - 130B7AE3F12EC0FE6317D287E952A7C23109FEB3E776C8E44AACAE9B4DA9BFCE
p95v294b4.win32.zip - 0188E36B22E4554DB6959BFCF1A2D3881CDF70A869AC45AC4C62A0F6EF5BED5D
p95v294b4.win64.zip - B9659D26827F14ED7EBF76A9089BED9BEC5D1A46621DCB92094DD1F2B5312D0C[/CODE]

kladner 2017-11-10 21:14

[QUOTE=Prime95;471491]Build 4?

I suspect it is a collection of interim residues sent to the server every 5,000,000 iterations.[/QUOTE]
Oops. It is 29.4B3. I'll have to update.

S485122 2017-11-11 09:49

Still some issues in relation to interim residues output
 
4 Attachment(s)
The errors I reported previously have been corrected in the latest version. There are still some cosmetic output issues: on both Linux and Windows, the lines about sending the interim residues are not terminated by end-of-line characters. This concerns the communication thread and the log file.

Then it seems the interim residues are not always sent when produced, but at communication time. Perhaps this is by design.

On a Linux machine where I run two workers, the screen output comes every 350000 iterations while it is set to 500000 in the settings. It is difficult to say whether this happened with older versions of the program: that window output is lost :-) One worker does 42M double checks, the other P-1, and "ScaleOutputFrequency" is set to 1 in prime.txt.

In the attached log and output files I prefixed the "offending" lines with "+++ ".

Jacob

R. Gerbicz 2017-11-11 10:47

[QUOTE=S485122;471537]
Jacob

[Worker #2 Nov 9 18:50] Optimal P-1 factoring of M12345678 using up to 30000MB of memory.
[Worker #2 Nov 9 18:50] Assuming no factors below 2^76 and 2 primality tests saved if a factor is found.[/QUOTE]

I could save both of them, M12345678 is clearly composite.

S485122 2017-11-11 11:21

[QUOTE=R. Gerbicz;471540]I could save both of them, M12345678 is clearly composite.[/QUOTE]I replaced all the AIDs of the work units with the string "AID" and the exponents with 12345678 ... Some people get nervous when they see active assignment data on the forum :-)

Jacob

James Heinrich 2017-11-11 23:51

Is there a setting to configure the number of seconds for "Waiting ([b]5 * workernumber[/b]) seconds to stagger worker starts"?

Prime95 2017-11-12 00:31

[strike]No, I can add one for build 5: "StaggerSeconds"[/strike]

Yes, it is called "StaggerStarts".

James Heinrich 2017-11-12 00:38

[QUOTE=Prime95;471583]Yes, it is called "StaggerStarts"[/QUOTE]Ah, thanks. Quoting from undoc.txt for reference:[quote]Some machines report much better timings if the worker threads stagger their starts. This was first noticed on Core i7 machines running Windows. Presumably staggering starts improves timings due to each worker allocating contiguous memory. You can control how long the program waits between starting each worker. In prime.txt, enter:
[FONT="Courier New"][COLOR="Blue"]StaggerStarts=n[/COLOR][/FONT]
where n is the number of seconds to wait. The default is 5 seconds.[/quote]

Prime95 2017-11-12 01:31

Build 5 now ready.

Adds a newline to the "sending interim residue" log file message. Does not run the Jacobi test if the GMP version is not at least 5.0.0.

GP2 2017-11-12 04:22

[QUOTE=Prime95;471585]Build 5 now ready.

[...] Does not Jacobi test if GMP version is not at least 5.0.0.[/QUOTE]

Actually, this should be 5.1.0

My bad, I gave the wrong version initially. I misread the [URL="https://gmplib.org/list-archives/gmp-devel/2010-January/001451.html"]old mailing list message[/URL] that I linked to [URL="http://mersenneforum.org/showthread.php?p=471534#post471534"]in the other thread[/URL].

It actually said:

[QUOTE]
GMP 5.0.0 implements a quadratic algorithm for the Jacobi symbol. In
[url]http://wwwmaths.anu.edu.au/~brent/pub/pub236.html[/url] we describe a subquadratic
algorithm
[/QUOTE]

In other words, GMP 5.0.0 still had the older slow (quadratic) code.

Doing a little digging, it was actually GMP 5.1.0 that introduced the faster (subquadratic) code, see [url]https://gmplib.org/gmp5.1.html[/url] or [url]https://gmplib.org/list-archives/gmp-announce/2012-December/000036.html[/url]

Looking at the old versions of various distros:

Ubuntu 14.04 LTS (trusty) uses GMP 5.1
Ubuntu 16.04 LTS (xenial) uses GMP 6.0
Latest version of Ubuntu is 17.10 (artful)

Debian 7.0 (wheezy) uses GMP 5.0
Debian 8.0 (jessie) uses GMP 6.0
Latest version of Debian is 9.0 (stretch)

CentOS 6 and RedHat EL 6 use GMP 4.3
CentOS 7 and RedHat EL 7 use GMP 6.0

GP2 2017-11-12 04:27

My first two PRP double checks completed successfully ([M]M75560141[/M] and [M]M75560141[/M]), matching preda's gpuOwL results.

Runtime Error 2017-11-12 05:21

When to run PRP vs LL?
 
Hi, are there any recommended guidelines to follow for deciding to run PRP vs LL on a given machine? Thanks!

S485122 2017-11-12 13:20

[QUOTE=GP2;471596]My first two PRP double checks completed successfully ([M]M75560141[/M] and [M]M75560141[/M]), matching preda's gpuOwL results.[/QUOTE]Isn't it a waste of time to do PRP tests on an exponent where an LL test was done? The same would apply to doing LL tests on exponents that already have a PRP result.

Shouldn't the two methods for proving the Mersenne number is composite be used exclusively on different candidates?

Jacob

science_man_88 2017-11-12 13:58

[QUOTE=S485122;471605]Isn't it a waste of time to do PRP tests on an exponent where an LL test was done? The same would apply to doing LL tests on exponents that already have a PRP result.

Shouldn't the two methods for proving the Mersenne number is composite be used exclusively on different candidates?

Jacob[/QUOTE]

I would argue that, like DC-LL, it has its purpose. That purpose, however, is not confirming the residue given by LL.

axn 2017-11-12 17:39

[QUOTE=S485122;471605]Isn't it a waste of time to do PRP tests on an exponent where an LL test was done?[/QUOTE]

Yes, yes it is. I don't think this was officially sanctioned by TPTB.

GP2 2017-11-12 17:41

[QUOTE=S485122;471605]Isn't it a waste of time to do PRP tests on an exponent where an LL test was done? The same would apply to doing LL tests on exponents that already have a PRP result.

Shouldn't the two methods for proving the Mersenne number is composite be used exclusively on different candidates?[/QUOTE]

Yes.

However, in this case the two separate first-time tests (LL and PRP) were already done. Hopefully, in the future there will be coordination to avoid this.

In my case, I simply set a few of my working directories to do PRP double checks (WorkPreference=151) and they do whatever exponents they are assigned.

GP2 2017-11-12 18:30

[QUOTE=Runtime Error;471597]Hi, are there any recommended guidelines to follow for deciding to run PRP vs LL on a given machine? Thanks![/QUOTE]

PRP has better error correction, and should give very reliable results even on unreliable machines. It may eventually take over as the main form of testing. However, the Gerbicz error correction algorithm is very new, so adoption may be gradual and cautious. Meanwhile there are ten years' worth of old LL results that need double-checking.

PRP tests can prove a Mersenne number is composite, but can't mathematically prove that it is prime (although there is a very high degree of confidence). LL tests do prove primality. This is a non-issue in practice, since Mersenne primes are extremely rare and credit will be given for any finds made with PRP testing even though a confirming LL test will be run subsequently.
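
To make the distinction concrete, here are both tests side by side on toy exponents (plain Python bignums; real wavefront exponents need FFT-based squaring, and GIMPS actually computes 3^(2^p) mod Mp and compares against 9, which is an equivalent condition):

[CODE]def lucas_lehmer(p):                  # proves primality of Mp = 2^p - 1
    M = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % M
    return s == 0

def fermat_prp3(p):                   # base-3 PRP: proves compositeness only
    M = (1 << p) - 1
    return pow(3, M - 1, M) == 1      # composite if False; "probably prime"
                                      # (not a proof of primality) if True

for p in (11, 13, 17, 19):
    print(p, lucas_lehmer(p), fermat_prp3(p))
# 11 False False   (M11 = 23*89: both tests agree it is composite)
# 13 True True / 17 True True / 19 True True[/CODE]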

The savefiles for PRP testing appear to be about three times larger than LL save files for equivalent exponents. Around 30MB vs. 10MB for exponents around the 80M range. Shouldn't be an issue unless you are extremely constrained for disk space or I/O throughput bandwidth (the latter may actually be an issue with the EFS filesystem on the AWS cloud if there is a low DiskWriteTime interval, low filesystem storage usage, and very frequent churning of spot instances).

The kinds of tests assigned by the default "whatever makes sense" setting will undoubtedly change over time. If that's what you use now, there's no need to change it.

pepi37 2017-11-12 19:25

I cannot find a way to revert the PRP output to the "old way" (like this - 4*332^458778+1 is not prime. RES64: DFFD7CC51D5214C7. Wf4: 4B7B7071,00000000)
Any command in prime.txt?

Cruelty 2017-11-12 19:29

Is this a standard output right now? :cool:[code]{"status":"C", "k":127, "b":2, "n":12000569, "c":-1, "worktype":"PRP-3", "res64":"700854A79E1515ED", "residue-type":1, "fft-length":786432, "error-code":"00000000", "security-code":"6DAF586E", "program":{"name":"Prime95", "version":"29.4", "build":4, "port":4}, "timestamp":"2017-11-12 11:32:16", "errors":{"gerbicz":0}}[/code]
I haven't touched config files in a while, I guess from v28.9, and so far everything was OK.

pepi37 2017-11-12 19:30

[QUOTE=Cruelty;471639]Is this a standard output right now? :cool:[code]{"status":"C", "k":127, "b":2, "n":12000569, "c":-1, "worktype":"PRP-3", "res64":"700854A79E1515ED", "residue-type":1, "fft-length":786432, "error-code":"00000000", "security-code":"6DAF586E", "program":{"name":"Prime95", "version":"29.4", "build":4, "port":4}, "timestamp":"2017-11-12 11:32:16", "errors":{"gerbicz":0}}[/code]I haven't touched config files in a while, I guess from v28.9, and so far everything was OK.[/QUOTE]

Yes, it looks like it :( - I hope there is a way to revert to the "old way"

James Heinrich 2017-11-12 20:29

PRP results in v29.4 are now output in JSON format. Other result types will also migrate to JSON over the next few versions when George has time to implement it. Using JSON as a result format makes it both more flexible and robust, and allows a common format between all result types and all software (e.g. Prime95, mfaktc, gpuowl, etc).

pepi37 2017-11-12 21:26

[QUOTE=James Heinrich;471644]PRP results in v29.4 are now output in JSON format. Other result types will also migrate to JSON over the next few versions when George has time to implement it. Using JSON as a result format makes it both more flexible and robust, and allows a common format between all result types and all software (e.g. Prime95, mfaktc, gpuowl, etc).[/QUOTE]
I hope the "old results style" will also remain an option

Dubslow 2017-11-12 21:37

[QUOTE=pepi37;471647]I hope the "old results style" will also remain an option[/QUOTE]

It should be relatively easy to have code that can prettyprint the json format results. It might not look exactly the same but should be just as easy to read as the old style.
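
For instance, a few lines of Python along these lines would do (a hypothetical helper built on the field names visible in the sample result above, assuming "C" means composite and "P" probable prime):

[CODE]import json

def old_style(line):
    r = json.loads(line)
    num = "%d*%d^%d%+d" % (r["k"], r["b"], r["n"], r["c"])
    verdict = "is a probable prime!" if r["status"] == "P" else "is not prime."
    return "%s %s RES64: %s." % (num, verdict, r["res64"])

line = '{"status":"C", "k":127, "b":2, "n":12000569, "c":-1, "res64":"700854A79E1515ED"}'
print(old_style(line))  # 127*2^12000569-1 is not prime. RES64: 700854A79E1515ED.[/CODE]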

pepi37 2017-11-12 21:40

[QUOTE=Dubslow;471649]It should be relatively easy to have code that can prettyprint the json format results. It might not look exactly the same but should be just as easy to read as the old style.[/QUOTE]

I am nearly 100% sure that George will add a switch (like in the past) to give the user the choice: old or new style :)

Add this in prime.txt and the output reverts to the old way!

[QUOTE]OutputPrimes=1
OutputJSON=0
OutputComposites=1[/QUOTE]

[QUOTE]2*10^76345-1 is not prime. RES64: 210D1323F923FF54. Wg4: 34C07A14,00000000
2*10^21456-1 is not prime. RES64: B66A0B65630E740D. Wg4: 5D55A7A0,00000000
[Sun Nov 12 23:08:00 2017]
2*10^38232-1 is a probable prime! Wg4: 961ABD81,00000000[/QUOTE]

James Heinrich 2017-11-12 21:56

It may be what you're used to, but the non-JSON results are much harder to deal with when processing manual results. I'm personally in favour of deprecating all non-JSON results and eventually no longer accepting them in the manual results forms. Don't worry, we're probably years away from that, but be prepared to see more JSON in the future.

BTW: George is less [url=https://en.wikipedia.org/wiki/Hobson%27s_choice]Hobsonian[/url] than I am so the option is already there. I however still strongly encourage you to embrace the JSON in your workflow. :smile:

pepi37 2017-11-12 22:13

[QUOTE=James Heinrich;471653]It may be what you're used to, but the non-JSON results are much harder to deal with when processing manual results. I'm personally in favour of deprecating all non-JSON results and eventually no longer accepting them in the manual results forms. Don't worry, we're probably years away from that, but be prepared to see more JSON in the future.

BTW: George is less [URL="https://en.wikipedia.org/wiki/Hobson%27s_choice"]Hobsonian[/URL] than I am so the option is already there. I however still strongly encourage you to embrace the JSON in your workflow. :smile:[/QUOTE]

For me, and my use of Prime95, all I need is

[QUOTE]2*10^76345-1 is not prime. RES64: 210D1323F923FF54. Wg4: 34C07A14,00000000

2*10^38232-1 is a probable prime! Wg4: 961ABD81,00000000 [/QUOTE]

So I am back, and Prime95 is better in every way, any time :)
Thanks George :)

James Heinrich 2017-11-12 22:15

As long as you don't plan on submitting results to PrimeNet, have at it in the old format :smile:

pepi37 2017-11-12 22:16

[QUOTE=James Heinrich;471656]As long as you don't plan on submitting results to PrimeNet, have at it in the old format :smile:[/QUOTE]

Yes, I found it :) I upgraded my Linux distro as well :)
Error checking is always a good thing!

Runtime Error 2017-11-14 01:47

[QUOTE=GP2;471635]PRP has better error correction, and should give very reliable results even on unreliable machines. It may eventually take over as the main form of testing. However, the Gerbicz error correction algorithm is very new, so adoption may be gradual and cautious. Meanwhile there are ten years' worth of old LL results that need double-checking.

PRP tests can prove a Mersenne number is composite, but can't mathematically prove that it is prime (although there is a very high degree of confidence). LL tests do prove primality. This is a non-issue in practice, since Mersenne primes are extremely rare and credit will be given for any finds made with PRP testing, even though a confirming LL test will be run subsequently.

The savefiles for PRP testing appear to be about three times larger than LL save files for equivalent exponents. Around 30MB vs. 10MB for exponents around the 80M range. Shouldn't be an issue unless you are extremely constrained for disk space or I/O throughput bandwidth (the latter may actually be an issue with the EFS filesystem on the AWS cloud if there is a low DiskWriteTime interval, low filesystem storage usage, and very frequent churning of spot instances).

The kinds of tests assigned by the default "whatever makes sense" setting will undoubtedly change over time. If that's what you use now, there's no need to change it.[/QUOTE]

Thank you for the reply, GP2. I understand the points that you have made. I'm sure this will be addressed explicitly in future versions. Thank you!

kruoli 2017-11-15 18:45

When it encounters invalid PRP assignments, Prime95 skips them, but it displays the wrong assignment as skipped (see the check below):
[CODE][Worker #3 Nov 15 19:37] 5^100000-98 is not prime. RES64: 66F3AC8D2C121F65. Wg4: A3426CBC,00000000
[Worker #3 Nov 15 19:37] PRP test of 5^100000-98 aborted -- number is divisible by 3
[Worker #3 Nov 15 19:37] PRP test of 5^100000-98 aborted -- number is divisible by 3[/CODE]

worktodo.txt:
[CODE]PRP=N/A,1,5,100000,-98,99,0,3,1
PRP=N/A,1,5,100000,98,99,0,3,1
PRP=N/A,1,5,100000,-100,99,0,3,1[/CODE]
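
A quick check confirms this (a sketch of mine, not Prime95 code): reducing mod 3, only the second and third worktodo entries are actually divisible by 3, so the skipping itself is correct but the expression printed in the two "aborted" lines is wrong:
[CODE]/* Sketch: test which of the three worktodo entries really satisfies
   5^100000 + c == 0 (mod 3). Note 5 == -1 (mod 3) and the exponent
   is even, so 5^100000 == 1 (mod 3). */
#include <stdio.h>

static long pow_mod(long b, long e, long m)
{
    long r = 1 % m;
    for (b %= m; e > 0; e >>= 1) {
        if (e & 1) r = r * b % m;
        b = b * b % m;
    }
    return r;
}

int main(void)
{
    const long c[] = { -98, 98, -100 };   /* the three worktodo entries */
    for (int i = 0; i < 3; i++) {
        long r = ((pow_mod(5, 100000, 3) + c[i]) % 3 + 3) % 3;
        printf("5^100000%+ld mod 3 = %ld%s\n", c[i], r,
               r == 0 ? "   <-- divisible by 3" : "");
    }
    return 0;
}[/CODE]
This prints 2, 0 and 0 respectively: the +98 and -100 entries are the divisible ones, not the -98 entry named in both log lines.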

kruoli 2017-11-15 18:51

If you have, by chance, set [I]Iterations between screen outputs[/I] to exactly the number of iterations to be done, the following will happen:
[CODE][Worker #1 Nov 15 19:48] Iteration: 100000 / 100000 [100.00%], roundoff: 0.070, ms/iter: 0.026, ETA: 30:25:40[/CODE]
Of course, the ETA should be zero or nearly zero.
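
For comparison, here is a trivial sketch (not Prime95's actual code) of what the final screen output should compute: with the iteration count equal to the total, zero iterations remain, so the ETA works out to zero:
[CODE]/* Sketch: expected ETA at the 100% screen output. */
#include <stdio.h>

int main(void)
{
    long total = 100000, done = 100000;
    double ms_per_iter = 0.026;
    long s = (long)((total - done) * ms_per_iter / 1000.0);  /* 0 here */
    printf("ETA: %02ld:%02ld:%02ld\n", s / 3600, (s / 60) % 60, s % 60);
    return 0;
}[/CODE]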

Madpoo 2017-11-16 04:30

[QUOTE=James Heinrich;471653]It may be what you're used to, but the non-JSON results are much harder to deal with when processing manual results. I'm personally in favour of deprecating all non-JSON results and eventually no longer accepting them in the manual results forms. Don't worry, we're probably years away from that, but be prepared to see more JSON in the future.

BTW: George is less [url=https://en.wikipedia.org/wiki/Hobson%27s_choice]Hobsonian[/url] than I am so the option is already there. I however still strongly encourage you to embrace the JSON in your workflow. :smile:[/QUOTE]

I agree... JSON is pretty nice. It's as easy to read as the old format, and even easier for a machine since it's well-structured. As a human, I like being able to look at a value and see its description right there, no guessing. With the current/old format, unless you're on this forum and/or digging into the source, it can be hard to tell what all those values actually are. I mean, the exponent, residue and is/isn't prime are easy enough, but all the other stuff is important too (shift count, error code, assignment id, checksum...).

Madpoo 2017-11-18 00:36

Prime95 v29.4 build 5 is official!
 
The download page has just been updated... v29.4 build 5 is now the official build. George gave his stamp of approval...

Enjoy!
[URL="https://www.mersenne.org/download/"]https://www.mersenne.org/download/[/URL]

GP2 2017-11-18 02:28

[QUOTE=Madpoo;472035]The download page has just been updated... v29.4 build 5 is now the official build.[/QUOTE]

One small issue, the fast Jacobi criterion in the source code should check strcmp(gmp_version, "5.1.0"), [URL="http://www.mersenneforum.org/showthread.php?p=471594#post471594"]as mentioned earlier[/URL].

Madpoo 2017-11-18 04:20

[QUOTE=GP2;472037]One small issue, the fast Jacobi criterion in the source code should check strcmp(gmp_version, "5.1.0"), [URL="http://www.mersenneforum.org/showthread.php?p=471594#post471594"]as mentioned earlier[/URL].[/QUOTE]

[URL="http://www.mersenneforum.org/showpost.php?p=470955&postcount=17"]http://www.mersenneforum.org/showpost.php?p=470955&postcount=17[/URL]

Dubslow 2017-11-18 06:06

[QUOTE=Madpoo;472042][URL="http://www.mersenneforum.org/showpost.php?p=470955&postcount=17"]http://www.mersenneforum.org/showpost.php?p=470955&postcount=17[/URL][/QUOTE]

...what does that have to do with anything? GP2 proposed a fix for a problem, George implemented it, and after that GP2 realized that the version number wasn't quite right the first time. It's the second correction that has failed to be included, and that still allows Prime95 to fail in the same way as before on certain specific versions of GMP (namely >=5.0, <5.1).

R. Gerbicz 2017-11-18 08:02

[QUOTE=GP2;472037]One small issue, the fast Jacobi criterion in the source code should check strcmp(gmp_version, "5.1.0"), [URL="http://www.mersenneforum.org/showthread.php?p=471594#post471594"]as mentioned earlier[/URL].[/QUOTE]

In this case you would introduce another bug. Check out:
[CODE]
printf("%d\n",strcmp("6.1.2","5.1.0"));
printf("%d\n",strcmp("10.0.0","5.1.0"));
[/CODE]
this gives:
[CODE]
1
-1
[/CODE]
Of course, so far the GMP major version has not reached 10, but with a future GMP version it would become a very real bug.

P.S. Don't forget [url]https://gmplib.org/manual/Useful-Macros-and-Constants.html[/url]
"Global Constant: const char * const gmp_version
The GMP version number, as a null-terminated string, in the form “i.j.k”. This release is "6.1.2". Note that the format “i.j” was used, before version 4.3.0, when k was zero."
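
For illustration, a comparison that parses the numeric components avoids that trap (a sketch of mine, not proposed Prime95 code); since missing fields stay zero, it also copes with the old "i.j" form:
[CODE]/* Sketch: compare "i.j.k" version strings numerically, so that
   "10.0.0" correctly ranks above "5.1.0" (strcmp gets this wrong). */
#include <stdio.h>

static int version_cmp(const char *a, const char *b)
{
    int av[3] = { 0, 0, 0 }, bv[3] = { 0, 0, 0 };
    sscanf(a, "%d.%d.%d", &av[0], &av[1], &av[2]);
    sscanf(b, "%d.%d.%d", &bv[0], &bv[1], &bv[2]);
    for (int i = 0; i < 3; i++)
        if (av[i] != bv[i]) return av[i] < bv[i] ? -1 : 1;
    return 0;
}

int main(void)
{
    printf("%d\n", version_cmp("6.1.2", "5.1.0"));   /* 1 */
    printf("%d\n", version_cmp("10.0.0", "5.1.0"));  /* 1, where strcmp gives -1 */
    return 0;
}[/CODE]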

ATH 2017-11-18 13:48

There are preprocessor macros for the major and minor version, so something like this should work:

[CODE]if ((__GNU_MP_VERSION<5) || (__GNU_MP_VERSION==5 && __GNU_MP_VERSION_MINOR<1)) { jacobi_check=0; }[/CODE]

GP2 2017-11-18 18:48

[QUOTE=ATH;472059]There are variables for the version and minor version, so something like this should work:

[CODE]if ((__GNU_MP_VERSION<5) || (__GNU_MP_VERSION==5 && __GNU_MP_VERSION_MINOR<1)) { jacobi_check=0; }[/CODE][/QUOTE]

But this works at compile time only. A runtime check is needed, to verify the version of the shared library that is actually loaded, so the global variable gmp_version is needed instead.
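
Putting the two suggestions together, a runtime check might look like this sketch (my own illustration; jacobi_check is just a stand-in flag following ATH's snippet, not necessarily Prime95's actual variable):
[CODE]/* Sketch: the same test as the macro version above, but done at
   runtime against the gmp_version global exported by the loaded
   libgmp. Compile with -lgmp. */
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    int major = 0, minor = 0;
    int jacobi_check = 1;              /* stand-in flag, as in ATH's snippet */
    /* gmp_version is "i.j.k" ("i.j" before GMP 4.3.0) */
    sscanf(gmp_version, "%d.%d", &major, &minor);
    if (major < 5 || (major == 5 && minor < 1))
        jacobi_check = 0;              /* fast Jacobi check needs GMP >= 5.1.0 */
    printf("runtime GMP %s -> jacobi_check = %d\n", gmp_version, jacobi_check);
    return 0;
}[/CODE]
This way the check reflects the shared library actually loaded at runtime, not the headers used at build time.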

