mersenneforum.org Prime95 v30.3

2020-09-30, 16:13   #375
jwnutter

"Joe"
Oct 2019
United States

76₁₀ Posts

Quote:
 Originally Posted by James Heinrich I doubt there would ever be any "forced" upgrade (as far as I know there's still 15+ year old v24 installations chugging along contributing their little bit to the project). But an encouraged upgrade would make the cycles of those who do upgrade more useful.
I would also add that "DC-only work" could be viewed by some as equivalent to a forced update.

2020-09-30, 16:27   #376
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

1001110010101₂ Posts

Quote:
 Originally Posted by Prime95 I think encouraging the upgrade is a good idea regardless. I'm ambivalent about DCes being done as LL or PRP+CERT. Cost is about the same. LL-DC provides feedback on the original computer which is of some minor value.
It's worth thinking about the pros and cons.
Reliability data versus exponent on LL will show the effectiveness of the current error detection methods including Jacobi check.

Version upgrade is good for those systems new enough to run wavefront assignments, including the relatively quick Cert work. The XP systems need an OS upgrade to allow it.

The timeliness of a prompt GEC check or Cert is, in my opinion, of much greater value than LL reliability feedback on a computer that ran an LL first test several years ago. Many of the systems that produced the first tests behind the existing LLDC candidates (54M and higher) are likely to no longer be in operation, or no longer running primality tests, by the time the LL reliability feedback is obtained, if they have not already been replaced.

There is a very slight ~2% throughput advantage to PRP/GEC/CERT over LLDC, and a large reliability advantage. Approx 2% x 506K DC to Mp51 adds up (~10,120 tests).

There is no great harm in having a mixed situation, with some LLDC and some PRP/CERT in place of LLDC (& ~4% LLTC, ~0.04% LLQC, ~0.0008% LL5C, ~16E-8 LL6C).

PrimeNet's automatic issuing of first-time LL assignments ought to cease sometime soon, since each one commits the project to a future DC in some form, costing at least 100% of the first test, at higher, more costly exponents. July 2021, a year after PRP proof was introduced? Or wait until mlucas supports PRP proof?
A separate question is whether to also cease issuing first-time LL assignments as manual assignments. Some GPUs can't run gpuowl, so they can't run PRP, with proof or not.
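The back-of-envelope figures above can be reproduced with a short script (a sketch; the 2% per-test error rate and the ~506K remaining-DC queue are the post's own estimates, not official GIMPS accounting):

```python
# Rough arithmetic behind the throughput comparison (assumptions: e = 0.02
# undetected-error probability per LL test, ~506,000 DCs remaining to Mp51).
e = 0.02
dc_queue = 506_000

# ~2% throughput advantage of PRP/GEC/CERT over LLDC, applied to the queue:
extra_tests = e * dc_queue
print(f"tests saved: ~{extra_tests:,.0f}")   # ~10,120

# Probability that a first LL test and its DC mismatch, forcing a TC:
p_tc = 1 - (1 - e) ** 2
print(f"P(TC needed) = {p_tc:.4f}")          # ~0.0396, i.e. ~4%
```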

2020-10-01, 06:29   #377
Aramis Wyler

"Bill Staffen"
Jan 2013
Pittsburgh, PA, USA

3·137 Posts

Quote:
 Originally Posted by richs I searched through this thread and was not able to find how to stop getting cert work with the new version. One of my older laptops that I upgraded to the latest version yesterday has been receiving cert work but keeps getting the message "Error getting CERT starting value." The program tries twice then aborts the assignment. I would prefer to stop getting cert work on this laptop.

Make sure you're not using GPU72 as a proxy. It messes with the Certs.

2020-10-01, 08:41   #378
tha

Dec 2002

809 Posts

https://www.mersenne.org/report_expo...exp_hi=&full=1
On the line "Type : CERT" under status it says "n/a", suggesting something is to come later. I would expect something like "successfully verified".

Last fiddled with by tha on 2020-10-01 at 08:48
2020-10-01, 11:12   #379
James Heinrich

"James Heinrich"
May 2004
ex-Northern Ontario

6367₈ Posts

Quote:
 Originally Posted by tha M98000101 On the line "Type : CERT" under status it says "n/a" suggesting something is to come later. I would expect something like "successfully verified"
As previously noted, it's being worked on (but Aaron lacks spare cycles at the moment):
Quote:
 Originally Posted by James Heinrich This is normal. It still needs fixing, but it's a website display issue and not a problem on your end. Using M99,770,291 as an example, your Cert verifies the PRP run by Ben Delo, therefore the PRP section shows "verified". The "Cert... n/a" is a known display issue. It's not a trivial fix, but Aaron is aware of it and will (when time permits) work some magic to relate the Cert to the PRP and make it display nicely.

2020-10-01, 16:08   #380
richs

"Rich"
Aug 2002
Benicia, California

3×409 Posts

Quote:
 Originally Posted by Aramis Wyler Make sure you're not using GPU72 as a proxy. It messes with the Certs.
Thanks for the info. I am using GPU72 as a proxy. I only do double checks so I'll leave the certs to others.

Last fiddled with by richs on 2020-10-01 at 16:09

2020-10-01, 16:11   #381
James Heinrich

"James Heinrich"
May 2004
ex-Northern Ontario

3,319 Posts

Quote:
 Originally Posted by Aramis Wyler Make sure you're not using GPU72 as a proxy. It messes with the Certs.
Chris is aware of the issue but is so overloaded with Real Life work this month he has not had any opportunity to look into the fix. It will come eventually.

2020-10-01, 17:45   #382
S485122

Sep 2006
Brussels, Belgium

2·829 Posts

Quote:
 Originally Posted by kriesel ... There is no great harm in having a mixed situation, with some LLDC and some PRP/CERT in place of LLDC (& ~4% LLTC, ~0.04% LLQC, ~0.0008% LL5C, ~16E-8 LL6C).
Just for the record, the error rate is decreasing in the higher (completed) ranges. Based on LL tests it is more like 2% of all tests in error (this includes the necessary triple or quadruple tests). The 51M range has 558 bad tests compared to 39,424 verified tests, but that 1.4% is a record low. For the kind of calculations you make, 2% is probably the better figure. (I am sure the "special" ranges like 332M will have a much higher ratio of bad LLs to verified LLs, but we will know once they are completed.)

Jacob

Last fiddled with by S485122 on 2020-10-01 at 17:47

2020-10-01, 18:37   #383
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

3²×557 Posts

Quote:
 Originally Posted by S485122 Just for the record, the error rate is decreasing in the higher (completed) ranges. Based on LL tests it is more like 2% of all tests in error (this includes the necessary triple or quadruple tests). The 51M range has 558 bad tests compared to 39,424 verified tests, but that 1.4% is a record low. For the kind of calculations you make, 2% is probably the better figure. (I am sure the "special" ranges like 332M will have a much higher ratio of bad LLs to verified LLs, but we will know once they are completed.) Jacob
Do you mean I should have used 2% per LL first test and DC pair? Perhaps what I wrote earlier was unclear.

I used 2% probability of error per LL test: 2% for the first test and 2% for the DC, which gives ~4% probability of needing a TC, and I continued to use 2% for any later LL retest that may occur, although there may be a factor of 2 missing for the quad and higher checks. George has stated 1.5%. I've seen 2% commonly used elsewhere, and 1.5-2% per LL test in my own CPU and GPU tests. For given hardware and software, the rate is expected to climb with run time and hardware age. The introduction of the Jacobi check should halve those figures at some point, where applicable (prime95, gpuowl, mlucas; not CUDALucas). Phaseout of LL/Jacobi in favor of PRP/GEC will lower the combined average error rate of PRP and LL tests.
Assuming 1% chance of uncorrected error per LL test would make the computation time about a wash. Being able to perform a proof of correctness remains an advantage for PRP.

Primality tests via PRP or LL cost about the same; the GEC and occasional Jacobi are ~0.2-0.3% of a primality test, and both George and Mihai IIRC have stated there's no difference in cost between bare LL and bare PRP.

Assuming power 8 PRP, total cost of a PRP with proof & cert is ~1.01 primality tests.
A first LL test has cost 1 and error rate e, so the expected cost of obtaining a correct res64 is 1 + e + e² + ... = 1/(1−e). An LLDC has the same cost. The expected cost of two correct tests is then 2/(1−e); for e = 0.02, that is 2.04081632...

After obtaining a first LL res64, we don't know whether it's right. The chance of a mismatch with a DC is the sum of the probabilities of the first test being wrong, or the second test being wrong (including the case of both being wrong but differently); ~2e.
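The expected-cost arithmetic above can be checked with a few lines (a sketch; the error rate e = 0.02 and the independence of repeated tests are assumptions carried over from the post):

```python
# Expected cost of one *correct* LL result, in units of one primality test:
# 1 + e + e^2 + ... (geometric series) = 1/(1 - e).
e = 0.02
cost_correct_ll = sum(e**i for i in range(100))   # partial sum; converges fast
print(cost_correct_ll)                            # ~1.0204081632653...

# Two matching correct tests (first LL + LLDC):
cost_lldc_pair = 2 / (1 - e)
print(cost_lldc_pair)                             # 2.0408163265...

# For comparison, PRP with a power-8 proof and cert: ~1% overhead on one test.
cost_prp_cert = 1.01

# Chance that first test and DC mismatch: exactly 2e - e^2, ~2e to first order.
p_mismatch = 1 - (1 - e) ** 2
```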

Last fiddled with by kriesel on 2020-10-01 at 19:03

2020-10-01, 21:59   #384
Uncwilly
6809 > 6502

"""""""""""""""""""
Aug 2003
101×103 Posts

2506₁₆ Posts

Quote:
 Originally Posted by richs Thanks for the info. I am using GPU72 as a proxy. I only do double checks so I'll leave the certs to others.
IIRC Chris has indicated that using GPU72 to proxy for any LL, DC, or PRP is only useful to move up your stats on GPU72. GPU72 does not keep any assignments for those work types.

Another, minor, reason might be to get assignments that your machine would not normally qualify for.

2020-10-02, 17:15   #385
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

3²×557 Posts

Daytime and nighttime P-1/ECM stage 2 memory question

Options, Resource Limits, Advanced...:
Daytime P-1/ECM stage 2 memory (GB):
Nighttime P-1/ECM stage 2 memory (GB):

On a multiple-worker configuration, are these allowed-memory settings per worker, or a total for the prime95 application? The readme does not say either way. Treating them as per-worker is conservative, but suboptimal if they are actually totals for the prime95 application.

Setting available memory was introduced in v20.0 for P-1; ECM support for this memory limit began at v25.5, as did support for multiple LL test workers. It's also possible to run P-1 on multiple workers, including overlapping stage 2, and presumably ECM as well. I sometimes run a P-1 on each Xeon in a system, among the 2 or 4 workers per system.

whatsnew.txt says
Code:
New features in Version 25.7 of prime95.exe
-------------------------------------------
1) Time= in ini files no longer supported. A during/else syntax can be used instead for some ini file options.
2) PauseWhileRunning enhanced to pause any number of workers.
3) LowMemWhileRunning added.
4) Ability to stop and start individual workers added.
5) DayMemory and NightMemory in local.txt replaced with a single Memory setting.
6) Memory can be set for each worker thread.
7) Scheme to distribute available memory among workers needing a lot of memory has been completely revamped.
8) MaxHighMemWorkers replaces delayStage2Workers option.
9) The executable now defaults to talking to the PrimeNet v5 server. To use the executable with the old v4 server, add "UseV4=1" to the top of prime.txt.
undoc.txt describes both cases:
Code:
The Memory=n setting in local.txt refers to the total amount of memory the program can use. You can also put this in the [Worker #n] section to place a maximum amount of memory that one particular worker can use.
The local.txt memory allowance set by the menus, and the memory allocations observed with multiple simultaneous P-1 runs, appear consistent with the allowed value being a total, not per-worker. The first stage 2 to launch gets a lot of memory, but not all of the allowance, and the second gets essentially what's left, if it would benefit from at least that much. For example, a 4-worker system with 128 GB total RAM and a setting of 32 GB allowed gave 21 GB to the first launched stage 2 and 11 GB to the second. Another system with 128 GB installed, 2 workers, and 48 GB allowed allocated 40.5 GB to one worker and 7.5 GB to another.
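The observed behavior is consistent with a simple first-come policy. The sketch below is a hypothetical illustration of that apparent policy, not prime95's actual allocation code (which, per whatsnew.txt, was "completely revamped" in v25.7 and is more sophisticated):

```python
# Hypothetical model of the observed allocation: the Memory setting acts as
# a TOTAL for the application; each stage-2 worker, in launch order, gets
# the smaller of what it wants and what remains of the allowance.
# NOT prime95's real algorithm -- just a sketch matching the numbers seen.
def split_memory(total_allowed_gb, requests_gb):
    remaining = total_allowed_gb
    grants = []
    for want in requests_gb:
        grant = min(want, remaining)   # cap each grant by what is left
        grants.append(grant)
        remaining -= grant
    return grants

# 32 GB allowed: a worker taking 21 GB leaves 11 GB for the next.
print(split_memory(32, [21, 14]))      # [21, 11]
# 48 GB allowed: 40.5 GB to the first leaves 7.5 GB for the second.
print(split_memory(48, [40.5, 20]))    # [40.5, 7.5]
```

Both outputs match the allocations reported above, which is why a total (rather than per-worker) interpretation of the setting seems likely.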