PRP versus LL first tests for Prime95 / impact on GPU factoring
Hello,
For first-time tests using Prime95: if I switch from LL to PRP and the test finishes with no errors using Gerbicz error checking, does this mean 100% that there will be no need in the future for a doublecheck (like GIMPS presently does with LL)? If this is a true statement, then when my current LL test finishes with Prime95 I will start doing PRP for first-time tests (and use the new Prime95 version to have the latest and greatest instead of using 29.4). My CPU (Dell tower, Intel Core i3-4150) and GPU (EVGA GeForce GTX 1050 SC Gaming 02G-P4-6152-KR 2 GB from Newegg) have been 100% reliable, so I anticipate no errors anyway.
2nd question: doing the actual factoring/LL test timings on my GPU, and using the math below from the GIMPS website, it made sense for me to go to 76 bits for GPU72.com on my video card with mfaktc when comparing to how long it takes to do 2 LL tests on my video card with CUDALucas:
factoring_cost < chance_of_finding_factor * 2 * primality_test_cost
"Looking at past factoring data we see that the chance of finding a factor between 2^x and 2^(x+1) is about 1/x."
I have changed recently and now only go up to 75 bits on my GPU, because PRP on the CPU (with no errors) eliminates the need to do a 2nd test as a doublecheck in the future. Is this a correct/fair interpretation? (I know this isn't an apples-to-apples comparison, as CUDALucas on the GPU does not do PRP, I believe, so I am comparing to Prime95 using PRP for the initial test and not needing a 2nd future doublecheck.) Plus there is no way of knowing after my 75-bit test whether the actual GIMPS user later does LL or PRP (and in reality the GIMPS math formula is only valid for the present state: comparing GPU factoring to GPU primality testing (LL only), and separately CPU factoring to CPU primality testing (LL or PRP)). Also, FYI, I only use CUDALucas for manual LL doublechecking, never first-time tests.
Bonus question :) if anyone knows the answer: does the new GeForce 1650 get a big factoring boost like the other new cards (i.e., how many GHz-days per day)? My GeForce 1050 (which EVGA slightly overclocks) gets slightly over 250 GHz-days per day. It is nice that the 1050/1650 work in 300-watt towers, so they can easily slide in with no power connector. Thanks for all the answers. William
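The break-even rule quoted above can be turned into a quick numeric check. This is only a sketch with made-up timings (the function name and the hour figures are mine, not from GIMPS); plug in your own mfaktc and CUDALucas benchmarks:

```python
# Sketch of the GIMPS trial-factoring break-even rule quoted above.
# All timings here are hypothetical placeholders, not real benchmarks.

def worthwhile(bit_level, tf_hours_at_level, test_hours, tests_saved=2):
    """Return True if TF from 2^(bit_level-1) to 2^bit_level pays off.

    Uses the GIMPS heuristic that the chance of a factor between
    2^(x-1) and 2^x is roughly 1/x, and that finding one saves
    `tests_saved` primality tests.
    """
    chance_of_factor = 1.0 / bit_level
    return tf_hours_at_level < chance_of_factor * tests_saved * test_hours

# Example: one bit level of TF takes 2 h at 75 bits (4 h at 76, since
# TF cost roughly doubles per bit), and an LL/PRP test takes 120 h.
print(worthwhile(75, 2.0, 120.0))  # 2 < (1/75)*2*120 = 3.2  -> True
print(worthwhile(76, 4.0, 120.0))  # 4 > (1/76)*2*120 ≈ 3.16 -> False
```

With `tests_saved=1` (the PRP-no-doublecheck scenario William describes), the 76-bit level fails the test even sooner, which is the trade-off being debated in this thread.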
There still needs to be a doublecheck with PRP. PRP greatly reduces errors, so the need for triple checks drops accordingly. It also makes large exponents viable to test.

[QUOTE=wfgarnett3;516255]
For first time tests using Prime95 if I switch from LL to PRP and the test finishes with no errors using Gerbicz error checking does this mean 100% that there will be no need in the future for a doublecheck (like GIMPS presently does with LL)?[/quote] Good question. Most likely we will still do doublechecking, but with less urgency. There are still vulnerabilities. One is fraud. Who would bother to submit fake results? Well, for 22 years no one. This year we have our first deranged individual finding pleasure doing this. Two is programmer error. Prime95 29.4 had a bug where it would report that it had done Gerbicz checking but did not. There were also small windows where it was vulnerable to a hardware error. Does gpuOwl or prime95 29.6 have any small windows where they are vulnerable to a hardware error? Three is human error in copying/pasting manual results. We've seen this happen recently in TF results. [quote] If this is a true statement then when my current LL test finishes with Prime95 I will start doing PRP for first time tests (and use the new Prime95 version to have the latest and greatest instead of using 29.4). My CPU (Dell tower, Intel Core i3-4150) and GPU (EVGA GeForce GTX 1050 SC Gaming 02G-P4-6152-KR 2 GB from Newegg) have been 100% reliable so I anticipate no errors anyway. 2nd question: doing the actual factoring/LL test timings on my GPU and using the math below from the GIMPS website it made sense for me to go to 76 bits for GPU72.com on my video card with mfaktc when comparing to how long it takes to do 2 LL tests on my video card with CUDALucas. I have changed recently and now only go up to 75 bits on my GPU because PRP on the CPU (with no errors) eliminates the need to do a 2nd test as a doublecheck in the future. Is this a correct/fair interpretation? (I know this isn't an apples-to-apples comparison, as CUDALucas on the GPU does not do PRP I believe, so I am comparing to Prime95 using PRP for the initial test and not needing a 2nd future doublecheck.)
[/QUOTE] You should still go to 2^76. The answer would be different if there were a CUDA program that did PRP and GIMPS decided not to do doublechecking. Your goal is to maximize the number of exponents your GPU can clear in a given time period. What prime95 and gpuOwl do is irrelevant. You can only clear exponents by either TF or LL, which definitely requires doublechecking.
[QUOTE=wfgarnett3;516255]Bonus question :) if anyone knows the answer: does the new GeForce 1650 get a big factoring boost like the other new cards (i.e., how many GHz-days per day)?
My GeForce 1050 (which EVGA slightly overclocks) gets slightly over 250 GHz-days per day. It is nice that the 1050/1650 work in 300-watt towers, so they can easily slide in with no power connector.[/QUOTE] I don't have a 1650, but I do have a 1660 Ti. Running it power-limited to 70 W gives me about 1400 GHz-d/day. I would estimate a 1650 to deliver about 1000 GHz-d/day (give or take). Go forth and upgrade!
Thanks for the helpful answers everyone!

[QUOTE=wfgarnett3;516255]Hello,
For first time tests using Prime95 if I switch from LL to PRP and the test finishes with no errors using Gerbicz error checking does this mean 100% that there will be no need in the future for a doublecheck (like GIMPS presently does with LL)?[/QUOTE]No, it does not mean that. But switch to PRP anyway as soon as you can, where you can. The Gerbicz check is that good. Its overhead is only on the order of 0.2%. So, comparing PRP against LL with the Jacobi check (which catches half of LL errors), primality testing twice with PRP takes about 2.004 units of effort, while LL takes 2.02 or more. [QUOTE]My CPU (Dell tower, Intel Core i3-4150) and GPU (EVGA GeForce GTX 1050 SC Gaming 02G-P4-6152-KR 2 GB from Newegg) have been 100% reliable so I anticipate no errors anyway.[/QUOTE]Hardware becomes less reliable with age. Please run doublechecks at least once or twice a year on each. [QUOTE]2nd question: doing the actual factoring/LL test timings on my GPU and using the math below from the GIMPS website it made sense for me to go to 76 bits for GPU72.com on my video card with mfaktc when comparing to how long it takes to do 2 LL tests on my video card with CUDALucas: factoring_cost < chance_of_finding_factor * 2 * primality_test_cost. "Looking at past factoring data we see that the chance of finding a factor between 2^x and 2^(x+1) is about 1/x." I have changed recently and now only go up to 75 bits on my GPU because PRP on the CPU (with no errors) eliminates the need to do a 2nd test as a doublecheck in the future. Is this a correct/fair interpretation?[/QUOTE]Please go up to the gpu72 bounds for the given gpu model and two primality tests saved. There are charts for each gpu model available at James Heinrich's site, for example [URL]https://www.mersenne.ca/cudalucas.php?model=692&mmin=90&mmax=1000[/URL] or, for single exponents, for example [URL]https://www.mersenne.ca/exponent/93000067[/URL]. But any additional level is welcome, and there's plenty of work to go around.
Whatever you don't complete, someone else is likely to get assigned. Investigate tuning your mfaktc.ini for max performance if you haven't already. [QUOTE]Also FYI I only use CUDALucas for manual LL doublechecking, never first time tests.[/QUOTE]Thank you for the LL doublechecks. Many more are needed. A good gpu and a proper CUDALucas installation can be quite reliable, even though there's no Jacobi check in it. I've not had any bad LL gpu results in over 1.5 years. There's no such thing as CUDAPRP yet.
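The 2.004 vs. 2.02 effort comparison above can be reproduced with a toy model. This is my reading of the arithmetic; the 2% LL error rate and the expected-retest formula are assumptions for illustration, not necessarily the poster's exact derivation:

```python
# Toy model behind the "2.004 vs. 2.02 or more" effort comparison.
# The 2% LL error rate is an assumed figure, not from the post.

def prp_effort(gerbicz_overhead=0.002):
    # Gerbicz checking catches hardware errors in-run at ~0.2% overhead,
    # so two clean PRP tests cost:
    return 2 * (1 + gerbicz_overhead)

def ll_effort(error_rate=0.02, jacobi_catch=0.5):
    # The Jacobi check catches about half of LL errors in-run; the rest
    # surface only as mismatched residues, forcing extra (triple) checks.
    p_undetected_bad = error_rate * (1 - jacobi_catch)
    # Expected number of tests until two good results, simple model:
    return 2 / (1 - p_undetected_bad)

print(prp_effort())  # 2.004
print(ll_effort())   # ~2.0202, i.e. "2.02 or more"
```

The exact LL figure depends on the assumed error rate, which is why the post says "2.02 or more": older or flakier hardware pushes it higher.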
[QUOTE=kriesel;516305]...The Gerbicz check is that good. Its overhead is only of order 0.2%. So if you could run PRP, or LL with Jacobi check, which catches half of LL errors, primality testing (twice) with PRP takes 2.004 effort, while LL takes 2.02 or more...[/QUOTE]
Along that line of thought, it really will be nice if someone with bad hardware is able to catch it early and do something about it. During my analysis, it was amazing how many machines out there had just been churning out one bad result after another, and it was only years and years later that we realized this, once they started getting doublechecks and the trend was spotted. If the Gerbicz error detection lives up to the promise, it will definitely help, even if we do still find mismatches down the road. Right now, LL doublechecking is pretty far behind the first-time tests, and I still have my theory (with no evidence to back it up) that we've missed a prime somewhere in the < 57M range. :smile: That's the real problem: we may have missed something, but it'll be a decade before we know. No real harm done in the end, but still...
[QUOTE=Madpoo;516695]. . . I still have my theory (with no evidence to back it up) that we've missed a prime somewhere in the < 57M range. :smile:[/QUOTE]
Now there's a way to create some doublecheck incentive! 
What is the likelihood of someone completing a valid proof of the conjecture that there exist infinitely many Mersenne primes? What would happen to the project?
Would it be necessary for the proof, if valid, to contain some formula/theorem that could be exploited for generating Mersenne primes? Otherwise I don't think we have any way of knowing if we skipped a prime or not ... unless we run all those tests again. What is the likelihood that two computers return the same bad residue, though?
[QUOTE=Madpoo;516695]Along that line of thought, it really will be nice if someone with bad hardware is able to catch it early and do something about it.
During my analysis, it's amazing how many machines out there had just been churning out one bad result after another, and it was only years and years later that we realized this, once they started getting doublechecks and the trend was spotted. [/QUOTE]I think it would be a plus if future releases of primality testing software performed a brief self-test before beginning each primality test, and, if the hardware were found unreliable AT THAT TIME, refused to proceed with a primality test, instead providing the user with recommendations for improving reliability. Perhaps a fast small block of PRP with Gerbicz checking, even if what's being run is LL, on the same exponent/FFT length, to test more closely what's about to be run.
[QUOTE=dcheuk;516789]What is the likelihood of someone completing a valid proof of the conjecture where there exists an infinite number of Mersenne Primes? What will happen to the project?[/QUOTE]It would be an encouragement but have little practical effect. If the converse occurred, a proof that only x Mersenne primes could exist, the project would have a potential end point, if x were near the current number found. If x > 57, the project still has centuries to run. [QUOTE]
Is it necessary for the proof, if valid, to contain some formula/theorem that can be exploited for generating mersenne primes?[/QUOTE]Very unlikely. It's unlikely any such formula exists. [QUOTE]Otherwise I don't think we have any way of knowing if we skipped a prime or not ... unless we run all those tests again. What is the likelihood two computers return the same bad residue though?[/QUOTE]The likelihood of a random error causing a match for the same exponent is trivially small. The likelihood of one random erroneous res64 matching any exponent's res64 in p<10[SUP]9[/SUP] is estimated at <10[SUP]-11[/SUP]; the likelihood of any such occurrence in p<10[SUP]9[/SUP], <10[SUP]-5[/SUP]. (See [URL]https://www.mersenneforum.org/showpost.php?p=509283&postcount=19[/URL]). Some error types are known not to be random, and current software checks for them, including their symptoms along the way, to minimize wasted compute time. The known nonrandom errors produce res64 values at or near zero, or, at a much lower rate, near the maximum. The LL false-positive zero res64 has historically been about ten times as common as all other identified nonrandom error res64 outcomes combined. (Details at [URL]https://www.mersenneforum.org/showpost.php?p=509940&postcount=4[/URL]) A check of the existing PRP data for GIMPS showed no coincidence of res64 values among 9945 exponents, admittedly a small sample. The methodology of two matching res64 results per exponent has been checked by systematically triple-checking small/fast exponents <3M, with encouraging results.
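The per-exponent collision estimate above can be sanity-checked with a back-of-envelope calculation. The count of prime exponents below 10^9 (~5×10^7, from the prime-counting function) is my approximation; the bound itself is from the linked post:

```python
# Back-of-envelope check of the res64 collision estimate above.
# N_EXPONENTS (~pi(10^9)) is my approximation, not a figure from the post.

N_EXPONENTS = 5.0e7  # roughly the number of prime exponents p < 10^9

# Chance that one random erroneous 64-bit residue happens to equal the
# stored res64 of any one of the ~5e7 exponents:
p_one_bad_matches_any = N_EXPONENTS / 2**64

print(p_one_bad_matches_any)  # ~2.7e-12, under the quoted <10^-11 bound
```

This only models a truly random error; as the post notes, the nonrandom error modes (res64 at or near zero) are the ones that actually need explicit checks.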
One thing you need to know about the 1650: that card comes without RT cores. The 1660 and up have RT cores. So the 1650 may miss out if the RT cores are part of the performance boost of the new Nvidia cards.
Gaming benchmarks show it's somewhere between the 1050 Ti and the 1060. I would go for a 1660 or 1660 Ti; with those you get better bang for the money.
I believe the 1660s also lack RTX cores (aka Ray Tracing), hence they are called GTX. Don't quote me on it, though.
I would really like somebody to take one for the team, and test out a 1650 on TF. It could turn out to be the most value for money for this workload. 
[QUOTE=axn;517137]I believe the 1660s also lack RTX cores (aka Ray Tracing), hence they are called GTX. Don't quote me on it, though.
I would really like somebody to take one for the team, and test out a 1650 on TF. It could turn out to be the most value for money for this workload.[/QUOTE] Ooooh. Yeah. After some more reading, I got it wrong. The 1660 supports ray tracing, but without RT cores. However, the 1650 is about 30% less performance than the 1060 and 30% more than the 1050. So for a new build the 1650 can absolutely be a good choice. But to upgrade from a 1050 I would go for the 1660. 1660 = $200 for 100% more performance; 1650 = $130-140 for 30% more performance. Sorry for the misinformation in my last post: the 1660 doesn't have RT cores.
[QUOTE=Thecmaster;517130]Gaming benchmarks show it's somewhere between the 1050 Ti and the 1060[/QUOTE]
In this case, gaming benchmarks are completely irrelevant because of architectural improvements between Pascal (GTX 10-series) and Turing (GTX 16-series / RTX 20-series) cards. The 1650 has about 46% of the CUDA cores of the 2060, and in factoring use the slower memory doesn't matter. So a rough guesstimate would give about 750-800 GHz-d/day depending on the clock speed. If it's the 75 W version without a PCIe power connector, it probably won't go that high, possibly closer to 700. Still, that would put it in GTX 1070 territory, but just for this one workload (mfaktc).
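The core-count scaling behind that guesstimate can be written out explicitly. The RTX 2060 reference throughput below is a placeholder I chose for illustration; only the core counts and the scaling idea come from the post:

```python
# Sketch of scaling mfaktc throughput by CUDA core count within one
# architecture generation. ref_ghzd is a hypothetical reference figure,
# not a published benchmark.

def estimate_tf_throughput(ref_ghzd, core_ratio, clock_ratio=1.0):
    """Scale a reference card's TF throughput by core and clock ratios."""
    return ref_ghzd * core_ratio * clock_ratio

# GTX 1650: 896 CUDA cores vs. the RTX 2060's 1920, i.e. about 46%.
print(estimate_tf_throughput(1700.0, 896 / 1920))  # ~793 GHz-d/day
```

Scaling by core count only works within one architecture, which is the whole point of the post: Pascal and Turing cores are not comparable for this integer-heavy workload.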
[QUOTE=nomead;517147]In this case, gaming benchmarks are completely irrelevant because of architectural improvements between Pascal (GTX 10-series) and Turing (GTX 16-series / RTX 20-series) cards. The 1650 has about 46% of the CUDA cores of the 2060, and in factoring use the slower memory doesn't matter. So a rough guesstimate would give about 750-800 GHz-d/day depending on the clock speed. If it's the 75 W version without a PCIe power connector, it probably won't go that high, possibly closer to 700. Still, that would put it in GTX 1070 territory, but just for this one workload (mfaktc).[/QUOTE]
Yeah, I know gaming doesn't have anything to do with this; it's just to compare different cards. From what I can read, the 1660 Ti performs in TF like the 1070, not the 1650. The 1650 has about 900 CUDA cores, the 1660 has about 1400, and the 1660 Ti has around 1500. The gaming benchmarks show the performance difference between cards, and it is roughly the same difference as in TF. The 1650 is the worst card Nvidia has ever released. And as I wrote: if you want to upgrade from a 1050, go with the 1660.
Well, going by the numbers posted in the [URL="https://www.mersenneforum.org/showpost.php?p=515784&postcount=8"]GTX 1660 Ti thread[/URL] and the benchmarks listed on [URL="https://www.mersenne.ca/mfaktc.php"]mersenne.ca[/URL], the 1660 Ti is above the 1080 Ti in factoring performance, _not_ like the 1070. I don't have any actual GTX 10-series hardware to test on, so I have to trust these posted benchmark tables. But if you have different information about them, please tell.
The CUDA cores are not the same in Turing anymore. Pascal only had a single-clock INT16 multiply, and INT32 took several operations. Turing now has a single-clock INT32 multiply, and that's why trial factoring, as done with mfaktc, is so much faster.
When you go to Test > Status for my PRP test on my M85XXXXXX exponent, is it my imagination, or is the probability for "The chance that the exponent you are testing will yield a Mersenne prime is about 1 in ......." significantly worse than when I did a first-time LL in the previous version of the software?
If I am not imagining things, why is it? 
[QUOTE=nomead;517211]Well, going by the numbers posted in the [URL="https://www.mersenneforum.org/showpost.php?p=515784&postcount=8"]GTX 1660 Ti thread[/URL]
......[/QUOTE] And what about LL performance with CUDALucas? Is the GTX 1070 faster, or the GTX 1660 Ti?
[QUOTE=moebius;517916]And what about LL performance with CUDALucas? Is the GTX 1070 faster, or the GTX 1660 Ti?[/QUOTE]
Well, we were talking about GPU factoring... But in the case of CUDALucas, there is no magic rabbit to pull out of the hat for the Turing architecture. FP64 performance is what it is: sorely lacking. Just guessing, in CUDALucas the GTX 1070 should be about 20% faster than the GTX 1660 Ti. There are 25% more cores on the 1070, but the 1660 Ti can probably run at a somewhat higher clock speed. Both architectures have the same ratio of FP64 to FP32 execution units, though.
Are there any plans to update the Math page to cover PRP?
At the risk of taking the GIMPS walk of shame (I really don't know how PRP works): if PRP is being considered to replace LL, why are we running PRP on exponents already LL/DC'd? As a thorough test of PRP? To see which have more factors? Other?
Sorry if I wasn't clear: the PRP test on M85XXXXXX I am doing now is a first-time test.
I was referring to my last LL first-time test on a completely different M85XXXXXX. If in both cases they are first-time tests, why is the probability stated in Prime95 of the exponent being a Mersenne prime significantly worse with PRP than with LL?
[QUOTE=wfgarnett3;517868]When you go to Test > Status for my PRP test on my M85XXXXXX exponent, is it my imagination, or is the probability for "The chance that the exponent you are testing will yield a Mersenne prime is about 1 in ......." significantly worse than when I did a first-time LL in the previous version of the software?
If I am not imagining things, why is it?[/QUOTE] What does the worktodo.txt entry look like? 
[QUOTE=wfgarnett3;517927]If in both cases they are first time tests why is the probability stated in Prime95 of the exponent being a Mersenne prime significantly worse with PRP than LL?[/QUOTE]
They shouldn't differ. The type of test doesn't affect the probability; only the size of the number (i.e., the exponent) and the amount of factorization (TF + P-1) do. It would be good if you could post some screenshots of the probability.
1 Attachment(s)
The exponent is being PRP tested on my home desktop computer with Prime95.
Here at work I just downloaded Prime95 and copied the worktodo info from mersenne.org, and Prime95 is reporting a correct probability for the first-time PRP test (image1 attached) that is similar to prior LL first-time probabilities. When I get home from work later today I will attach the screenshot from Prime95 on my home desktop machine, and you will see that Prime95 at home is reporting an incorrect probability for the first-time PRP test that is significantly worse than on my work laptop for the same exact exponent.
Make sure worktodo.txt does not read "PRPDC=....."

1 Attachment(s)
Attached is image2, which shows this home computer was indeed reporting a worse, incorrect probability for this first-time PRP test of the exponent.
The worktodo.txt on this home computer contains:
PRP=0A5CF58B6C3666AC9FE2A5E244054E24,1,2,85375753,-1
while when I log into mersenne.org it shows the worktodo text to copy as (which I used on my work laptop for image1):
PRP=0A5CF58B6C3666AC9FE2A5E244054E24,1,2,85375753,-1,76,0,3
Is the missing "factored to 76 bits" info in worktodo.txt the reason for the incorrect first-time test probability? If so, what is wrong with Prime95/PrimeNet? Prime95 automatically got this assignment on its own, but failed to put the factored-to-76-bits fields into the PRP worktodo entry, as it used to when I used the previous version of the software for LL testing.
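For reference, the fields of the longer worktodo line can be unpacked as follows. This is a sketch based on my understanding of the Prime95 format (assignment ID, then k, b, n, c for a number of the form k*b^n+c, then optional how-far-factored and tests-saved fields); treat the field meanings as an assumption, not official documentation:

```python
# Parse a Prime95 PRP worktodo line. Assumed format:
# PRP=AID,k,b,n,c[,how_far_factored,tests_saved[,base,residue_type]]

line = "PRP=0A5CF58B6C3666AC9FE2A5E244054E24,1,2,85375753,-1,76,0,3"
fields = line.split("=", 1)[1].split(",")

aid = fields[0]
k, b, n, c = (int(x) for x in fields[1:5])   # the number is k*b^n + c
tf_bits = int(fields[5]) if len(fields) > 5 else None  # trial-factored depth

print(f"testing {k}*{b}^{n}{c:+d}, trial factored to 2^{tf_bits}")
# -> testing 1*2^85375753-1, trial factored to 2^76
```

With the shorter five-field line from the home machine, `tf_bits` comes out as `None`, which matches the symptom described: Prime95 has no trial-factoring depth to feed into its probability estimate.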
[QUOTE=wfgarnett3;517993]
Is the "factored to 76 bit" missing in worktodo.txt the reason for the incorrect first time test probability?[/quote] Yes, prime95 assumed no TF had been done. [quote]If so, what is wrong with Prime95/PrimeNet, as Prime95 automatically got this assignment on its own but failed to put the factored 76 bit pieces into the PRP worktodo on its own like it used to when I used the previous version software for LL testing?[/QUOTE] Older versions of prime95 weren't smart enough to write that info to worktodo.txt.
But it is the newer software that failed to write the info to worktodo.txt.
The older version from 2018 always wrote the factored info in worktodo.txt when communicating with PrimeNet for LL testing on my home desktop. This new version on my home desktop (placed in its own folder in Windows 10) didn't write the info when it communicated with PrimeNet to get the PRP assignment. Image2 from my current PRP first-time test is from the latest version of Prime95, which you released this month on your homepage. Prime95 left out the factoring info by itself for this new assignment, whereas the 2018 version always had the info for prior assignments. That's why I wondered why the probability was off: the factored info is missing. Is this a PrimeNet issue or a Prime95 issue?
[QUOTE=wfgarnett3;518006]...the factored info is missing.
Is this a PrimeNet issue or a Prime95 issue?[/QUOTE] This is a prime95 issue (old and new versions). Prime95 only writes the how-far-factored data if P-1 is required.
George,
Do you mean P-1 is required only for an LL test and not PRP? My 2nd PRP likewise has missing bit information, while if I log into mersenne.org it shows the 76-bit factored information (and P-1 was already done). In a future version you might want to fix this, since PRP testing is not showing the true probability like LL testing does (i.e., it shows a worse probability for PRP, as though no factoring has taken place, which isn't true...). William
P-1 is recommended for both LL and PRP.

OK, then this newer version of Prime95 / PrimeNet is failing to properly include the right info in worktodo.txt for me. I won't worry too much about it, since it only affects the stated probability, not the actual testing...

[QUOTE=Prime95;516268]Good question. Most likely we will still do doublechecking, but with less urgency. There are still vulnerabilities.
One is fraud. Who would bother to submit fake results? Well, for 22 years no one. This year we have our first deranged individual finding pleasure doing this. Two is programmer error. Prime95 29.4 had a bug where it would report that it had done Gerbicz checking but did not. There were also small windows where it was vulnerable to a hardware error. Does gpuOwl or prime95 29.6 have any small windows where they are vulnerable to a hardware error? Three is human error in copying/pasting manual results. We've seen this happen recently in TF results. You should still go to 2^76. The answer would be different if there were a CUDA program that did PRP and GIMPS decided not to do doublechecking. Your goal is to maximize the number of exponents your GPU can clear in a given time period. What prime95 and gpuOwl do is irrelevant. You can only clear exponents by either TF or LL, which definitely requires doublechecking.[/QUOTE] Prime95, the above is your quote from May of last year. Now, for my GeForce 1050, I see on mersenneforum that gpuOwl supports Nvidia video cards and uses OpenCL, and when I tested kriesel's gpuowl executable for PRP over a week ago, the iteration time was near CUDALucas's LL iteration time. So since a second PRP test using gpuOwl on the GPU is not technically needed (just like Prime95 technically only needs one PRP test), in a technical sense, when I do GPU factoring for gpu72.com using mfaktc, I should now only go up to 75 bits and not 76 bits, correct?
[QUOTE=wfgarnett3;534621]
So since a second PRP test using gpuOwl on the GPU is not technically needed (just like Prime95 technically only needs one PRP test), in a technical sense, when I do GPU factoring for gpu72.com using mfaktc, I should now only go up to 75 bits and not 76 bits, correct?[/QUOTE] Go to 75. This has nothing to do with PRP and gpuowl. It is because there has been a surge in first-time PRP and LL testing. There are only enough TF resources to take exponents to 2^75.