
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Wagstaff PRP Search (https://www.mersenneforum.org/forumdisplay.php?f=102)
-   -   Searching for Wagstaff PRP (https://www.mersenneforum.org/showthread.php?t=13108)

lalera 2015-02-28 20:12

I would like to inform you that I have created a forum for
PRP-testing Wagstaff numbers:


[url]http://lalera.freeforums.org/[/url]

TheJudger 2015-02-28 21:06

Hi,

so there is still a real use case for TF on Wagstaff numbers using mfaktc?
Background: diep asked me a while ago how much work it would be to implement Wagstaff TF in mfaktc, so I took a quick look and made a fast modification (yes, this was really easy!) to mfaktc. Basically I check whether 2[SUP]p[/SUP] mod FC (factor candidate) is -1 (= FC - 1) instead of 2[SUP]p[/SUP] mod FC = 1 (for Mersennes). So this was a good test for the code... and it passed. I had a nice conversation with diep, Paul and Tony (the major part was with diep). This was an early -pre version of mfaktc 0.21 (it took a looooooong time from 0.20 to 0.21, sorry). After a while Ryan came up with [URL="http://www.mersenneforum.org/showthread.php?t=18569"]this[/URL] and I felt that diep, Paul and Tony lost motivation. Because the modification of mfaktc was easy and does not harm the main focus of mfaktc (TF of Mersennes at the PrimeNet wavefront above 2[SUP]64[/SUP]), I decided to keep the feature. It would be nice to know if you really use this now. :smile:

Oliver
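The divisibility criterion Oliver describes can be sketched in a few lines of Python (a toy illustration only, not mfaktc's actual CUDA code; the function names are mine):

```python
# Toy sketch of the trial-factoring criterion described above. A prime q
# divides the Wagstaff number (2^p + 1)/3 exactly when 2^p ≡ -1 (mod q),
# i.e. pow(2, p, q) == q - 1; for a Mersenne number 2^p - 1 the condition
# is pow(2, p, q) == 1 instead.

def divides_wagstaff(q: int, p: int) -> bool:
    """True if q divides 2^p + 1 (and hence, for q != 3, (2^p + 1)/3)."""
    return pow(2, p, q) == q - 1

def divides_mersenne(q: int, p: int) -> bool:
    """True if q divides 2^p - 1."""
    return pow(2, p, q) == 1

# 2^11 + 1 = 2049 = 3 * 683, so 683 divides the Wagstaff number W(11):
assert divides_wagstaff(683, 11)
# 23 divides 2^11 - 1 = 2047 = 23 * 89:
assert divides_mersenne(23, 11)
```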

lalera 2015-02-28 21:32

[QUOTE=TheJudger;396680]So there is still a real use case for TF on Wagstaff numbers using mfaktc? ... It would be nice to know if you really use this now. :smile:[/QUOTE]

hi,
yes, I do use mfaktc v0.21 for TF of Wagstaff numbers.
I have created a forum for Wagstaff numbers:
[url]http://lalera.freeforums.org/[/url]
I am doing TF of the range 17400000 - 17500000 to 70 bits
and posting the candidates for PRP tests with LLR.
Other ranges may follow later on.
I hope that some people will take part;
they can make their reservations for PRP testing
and post the results.

diep 2015-02-28 23:02

hi Oliver,

We are busy doing the math on how far PG could get with X cores within a few years.
Yes, there is a VERY good use case.

We fear it'll take 3 years, though, to get to 30M with 500 CPU cores (no PRP testing with GPUs - that would speed it up), starting at 10M.

lalera 2015-03-01 11:09

1 Attachment(s)
hi,

here are the results of TF for the range
17401000 - 17404000 to 70 bits

and I have started the range
17404000 - 17410000

you can also download the tf results at

[url]http://lalera.freeforums.org/[/url]

axn 2015-03-01 13:35

[QUOTE=TheJudger;396680]Basically I check if 2[SUP]p[/SUP] mod FC (factor candidate) is -1 (= FC - 1) instead of 2[SUP]p[/SUP] mod FC = 1 (for mersennes).[/QUOTE]

Oliver, just out of curiosity, you're checking for f = 1,3 (mod 8) for Wagstaff candidates, right?

EDIT: Nevermind. lalera's factor list contains both 1 and 3 (mod 8).

TheJudger 2015-03-01 17:02

[QUOTE=axn;396733]Oliver, just out of curiosity, you're checking for f = 1,3 (mod 8) for Wagstaff candidates, right?

EDIT: Nevermind. lalera's factor list contains both 1 and 3 (mod 8).[/QUOTE]

*hehe* Yes, I do. Otherwise it would have needed a big portion of luck to select the selftest dataset in such a way that all factors are 1 mod 8.
Good to see that you don't just use the code but also think about potential issues/bugs/whatever. :smile:

Oliver
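The mod-8 filter axn asks about can be sketched as follows (an illustrative Python toy; mfaktc's real sieve is far more elaborate). Any prime factor q of (2[SUP]p[/SUP] + 1)/3 has the form q = 2kp + 1 and satisfies q ≡ 1 or 3 (mod 8), so the other residue classes can be skipped before the powmod test:

```python
# Sketch of the candidate sieve discussed above. Factor candidates are
# q = 2*k*p + 1; only those with q ≡ 1 or 3 (mod 8) can divide the
# Wagstaff number (2^p + 1)/3, so half the candidates are skipped
# before the expensive modular exponentiation.

def wagstaff_factor_candidates(p: int, max_k: int):
    """Yield candidates q = 2*k*p + 1 with q ≡ 1 or 3 (mod 8)."""
    for k in range(1, max_k + 1):
        q = 2 * k * p + 1
        if q % 8 in (1, 3):
            yield q

def small_wagstaff_factors(p: int, max_k: int):
    """Return the candidates that actually divide 2^p + 1."""
    return [q for q in wagstaff_factor_candidates(p, max_k)
            if pow(2, p, q) == q - 1]

# 683 = 2*31*11 + 1 divides 2^11 + 1, and 683 ≡ 3 (mod 8):
print(small_wagstaff_factors(11, 100))   # [683]
```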

lalera 2015-03-04 17:42

1 Attachment(s)
hi,
here are the results of TF for the range
17404000 - 17410000 to 70 bits

lalera 2015-03-12 17:15

[QUOTE=lalera;397015]hi,
here are the results of TF for the range
17404000 - 17410000 to 70 bits[/QUOTE]

I give up now - sorry!

lalera 2015-03-14 16:07

[QUOTE=lalera;397558]i give up now - sorry![/QUOTE]

The plan to do the computations in collaboration with PrimeGrid is too much for me (several thousand machines).
I do not have the capacity to TF that much - and I cannot run a server at home (I have a private UPC connection in Vienna,
75 Mbit downstream and 7.5 Mbit upstream).
I asked for a server connection and they told me that they do not give server connections to private individuals - no chance!
So I will TF my small range(s).

paulunderwood 2015-03-14 16:17

In order to feed the many cores of a potential PrimeGrid PRP effort, we need a TF server. At the moment we do not have such a facility :sad:

henryzz 2015-03-14 22:14

Is there any reason why PrimeGrid could not do the TF as well?

paulunderwood 2015-03-14 23:11

They said it would be too much work to set up running mfaktc, as they are very busy at the moment :sad:

lalera 2015-03-16 10:03

1 Attachment(s)
hi,
here are the results of TF for the range
17410000 - 17420000 to 70 bits

lalera 2015-03-23 12:36

1 Attachment(s)
hi,
here are the results of TF for the range
17420000 - 17430000 to 70 bits

lalera 2015-03-30 19:17

1 Attachment(s)
hi,
here are the results of TF for the range
17430000 - 17440000 to 70 bits

lalera 2015-04-07 23:34

1 Attachment(s)
hi,
here are the results of TF for the range
17440000 - 17450000 to 70 bits

lalera 2015-04-17 11:47

1 Attachment(s)
hi,
here are the results of TF for the range
17450000 - 17460000 to 70 bits

lalera 2015-04-27 15:38

1 Attachment(s)
hi,
here are the results of TF for the range
17460000 - 17470000 to 70 bits

lalera 2015-05-05 13:27

1 Attachment(s)
hi,
here are the results of TF for the range
17470000 - 17480000 to 70 bits

lalera 2015-05-15 21:26

1 Attachment(s)
hi,
here are the results of TF for the range
17480000 - 17490000 to 70 bits

lalera 2015-05-24 16:47

1 Attachment(s)
hi,
here are the results of TF for the range
17490000 - 17500000 to 70 bits

lalera 2015-05-24 18:06

1 Attachment(s)
[QUOTE=lalera;402909]hi,
here are the results of TF for the range
17490000 - 17500000 to 70 bits[/QUOTE]

... sorry for the typo in the filename ...
hi,
here are the results of TF for the range
17490000 - 17500000 to 70 bits

GP2 2018-12-12 02:07

[QUOTE=paulunderwood;397698]In order to feed the many cores of a potential PrimeGrid effort of PRP'ing, we need a TF server. At the moment we do not have such a facility :sad:[/QUOTE]

Why would a TF server be needed?

Extensive additional TF might find factors for maybe 10% of the currently unfactored Wagstaff exponents. That doesn't really have that much impact overall.

And from the perspective of 2018 rather than 2015, we now have Gerbicz cofactor-compositeness testing. A PRP test only needs to be done once on any exponent, and provided a large (2048-bit or thereabouts) residue is recorded, then we don't have to do a fresh PRP test whenever a new cofactor is created by the discovery of new factors.

So it makes sense to do PRP testing even prior to undertaking a deeper TF effort. In fact you can always just do PRP testing on the raw exponent itself, with no factor string other than "3", even if existing factors are already known. So that simplifies serving up worktodo lines to PrimeGrid clients.
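The cofactor-compositeness idea can be sketched in toy form (illustrative Python with small numbers; in the real workflow the saved 2048-bit residue is reduced mod the cofactor rather than the big powmod being recomputed):

```python
# Toy illustration of cofactor-compositeness testing. For N = F * C with
# known factor F: if the cofactor C is a base-3 Fermat PRP, then
# 3^(N-1) ≡ 3^(F-1) (mod C). So if that congruence FAILS, C is
# definitely composite; if it holds, C is only a "probable probable
# prime" and still needs a direct PRP test.

def cofactor_definitely_composite(N: int, F: int) -> bool:
    C = N // F
    assert F * C == N
    # In practice, pow(3, N - 1, N) is the stored PRP residue and is
    # merely reduced mod C here instead of being recomputed.
    return pow(3, N - 1, C) != pow(3, F - 1, C)

# W(29) = (2^29 + 1)/3 has the factor 59; its cofactor 3033169 is prime,
# so the test cannot prove it composite:
W29 = (2**29 + 1) // 3
print(cofactor_definitely_composite(W29, 59))   # False
```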

paulunderwood 2018-12-12 06:39

[QUOTE=GP2;502461]Why would a TF server be needed?

... So it makes sense to do PRP testing even prior to undertaking a deeper TF effort.[/QUOTE]

I agree that "Gerbicz cofactor-compositeness testing" is better than TF+PRP these days. I do a Lucas-based test in case one is a Carmichael cofactor or is indeed a base-a PSP -- perhaps Robert can tell us whether this can happen. The project would greatly benefit from thousands of computers running against an internet server, compared to a handful of computers. :smile:

diep 2018-12-12 09:03

TF is far more effective for Wagstaff than for Mersenne. Furthermore, GPUs are not very effective as of yet in PRP testing, yet they can be very effectively thrown into battle for TF. In short, the break-even point up to which TF is effective is far deeper than it is for Mersenne. If we look back quite some years, the Mersenne project was obviously seen by everyone as having a 'shot' at the cash. In short, relatively too little TF was performed for Mersenne, I got the impression (I never studied it deeply), and too little effort went into factoring Mersennes, just because anyone with 1 or 2 cores at home back then wanted a shot at the 'cash'.

With Wagstaff, with zero cash at stake, such inefficiencies are of course not needed.

p.s. P-1 is a pathetic algorithm for both Mersenne and Wagstaff if we realize that factors of both, under different conditions, have the form 2NP + 1, which gives rise to factoring algorithms that can exploit this knowledge, giving polynomial factorisation times.

GP2 2018-12-12 12:01

[QUOTE=paulunderwood;502480]I agree with your "Gerbicz cofactor-compositeness testing" is better than TF+PRP these days.[/QUOTE]

Careful with your phrasing :smile: You still have to do one PRP test, but it's never wasted now, even when factors are discovered later.

The only time you ever have to do a second PRP test on the same exponent is if you discover a new factor and it's one of the very rare cases where the Gerbicz cofactor-compositeness test does not determine that the newly created cofactor is definitely composite. In that case you have a "probable probable prime" and you do have to run a PRP test to verify that it's an actual probable prime.

And then PRP to other bases and a Lucas test, to feel more confident. And finally an ECPP certification if the exponent is small enough for that to be feasible, to turn the probable prime into an actual prime.

GP2 2018-12-12 12:18

[QUOTE=diep;502491]TF is far more effective for Wagstaff than Mersenne.[/QUOTE]

I don't really see that. My impression is that both cases are actually similar.

TF increases in difficulty exponentially as bit length increases. And whenever you have exponential growth in difficulty, you have a large set of cases that are trivially easy, another large set of cases that are impossibly hard, and a relatively narrow transition zone. Even when hardware gets faster, the transition zone only shifts a very small amount, because the curve rises so steeply.

For Mersenne, all of our TF is being done in that narrow transition zone. The trivially easy factors were found a long time ago, because Mersenne has been a famous search for decades or centuries.

For Wagstaff, it's much less glamorous, so I don't think any serious TF was done on it before you guys worked on it in 2013. So all those trivially easy factors were just waiting to be found.
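The exponential growth described above can be put in back-of-envelope form (a toy Python model only; it ignores sieving by small primes, and the halving comes from keeping only the q ≡ 1, 3 (mod 8) classes among the odd candidates q = 2kp + 1):

```python
# Rough count of TF candidates per bit level, illustrating why each extra
# bit costs about as much as all previous bit levels combined. Candidates
# are q = 2*k*p + 1 in [2^(b-1), 2^b); about half survive the
# q ≡ 1 or 3 (mod 8) filter.

def candidates_at_bit_level(p: int, b: int) -> int:
    interval = 2 ** (b - 1)          # width of [2^(b-1), 2^b)
    return interval // (2 * p) // 2  # spacing 2p, then the mod-8 filter

p = 17_400_000  # an exponent from the range TF'd earlier in this thread
for b in (66, 68, 70):
    print(b, candidates_at_bit_level(p, b))  # count doubles per bit
```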

diep 2018-12-12 15:08

[QUOTE=GP2;502498]I don't really see that. My impression is that both cases are actually similar. ...

For Wagstaff, it's much less glamorous, so I don't think any serious TF was done on it before you guys worked on it in 2013. So all those trivially easy factors were just waiting to be found.[/QUOTE]

Oh, it's even trivial that it is more effective, if you look at the expectation of where the next PRP will probably be. For Mersenne, the next prime is most likely a factor of 1.2 away from the previous one. For Wagstaff, the next PRP is most likely a factor of 3 away from the current one, with huge deviations.

Trivially, that implicitly means TF will factor a higher percentage of Wagstaffs than Mersennes for the same energy input.

edit: Please note I'm not claiming that this factor of 3 is constant for Wagstaff. As we move to larger and larger exponents, I would expect that factor to grow.

VBCurtis 2018-12-12 17:29

[QUOTE=diep;502506]
Trivially that implicitly means TF will factor a higher percentage of Wagstaffs than Mersenne for the same energy input.
[/QUOTE]

Your logic here is pretty poor. For both types, 99.9+% of candidates will be composite; whether the 5th sig-fig is higher for Wagstaff or not doesn't "trivially" affect TF probabilities; I'm pretty sure it doesn't affect them at all.

Your logic about P-1 also makes no sense; P-1 leverages the factor structure 2np+1 very well. That's what makes it powerful, but you claim that's what makes it less powerful. Calling P-1 "pathetic" for Mersennes makes you look rather full of :poop:
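For what it's worth, the way P-1 exploits the 2np+1 structure can be sketched in toy form (illustrative Python only; real implementations such as Prime95's use FFT arithmetic and a stage 2). Since every prime factor q of 2[SUP]p[/SUP] ± 1 satisfies 2p | q - 1, the stage-1 exponent is seeded with 2p, and the method succeeds whenever the remaining part k = (q-1)/(2p) is B1-smooth:

```python
from math import gcd, log

# Minimal stage-1 P-1 sketch for Mersenne/Wagstaff-style numbers.
# The factor structure q = 2*k*p + 1 means the p (and 2) parts of q - 1
# come for free by seeding the exponent with 2p; only k must be smooth.

def sieve_primes(limit):
    is_p = [True] * (limit + 1)
    is_p[0:2] = [False, False]
    for i in range(2, int(limit**0.5) + 1):
        if is_p[i]:
            is_p[i*i::i] = [False] * len(is_p[i*i::i])
    return [i for i, v in enumerate(is_p) if v]

def pminus1_stage1(N: int, p: int, B1: int):
    """Return a nontrivial factor of N = 2^p ± 1, or None."""
    E = 2 * p                        # the guaranteed part of q - 1
    for r in sieve_primes(B1):
        E *= r ** int(log(B1, r))    # prime powers up to B1
    g = gcd(pow(3, E, N) - 1, N)
    return g if 1 < g < N else None

# 2^29 + 1 = 3 * 59 * 3033169; for q = 59, k = (59-1)/(2*29) = 1,
# so even a tiny B1 suffices:
print(pminus1_stage1(2**29 + 1, 29, 10))
```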

paulunderwood 2018-12-12 17:51

[QUOTE=GP2;502496]Careful with your phrasing :smile: You still have to do one PRP test, but it's never wasted now, even when factors are discovered later. ...[/QUOTE]

Thanks for the clearer understanding. What is the wavefront of Wagstaff cofactor testing, and in which year will Propper be beaten, given that the prime exponents thin out?

GP2 2018-12-12 22:52

[QUOTE=paulunderwood;502520]Thanks for a clearer understanding. What is the wave front of Wagstaff co-factor testing and in which year will Propper be beaten, given that the prime exponents thin out?[/QUOTE]

I maintain [URL="http://mprime.s3-website.us-west-1.amazonaws.com/wagstaff/"]a mini-website[/URL] where there are lists of factors and lists of PRP residues that can be downloaded as flat files.

I don't know of anyone else currently doing PRP tests on Wagstaff numbers. I am currently doing a doublecheck run on a total of 64 Skylake cores and have reached 6M so far. Going from 5M to 6M took about 3 weeks, so I will probably reach 10M by spring. Some additional factoring is also being done.

In the new year a parallel effort might start, beginning at 10M. It's still under discussion.

As you know, in 2013 you guys tested up to 10M using Vrba-Reix, and Ryan Propper tested extensively in the 10M, 11M, 12M and 13M ranges, apparently using 3-PRP, but didn't reveal whether he did so exhaustively. I suspect there is probably no undiscovered prime smaller than 14M, but I think it's really necessary to double check, for the same reason that Mersenne tests get double checked.


Others could join in; however, it needs v29.5 in order to get the 2048-bit residues that will make future Gerbicz cofactor-compositeness tests possible when and where necessary. Earlier versions only output 64-bit residues.

On Skylake, v 29.5 is significantly faster than 29.4 because of the AVX-512 code. I find the PRP testing also runs faster when [c]HyperthreadLL=1[/c] is enabled, although the exact opposite is true when using 29.5 to do LL tests.

However, 29.5 is still in beta and has some memory corruption bugs, which George may have solved already, but he won't release any new version until he gets back from his vacation. The memory bugs wouldn't affect the correctness of the results because of Gerbicz error correction, but they can cause failure to execute. The problem only seems to occur when you are using many cores on the same machine. I am running one-core and two-core virtual machines in the cloud, and have not encountered any such problems.


Hardware is faster now than in 2013 and the software has gotten faster too, so even a one-person effort would probably catch up to Ryan Propper's 13M primes sometime in late 2019. But obviously additional efforts would make it go faster. And if it became a PrimeGrid project, the sky's the limit.

However, it may be best to let 29.5 mature a bit before seeking wider participation.

GP2 2018-12-15 17:37

[QUOTE=GP2;502556]On Skylake, v 29.5 is significantly faster than 29.4 because of the AVX-512 code. I find the PRP testing also runs faster when [c]HyperthreadLL=1[/c] is enabled, although the exact opposite is true when using 29.5 to do LL tests.[/QUOTE]

A clarification: PRP testing of Wagstaff numbers in the 6M range runs faster in v 29.5 on Skylake when [c]HyperthreadLL=1[/c] is present in the local.txt file.

However, PRP testing of Mersenne numbers in the normal wavefront range (78M and higher) runs slower in v 29.5 on Skylake when [c]HyperthreadLL=1[/c] is present in the local.txt file, and the same is true for LL testing in either the double-check or first-time ranges. (Note, in v 29.4 the opposite was true: [c]HyperthreadLL=1[/c] made it run faster on Skylake. However, v 29.4 didn't have code optimized for AVX-512 like 29.5 does.)

This is purely empirical; I can't think of an explanation. Either the small exponents make a difference, or Wagstaff vs. Mersenne makes a difference.

ATH 2018-12-16 03:42

I found Hyperthreading to be slightly faster for PRP check on 29.5:
[url]https://mersenneforum.org/showpost.php?p=498644&postcount=28[/url]

GP2 2018-12-16 08:15

[QUOTE=ATH;502953]I found Hyperthreading to be slightly faster for PRP check on 29.5:
[url]https://mersenneforum.org/showpost.php?p=498644&postcount=28[/url][/QUOTE]

That's strange. For an exponent in the 88.0M range it was considerably slower. I tried it again just now.

This is on the exact same c5d.large instance on AWS cloud. The only difference I can think of is the size of the exponent, or the fact that you ran on an earlier build number from October.

Like you, I didn't do an actual benchmark; I just stopped the automatically-launched background execution and ran it manually for a few minutes with the -d option.

This is without [c]HyperthreadLL=1[/c] in local.txt:

[CODE]
Memory=3300 during 7:30-23:30 else 3300
WorkerThreads=1
CoresPerTest=1
[/CODE]

[CODE]
[Main thread Dec 16 07:55] Mersenne number primality test program version 29.5
[Main thread Dec 16 07:55] Optimizing for CPU architecture: Core i3/i5/i7, L2 cache size: 1 MB, L3 cache size: 25344 KB
[Main thread Dec 16 07:55] Starting worker.
[Work thread Dec 16 07:55] Worker starting
[Work thread Dec 16 07:55] Resuming Gerbicz error-checking PRP test of M88042723 using AVX-512 FFT length 4704K, Pass1=192, Pass2=25088, clm=4
[Work thread Dec 16 07:55] Iteration: 16138806 / 88042723 [18.33%].
[Work thread Dec 16 07:55] Iteration: 16140000 / 88042723 [18.33%], ms/iter: [B]19.568[/B], ETA: 16d 06:49
[Work thread Dec 16 07:57] Iteration: 16145000 / 88042723 [18.33%], ms/iter: [B]19.587[/B], ETA: 16d 07:10
^C[Main thread Dec 16 07:57] Stopping all worker threads.
[Work thread Dec 16 07:57] Stopping PRP test of M88042723 at iteration 16146484 [18.33%]
[Work thread Dec 16 07:57] Worker stopped.
[Main thread Dec 16 07:57] Execution halted.
[/CODE]

And this is with [c]HyperthreadLL=1[/c] in local.txt:

[CODE]
Memory=3300 during 7:30-23:30 else 3300
WorkerThreads=1
CoresPerTest=1
HyperthreadLL=1
[/CODE]

[CODE]
[Main thread Dec 16 07:58] Mersenne number primality test program version 29.5
[Main thread Dec 16 07:58] Optimizing for CPU architecture: Core i3/i5/i7, L2 cache size: 1 MB, L3 cache size: 25344 KB
[Main thread Dec 16 07:58] Starting worker.
[Work thread Dec 16 07:58] Worker starting
[Work thread Dec 16 07:58] Resuming Gerbicz error-checking PRP test of M88042723 using AVX-512 FFT length 4704K, Pass1=192, Pass2=25088, clm=4, 2 threads
[Work thread Dec 16 07:58] Iteration: 16146485 / 88042723 [18.33%].
[Work thread Dec 16 07:59] Iteration: 16150000 / 88042723 [18.34%], ms/iter: [B]23.151[/B], ETA: 19d 06:20
[Work thread Dec 16 08:01] Iteration: 16155000 / 88042723 [18.34%], ms/iter: [B]23.130[/B], ETA: 19d 05:53
^C[Main thread Dec 16 08:01] Stopping all worker threads.
[Work thread Dec 16 08:01] Worker stopped.
[Main thread Dec 16 08:01] Execution halted.
[/CODE]


PS,
and looking at the log files of the automatically-launched background process, it was doing this before I halted it to do the above testing:

[CODE]
[Work thread Dec 16 07:46] Iteration: 16115000 / 88042723 [18.30%], ms/iter: 19.240, ETA: 16d 00:25
[Work thread Dec 16 07:48] Iteration: 16120000 / 88042723 [18.30%], ms/iter: 19.245, ETA: 16d 00:29
[Work thread Dec 16 07:50] Iteration: 16125000 / 88042723 [18.31%], ms/iter: 19.275, ETA: 16d 01:03
[Work thread Dec 16 07:51] Iteration: 16130000 / 88042723 [18.32%], ms/iter: 19.250, ETA: 16d 00:32
[Work thread Dec 16 07:53] Iteration: 16135000 / 88042723 [18.32%], ms/iter: 19.285, ETA: 16d 01:12
[/CODE]

and it was doing this right after I rebooted to relaunch it:

[CODE]
[Work thread Dec 16 08:04] Iteration: 16160000 / 88042723 [18.35%], ms/iter: 18.898, ETA: 15d 17:20
[Work thread Dec 16 08:06] Iteration: 16165000 / 88042723 [18.36%], ms/iter: 18.864, ETA: 15d 16:38
[Work thread Dec 16 08:08] Iteration: 16170000 / 88042723 [18.36%], ms/iter: 18.867, ETA: 15d 16:40
[Work thread Dec 16 08:09] Iteration: 16175000 / 88042723 [18.37%], ms/iter: 18.857, ETA: 15d 16:26
[/CODE]

I don't know if it's a peculiarity of mprime or of the AWS hypervisor that the timing can change slightly just from a reboot.

ATH 2018-12-16 22:34

1 Attachment(s)
I'm not sure what is going on now. Since 29.5 I have been redirecting mprime's output to a file, with output every 100K iterations, with HT. Now I stopped mprime, removed HT, and started it again, and it was again slower without HT.

But now when I start it again WITH HT, it is even slower, and I can see it is not using the same FFT parameters (Pass1, Pass2, clm); it is as if it is not reading the best option from gwnum.txt.

I have noticed that when you start a new instance, the speed at first is often not very good, and then after the first nightly autobench it gets a huge boost, because it finds the best FFT parameters. If my theory is correct, it will go back to the previous speed after the next nightly autobench; I think it only retains the best configuration because it is not restarted after the autobench?

Even the instances I just stopped briefly to change prime.txt, without removing or adding HT, are now much slower after restart.


When I look at the benchmarks I just did, at least one of the three FFT lengths is fastest with HT, but it is very close in each case:

[CODE]
FFTlen=4608K, Type=3, Arch=8, Pass1=192, Pass2=24576, clm=4 (1 core, 1 worker): 17.12 ms. Throughput: 58.40 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=192, Pass2=24576, clm=4 (1 core hyperthreaded, 1 worker): 21.19 ms. Throughput: 47.19 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=192, Pass2=24576, clm=2 (1 core, 1 worker): 17.94 ms. Throughput: 55.74 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=192, Pass2=24576, clm=2 (1 core hyperthreaded, 1 worker): 21.40 ms. Throughput: 46.72 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=192, Pass2=24576, clm=1 (1 core, 1 worker): 18.33 ms. Throughput: 54.54 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=192, Pass2=24576, clm=1 (1 core hyperthreaded, 1 worker): 21.93 ms. Throughput: 45.60 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=768, Pass2=6144, clm=4 (1 core, 1 worker): 17.16 ms. Throughput: 58.28 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=768, Pass2=6144, clm=4 (1 core hyperthreaded, 1 worker): 21.73 ms. Throughput: 46.02 iter/sec.
[COLOR="Blue"]FFTlen=4608K, Type=3, Arch=8, Pass1=768, Pass2=6144, clm=2 (1 core, 1 worker): 16.57 ms. Throughput: 60.34 iter/sec.[/COLOR]
FFTlen=4608K, Type=3, Arch=8, Pass1=768, Pass2=6144, clm=2 (1 core hyperthreaded, 1 worker): 16.92 ms. Throughput: 59.10 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=768, Pass2=6144, clm=1 (1 core, 1 worker): 17.00 ms. Throughput: 58.82 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=768, Pass2=6144, clm=1 (1 core hyperthreaded, 1 worker): 16.88 ms. Throughput: 59.25 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=1024, Pass2=4608, clm=2 (1 core, 1 worker): 17.02 ms. Throughput: 58.74 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=1024, Pass2=4608, clm=2 (1 core hyperthreaded, 1 worker): 19.65 ms. Throughput: 50.90 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=1024, Pass2=4608, clm=1 (1 core, 1 worker): 17.21 ms. Throughput: 58.10 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=1024, Pass2=4608, clm=1 (1 core hyperthreaded, 1 worker): 17.41 ms. Throughput: 57.43 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=1152, Pass2=4096, clm=2 (1 core, 1 worker): 17.32 ms. Throughput: 57.75 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=1152, Pass2=4096, clm=2 (1 core hyperthreaded, 1 worker): 20.00 ms. Throughput: 50.01 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=1152, Pass2=4096, clm=1 (1 core, 1 worker): 17.22 ms. Throughput: 58.06 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=1152, Pass2=4096, clm=1 (1 core hyperthreaded, 1 worker): 17.18 ms. Throughput: 58.21 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=1536, Pass2=3072, clm=2 (1 core, 1 worker): 17.71 ms. Throughput: 56.45 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=1536, Pass2=3072, clm=2 (1 core hyperthreaded, 1 worker): 23.28 ms. Throughput: 42.96 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=1536, Pass2=3072, clm=1 (1 core, 1 worker): 17.18 ms. Throughput: 58.21 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=1536, Pass2=3072, clm=1 (1 core hyperthreaded, 1 worker): 18.65 ms. Throughput: 53.63 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=2048, Pass2=2304, clm=1 (1 core, 1 worker): 17.58 ms. Throughput: 56.89 iter/sec.
FFTlen=4608K, Type=3, Arch=8, Pass1=2048, Pass2=2304, clm=1 (1 core hyperthreaded, 1 worker): 21.15 ms. Throughput: 47.27 iter/sec.


FFTlen=4704K, Type=3, Arch=8, Pass1=192, Pass2=25088, clm=4 (1 core, 1 worker): 18.18 ms. Throughput: 54.99 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=192, Pass2=25088, clm=4 (1 core hyperthreaded, 1 worker): 22.16 ms. Throughput: 45.12 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=192, Pass2=25088, clm=2 (1 core, 1 worker): 18.81 ms. Throughput: 53.16 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=192, Pass2=25088, clm=2 (1 core hyperthreaded, 1 worker): 22.56 ms. Throughput: 44.32 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=192, Pass2=25088, clm=1 (1 core, 1 worker): 19.21 ms. Throughput: 52.05 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=192, Pass2=25088, clm=1 (1 core hyperthreaded, 1 worker): 22.97 ms. Throughput: 43.53 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=896, Pass2=5376, clm=4 (1 core, 1 worker): 18.48 ms. Throughput: 54.11 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=896, Pass2=5376, clm=4 (1 core hyperthreaded, 1 worker): 23.16 ms. Throughput: 43.17 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=896, Pass2=5376, clm=2 (1 core, 1 worker): 17.47 ms. Throughput: 57.24 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=896, Pass2=5376, clm=2 (1 core hyperthreaded, 1 worker): 18.44 ms. Throughput: 54.22 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=896, Pass2=5376, clm=1 (1 core, 1 worker): 17.72 ms. Throughput: 56.43 iter/sec.
[COLOR="Blue"]FFTlen=4704K, Type=3, Arch=8, Pass1=896, Pass2=5376, clm=1 (1 core hyperthreaded, 1 worker): 17.33 ms. Throughput: 57.69 iter/sec.[/COLOR]
FFTlen=4704K, Type=3, Arch=8, Pass1=1344, Pass2=3584, clm=2 (1 core, 1 worker): 17.64 ms. Throughput: 56.68 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=1344, Pass2=3584, clm=2 (1 core hyperthreaded, 1 worker): 22.45 ms. Throughput: 44.55 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=1344, Pass2=3584, clm=1 (1 core, 1 worker): 17.43 ms. Throughput: 57.37 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=1344, Pass2=3584, clm=1 (1 core hyperthreaded, 1 worker): 18.11 ms. Throughput: 55.22 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=1536, Pass2=3136, clm=2 (1 core, 1 worker): 18.02 ms. Throughput: 55.49 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=1536, Pass2=3136, clm=2 (1 core hyperthreaded, 1 worker): 23.84 ms. Throughput: 41.95 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=1536, Pass2=3136, clm=1 (1 core, 1 worker): 17.42 ms. Throughput: 57.39 iter/sec.
FFTlen=4704K, Type=3, Arch=8, Pass1=1536, Pass2=3136, clm=1 (1 core hyperthreaded, 1 worker): 18.94 ms. Throughput: 52.79 iter/sec.



FFTlen=4800K, Type=3, Arch=8, Pass1=640, Pass2=7680, clm=4 (1 core, 1 worker): 18.07 ms. Throughput: 55.33 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=640, Pass2=7680, clm=4 (1 core hyperthreaded, 1 worker): 21.21 ms. Throughput: 47.15 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=640, Pass2=7680, clm=2 (1 core, 1 worker): 17.72 ms. Throughput: 56.42 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=640, Pass2=7680, clm=2 (1 core hyperthreaded, 1 worker): 17.81 ms. Throughput: 56.13 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=640, Pass2=7680, clm=1 (1 core, 1 worker): 18.02 ms. Throughput: 55.50 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=640, Pass2=7680, clm=1 (1 core hyperthreaded, 1 worker): 17.67 ms. Throughput: 56.58 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=768, Pass2=6400, clm=4 (1 core, 1 worker): 17.96 ms. Throughput: 55.69 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=768, Pass2=6400, clm=4 (1 core hyperthreaded, 1 worker): 21.37 ms. Throughput: 46.80 iter/sec.
[COLOR="Blue"]FFTlen=4800K, Type=3, Arch=8, Pass1=768, Pass2=6400, clm=2 (1 core, 1 worker): 17.34 ms. Throughput: 57.67 iter/sec.[/COLOR]
FFTlen=4800K, Type=3, Arch=8, Pass1=768, Pass2=6400, clm=2 (1 core hyperthreaded, 1 worker): 17.92 ms. Throughput: 55.79 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=768, Pass2=6400, clm=1 (1 core, 1 worker): 17.68 ms. Throughput: 56.56 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=768, Pass2=6400, clm=1 (1 core hyperthreaded, 1 worker): 17.44 ms. Throughput: 57.33 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=960, Pass2=5120, clm=4 (1 core, 1 worker): 18.66 ms. Throughput: 53.58 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=960, Pass2=5120, clm=4 (1 core hyperthreaded, 1 worker): 23.84 ms. Throughput: 41.95 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=960, Pass2=5120, clm=2 (1 core, 1 worker): 17.90 ms. Throughput: 55.85 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=960, Pass2=5120, clm=2 (1 core hyperthreaded, 1 worker): 19.09 ms. Throughput: 52.38 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=960, Pass2=5120, clm=1 (1 core, 1 worker): 18.04 ms. Throughput: 55.44 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=960, Pass2=5120, clm=1 (1 core hyperthreaded, 1 worker): 17.80 ms. Throughput: 56.16 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=1280, Pass2=3840, clm=2 (1 core, 1 worker): 18.14 ms. Throughput: 55.12 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=1280, Pass2=3840, clm=2 (1 core hyperthreaded, 1 worker): 22.58 ms. Throughput: 44.28 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=1280, Pass2=3840, clm=1 (1 core, 1 worker): 17.98 ms. Throughput: 55.63 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=1280, Pass2=3840, clm=1 (1 core hyperthreaded, 1 worker): 18.31 ms. Throughput: 54.63 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=1536, Pass2=3200, clm=2 (1 core, 1 worker): 18.52 ms. Throughput: 54.00 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=1536, Pass2=3200, clm=2 (1 core hyperthreaded, 1 worker): 24.42 ms. Throughput: 40.94 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=1536, Pass2=3200, clm=1 (1 core, 1 worker): 17.94 ms. Throughput: 55.75 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=1536, Pass2=3200, clm=1 (1 core hyperthreaded, 1 worker): 19.72 ms. Throughput: 50.72 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=1920, Pass2=2560, clm=2 (1 core, 1 worker): 19.58 ms. Throughput: 51.08 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=1920, Pass2=2560, clm=2 (1 core hyperthreaded, 1 worker): 26.11 ms. Throughput: 38.29 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=1920, Pass2=2560, clm=1 (1 core, 1 worker): 18.00 ms. Throughput: 55.55 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=1920, Pass2=2560, clm=1 (1 core hyperthreaded, 1 worker): 20.81 ms. Throughput: 48.06 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=3072, Pass2=1600, clm=1 (1 core, 1 worker): 20.47 ms. Throughput: 48.86 iter/sec.
FFTlen=4800K, Type=3, Arch=8, Pass1=3072, Pass2=1600, clm=1 (1 core hyperthreaded, 1 worker): 25.56 ms. Throughput: 39.13 iter/sec.
[/CODE]

GP2 2018-12-16 23:33

You could simply stop mprime, delete the gwnum.txt file, and then restart. I think this will force a new benchmark.

But in my case, it chose the same FFT length both times.

ATH 2018-12-17 09:03

I was correct in my assumption: after the nightly "magical" autobench, the iteration times are back to the previous levels.
Deleting gwnum.txt does not trigger the autobench for me; it only runs during the night. I wish there were a way to trigger it manually.

I left one instance running without HT, and after the autobench it was at the same speed as with HT before, so the difference is negligible:

[CODE]With HT:
[Work thread Dec 16 12:13:29] Iteration: 2900000 / 78669229 [3.686320%], roundoff: 0.285, ms/iter: 15.728, ETA: 13d 19:02
[Work thread Dec 16 12:39:37] Iteration: 3000000 / 78669229 [3.813435%], roundoff: 0.285, ms/iter: 15.662, ETA: 13d 17:12
[Work thread Dec 16 12:39:53] Gerbicz error check passed at iteration 3000000.
[Work thread Dec 16 12:39:56] M78669229 interim Wh8 residue 9283CB4ABE1B5251 at iteration 3000000
[Work thread Dec 16 13:06:05] Iteration: 3100000 / 78669229 [3.940549%], roundoff: 0.285, ms/iter: 15.665, ETA: 13d 16:49
[Work thread Dec 16 13:32:03] Iteration: 3200000 / 78669229 [4.067664%], roundoff: 0.285, ms/iter: 15.557, ETA: 13d 14:07
[Work thread Dec 16 13:57:58] Iteration: 3300000 / 78669229 [4.194778%], roundoff: 0.285, ms/iter: 15.526, ETA: 13d 13:02
[Work thread Dec 16 14:24:21] Iteration: 3400000 / 78669229 [4.321893%], roundoff: 0.285, ms/iter: 15.808, ETA: 13d 18:31

Without HT:
[Work thread Dec 17 05:53:44] Resuming Gerbicz error-checking PRP test of M78669229 using AVX-512 FFT length 4200K, Pass1=960, Pass2=4480, clm=2
[Work thread Dec 17 05:53:44] Iteration: 6711462 / 78669229 [8.531241%].
[Work thread Dec 17 06:16:52] Iteration: 6800000 / 78669229 [8.643786%], roundoff: 0.280, ms/iter: 15.654, ETA: 13d 00:30
[Work thread Dec 17 06:42:57] Iteration: 6900000 / 78669229 [8.770900%], roundoff: 0.280, ms/iter: 15.629, ETA: 12d 23:34
[Work thread Dec 17 07:08:59] Iteration: 7000000 / 78669229 [8.898015%], roundoff: 0.280, ms/iter: 15.593, ETA: 12d 22:25
[Work thread Dec 17 07:09:15] Gerbicz error check passed at iteration 7000000.
[Work thread Dec 17 07:09:18] M78669229 interim Wh8 residue 3B2CA4733B9BB11A at iteration 7000000
[Work thread Dec 17 07:35:23] Iteration: 7100000 / 78669229 [9.025129%], roundoff: 0.280, ms/iter: 15.633, ETA: 12d 22:47
[Work thread Dec 17 08:01:36] Iteration: 7200000 / 78669229 [9.152244%], roundoff: 0.280, ms/iter: 15.703, ETA: 12d 23:44
[Work thread Dec 17 08:27:30] Iteration: 7300000 / 78669229 [9.279358%], roundoff: 0.280, ms/iter: 15.523, ETA: 12d 19:44
[/CODE]

Prime95 2019-01-16 03:03

[QUOTE=ATH;503043]I'm not sure what is going on now. I have been outputting mprime output to a file since 29.5 with output every 100K iterations with HT. Now I stopped mprime and removed HT, and started it again, and it was again slower without HT.[/QUOTE]

Fixed in 29.5 build 8

lalera 2019-04-06 12:07

hi,
i have a question:
is anybody searching for wagstaff numbers above n=20000000 ?
i ask because i have started trial factoring and prp testing
above n=20M.

diep 2019-04-06 12:57

[QUOTE=ATH;502953]I found Hyperthreading to be slightly faster for PRP check on 29.5:
[url]https://mersenneforum.org/showpost.php?p=498644&postcount=28[/url][/QUOTE]

It's not only the software that matters - the clock speed of the CPU does too.

For my chess program Diep I noticed some years ago that hyperthreading's speedup was very variable and depended on the clock of the i7 CPUs.

Around 4.5 GHz, watercooled, it delivered a whopping 30% increase in NPS.

Of course the explanation for this might seem easy, yet if you try to capture the increase at higher GHz levels in a formula involving RAM speed and CPU speed, it turns out to be less trivial than one would guess beforehand.

diep 2019-04-06 13:11

[QUOTE=lalera;512848]hi,
i have a question:
is anybody searching for wagstaff numbers above n=20000000 ?
i do ask for this because i started trial factoring and prp testing
above n=20mio.[/QUOTE]

Lalera - it might be interesting to check out what happens with Wagstaff from a practical viewpoint.

Initially there are a lot of Wagstaff primes, then there is a HUGE gap up to 980k. Then there is one at 4.xM, and then suddenly a huge gap to 2 of them at 13M, very close to each other.

Now there could be one at 15M or 16M of course, or maybe even at 20M - yet there are basically 3 huge gaps already. Every single gap is a factor of 3 or more.

You might be better off generating a statistic.

If we have W = (2^p + 1) / 3

then you could try some statistical research on the factors of p-1 and p+1 and see whether the odds are better when these have a somewhat larger prime factor.

And then search around 3 times the current record exponent, which is 3 x 13M = 39M.

I wouldn't blink though if the next Wagstaff PRP turned out to be at, for example, 50M or even 70M. It is a very erratic sequence.

Why not find a heuristic first, and then - whatever p the heuristic says is most interesting - test those first?
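As a rough sketch of the kind of statistic diep suggests, one can look at the ratios between consecutive known Wagstaff prime/PRP exponents. The list below is the largest known exponents as of this thread; the script is only an illustration, not a heuristic:

```python
# Largest known Wagstaff prime/PRP exponents at the time of this thread.
known = [374321, 986191, 4031399, 13347311, 13372531]

# Ratio between consecutive record exponents -- the "gaps" discussed above.
for a, b in zip(known, known[1:]):
    print(f"{a:>9} -> {b:>9}  ratio {b / a:.2f}")
```

The ratios come out around 2.63, 4.09, 3.31 and 1.00 -- matching the "factor 3 or more" pattern described above, apart from the near-twin pair at 13M.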

GP2 2019-04-06 16:59

[QUOTE=lalera;512848]
is anybody searching for wagstaff numbers above n=20000000 ?
i do ask for this because i started trial factoring and prp testing
above n=20mio.[/QUOTE]

Not as far as I know.

I have a [URL="http://mprime.s3-website.us-west-1.amazonaws.com/wagstaff/"]mini-website for Wagstaff numbers[/URL], and would be very interested in collecting your factors and residues. I hope you are using the latest mprime version 29.7b1 and getting 2048-bit residues?


I have double-checked the range below 10M, and will be doing 10M to 14M at some point, but have paused it temporarily for a few months. Actually axn is currently working on 10.0M to 10.1M. The 10M to 14M range was covered at least partially by Ryan Propper in 2013, but he can't recall exactly which exponents he tested. My hunch is that there are no new Wagstaff primes below 14M.

In the meantime I am doing P−1 testing to B1=p/100, B2=p/5, which has currently reached exponents around 14.9M.

pinhodecarlos 2019-04-06 17:15

I don't mind leaving my laptop running a few candidates for this. I could work on the range 10.1M to 10.2M. axn, care to send me the candidates in prime95 worktodo.txt format please?

diep 2019-04-06 17:29

[QUOTE=pinhodecarlos;512881]I don't mind leaving my laptop running a few candidates for this. I could work on the range 10.1M to 10.2M.axn, care to send me the candidates in prime95 worktodo.txt format please?[/QUOTE]

Pinho - nice guy that you are - my tip is to start directly at 14M instead, rather than spend system time 'double checking' a range for which the initial 10M - 14M results aren't available anyway.

VBCurtis 2019-04-06 17:45

[QUOTE=pinhodecarlos;512881]I don't mind leaving my laptop running a few candidates for this. I could work on the range 10.1M to 10.2M.axn, care to send me the candidates in prime95 worktodo.txt format please?[/QUOTE]

If axn wishes to make this a shared project, I'm game to point a core or two toward completing a 100k range also. 10.2 to 10.3?

GP2 2019-04-06 17:48

[QUOTE=pinhodecarlos;512881]I don't mind leaving my laptop running a few candidates for this. I could work on the range 10.1M to 10.2M.axn, care to send me the candidates in prime95 worktodo.txt format please?[/QUOTE]

I guess I'm the person coordinating the ranges.

I already partially started the 10.1M range. I could give you 10.2M to 10.3M, or 14.0M to 14.1M or whatever. I'll send you a PM.

Again, please use the latest mprime 29.7b1 to ensure that Gerbicz error checking is working fully.

pinhodecarlos 2019-04-06 17:57

[QUOTE=GP2;512886]I guess I'm the person coordinating the ranges.

I already partially started the 10.1M range. I could give you 10.2M to 10.3M, or 14.0M to 14.1M or whatever. I'll send you a PM.

Again, please use the latest mprime 29.7b1 to ensure that Gerbicz error checking is working fully.[/QUOTE]

Please liaise the ranges also with Curtis and then send me by PM (10.2-10.3 or 10.3-10.4).

GP2 2019-04-06 19:15

[QUOTE=diep;512883]Pinho - as you nice guy - my tip is to start directly at 14M then and not waste system time 'double checking' without having the initial results 10M - 14M.[/QUOTE]

You are correct, it's not "double-checking" since any residues done in 2013 by Ryan Propper aren't available anymore.

Nonetheless, we have Gerbicz error checking now, which didn't exist in 2013. So a single PRP test should have very high reliability.
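For readers unfamiliar with the Gerbicz check: it applies to squaring chains such as the 3^(2^p) computed in these PRP tests. A toy sketch in Python follows; the names and the checkpoint interval L are ours, and a real implementation (gwnum) checks far less frequently, over error-prone floating-point FFT arithmetic, which is where the check earns its keep:

```python
def squarings_with_gerbicz(p, N, L=8):
    """Compute 3^(2^p) mod N while verifying the squaring chain with a
    Gerbicz-style product check (toy sketch, not mprime's internals)."""
    x = 3  # current value, 3^(2^i) mod N after i squarings
    d = 3  # product of checkpoints u_t = 3^(2^(t*L)), starting with u_0 = 3
    for i in range(1, p + 1):
        x = x * x % N  # one squaring; an undetected error here would
                       # corrupt x and break the identity below
        if i % L == 0:
            # Each checkpoint is the 2^L-th power of the previous one, so
            # d_t == d_{t-1}^(2^L) * 3 (mod N) must hold if all is well.
            if d * x % N != pow(d, 1 << L, N) * 3 % N:
                raise ArithmeticError("squaring chain corrupted")
            d = d * x % N
    return x

# Matches a direct modular exponentiation on a small Wagstaff-shaped modulus:
p = 29
print(squarings_with_gerbicz(p, 2**p + 1) == pow(3, 2**p, 2**p + 1))  # True
```

The check costs a few extra squarings per checkpoint but catches a corrupted iteration anywhere in the chain, which is why a single checked PRP test is considered reliable.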

GP2 2019-04-06 19:20

[QUOTE=pinhodecarlos;512888]Please liaise the ranges also with Curtis and then send me by PM (10.2-10.3 or 10.3-10.4).[/QUOTE]

I sent PMs to you and VBCurtis.

If anyone else wants to get involved, note that each 10.xM range contains about 2200 exponents, and each exponent takes about 4h 15m on a single core of a fast AVX-512 chip.

At the end, I wish to receive the full results.json.txt file with 2048-bit residues.
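A quick back-of-envelope from those figures (assuming, hypothetically, that one core works through a whole subrange serially):

```python
exponents = 2200     # candidates per 0.1M subrange (figure quoted above)
hours_each = 4.25    # one PRP test on a single fast AVX-512 core
core_days = exponents * hours_each / 24
print(f"about {core_days:.0f} core-days per 0.1M subrange")  # about 390
```

Roughly 390 core-days per subrange, which is consistent with the year-plus estimates that come up later in the thread for single-machine contributors.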

pinhodecarlos 2019-04-07 07:02

1 Attachment(s)
[QUOTE=GP2;512894]I sent PMs to you and VBCurtis.

If anyone else wants to get involved, note that each 10.xM range contains about 2200 exponents and each one takes about 4h 15m on a single core of a fast AVX-512 chip.

At the end, I wish to receive the full results.json.txt file with 2048-bit residues.[/QUOTE]

Hi there,


Just copying json results. Looks like everything is running as per your request.


[CODE][Sun Apr 07 01:42:32 2019]
{"status":"C", "k":1, "b":2, "n":10200067, "c":1, "known-factors":["3"], "worktype":"PRP-3", "res64":"FAEB27BC4F4F46F7", "residue-type":5, "res2048":"A007F90BE2ACEB58B6D8FBC9CEA33541B45A2493784EE9A17A9D4C891FCFFE8525E667676A46578303EED5DE34266B21D4A6D277DC3DA4CC1372522755F8DEEFF9CA9D7DAF30207456F1F38B1C47C1208B24B5C4526ACBF116512CAB38D663A70A8582EE612521D3C3A72AFEE398D1EBEB82E1CC2175BEF4A63C5BAC03E4D2F2BFFE5501A3ACFAA3FE91ECD40871888ED8685C7FFB15D1E43E0F80DF2268F487B8ACAC83BCD86B073A95B8DCBD4A4FF8E46D2E66BD4D8DAF2CA5E3797CBC13B702F7BA09BC173AB29E48BEE9C0998A97B752151222048EEAC2D04C6D32F154FA142173D77A13DDE72724FB0413AA77942C1F2E5668BEE3BCFAEB27BC4F4F46F7", "fft-length":589824, "error-code":"00000000", "security-code":"B018123D", "program":{"name":"Prime95", "version":"29.7", "build":1, "port":4}, "timestamp":"2019-04-07 00:42:32", "errors":{"gerbicz":0}, "computer":"HOMEPC"}
[Sun Apr 07 07:24:01 2019]
{"status":"C", "k":1, "b":2, "n":10200101, "c":1, "known-factors":["3"], "worktype":"PRP-3", "res64":"1FC199CD93AC66B5", "residue-type":5, "res2048":"4F4495430630AEEA75FF2D2BDF42C05557A48D0E0B7139B9C9F7F06EA1E360CC50E8AA96078EB9E286553D920841DED26A04E3F8382768A4C60F50E83E9037D2F3E8304A02729AEACE391FC277697B9511F37CB8391572636993DF9B2CBB014AFEE649251D360CA7FFEA7DF84E05D1940DEC95F6F7A75EF3F9632D8D7AC74FB07D201867486F8D3DB4F6FF2007360043B0DF044492E64A0AA3E709D22ED8CA2F0B12E327EAE4AC5DCA53B1213735F83065FB32381DDAE3A08F0278E17F484D721F165A3520E5D50049B523D8E8B68BF87EE31C7CE9CD0CD619A369D178CF0C986C21BDCCAEE6E396DECDA1B9CFA6A8C44EFDEDDC2D91AC351FC199CD93AC66B5", "fft-length":589824, "error-code":"00000000", "security-code":"B05C1281", "program":{"name":"Prime95", "version":"29.7", "build":1, "port":4}, "timestamp":"2019-04-07 06:24:01", "errors":{"gerbicz":0}, "computer":"HOMEPC"}
[/CODE]

lalera 2019-04-07 08:10

[QUOTE=GP2;512879]Not as far as I know.

I have a [URL="http://mprime.s3-website.us-west-1.amazonaws.com/wagstaff/"]mini-website for Wagstaff numbers[/URL], and would be very interested in collecting your factors and residues. I hope you are using the latest mprime version 29.7b1 and getting 2048-bit residues?


I have double-checked the range below 10M, and will be doing 10M to 14M at some point, but have paused it temporarily for a few months. Actually axn is currently working on 10.0M to 10.1M. The 10M to 14M range was covered at least partially by Ryan Propper in 2013, but he can't recall exactly which exponents he tested. My hunch is that there are no new Wagstaff primes below 14M.

In the meantime I am doing P−1 testing to B1=p/100, B2=p/5, which has currently reached exponents around 14.9M.[/QUOTE]

hi,
i use mfaktc v0.21 and llr v3.8.21
i do not know how to use mprime/prime95
for wagstaff prp-testing
but i would give it a try if you explain to me
what the input file should look like, and/or send me a small input file

GP2 2019-04-07 10:14

[QUOTE=lalera;512947]hi,
i use mfaktc v0.21 and llr v3.8.21
i do not know how to use mprime/prime95
for wagstaff prp-testing
but i would give it a try if you explain to me
what the input file should look like, and/or send me a small input file[/QUOTE]

You can find [URL="https://www.mersenneforum.org/showpost.php?p=508841&postcount=1"]the executable at the links in this post[/URL].

Create a new subdirectory or folder and decompress the zip file there.

In that same subdirectory or folder, create a file called [c]worktodo.txt[/c] with the following lines:

[CODE]
PRP=1,2,20000047,1,"3"
PRP=1,2,20000063,1,"3"
PRP=1,2,20000077,1,"3"
PRP=1,2,20000147,1,"3"
PRP=1,2,20000213,1,"3"
PRP=1,2,20000297,1,"3"
PRP=1,2,20000303,1,"3"
PRP=1,2,20000311,1,"3"
PRP=1,2,20000327,1,"3"
PRP=1,2,20000339,1,"3"
PRP=1,2,20000377,1,"3"
PRP=1,2,20000429,1,"3"
PRP=1,2,20000543,1,"3"
PRP=1,2,20000599,1,"3"
PRP=1,2,20000621,1,"3"
PRP=1,2,20000623,1,"3"
PRP=1,2,20000689,1,"3"
PRP=1,2,20000723,1,"3"
PRP=1,2,20000861,1,"3"
PRP=1,2,20000867,1,"3"
[/CODE]

This is just the first 20 exponents. You can delete any line where you have already found a factor for that exponent.

(If you run the PRP test on an exponent that already has factors, the 2048-bit residue is still useful information and has a very small chance of helping to find a very big PRP cofactor. However, I'm sure you want to concentrate on exponents that could be Wagstaff primes.)

I can provide a longer list of exponents. If you send me factors, I can filter out any exponents that have new factors discovered recently by you.

(The 20M range has not yet had any P−1 testing done on it, as far as I know. So additional factors might be discovered there in the future. The 14M range has had P−1 with B1=p/100, B2=p/5 done up to about 14.8M so far.)


Then run the program (mprime on Linux, or Prime95 on Windows).

If you use the Linux version, the first time you run it, it will first ask you a few questions for the setup. Hopefully those won't cause any difficulty. [B]You should answer "No" to the question about whether you want to use Primenet.[/B] Primenet only accepts Mersenne testing results, not Wagstaff. You can just keep the defaults for the questions about P−1 memory usage.

lalera 2019-04-07 11:45

1 Attachment(s)
[QUOTE=GP2;512950]You can find [URL="https://www.mersenneforum.org/showpost.php?p=508841&postcount=1"]the executable at the links in this post[/URL].

[...]

[B]You should answer "No" to the question about whether you want to use Primenet.[/B] Primenet only accepts Mersenne testing results, not Wagstaff. You can just keep the defaults for the questions about P−1 memory usage.[/QUOTE]

hi,
thank you!
at this time i have only one small range trial factored
here are the results for tf of the range
20190000 - 20200000 to 69 bit
as an attachment
and an input file for llr at
[url]http://deep.alotspace.com[/url]
i think it will take around 90 days
to prp-test this range with a sb2600k
prime95 seems to be faster than llr
running 1 worker with 4 threads
prime95: 1.82 ms/iter
llr: 2.12 ms/bit

pinhodecarlos 2019-04-07 12:51

Prime95 is always faster than LLR. Please be aware that the CPU generates more heat.

lalera 2019-04-07 14:33

hi,
i did the speed test a second time and let it run longer
sb2600k
n=20000047
running 1 worker with 4 threads
prime95: around 1.8 ms/iter
llr: around 1.8 ms/bit

paulunderwood 2019-04-07 17:45

[QUOTE=lalera;512966]hi,
i did the speed test a second time and let it run longer
sb2600k
n=20000047
running 1 worker with 4 threads
prime95: around 1.8 ms/iter
llr: around 1.8 ms/bit[/QUOTE]

Please use Prime95 as it gives the longer residues, which are useful in further testing.

:busy:

lalera 2019-04-07 20:19

hi,
i think that prime95 v29.7b1 is buggy and there is also a problem with llr v3.8.22
so i will wait for better code ... and do more tf with mfaktc v0.21

GP2 2019-04-07 22:59

[QUOTE=lalera;513004]hi,
i think that prime95 v29.7b1 is buggy and there is also a problem with llr v3.8.22
so i will wait for better code ... and do more tf with mfaktc v0.21[/QUOTE]

There are bugs for one specific type of work, but for Wagstaff PRP testing there is no problem. The Gerbicz error checking gives extra confidence.

pinhodecarlos 2019-04-08 07:51

Grand Prix 2,

How much sieve was done on this? At my pace my range will be completed within 450 days!

GP2 2019-04-08 08:41

[QUOTE=pinhodecarlos;513043]Grand Prix 2,

How much sieve was done on this? At my pace my range will be completed within 450 days![/QUOTE]

In the 10M range, for the remaining unfactored exponents, TF was done to 66 bits by axn, and P−1 was done by me to B1=p/200, B2=p/10.

It's certainly not as deep as for Mersenne, where large numbers of people have contributed to factoring. For Mersenne, the levels in the 10M range are typically TF=69 and P−1 to about B1=p/40, B2=p/2.

However, even if we did have TF and P−1 up to Mersenne levels, it would only eliminate a few percent of the remaining exponents, surely less than 10%. Finding factors gets exponentially harder at larger sizes, and most factors will simply remain out of reach.

So one way or another, there's no way to avoid doing most of those PRP tests. Progress on Mersenne is faster only because the work is split up among a much larger number of contributors.


However, with 2048-bit residues, if you do a PRP test and then a factor is found later, you can do a very quick Gerbicz cofactor-compositeness test on the new cofactor. So the PRP test is not wasted because at least there is a small chance of discovering a new very large PRP.

I find that with even a simple implementation using GMP, a Gerbicz cofactor-compositeness test is about 50 times faster than a PRP cofactor test using the latest mprime AVX-512 implementation. However, note that the Gerbicz test only removes the need to keep redoing PRP tests of new cofactors every time a new factor is discovered; you still have to do one initial PRP test and record the 2048-bit residue, because the Gerbicz test needs that 2048-bit residue as input.
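GP2's exact bookkeeping with truncated 2048-bit residues isn't spelled out here, but the identity behind a cofactor-compositeness test from a stored residue is simple: if r = 3^(N-1) mod N and f | N with cofactor c = N/f, then r mod c equals 3^(N-1) mod c for free; and if c were prime, Fermat's little theorem would force that value to equal 3^((N-1) mod (c-1)) mod c. A mismatch therefore proves c composite without redoing the long powering. A toy sketch with full (untruncated) residues:

```python
# Toy cofactor-compositeness check from a stored PRP residue.
# Full residues are used for clarity; the real scheme stores only the
# low 2048 bits, so the bookkeeping there differs.
p = 13
N = 2**p + 1                 # 8193 = 3 * 2731
r = pow(3, N - 1, N)         # stands in for the expensive, stored PRP residue

f = 3                        # a known factor
c = N // f                   # cofactor 2731 (prime in this toy case)

a = r % c                    # 3^(N-1) mod c, recovered for free from r
b = pow(3, (N - 1) % (c - 1), c)  # what Fermat demands if c is prime
print(a == b)                # True here; a mismatch would prove c composite
```

The comparison only costs a modular exponentiation with a c-sized exponent instead of a fresh full-length PRP test every time a new factor turns up.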

GP2 2019-04-08 14:58

[QUOTE=GP2;513047]In the 10M range, for the remaining unfactored exponents, TF was done to 66 bits by axn, and P−1 was done by me to B1=p/200, B2=p/10.[/QUOTE]

I should say, TF was done from 64 to 66 by axn, and below that by ATH and others. And below 64 bits, the factoring for each exponent stopped whenever a first factor was found, so there are small secondary factors remaining to be found.


If we look at the [URL="https://www.mersenne.org/primenet/"]Mersenne work distribution map[/URL], as of today the line for the 10M range shows:

[CODE]
10000000 61938 | 40593 [B]21345[/B]
[/CODE]

So for Mersenne, there are 21,345 unfactored exponents in the 10M range.

For Wagstaff, there are currently 22,248 unfactored exponents in the 10M range. And the 10.2M subset contains 2206 of them, very close to 10%.

So based on that, if we did factor Wagstaff exponents as thoroughly as Mersenne, we'd only find factors for about 4% of the currently unfactored Wagstaff exponents in the 10M range.


As you know, factoring gets exponentially harder as you increase bit-length (for TF) or non-smoothness (for P−1). For any exponential curve, there is only a very narrow transition zone where you go from "incredibly tiny" to "impossibly large".

The overwhelming majority of exponents are either trivial to factor or impossible to factor. All the years of efforts of Primenet and all the GHz-days thrown at TF and P−1 actually only made a difference for a very small subset of exponents. But of course, it's impossible to know in advance which exponents those are.
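The ~4% figure can be sanity-checked two ways: directly from the unfactored counts quoted above, and against the standard rule of thumb that the chance of a factor between bit levels a and b is roughly ln(b/a) (a heuristic, not an exact statement):

```python
import math

# Direct count from the post: Mersenne vs Wagstaff unfactored
# exponents in the 10M range.
direct = 1 - 21345 / 22248
print(f"from counts: {direct:.1%}")       # 4.1%

# Rule of thumb: probability of a factor between 2^66 and 2^69
# is roughly ln(69/66). This covers only the TF depth difference.
heuristic = math.log(69 / 66)
print(f"ln(69/66) = {heuristic:.1%}")     # 4.4%
```

The two estimates agree reasonably well, which supports the point that even Mersenne-depth factoring would eliminate only a small slice of the remaining Wagstaff exponents.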

pinhodecarlos 2019-04-08 16:28

Apologies but releasing my range. No way I’ll commit my laptop for more than one year on this.

VBCurtis 2019-04-08 16:43

I, too, bit off a little more than I expected; in my case, it'll take me a month to free up a few cores, and then ~3 months to do the work. I'll get mprime going on one core in a few days, and then 3-5 more in May (sadly, not all on one machine). Carlos, why don't we share one 100k range for 3 months or so, e.g. you do 10k and I do 90k?

GP2 2019-04-08 17:53

[QUOTE=pinhodecarlos;513106]Apologies but releasing my range. No way I’ll commit my laptop for more than one year on this.[/QUOTE]

[QUOTE=VBCurtis;513110]I, too, bit off a little more than I expected; in my case, it'll take me a month to free up a few cores, and then ~3 months to do the work.[/QUOTE]

Not a problem. Two thousand exponents is a very large number, even for relatively low exponent ranges.

Currently I don't have any setup for automated assignment of individual exponents. Maybe there's some way to adapt it as a BOINC project, but I have no idea how to go about doing that.

At some point, maybe a few months from now, I will resume my own testing using cloud resources.

pinhodecarlos 2019-04-08 17:54

1 Attachment(s)
[QUOTE=GP2;513117]Not a problem. Two thousand exponents is a very large number, even for relatively low exponent ranges.

Currently I don't have any setup for automated assignment of individual exponents. Maybe there's some way to adapt it as a BOINC project, but I have no idea how to go about doing that.

At some point, maybe a few months from now, I will resume my own testing using cloud resources.[/QUOTE]


Would you like to try [URL]https://boinc.tacc.utexas.edu/[/URL] ?


Attached my tested numbers.

DukeBG 2019-04-09 07:24

[QUOTE=GP2;513117]Maybe there's some way to adapt it as a BOINC project, but I have no idea how to go about doing that.[/QUOTE]

You would need to set up a boinc server with an address to give to users (this part is easy/straightforward) and then write a wrapper application, because I don't think mprime has boinc capabilities in it. It's basically a very simple program that translates boinc calls to run a task into the actual setup needed to launch the test, and then collects its result. Here's [URL="https://github.com/ibethune/llr_wrapper/"]an example[/URL] of the llr wrapper used in PrimeGrid. Then setting up that application on your server is a matter of editing XMLs.

Then you need to write a validator that checks the results. Decide whether you want double checking, with the validator comparing residues from two tests. Oh, and write work generation scripts or software. A lot of fun if you're a programmer! It's preferable to be familiar with php and mysql, because you'll likely have to deal with them for various tasks.

lalera 2019-04-09 13:07

hi,
a boinc project would be very nice!
but it is not an easy thing to set up
you could look at
[url]http://srbase.my-firewall.org/sr5/[/url]
[url]http://srbase.my-firewall.org/sr5/download/srbase-guide.pdf[/url]
i believe they use llr with a wrapper that comes with the boinc server software (not sure about this)
but if so, you would not have to develop your own wrapper or a natively boinc-integrated program

GP2 2019-04-10 05:37

[QUOTE=DukeBG;513216]A lot of fun if you're a programmer! It's very preferable to be familiar with php and mysql because you'll likely have to deal with them for various tasks.[/QUOTE]

I used to be a programmer, but I have gotten rusty and lazy. I know basic SQL and Python and C++03. But PHP doesn't have a great reputation, and I have never dealt with it before.

I'll give it some thought, but I don't know if I'll move forward with it soon, or at all.

lalera 2019-04-11 22:07

win10
tf with mfaktc v 0.21
tf from 1 to 64 bit
both at 240 ghzd/d
--
sb2600k
gtx580
driver 391.35
cuda 8
320w
--
2630v3
gtx1050ti
driver 417.22
cuda 10
140w
--

lalera 2019-04-15 08:01

1 Attachment(s)
hi,
here are the results for tf the range
20200000 - 20220000 to 69 bit

lalera 2019-05-20 09:49

1 Attachment(s)
hi,
here are the results for tf the range
n=20220000 to 20300000 to 69 bit

lalera 2019-05-22 17:52

hi,
i have a new gtx1660ti
win10
sb2600k
gtx1660ti
tf with mfaktc v 0.21
tf from 1 to 64 bit
930 ghzd/d
driver 419.67
cuda 10
190w

lalera 2019-05-29 22:26

1 Attachment(s)
hi,
here are the results for tf the range
n=20000000 to 20190000 to 69 bit

lalera 2019-06-05 17:35

1 Attachment(s)
hi,
here are the results for tf the range
n=19931573 to 20000000 to 69 bit

lalera 2019-06-29 20:00

hi,
i would like to reserve
W2147483647
for trial factoring from 82 bits upward

no factor for W2147483647 from 2^82 to 2^83 [mfaktc 0.21 barrett87_mul32_gs]
no factor for W2147483647 from 2^83 to 2^84 [mfaktc 0.21 barrett87_mul32_gs]

continuing

GP2 2019-06-29 21:24

[QUOTE=lalera;520344]hi,
i do like to reserve
W2147483647
for trialfactoring from 82 bit to ?
[/QUOTE]

OK, go for it. I am not working on it and I doubt anyone else is.

Please make sure that you are using a version of mfaktc that is compiled for Wagstaff and not accidentally for Mersenne instead. The easiest way to make sure is to do a quick TF of this exponent to 64 bits; it should take only a few seconds. For Mersenne there are two factors smaller than 64 bits; for Wagstaff there will be none.

You can also try W1073741827 if you like. I did it to TF=84 and then stopped.
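The sanity check suggested here follows from the criterion described earlier in the thread: q divides the Wagstaff number 2^p+1 exactly when 2^p ≡ -1 (mod q), whereas q divides the Mersenne number 2^p-1 when 2^p ≡ 1 (mod q). With small known factors (not W2147483647 itself), the distinction looks like this:

```python
def divides_wagstaff(q, p):
    # q | 2^p + 1  <=>  2^p ≡ -1 (mod q)
    return pow(2, p, q) == q - 1

def divides_mersenne(q, p):
    # q | 2^p - 1  <=>  2^p ≡ 1 (mod q)
    return pow(2, p, q) == 1

print(divides_mersenne(23, 11))   # True:  23 divides M11 = 2047
print(divides_wagstaff(23, 11))   # False
print(divides_wagstaff(59, 29))   # True:  59 divides 2^29 + 1
```

A Wagstaff build of mfaktc and a Mersenne build differ in exactly this final comparison, which is why a quick run against known small factors settles which one you have.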

lalera 2019-07-11 15:56

1 Attachment(s)
hi,
here are the results for prp testing wagstaff numbers
20190k to 20200k

GP2 2019-07-11 19:57

[QUOTE=lalera;521319]hi,
here are the results for prp testing wagstaff numbers
20190k to 20200k[/QUOTE]

Hi,

Can you provide the results.json.txt file, with the full 2048-bit residues? Thanks.

(You might wish to alter or disguise your Primenet username, which is the user="..." field)

lalera 2019-07-11 20:56

hi,
i did this range with llr v3.8.21/3.8.23 and i do not have a json file
i read here in the forum that the programmer of llr will
integrate gerbicz error checking, but it will take some time

paulunderwood 2019-07-11 21:07

[QUOTE=lalera;521336]hi,
i did this range with llr v3.8.21/3.8.23 and i do not have a json file
i read here in the forum that the programmer of llr will
integrate gerbicz error checking but it will need some time[/QUOTE]

You can use Prime95/mprime to crunch these numbers and get the desirable output.

GP2 2019-07-12 00:27

I don't think there's any advantage to using LLR. It uses the same underlying gwnum library as mprime. Maybe it can test some extra forms, but for plain old k*b^n+c PRP-3 I don't think there's any reason not to use mprime (the latest version, namely 29.8 b5).

LLR added the capability to do "Vrba-Reix" residues, but those are unproven. I don't know if there was a speed advantage to using them compared to PRP-3.

The LLR output in your file shows 64-bit "RES64" and "OLD64" values, I'm not sure how these differ, or if the first one is indeed a PRP-3 residue.

The range is 20.19M to 20.20M ... did you do 20.0M to 20.19M ? I think I saw only a bunch of trial-factoring results from you in the 20M ranges.

lalera 2019-07-12 01:48

hi,
here are the results for W2147483647

no factor for W2147483647 from 2^84 to 2^85 [mfaktc 0.21 barrett87_mul32_gs]
no factor for W2147483647 from 2^85 to 2^86 [mfaktc 0.21 barrett87_mul32_gs]

W2147483647 released

lalera 2019-07-12 20:24

[QUOTE=GP2;521367]I don't think there's any advantage to using LLR. It uses the same underlying gwnum library as mprime. Maybe it can test some extra forms, but for plain old k*b^n+c PRP-3 I don't think there's any reason not to use mprime (the latest version, namely 29.8 b5).

LLR added the capability to do "Vrba-Reix" residues, but those are unproven. I don't know if there was speed advantage to using them, compared to PRP-3 ?

The LLR output in your file shows 64-bit "RES64" and "OLD64" values, I'm not sure how these differ, or if the first one is indeed a PRP-3 residue.

The range is 20.19M to 20.20M ... did you do 20.0M to 20.19M ? I think I saw only a bunch of trial-factoring results from you in the 20M ranges.[/QUOTE]
hi,
i sometimes use a prpnet server, which cannot use prime95/mprime

GP2 2019-07-13 15:17

[QUOTE=GP2;521367]The LLR output in your file shows 64-bit "RES64" and "OLD64" values, I'm not sure how these differ, or if the first one is indeed a PRP-3 residue.[/QUOTE]

[QUOTE=lalera;521470]i do use sometimes a prpnet server that cannot use prime95/mprime[/QUOTE]

I did a quick check of the first exponent you did, 20190017. Neither of the Res64 values that LLR produced (RES64 and OLD64) is a type-5 PRP-3 residue.

Perhaps one of them is a type-1 PRP-3 residue, I'm not sure. Gerbicz testing for Wagstaff numbers only works with type-5 residues, not type-1.

Can you do a quick check with your LLR executable of a small exponent like 999983? Then we could compare the results with mprime and see if the residues really are type-1 PRP-3.

So anyways I think we can conclude that the 20.19M subrange almost certainly doesn't contain a Wagstaff prime. But the numerical residue values can't be added directly to my little database because that stores 2048-bit type-5 residues.
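
[Editor's note: the type-1 vs. type-5 distinction GP2 draws can be sketched as follows. This reflects my understanding of prime95's residue-type conventions (the authoritative definitions are in the mprime/gwnum source): for N = (2^p + 1)/3, type-1 is a plain Fermat test of N itself, while type-5 uses the exponent of the underlying k·b^n + c form, i.e. 2^p, which is a pure squaring chain and hence compatible with the Gerbicz error check. Function names here are inventions for illustration.]

```python
def wagstaff_prp3_residues(p):
    """Type-1 and type-5 PRP-3 residues of N = (2^p + 1)/3 (my reading
    of the prime95 conventions).

    Type-1: Fermat test of N itself,  res = 3^(N-1) mod N.
    Type-5: driven by the k*b^n+c exponent, here 2^p + 1 - 1 = 2^p,
            res = 3^(2^p) mod N -- pure squarings, Gerbicz-checkable.
    Returns both full residues; real software reports the low 64 bits.
    """
    N = (2**p + 1) // 3
    type1 = pow(3, N - 1, N)
    type5 = 3 % N
    for _ in range(p):          # p squarings: 3 -> 3^(2^p) mod N
        type5 = type5 * type5 % N
    return type1, type5

def res64(r):
    """Low 64 bits in the hex format prime95/LLR print as Res64."""
    return format(r % 2**64, '016X')
```

For a Wagstaff prime exponent such as p = 23, the type-1 residue is 1, while the type-5 residue is 9 (since 2^p = 3N - 1, so 3^(2^p) ≡ (3^(N-1))^3 · 9 ≡ 9 mod N) — which is why the two residue types of the same number are not interchangeable in a database.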

lalera 2019-07-13 16:20

[QUOTE=GP2;521537]I did a quick check of the first exponent you did, 20190017. Neither of the Res64 values that LLR produced (RES64 and OLD64) is a type-5 PRP-3 residue.

Perhaps one of them is a type-1 PRP-3 residue, I'm not sure. Gerbicz testing for Wagstaff numbers only works with type-5 residues, not type-1.

Can you do a quick check with your LLR executable of a small exponent like 999983? Then we could compare the results with mprime and see if the residues really are type-1 PRP-3.

So anyways I think we can conclude that the 20.19M subrange almost certainly doesn't contain a Wagstaff prime. But the numerical residue values can't be added directly to my little database because that stores 2048-bit type-5 residues.[/QUOTE]

hi,
with llr v3.8.23

(2^999983+1)/3 is not prime. RES64: 4C43A8FD104EC89D. OLD64: 607828060DA47DC2 Time : 303.538 sec.

GP2 2019-07-13 18:20

[QUOTE=lalera;521545]hi,
with llr v3.8.23

(2^999983+1)/3 is not prime. RES64: 4C43A8FD104EC89D. OLD64: 607828060DA47DC2 Time : 303.538 sec.[/QUOTE]

For this exponent mprime gave [c]"res64":"23A646DAB9B2B3C1", "residue-type":1[/c]

So LLR is producing neither type-1 PRP-3 nor type-5 PRP-3 residues.

lalera 2020-02-23 11:19

1 Attachment(s)
hi,
here are the results for wagstaff numbers
n=11980k to 12000k

diep 2020-02-23 13:48

[QUOTE=GP2;521553]For this exponent mprime gave [c]"res64":"23A646DAB9B2B3C1", "residue-type":1[/c]

So LLR is producing neither type-1 PRP-3 nor type-5 PRP-3 residues.[/QUOTE]

How about composite-27 type residue?

diep 2020-02-23 13:52

[QUOTE=lalera;538176]hi,
here are the results for wagstaff numbers
n=11980k to 12000k[/QUOTE]

Thanks for your fanaticism, Lalera - but why did you run a range already checked by Propper? Do you want to double-check it?

Why not factor exponent+1 instead, looking for factorisations of exponent+1 that give, for example, the strongest primes and have a known Mersenne or Wagstaff prime exponent as one of the factors - try searching for such exponents in, say, the range 19M-59M and hope you get lucky?

lalera 2020-02-23 15:10

hi,
i wanted to double check this range with prime95

lalera 2020-07-13 20:29

hi
for wagstaff numbers
i would like to reserve the range
16400000 to 16600000

diep 2020-07-13 21:01

[QUOTE=lalera;550488]hi
for wagstaff numbers
i would like to reserve the range
16400000 to 16600000[/QUOTE]

Why not take 16.4M to 164M? Then at least you would have some sort of decent odds of finding at least one.

The odds of a Wagstaff PRP in [n; 2n] seem to be diverging - not converging - and if I may make a blindfolded guess, there could be on average a gap of a factor of 3 between one group of PRPs and the next.
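
[Editor's note: for comparison with diep's guess, the standard Lenstra-Pomerance-Wagstaff heuristic predicts that the number of Mersenne-prime exponents up to x grows like (e^γ / log 2) · log x, so the expected count of exponents in any doubling interval [n, 2n] is a constant, about e^γ ≈ 1.78; a similar density is often conjectured for Wagstaff exponents, though this is heuristic, not proven. A minimal sketch:]

```python
import math

# Euler-Mascheroni constant
EULER_GAMMA = 0.5772156649015329

def expected_in_doubling():
    """Expected number of Mersenne-prime exponents in [n, 2n] under the
    Lenstra-Pomerance-Wagstaff heuristic:
    (e^gamma / log 2) * (log 2n - log n) = e^gamma, independent of n."""
    return math.exp(EULER_GAMMA)
```

Under that heuristic one would expect roughly 1.78 Wagstaff PRP exponents per doubling of the exponent range, so a 200k-wide slice near 16.4M covers only a small fraction of a doubling.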

lalera 2020-07-13 21:44

hi,
the range 16400000 to 16600000 i can do in my lifetime (in a few years)

lalera 2021-06-30 17:22

hi,
i am now cancelling my reservation for doing prp-tests on wagstaff numbers
n=16400000-16600000
because another person is doing that range (ryanp)
i will do more trial factoring on wagstaff numbers
[url]http://deep.alotspace.com/[/url]

