#45
"Mihai Preda"
Apr 2015
3×457 Posts
If anybody's curious to have a look, an implementation of k-selection is here:
https://github.com/preda/gpuowl/tree/master/kselect

A good property is that the "k" list does not depend much on the exponent (it does depend on B1). Thus the same set of Ks can be reused for exponents that are not far apart.
#46
"Mihai Preda"
Apr 2015
137110 Posts
I have a feeling this is a silly question... but here goes:
if m = 2^p - 1, what does it mean if I find k < m - 1 such that 3^k == 1 (mod m)?
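The smallest such k is the multiplicative order of 3 mod m. When m is prime, Fermat gives 3^(m-1) == 1 (mod m) and the order divides m - 1, so a k strictly below m - 1 just means 3 is not a primitive root mod m. A tiny brute-force sketch (plain Python, my own illustration, unrelated to any code discussed in the thread):

```python
def smallest_k(m):
    """Multiplicative order of 3 mod m: the smallest k >= 1
    with 3^k == 1 (mod m). Brute force; only for tiny m."""
    a, k = 3 % m, 1
    while a != 1:
        a = a * 3 % m
        k += 1
    return k

# m = 2^5 - 1 = 31 is prime, and 3 happens to be a primitive root
# mod 31, so no k below m - 1 works: the order is exactly 30.
print(smallest_k(31))    # 30

# m = 2^11 - 1 = 2047 = 23 * 89 is composite; the order divides
# lcm(23 - 1, 89 - 1) = 88, far below m - 1.
print(smallest_k(2047))  # 88
```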
#47
"Forget I exist"
Jul 2009
Dumbassville
26×131 Posts
#48
"Mihai Preda"
Apr 2015
137110 Posts
I did the very first empirical test (meaning, I checked that a known factor is detected). So far so good; it seems I didn't make some basic mistake in the math, yet.
#49
"Mihai Preda"
Apr 2015
137110 Posts
#50
"Mihai Preda"
Apr 2015
3×457 Posts
I have one question about the accumulation of the GCD:

We want to do GCD(a - 1, m), where "a" is some power of 3. But to save GCDs, we multiply a few of those (a - 1) together and do only one GCD for the set, e.g. GCD((a - 1) * (b - 1), m). Later on, we want to do GCD(c - 1, m). We could do it either alone, or we can accumulate it like GCD((a-1)*(b-1)*(c-1), m).

Is there a drawback in always accumulating? Or is there some advantage in "resetting the multiplication chain" to 1 as soon as a GCD is done (instead of keeping adding to it)? thanks!
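To make the trade-off concrete, here is a toy sketch (plain Python, my own illustration; the function name and batch size are made up) of both strategies. The accumulator is kept reduced mod m, so each extra term costs one modular MUL:

```python
from math import gcd

def batched_gcd(m, powers, reset_after_gcd, batch=3):
    """Multiply (a - 1) terms into an accumulator mod m and take
    one GCD with m every `batch` terms. Returns the proper factors
    the GCDs reveal. If reset_after_gcd, the accumulator restarts
    at 1 after each GCD; otherwise it keeps growing."""
    acc, found = 1, []
    for i, a in enumerate(powers, 1):
        acc = acc * (a - 1) % m
        if i % batch == 0:
            g = gcd(acc, m)
            if 1 < g < m:
                found.append(g)
            if reset_after_gcd:
                acc = 1
    return found

# Toy modulus m = 91 = 7 * 13; the "a" values are powers of 3 mod m.
m = 91
powers = [pow(3, k, m) for k in range(1, 13)]
print(batched_gcd(m, powers, reset_after_gcd=True))   # [13, 13]
print(batched_gcd(m, powers, reset_after_gcd=False))  # [13]
```

One drawback of never resetting shows up even in this toy run: since 3 has order 6 mod 91, the term 3^6 - 1 == 0 (mod 91) drives the accumulator to 0, and every later GCD degenerates to gcd(0, m) = m. Resetting after each GCD contains that damage to a single batch.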
#51
"Mihai Preda"
Apr 2015
101010110112 Posts
I started to implement a proof of concept. I've also developed a more balanced opinion: I now believe PRP-1 is no "silver bullet", but it may be useful.
The main problem lies in the low probability of P-1 in general of finding factors (of Mersenne numbers that have already been TFed). Also, the benefit from increasing the bounds of P-1 (both B1 and B2) diminishes as the bounds grow. And in general, the effort put into finding factors should be equal to the probability of finding a factor times the effort of one or two PRP/LL tests.

As an example, what I consider now a reasonable PRP-1 setup for a 90M exponent that has been TFed to 75 or 76 bits and had no P-1 done would be like this:
- Do a first-stage P-1 with B1=1'000'000. The cost of that is about 1.44M iterations, or 1.6% of the PRP(90M).
- Using the "base" resulting from the initial first-stage P-1, do the PRP-1 with P-1 testing at 800'000 points. Each "test" costs one MUL, and one MUL is about the same as 2 iterations, so this comes to 1.8% of PRP(90M).
- In addition to the MULs, also do one GCD every 1M iterations, so 90 GCDs, each taking about 40s of CPU time, so 1h of CPU time for the GCDs. Let's discount the cost of the GCDs in a first approximation, because if they are too expensive at 1 GCD per 1M, we can do one every 2M or as desired.

So, the additional cost vs. plain PRP comes to 1.6 + 1.8 == 3.4%.
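As a sanity check on the percentages, the arithmetic can be written out (plain Python; the figures are the ones quoted above, and the variable names are mine):

```python
prp_iters = 90_000_000     # one PRP test of a 90M exponent ~ 90M iterations
stage1_iters = 1_440_000   # first-stage P-1 with B1=1M, ~1.44M iterations
points = 800_000           # P-1 test points interleaved into the PRP
iters_per_mul = 2          # one MUL costs about the same as 2 iterations

stage1_pct = 100 * stage1_iters / prp_iters
points_pct = 100 * points * iters_per_mul / prp_iters
print(round(stage1_pct, 1))                # 1.6
print(round(points_pct, 1))                # 1.8
print(round(stage1_pct + points_pct, 1))   # 3.4
```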
On the benefit side we have the P-1 first stage with B1=1M, plus the coverage possible with 800k points tested:

586081 (100.00%) of 586081 primes between 1M and 10M
606028 (100.00%) of 606028 primes between 10M and 20M
290873 ( 49.53%) of 587252 primes between 20M and 30M
156352 ( 27.15%) of 575795 primes between 30M and 40M
110000 ( 19.38%) of 567480 primes between 40M and 50M
 86411 ( 15.40%) of 560981 primes between 50M and 60M
 70229 ( 12.63%) of 555949 primes between 60M and 70M
 56563 ( 10.26%) of 551318 primes between 70M and 80M
 49214 (  9.34%) of 527109 primes between 80M and 90M
 41528 ( 12.18%) of 340883 primes between 90M and 100M
covered under 100M: 2053279
First not covered: 20209667
Total covered: 3462833

This is strictly more than P-1 second-stage with B2=20M, because: all the primes between B1=1M and 20209667 are covered; some additional 800K primes below 100M are covered; and some other 1.5M primes above 100M are covered.

I'd like to ask somebody who knows about efficient implementation of second-stage P-1: how many MULs are done in second-stage with B1=1M and B2=20M? (for comparison)

So, if the probability of finding a factor with the PRP-1 example above, which is strictly more than prob(P-1(B1=1M, B2=20M)), is somewhere in the region of 3.4% (or as low as half that, if discounting two tests instead of one), then it may be worth it. The main contender is P-1 second-stage, which can be done instead, before the PRP. For this, I'd need to know how the cost of second-stage(B1=1M, B2=20M) compares with the 800k MULs of the PRP-1.

Other difficulties: double-checking PRP-1 would require:
- precise agreement on, or a specification of, the way the exponent (power-smooth(B1)) is built in the first-stage P-1. This would allow the double-checker to reconstruct the "base" given B1.
- the cost of the double-check would increase by 1.6%, because of re-doing the first-stage P-1 (on the plus side, this way the P-1 is double-checked as well).
#52
Romulan Interpreter
Jun 2011
Thailand
7×1,373 Posts
Quote:
------- ** there is no known exception, and some people believe that there are no 3-PSP Mersenne numbers, but this is not proved, and in theory exceptions are possible.
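The footnote's claim can at least be spot-checked for tiny exponents (plain Python, my own illustration; trial division keeps it honest only up to p = 31). A composite 2^p - 1 that nevertheless passed the base-3 Fermat test would be a 3-PSP, and none appears in this range:

```python
def is_3prp(m):
    # Fermat test base 3: a composite m passing this would be a 3-PSP
    return pow(3, m - 1, m) == 1

def is_prime(n):
    # Trial division; fine for n up to 2^31 - 1
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# For prime exponents p <= 31, the 3-PRP verdict on 2^p - 1 agrees
# exactly with actual primality; e.g. 2^11 - 1 = 2047 = 23 * 89 is
# composite and duly fails the base-3 test.
for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]:
    m = 2**p - 1
    assert is_3prp(m) == is_prime(m)
print("no 3-PSP Mersenne number for p <= 31")
```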
#53
"Mihai Preda"
Apr 2015
3·457 Posts
#54
"Mihai Preda"
Apr 2015
3·457 Posts
[posted below with the author's permission]
Quote:

Last fiddled with by preda on 2018-09-23 at 13:14
#55
"Mihai Preda"
Apr 2015
3×457 Posts
Quote:

(PRP-1 does multiple GCDs as it moves along, not a single one at the end.) I'll try to quantify the cost next, but it's more involved.