mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Computer Science & Computational Number Theory (https://www.mersenneforum.org/forumdisplay.php?f=116)
-   -   Efficient storage of all primes up to some n (https://www.mersenneforum.org/showthread.php?t=24417)

 hansl 2019-05-09 22:21

Efficient storage of all primes up to some n

I've been thinking about some methods of storing a large sequence of primes in a compact manner, while requiring trivial computation to "extract" them.

For example, say I want to store all primes up to 2^32. The simplest way would be a file of 32bit words, which would take up ~775.5 MiB

[code]
primepi(2^32)*4/1024/1024.
%5 = 775.45250320434570312500000000000000000
[/code]

Now I'm just exploring some ideas using PARI to get some ballpark estimates. Some of the cases below would technically require some additional metadata, like a count of how many primes exist at each bitlength; however, this should be mostly negligible, so I'm ignoring that small added cost for simplicity.

So, one simple way to compact this would be to split this into separate chunks in byte ranges. Store all primes < 2^8 as one byte, then primes < 2^16 as 2 bytes, etc:
[code]\\PARI
? (primepi(2^8)+2*(primepi(2^16)-primepi(2^8))+3*(primepi(2^24)-primepi(2^16))+4*(primepi(2^32)-primepi(2^24)))/1024/1024.
%8 = 774.41827487945556640625000000000000000
[/code]
Ok, so we saved about 1 MiB, or 0.13%. Not great.

What if we take it further to bit-packing and splitting our primes among every power of 2?
[code]
? sum(n=1,32,n*(primepi(2^n)-primepi(2^(n-1))))/8/1024/1024.
%10 = 749.43927836418151855468750000000000000
[/code]
Now we save about 26MiB, or 3.35% over the original size. Marginally better.

Since all primes > 2 are odd, let's save that last bit and just store each value as x, where p = 2x+1:
[code]
? sum(n=1,32,(n-1)*(primepi(2^n)-primepi(2^(n-1))))/8/1024/1024.
%11 = 725.20638763904571533203125000000000000
[/code]
Now we save about 50MiB or 6.48%. Still not particularly compelling.

So, what if we just store the gap between adjacent primes? According to [url=https://en.wikipedia.org/wiki/Prime_gap]wikipedia[/url], the largest gap for p < 2^32 is 336. This can be held within 9 bits.
[code]
9*primepi(2^32)/8/1024/1024.
%14 = 218.09601652622222900390625000000000000
[/code]
With this we would compress the size by 71.87%

And again since all prime gaps after (2,3) are multiples of 2, we can actually shave off another bit per gap stored, assuming the extracting program handles the initial case 2,3 specially:
[code]
? 8*primepi(2^32)/8/1024/1024.
%15 = 193.86312580108642578125000000000000000
[/code]
This brings us down to exactly 25% of the original size (one byte per prime instead of four). Not bad!
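
A rough Python sketch of this halved-gap scheme (illustrative only; the function names are made up, and a small sieve stands in for a real prime generator):

```python
def primes_up_to(n):
    """Basic sieve of Eratosthenes; returns all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if sieve[i]]

def encode_halved_gaps(primes):
    """One byte per prime after 3, holding (gap to previous prime) / 2."""
    out = bytearray()
    for a, b in zip(primes[1:], primes[2:]):   # gaps starting from 3 -> 5
        half = (b - a) // 2
        assert half < 256                      # holds for all p < 2^32 (max gap 336)
        out.append(half)
    return bytes(out)

def decode_halved_gaps(data):
    """Reverse of encode; the initial primes 2 and 3 are special-cased."""
    primes, p = [2, 3], 3
    for half in data:
        p += 2 * half
        primes.append(p)
    return primes

ps = primes_up_to(100_000)
enc = encode_halved_gaps(ps)
assert decode_halved_gaps(enc) == ps
# 1 byte per prime beyond the first two, vs 4 bytes each in the flat file
```

Decoding really is just a shift and an add per prime, as described above.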


One limitation of this would be that extraction is now restricted to sequential only, as opposed to before, where we could have random access if needed. That is, unless we implement some sort of "keyframe" concept, where we periodically store the full prime in the file, costing additional metadata, but maybe useful in some scenario.
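
For what it's worth, the "keyframe" idea might look something like this in Python (a sketch with invented names; every STRIDE-th prime is stored in full, so looking up the nth prime decodes at most STRIDE - 1 gaps instead of the whole stream):

```python
def primes_up_to(n):
    """Basic sieve of Eratosthenes; returns all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if sieve[i]]

STRIDE = 16   # hypothetical keyframe interval; a real file would tune this

def build_stream(primes):
    """Halved-gap byte stream plus full-value keyframes every STRIDE primes."""
    gaps = bytes((b - a) // 2 for a, b in zip(primes[1:], primes[2:]))
    keyframes = primes[1::STRIDE]          # primes[1], primes[1+STRIDE], ...
    return gaps, keyframes

def nth_prime(n, gaps, keyframes):
    """0-indexed random access: decodes at most STRIDE - 1 gaps."""
    if n == 0:
        return 2
    k = (n - 1) // STRIDE                  # nearest keyframe at or before n
    p = keyframes[k]                       # = primes[1 + k*STRIDE]
    for i in range(k * STRIDE, n - 1):     # add the remaining (doubled) gaps
        p += 2 * gaps[i]
    return p

ps = primes_up_to(10_000)
gaps, kf = build_stream(ps)
assert all(nth_prime(i, gaps, kf) == ps[i] for i in range(len(ps)))
```

The keyframe table costs 4 bytes per STRIDE primes, so the overhead shrinks as STRIDE grows, at the price of a longer worst-case lookup.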

Anyways, we could extend the prime gap representation further, by splitting into ranges based on the bitsize of the maximum gaps in the range. I haven't yet attempted to estimate the size for such a scheme, but it might save a few more percent in the total size.

I am wondering if any sort of trial-factoring application would have a use for this kind of purely sequential representation that could be quickly streamed out? Also, since I'm guessing I'm not the first person in the world to think about such things, are there any names or references for these sorts of domain-specific prime compression concepts?

Any other thoughts on ways to further compress this information, while still limiting the extraction complexity to simple, fast operations? (All of the above ideas should be doable with essentially just binary shifts and additions.) Of course, LZ compression or something similar could probably compact things even further, but then it becomes a question of whether you are even saving any computation over sieving it yourself.

I used the limit of 2^32 as a simple example which is easy to calculate, but of course, would be interested in potentially storing more. Some exercises in case anyone else finds this interesting:
Give your best imagined storage scheme, and how compactly could all p < 2^64 be stored?
Or alternatively, what's the largest n where all p < n could be stored within 1GB, or even say 1TB, etc?

 hansl 2019-05-09 22:34

Another idea would be to create a large sieve bitfield, where each 1 bit represents a prime.
By once again ignoring the only even prime 2, we could cut the bitfield in half
2^32/8/1024/1024/2 = 256 MiB

Simple, but not as effective as the prime gap idea.
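
A bit-packed Python sketch of this (names invented; bit i of the bitmap represents the odd number 2i+1, so n = 2^32 needs 2^32/16 bytes = 256 MiB, matching the figure above):

```python
def odd_prime_bitmap(n):
    """Bit i <=> the odd number 2*i + 1 is prime; the prime 2 is implicit."""
    m = (n + 1) // 2                       # number of odd numbers <= n
    bmp = bytearray((m + 7) // 8)
    for i in range(1, m):                  # tentatively mark 3, 5, 7, ...
        bmp[i >> 3] |= 1 << (i & 7)
    for i in range(1, m):
        if (bmp[i >> 3] >> (i & 7)) & 1:   # i still marked => 2i+1 is prime
            p = 2 * i + 1
            if p * p > n:
                break
            for j in range(p * p, n + 1, 2 * p):        # odd multiples of p
                bmp[(j >> 1) >> 3] &= ~(1 << ((j >> 1) & 7))
    return bmp

def is_prime(k, bmp):
    """O(1) lookup: one shift, one mask."""
    if k == 2:
        return True
    if k < 3 or k % 2 == 0:
        return False
    i = k >> 1
    return bool((bmp[i >> 3] >> (i & 7)) & 1)

bmp = odd_prime_bitmap(100_000)
assert sum(is_prime(k, bmp) for k in range(100_001)) == 9592   # primepi(10^5)
```

Unlike the gap stream, this keeps O(1) random access, at 1 bit per odd number rather than ~8 bits per prime.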

 R. Gerbicz 2019-05-09 22:39

Wheel.

 hansl 2019-05-09 22:59

[QUOTE=R. Gerbicz;516312]Wheel.[/QUOTE]

I have read a little bit about wheel factorization, and was honestly thinking about mentioning that too. It generates sequences of "mostly prime" numbers, but I'm curious what a good scheme would be for storing pre-computed "misses" given some limited wheel size, and how to determine an optimal wheel size for a given n?

Also I would count the wheel itself as part of the pre-computed data.
I'll have to think about it some to determine what kind of optimal gains this would entail.

 a1call 2019-05-09 23:18

I believe Pari-GP stores the differences between primes, which you mentioned. Lossless compression such as zipping can perhaps outdo any math-based compression.

But all of these will be too slow to read for very large files, and likely less efficient than regenerating the primes from scratch. Another consideration is the enormous amount of storage space required.

There are some related estimates in this thread.

ETA
Here is another experiment that suggests there might not be much efficiency to be gained by storing primes rather than calculating them. But a significant factor in this is probably the very poor read/write performance of Pari-GP.

 a1call 2019-05-10 00:21

Some related insight can be found here:
[url]https://pari-users.pari.math.u-bordeaux1.narkive.com/3xJ5M9Mw/primelimit[/url]

 CRGreathouse 2019-05-10 00:47

[QUOTE=R. Gerbicz;516312]Wheel.[/QUOTE]

Using a mod-30 wheel the primes up to 2^32 take up about 136.5 MiB.

 hansl 2019-05-10 01:11

[QUOTE=CRGreathouse;516330]Using a mod-30 wheel the primes up to 2^32 take up about 136.5 MiB.[/QUOTE]

Could you explain how you got to that estimate? What exactly is being stored in this case?

 axn 2019-05-10 02:55

[QUOTE=hansl;516332]Could you explain how you got to that estimate? What exactly is being stored in this case?[/QUOTE]

There are 8 modular classes that don't have trivial factors (mod 30). These are {1,7,11,13,17,19,23,29}. So you use 8 bitmaps each of length 2^32/30 bits. The first bitmap will store all numbers of the form 1+30n -- bit 0 will be 1, bit 1 will be 31, bit 2 will be 61, etc..
Second bitmap will store 7+30n, and so on.

Total size for all these bitmaps would be (2^32/30)*8 bits, or 2^32/30 bytes, which is about 136.5 MiB.

To see if a number k > 5 and < 2^32 is prime, decompose k into
q = floor(k/30), r=k%30
if r is not one of {1,7,11,13,17,19,23,29}, it is not prime.
Else lookup qth bit from the appropriate bitmap.
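
In Python, that lookup could be sketched as follows (a toy version; the eight bitmaps are filled from a plain sieve here, and the names are invented):

```python
RESIDUES = (1, 7, 11, 13, 17, 19, 23, 29)       # units mod 30
RINDEX = {r: i for i, r in enumerate(RESIDUES)}

def build_wheel30(n):
    """Eight bitmaps; in bitmap i, bit q is set iff 30*q + RESIDUES[i] is prime."""
    sieve = bytearray([1]) * (n + 1)            # throwaway sieve used to fill the maps
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    nq = n // 30 + 1
    maps = [bytearray((nq + 7) // 8) for _ in RESIDUES]
    for k in range(7, n + 1):                   # every prime > 5 is a unit mod 30
        if sieve[k]:
            q, r = divmod(k, 30)
            maps[RINDEX[r]][q >> 3] |= 1 << (q & 7)
    return maps

def is_prime30(k, maps):
    """Lookup for k > 5; the caller special-cases 2, 3 and 5."""
    q, r = divmod(k, 30)
    if r not in RINDEX:                         # trivial factor of 2, 3 or 5
        return False
    return bool((maps[RINDEX[r]][q >> 3] >> (q & 7)) & 1)

maps = build_wheel30(100_000)
assert sum(is_prime30(k, maps) for k in range(7, 100_001)) == 9589  # primepi(10^5) - 3
```

The total bitmap size is 8 bits per 30 numbers, i.e. n/30 bytes, which for n = 2^32 gives the ~136.5 MiB above.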

 ATH 2019-05-10 05:53

You could also store them sequentially: byte 0 will be 1-29, byte 1 will be 31-59, byte n will be n*30 + {1,7,11,13,17,19,23,29}

If you go to a mod-210 wheel there are 48 modular classes, so 6 bytes for every 210 numbers, which stores 2^32 in about 117 MiB (122.7M bytes).

 hansl 2019-05-10 06:31

[QUOTE=axn;516338]There are 8 modular classes that don't have trivial factors (mod 30). These are {1,7,11,13,17,19,23,29}. So you use 8 bitmaps each of length 2^32/30 bits. The first bitmap will store all numbers of the form 1+30n -- bit 0 will be 1, bit 1 will be 31, bit 2 will be 61, etc..
Second bitmap will store 7+30n, and so on.

Total size for all these bitmaps would be (2^32/30)*8 bits or 2^32/30 byte which is about 136.5 MB.

To see if a number k > 5 and < 2^32 is prime, decompose k into
q = floor(k/30), r=k%30
if r is not one of {1,7,11,13,17,19,23,29}, it is not prime.
Else lookup qth bit from the appropriate bitmap.[/QUOTE]
Ah, thanks, I think I get it now.

I worked out some functions to help calculate storing up to various bit limits with different wheel sizes. I figured the modular classes for large wheels could be stored as gaps between classes, so many of the functions rely on looking up prime gaps from tables:
[code]
bits(n) = ceil(log(n)/log(2));

\\ Nth Primorial (product of the first n primes)
nthprimorial(n)=prod(i=1,n,prime(i));

\\ Primorial (product of all primes <= n)
primorial(n)=nthprimorial(primepi(n));

\\ number of primes < nthprimorial(n) (see https://oeis.org/A000849 )
nthprimorialpi(n)={my(a=[0,1,3,10,46,343,3248,42331,646029,12283531,300369796,8028643010,259488750744,9414916809095,362597750396740,15397728527812858,742238179058722891,40068968501510691894]);
if(n>#a-1,print("error, unknown for n>",#a-1),a[n+1]);}

\\ number of primes < primorial(n)
primorialpi(n)=nthprimorialpi(primepi(n));

\\ nth maximal gap between primes (see https://oeis.org/A005250 )
maxgap(n)={my(gaps=[1,2,4,6,8,14,18,20,22,34,36,44,52,72,86,96,112,114,118,132,148,154,180,210,220,222,234,248,250,282,288,292,320,336,354,382,384,394,456,464,468,474,486,490,500,514,516,532,534,540,582,588,602,652,674,716,766,778,804,806,906,916,924,1132,1184,1198,1220,1224,1248,1272,1328,1356,1370,1442,1476,1488,1510,1526,1530,1550]);
gaps[n]}

\\ max gap between primes for all p <= prime(n) (see https://oeis.org/A005669 )
maxgap_nthprime(n)={my(gap_prime_ns=[1,2,4,9,24,30,99,154,189,217,1183,1831,2225,3385,14357,30802,31545,40933,103520,104071,149689,325852,1094421,1319945,2850174,6957876,10539432,10655462,20684332,23163298,64955634,72507380,112228683,182837804,203615628,486570087,910774004,981765347,1094330259,1820471368,5217031687,7322882472,9583057667,11723859927,11945986786,11992433550,16202238656,17883926781,23541455083,28106444830,50070452577,52302956123,72178455400,94906079600,251265078335,473258870471,662221289043,1411461642343,2921439731020,5394763455325,6822667965940,35315870460455,49573167413483,49749629143526,1175661926421598,1475067052906945,2133658100875638,5253374014230870,5605544222945291,7784313111002702,8952449214971382,10160960128667332,10570355884548334,20004097201301079,34952141021660495,135962332505694894,160332893561542066,360701908268316580,408333670434942092,423731791997205041]);
for(i=1,#gap_prime_ns,if(gap_prime_ns[i]>n,return(maxgap(i-1)),if(gap_prime_ns[i]==n,return(maxgap(i)))));print("n=",n," max gap unknown for n > ",gap_prime_ns[#gap_prime_ns]," guess 2047?"); 2047}

\\ max prime gap for all primes <= n (see https://oeis.org/A000101 )
maxgap_n(n)={my(gap_primes=[3,5,11,29,97,127,541,907,1151,1361,9587,15727,19661,31469,156007,360749,370373,492227,1349651,1357333,2010881,4652507,17051887,20831533,47326913,122164969,189695893,191913031,387096383,436273291,1294268779,1453168433,2300942869,3842611109,4302407713,10726905041,20678048681,22367085353,25056082543,42652618807,127976335139,182226896713,241160624629,297501076289,303371455741,304599509051,416608696337,461690510543,614487454057,738832928467,1346294311331,1408695494197,1968188557063,2614941711251,7177162612387,13829048560417,19581334193189,42842283926129,90874329412297,171231342421327,218209405437449,1189459969826399,1686994940956727,1693182318747503,43841547845542243,55350776431904441,80873624627236069,203986478517457213,218034721194215521,305405826521089141,352521223451365651,401429925999155063,418032645936713497,804212830686679111,1425172824437700887,5733241593241198219,6787988999657779307,15570628755536096243,17678654157568189057,18361375334787046697]);
for(i=1,#gap_primes,if(gap_primes[i]>n,return(maxgap(i-1)),if(gap_primes[i]==n,return(maxgap(i)))));print("n=",n," max gap unknown for n > ",gap_primes[#gap_primes]," guess 2047?"); 2047}

\\ size of wheel factorization bitmap for storing all primes < 2^b, using a nthprimorial(n) base wheel
wheelbitmapMiB(b, n)=2.^(b-23)/nthprimorial(n)*eulerphi(nthprimorial(n));

\\ size of wheel itself for wheel factorization, using a nthprimorial(n) base wheel. Storing the gap between each modular class
wheelMiB(n)=(eulerphi(nthprimorial(n))-1)*bits(maxgap_n(nthprimorial(n))/2)/2.^23

\\ total size of wheel and bitmap for storing all primes < 2^b, using a nthprimorial(n) base wheel
wheeltotalMiB(b, n)=wheelMiB(n)+wheelbitmapMiB(b,n);

\\primorial_numeral(n)= \\ Convert to primorial number system, vector of digits
[/code]

And here's the results I got
[code]
? wheeltotalMiB(32,3)
%305 = 136.5333366711934407552083333
? wheeltotalMiB(32,4)
%306 = 117.0286050455910818917410714
? wheeltotalMiB(32,5)
%307 = 106.3901814021073378525771104
? wheeltotalMiB(32,6)
%308 = 98.21540557397352708326829421
? wheeltotalMiB(32,7)
%309 = 92.62673454240674648048412984
? wheeltotalMiB(32,8)
%310 = 91.91488279250780099343562301
? wheeltotalMiB(32,9)
%311 = 201.2229731159129979738501068
? wheeltotalMiB(40,8)
%312 = 22420.81124958361143420233199
? wheeltotalMiB(40,9)
%313 = 21559.29775874218603843453359
? wheeltotalMiB(40,10)
%314 = 24600.58342260438659563560571
? wheeltotalMiB(40,11)
%315 = 155250.8481199464342168732928
? wheeltotalMiB(48,10)
%316 = 5303727.482159470771217090061
? wheeltotalMiB(48,11)
%317 = 5264083.330768526806431184056
? wheeltotalMiB(48,12)
%318 = 10515841.27706460812349303656
? wheeltotalMiB(56,11)
%319 = 1313125198.888805102093294739
? wheeltotalMiB(56,12)
%320 = 1283029359.117316411105306225
? wheeltotalMiB(56,13)
%321 = 1493681169.718112853646773902
? wheeltotalMiB(64,12)
%322 = 327046489926.2217779744494826
? wheeltotalMiB(64,13)
%323 = 319311691479.0883192569093850
? wheeltotalMiB(64,14)
%324 = 322695438533.3537066932409118
[/code]
So, based on this code, the best case for 2^32 could be stored in 91.9 MiB, using an 8## wheel.

2^40 in ~21.1 GiB using a mod 9## wheel
2^48 in ~5.0 TiB using a mod 11## wheel
2^56 in ~1.2 PiB using a mod 12## wheel
2^64 in ~297 PiB using a mod 13## wheel

[quote=ATH]
If you go to a mod-210 wheel there are 48 modular classes so 6 bytes for every 210 numbers so 2^32 in 117 MB (122.7M bytes).
[/quote]

Oops, I just realized that I'm not calculating the number of modular classes correctly. I thought it would be primepi(nthprimorial(n))-n+1, but I guess that misses all the composites which should still be included? So actual estimates would be a bit (a lot?) higher than the above results.

Is there a formula to get the modular classes given the nth primorial?

edit: Fixed code and results, Thanks LaurV!

 LaurV 2019-05-10 06:37

[QUOTE=hansl;516362]Is there a formula to get the modular classes given the nth primorial?[/QUOTE]
eulerphi()

 hansl 2019-05-10 06:56

[QUOTE=LaurV;516363]eulerphi()[/QUOTE]

Excellent! I just updated my code and results.

 xilman 2019-05-10 08:21

[QUOTE=hansl;516309]So, what if we just store the gap between adjacent primes?

Anyways, we could extend the prime gap representation further, by splitting into ranges based on the bitsize of the maximum gaps in the range. I haven't yet attempted to estimate the size for such a scheme, but it might save a few more percent in the total size.[/QUOTE]
Huffman coding of gaps.
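
In Python, that might look like the toy sketch below (names invented; a real encoder would also have to persist the code table, and since small gaps dominate, the entropy per gap is well under the 8 bits of the flat scheme):

```python
import heapq
from collections import Counter

def primes_up_to(n):
    """Basic sieve of Eratosthenes; returns all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if sieve[i]]

def huffman_code_lengths(freqs):
    """Code length per symbol for a Huffman code built over `freqs`."""
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    depth = {s: 0 for s in freqs}
    tie = len(heap)              # unique tiebreaker so tuples never compare lists
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:        # every merge pushes member symbols one level deeper
            depth[s] += 1
        heapq.heappush(heap, (f1 + f2, tie, s1 + s2))
        tie += 1
    return depth

ps = primes_up_to(1_000_000)
halved = Counter((b - a) // 2 for a, b in zip(ps[1:], ps[2:]))
lengths = huffman_code_lengths(halved)
bits = sum(lengths[g] * c for g, c in halved.items())
print(bits / 8 / len(ps), "bytes per prime, vs 1.0 for flat 8-bit halved gaps")
```

Decoding is no longer a fixed shift-and-add per prime, though; a table-driven decoder would be needed.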

 ewmayer 2019-05-10 19:39

If one's aim is to support e.g. get_nth_prime(n) or "quickly enumerate all primes up to some limit"-style functionality, the way my code does this is:

1. Use a simple sieve to generate candidates not divisible by really small primes;
2. Feed surviving candidates - batches of 4-8 here tend to allow for good hiding of integer MUL latencies - to a base-2 Fermat PRP routine;
3. Do a fast lookup of the surviving candidates in a precomputed table of known base-2 Fermat pseudoprimes;
4. If a candidate does not appear in said table, it's prime.

So there's still a table involved, but it's *much* smaller than any needed for explicit prime storage. Here is the associated commentary in my code:
[code]/* There are 10403 ( = 101*103) base-2 Fermat pseudoprimes < 2^32.
[See the cited pseudoprime page for this and other tables related to the pseudoprimes of various types.]
We split these into 2 subsets -  those divisible by 3 or 5 and  those not.

Some General Observations:

The largest base-2 Fermat pseudoprime < 2^32 is 4294901761 = 193*22253377 = 2^32 - 2^16 + 1, which is reminiscent
of the 64-bit (genuine) prime 2^64 - 2^32 + 1 used by Nick Craig-Wood in his pure-integer Mersenne-mod DWT-based
convolution code.

The only twin pseudoprimes < 2^32 are 4369 (which = 17*257, the product of the Fermat primes F2 and F3) and 4371.
Similarly to F2*F3, the product F3*F4 = 16843009 is a base-2 Fermat pseudoprime. Other products of Fermat primes
are "near pseudoprimes" in the sense that their base-2 Fermat residue is a power of 2, e.g. for n=17*65537,
2^(n-1)%n = 2^16.

The largest (gap/2) between adjacent pseudoprimes (either in the merged length-10403 table or the f2psp[] one below;
the number is set by two adjacent elements of the latter) is 4199775, requiring 3 bytes to store, so no significant
compactification via a difference table is possible, as there is for the primes.
*/[/code]
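
The four steps above, sketched in miniature in Python (illustrative only: here the pseudoprime table is recomputed by brute force over a small range, whereas the real code ships it precompiled, and the small-prime sieve step is reduced to an oddness check):

```python
from bisect import bisect_left

def fermat2_prp(n):
    """Base-2 Fermat PRP test: 2^(n-1) == 1 (mod n)."""
    return pow(2, n - 1, n) == 1

def trial_division_prime(n):
    """Slow but certain primality check, used only to build the table."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

LIMIT = 100_000
# Table of odd base-2 Fermat pseudoprimes below LIMIT (341, 561, 645, ...)
F2PSP = [n for n in range(3, LIMIT, 2)
         if fermat2_prp(n) and not trial_division_prime(n)]

def is_prime(n):
    """For 1 < n < LIMIT: PRP test, then a table lookup to reject pseudoprimes."""
    if n % 2 == 0:
        return n == 2
    if not fermat2_prp(n):
        return False
    i = bisect_left(F2PSP, n)
    return i == len(F2PSP) or F2PSP[i] != n

assert F2PSP[:3] == [341, 561, 645]
assert sum(is_prime(k) for k in range(2, 10_000)) == 1229   # primepi(9999)
```

The table stays tiny (10403 entries below 2^32) because base-2 pseudoprimes are so much rarer than primes.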

 ldesnogu 2019-05-10 21:47

Given the speed computers have reached, I have stopped using any list; I just run Kim Walisch's [URL="https://github.com/kimwalisch/primesieve"]primesieve[/URL]. It can sieve up to 10^10 in [URL="http://ntheory.org/sieves/benchmarks.html"]less than 2s[/URL].

 ewmayer 2019-05-10 23:16

[QUOTE=ldesnogu;516428]Given the speed computers have reached, I have stopped using any list, and I just run Kim Walisch [URL="https://github.com/kimwalisch/primesieve"]primesieve[/URL]. It can sieve up to 10^10 in [URL="http://ntheory.org/sieves/benchmarks.html"]less than 2s[/URL].[/QUOTE]

Perhaps posting some of the key algorithmic methods said code uses to achieve these speeds would be useful to the OP (and the rest of us.)

 ldesnogu 2019-05-11 06:53

[QUOTE=ewmayer;516435]Perhaps posting some of the key algorithmic methods said code uses to achieve these speeds would be useful to the OP (and the rest of us.)[/QUOTE]
Kim wrote a page that will hopefully help:

[url]https://github.com/kimwalisch/primesieve/blob/master/doc/ALGORITHMS.md[/url]

 hansl 2019-05-11 09:00

[QUOTE=ewmayer;516419]If one's aim is to support e.g. get_nth_prime(n) or "quickly enumerate all primes up to some limit"-style functionality, the way my code does this is:

1. Use a simple sieve to generate candidates not divisible by really small primes;
2. Feed surviving candidates - batches of 4-8 here tend to allow for good hiding of integer MUL latencies - to a base-2 Fermat PRP routine;
3. Do a fast lookup of the surviving candidates in a precomputed table of known base-2 Fermat pseudoprimes;
4. If a candidate does not appear in said table, it's prime.

So there's still a table involved, but it's *much* smaller than any needed for explicit prime storage. Here is the associated commentary in my code:
[code]/* There are 10403 ( = 101*103) base-2 Fermat pseudoprimes < 2^32.
other tables related to the pseudoprimes of various types].
We split these into 2 subsets -  those divisible by 3 or 5 and  those not.

Some General Observations:

The largest base-2 Fermat pseudoprime < 2^32 is 4294901761 = 193*22253377 = 2^32 - 2^16 + 1, which is reminiscent
of the 64-bit (genuine) prime 2^64 - 2^32 + 1 used by Nick Craig-Wood in his pure-integer Mersenne-mod DWT-based
convolution code.

The only twin pseudoprimes < 2^32 are 4369 (which = 17*257, the product of the Fermat primes F2 and F3) and 4371.
Similarly to F2*F3, the product F3*F4 = 16843009 is a base-2 Fermat pseudoprime. Other products of Fermat primes
are "near pseudoprimes" in the sense that their base-2 Fermat residue is a power of 2, e.g. for n=17*65537,
2^(n-1)%n = 2^16.

The largest (gap/2) between adjacent pseudoprimes (either in the merged length-10403 table or the f2psp[] one below -
the number is set by two adjacent elements of the latter - is 4199775, requiring 3 bytes to store, so no significant
compactification via a difference table is possible, as there is for the primes.
*/[/code][/QUOTE]
Thanks for the info. Which program is this for? Sorry, I'm still a bit new here and haven't memorized who's who of program authors yet. (There's so many programs!)

It seems like a Fermat test would slow things down a bit, though? What does the complexity look like for a good powm algorithm?
I've mostly just played a little with GMP so far, and started picking up a bit of PARI lately.

[quote=ldesnogu]
Given the speed computers have reached, I have stopped using any list, and I just run Kim Walisch primesieve. It can sieve up to 10^10 in less than 2s.[/quote]
That program is pretty impressive! Honestly I don't really have a specific reason for trying to store all primes from my original post, just trying to get some ideas of roughly how much data we are talking, and learn about how various algorithms like this work. I should try actually implementing some form of wheel factorization/sieve and see how it compares.

One thing I have started wondering about, is if there is some way to further compact a very large wheel, maybe some sort of recursive implementation could use smaller wheels to save space in the larger one? Not sure if that makes any sense, still thinking it over...

 hansl 2019-05-11 09:40

[QUOTE=ewmayer;516419][code]/* There are 10403 ( = 101*103) base-2 Fermat pseudoprimes < 2^32.
other tables related to the pseudoprimes of various types].[/code][/QUOTE]
The link is dead now, btw. I was able to view it in wayback machine though.

Playing around some more in PARI, I decided to write a pseudoprime counting function:
[code]
fermat_test(b,p)=Mod(b,p)^(p-1)==1

pseudoprimepi(b,limit)={
my(prev=4,count=0);
forprime(p=3,limit,forstep(n=prev,p-1,2,if(fermat_test(b,n),count++));prev=p+2);
count
}
[/code]

Results:
[code]
? pseudoprimepi(2,2^32)
%70 = 10403
? ##
*** last result computed in 32min, 5,043 ms.
[/code]
Well, at least I got the same value as above. But man, PARI is slow for enumerating primes like this.

Note: it skips even numbers, so the count is not technically correct when counting base 3, for example. Here's a version that should count all of them, but is slightly slower:
[code]
pseudoprimepi(b,limit)={
my(prev=4,count=0);
forprime(p=3,limit,for(n=prev,p-1,if(gcd(b,n)==1&&fermat_test(b,n),count++));prev=p+1);
count
}
[/code]
The "b" in the gcd call could be replaced with a wheel primorial (like 30) to narrow down the relevant candidates for the method that ewmayer talks about.
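
For comparison, the same count runs much faster outside PARI; here's a Python sketch of a sieve-based equivalent (only checked against the small known counts here, not the 2^32 figure):

```python
def count_f2psp(limit):
    """Count odd base-2 Fermat pseudoprimes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    # an odd composite passing the Fermat test is a pseudoprime
    return sum(1 for n in range(3, limit + 1, 2)
               if not sieve[n] and pow(2, n - 1, n) == 1)

assert count_f2psp(1000) == 3        # 341, 561, 645
assert count_f2psp(10_000) == 22
```

The dominant cost is one modular exponentiation per odd composite, so a real implementation would do the cheap sieve/wheel filtering first, exactly as in ewmayer's step 1.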

 Dr Sardonicus 2019-05-12 13:25

[QUOTE=ewmayer;516419]If one's aim is to support e.g. get_nth_prime(n) or "quickly enumerate all primes up to some limit"-style functionality, the way my code does this is:

1. Use a simple sieve to generate candidates not divisible by really small primes;
2. Feed surviving candidates - batches of 4-8 here tend to allow for good hiding of integer MUL latencies - to a base-2 Fermat PRP routine;
3. Do a fast lookup of the surviving candidates in a precomputed table of known base-2 Fermat pseudoprimes;
4. If a candidate does not appear in said table, it's prime.

So there's still a table involved, but it's *much* smaller than any needed for explicit prime storage. Here is the associated commentary in my code:
[code]/* There are 10403 ( = 101*103) base-2 Fermat pseudoprimes < 2^32.
other tables related to the pseudoprimes of various types].
The only twin pseudoprimes < 2^32 are 4369 (which = 17*257, the product of the Fermat primes F2 and F3) and 4371.
<snip>
*/[/code][/QUOTE]

The paper [url=https://www.maths.lancs.ac.uk/jameson/carpsp.pdf]Carmichael numbers and pseudoprimes[/url] may be instructive.

In particular, by the result labelled 2.1, both 17 and 257 are == 1 (mod 16) and both divide 2^16 - 1, so their product is automatically 2-psp.

Statistics for various types of pseudoprimes up to 10[sup]k[/sup] for k = 9 to 14 are compiled [url=http://ntheory.org/pseudoprimes.html]here[/url].

Tables of base-2 pseudoprimes up to 2[sup]64[/sup] (list of numbers, list of factorizations, annotated list with factorizations and indicating Carmichael numbers, all compressed, .txt.bz2) may be downloaded from [url=http://www.cecm.sfu.ca/Pseudoprimes/index-2-to-64.html]here[/url].

 ewmayer 2019-05-12 19:26

FYI, It seems the pseudoprime page I linked in my code comment has moved to [url]http://www.numericana.com/answer/pseudo.htm#pseudoprime[/url] .

 Dr Sardonicus 2019-05-12 20:39

[QUOTE=ewmayer;516556]FYI, It seems the pseudoprime page I linked in my code comment has moved to [url]http://www.numericana.com/answer/pseudo.htm#pseudoprime[/url] .[/QUOTE]This site is well worth a visit for the cross-references between pseudoprimes to different bases and Carmichael numbers; also links to pages about Fermat, Poulet, etc. [Nit-pick: tables of base-2 psp's (as opposed to how many there are with up to a given number of decimal digits) not present.]

Other tables of various sorts of pseudoprimes up to 10[sup]14[/sup] are readily accessible (Just click on "data" in the column headings) at

[url=http://ntheory.org/pseudoprimes.html]Pseudoprime Statistics, Tables, and Data[/url] (Fermat, Miller-Rabin, Lucas, Fibonacci, Pell, Frobenius, Baillie-PSW).

Note the small number of Perrin pseudoprimes.

 ewmayer 2019-05-12 21:11

1 Attachment(s)
[QUOTE=Dr Sardonicus;516559]This site is well worth a visit for the cross-references between pseudoprimes to different bases and Carmichael numbers; also links to pages about Fermat, Poulet, etc. [Nit-pick: tables of base-2 psp's (as opposed to how many there are with up to a given number of decimal digits) not present.][/QUOTE]

I've attached the header file from my Mlucas source archive which has the base-2 psp's < 2^32, in the 2-table form (those divisible by 3 or 5 and those not) in which I use them.

 All times are UTC.