mersenneforum.org Prime representing constant

 2020-12-11, 00:38 #1 Viliam Furik     "Viliam Furík" Jul 2018 Martin, Slovakia 5·149 Posts Prime representing constant

Inspired by a recent Numberphile video (fairly short, 12 minutes long), I have started programming a Python program to compute the constant to as many places as possible, in at most 24 hours, which is what I consider a reasonable time frame for the development phase. So far, the longest completed run computed 3 million decimal digits and took roughly 6 hours to complete. The collected timing data suggests that if the runtime for n digits is x, then the runtime for a·n digits is about x·a² (quadratic scaling); on average it even seems slightly better than that, by a factor of about 1.1.

My program uses a slight variation of the algorithm used in the video (the infinite sum). Since we know in advance what the highest prime will be, we can truncate the sum there and add up the fractions. Factoring all the way back to the beginning gives something like this:

( ( ( (2-1)*2 + (3-1) )*3 + (5-1) )*5 + (7-1) )*7 + (11-1)... / (2*3*5*7*...)

If you look at the bracketed part, you will notice that for each prime p you add p-1 to the running result, multiply the whole thing by p, and then do the same for the next prime. At each step, you also multiply the denominator by the prime, continuing the primorial. When you are done, you simply perform a single division of the fraction to the desired number of digits. That should be a lot faster than the original algorithm, because you divide only once at the end, keeping the numbers you work with as integers until then. Plus, it opens the possibility of backup files, so an interrupted computation can be continued later.

I have also found that for n decimal digits, the highest prime I need is at least (1+x)*n/log10(e), where x tends to zero as n grows; for n = 25,000, x = 0.02 should be sufficient. That results from Chebyshev's function and some quick work with logarithms.
A friend of mine is going to help me convert it into a C program, hopefully using a simple FFT multiplication (the current Python program uses Karatsuba multiplication for simplicity). Until then, I will try whether NumPy speeds it up (it should, hopefully), and maybe I will get it to run on CUDA, through the Numba library. My question to Alex Yee is whether it is worth including in y-cruncher. Another question to anyone: apart from FFT and C, are there any other improvements that can be made? BTW, my program reads the primes from a pre-sieved file containing the primes below 1,000,000,000, enough to compute 400,000,000 digits. I found it is faster to read them from a file than to sieve them at the start of each run, i.e. the total time it takes to read them is lower than the time it takes to sieve them.
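The factored form described above can be sketched in a few lines of pure Python (a minimal illustration under my own naming; the author's actual gmpy2-based program is not shown in the thread). The accumulator follows the bracket pattern: multiply the running numerator and denominator by the previous prime, then add p-1, with one big scaled integer division at the very end:

```python
def primes_up_to(limit):
    """Simple sieve of Eratosthenes returning all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = b"\x00" * len(range(i * i, limit + 1, i))
    return [i for i in range(limit + 1) if sieve[i]]

def prime_constant(digits):
    """Compute the prime-representing constant 2.92005... as a string.

    Uses the factored (Horner-like) sum: integer numerator and
    primorial denominator, divided only once at the end.
    The prime bound 2.5 * digits follows the thread's choice
    (slightly above digits * ln(10) ~ 2.303 * digits).
    """
    num, den, prev = 0, 1, None
    for p in primes_up_to(int(2.5 * digits) + 10):
        if prev is not None:
            num *= prev    # multiply running numerator by previous prime
            den *= prev    # extend the primorial denominator
        num += p - 1       # add (p - 1), as in the bracketed expression
        prev = p
    # Single final division, scaled to the requested number of digits.
    s = str(num * 10 ** digits // den)
    return s[0] + "." + s[1:]

print(prime_constant(30))
```

The intermediate values stay integers throughout, so checkpointing (num, den, last prime) is enough to resume a run, exactly as the post suggests.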
2020-12-11, 00:52   #2
retina
Undefined

"The unspeakable one"
Jun 2006
My evil lair

2²·3·17·31 Posts

Quote:
 Originally Posted by Viliam Furik Another question to anyone, apart from FFT and C, are there any other improvements that can be done?
An obvious way to improve the speed is to use assembly.

But what is your goal here? Just to make it really fast, and nothing else? If all the effort is for a single run, it hardly seems worthwhile to go to a lot of trouble: you can let the computer do the hard work, a little less efficiently and for longer, rather than spend your own time while the computer sits idle.

 2020-12-11, 02:01 #3 Dr Sardonicus     Feb 2017 Nowhere 5326₁₀ Posts See also the arXiv submission A Prime-Representing Constant. This includes a proof that the constant is irrational.
2020-12-11, 02:05   #4
Viliam Furik

"Viliam Furík"
Jul 2018
Martin, Slovakia

5×149 Posts

Quote:
 Originally Posted by Dr Sardonicus See also the arxiv submission A Prime-Representing Constant. This includes a proof that the constant is irrational.
But the same paper includes an algorithm to calculate the constant to an arbitrary precision (bottom of page 1).

 2020-12-11, 04:24 #5 casevh   Dec 2005 10111₂ Posts Have you tried using the gmpy2 library? It provides access to the GMP library for multiple-precision arithmetic. It should be very easy to convert your existing program to use gmpy2. I maintain gmpy2. If you have any questions, let's start a new thread under Programming. casevh
2020-12-11, 11:17   #6
Viliam Furik

"Viliam Furík"
Jul 2018
Martin, Slovakia

5×149 Posts

Quote:
 Originally Posted by casevh Have you tried using the gmpy2 library? It provides access to the GMP library for multiple-precision arithmetic. It should be very easy to convert your existing program to use gmpy2. I maintain gmpy2. If you have any questions, let's start a new thread under Programming. casevh
I tried, but I couldn't install it; it errored out during installation. I will send you the details in a PM.

 2020-12-11, 17:24 #7 Stargate38     "Daniel Jackson" May 2011 14285714285714285714 2³·5·17 Posts I hope you release this to the public when it's finished. I would love to try it out.
2020-12-11, 18:29   #8
Viliam Furik

"Viliam Furík"
Jul 2018
Martin, Slovakia

745₁₀ Posts

Quote:
 Originally Posted by Stargate38 I hope you release this to the public when it's finished. I would love to try it out.
Sure, why not. It will most probably be open-source. I am now tweaking it with the help of casevh and his ultra-fast library.

2020-12-11, 20:15   #9
Dr Sardonicus

Feb 2017
Nowhere

14CE₁₆ Posts

Quote:
 Originally Posted by Viliam Furik But the same paper includes an algorithm to calculate the constant to an arbitrary precision (bottom of page 1).
From the definitions of g_n, and of f_1 as the limiting value of the g_n, and using Bertrand's Postulate as in the paper to estimate the "tail" of the infinite series for the limiting value f_1 (the constant 2.92005...), we obtain for n > 1

g_n < f_1 < g_n + 2/p_(n-1)#.

Using the Prime Number Theorem, this estimate indicates that, for a given positive integer K,

0 < f_1 - g_n < 10^(-K) if p_n > K*ln(10), approximately.

If this estimate is right, then for 3 million decimal digits' accuracy you need to take primes p up to about 6.9 million.

Of course, being able to maintain accuracy to K decimal digits has its own price if K is large.
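The tail bound above is easy to probe numerically (my own sketch, not from the thread): find the smallest prime whose primorial exceeds 2·10^K, i.e. where 2/p# first drops below 10^-K, and compare it with K·ln(10). For small K the required prime overshoots K·ln(10) noticeably, which is exactly the slack factor discussed below:

```python
import math

def smallest_prime_bound(K):
    """Smallest prime p whose primorial exceeds 2 * 10**K, i.e. the
    point where the tail bound 2/p# falls below 10^-K.
    Trial division is plenty for this one-off check."""
    target = 2 * 10 ** K
    primorial, p = 1, 1
    while primorial <= target:
        p += 1
        if all(p % q for q in range(2, int(p ** 0.5) + 1)):
            primorial *= p
    return p

# The needed prime vs. the PNT estimate K*ln(10); the ratio
# drifts toward 1 as K grows.
print(smallest_prime_bound(20), 20 * math.log(10))
```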

2020-12-11, 21:09   #10
Viliam Furik

"Viliam Furík"
Jul 2018
Martin, Slovakia

5·149 Posts

Quote:
 Originally Posted by Dr Sardonicus Using the Prime Number Theorem, this estimate indicates that, for a given positive integer K, 0 < f_1 - g_n < 10^(-K) if p_n > K*ln(10), approximately. If this estimate is right, then for 3 million decimal digits' accuracy you need to take primes p up to about 6.9 million.
Quote:
 Originally Posted by Viliam Furik I have also found, that for n decimal digits, the highest prime I need is at least (1+x)*n/log10(e), where x tends to zero as n grows – for n = 25000, x = 0.02 should be sufficient. That results from Chebyshev’s function and some quick mathematics using logarithms.
Yes, I know. It is right because the ratio of x to theta(x), the first Chebyshev function, tends to 1 as x goes to infinity. Since theta(x) is asymptotically equal to ln(x#), and ln(x#) = log10(x#) / log10(e), the quantity log10(x#) (roughly the number of digits we want to calculate) equals ln(x#) * log10(e), which is approximately x * log10(e). And because the value of x/theta(x) is still slightly above 1 even when x = 100,000, we multiply the expression by 1+x, where this x is some small number.
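A quick numeric check of this (my own sketch): the ratio x/theta(x) at x = 100,000 is indeed slightly above 1, which is why the small correction factor is needed at practical sizes.

```python
import math

def theta(x):
    """First Chebyshev function: sum of ln p over primes p <= x
    (naive trial division, adequate for a one-off check)."""
    return sum(math.log(p) for p in range(2, x + 1)
               if all(p % q for q in range(2, int(p ** 0.5) + 1)))

ratio = 100_000 / theta(100_000)
print(ratio)  # slightly above 1, as claimed
```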

Quote:
 Originally Posted by Dr Sardonicus Of course, being able to maintain accuracy to K decimal digits has its own price if K is large.
Quote:
 Originally Posted by Viliam Furik If you look at the bracket part, you will notice, that for each prime, you add it to the previous result, subtract 1, and then you multiply the whole thing by the prime, and then do the same for the next prime. At each step, you also multiply the denominator by the prime, continuing the primorial. When you are done, you simply divide the fraction into a number with a given number of digits.
That's why I am calculating the approximation by computing the numerator and denominator separately; I divide only after all the computation has been done.

2020-12-15, 01:02   #11
Viliam Furik

"Viliam Furík"
Jul 2018
Martin, Slovakia

5·149 Posts

Quote:
 Originally Posted by Stargate38 I hope you release this to the public when it's finished. I would love to try it out.
I have some great development news:

The program is now fast enough to compute 100,000,000 digits in less than 2 hours, and 1,000,000 digits in one second. Unfortunately, the runtime scaling of the generating algorithm is roughly quadratic, because it iterates through all primes below a maximum bound, which I set to 2.5 times the number of digits.

All times are measured for a single-threaded run because the multi-core parallelization hasn't been successfully implemented yet.

I am hugely thankful to casevh not only for his gmpy2 library but also for his help with programming. Without his contribution, I wouldn't have got this far.

The first publicly available version will be released when the program can reach 1 billion (10^9) decimal places in under a day, which, I hope, will be soon.

