20210520, 02:17  #13  
"Rashid Naimi"
Oct 2015
Remote to Here/There
2197_{10} Posts 
I don't think things are as gloomy as they seem. Probably the only obstacle to faster computers is a lack of market/general-use demand. 128-bit computing is overdue in my opinion. And we don't have to stop there.
Last fiddled with by a1call on 20210520 at 02:18 

20210520, 02:42  #14  
Undefined
"The unspeakable one"
Jun 2006
My evil lair
6,329 Posts 
A distributed system could be many orders of magnitude faster without the need to increase any word sizes. Each memory cell could be an embedded processor. It is likely that synapses in the brain operate in a similar fashion, IMO. Content-addressable memory (CAM), instead of positional addressing, is already used in caches, so it just needs to be extended from there. 
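As a toy illustration of the idea (a sketch only, assuming nothing about real hardware, where every cell performs the comparison in parallel), content-addressable lookup answers "which addresses hold this value?" rather than "what value is at this address?":

```python
# Toy model of content-addressable memory (CAM). Real CAM hardware
# compares all cells against the query simultaneously; this sketch
# simulates that behaviour with a linear scan.

def cam_lookup(memory, query):
    """Return the addresses of all cells whose contents equal `query`."""
    return [addr for addr, value in enumerate(memory) if value == query]

memory = [7, 42, 13, 42, 99]
print(cam_lookup(memory, 42))   # content addressing: addresses holding 42 -> [1, 3]
print(memory[2])                # positional addressing: contents at address 2 -> 13
```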

20210520, 11:08  #15  
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2×3^{3}×113 Posts 
Mprime and Mlucas already use SIMD instructions up to AVX-512 on processors that support them, and have for years. https://en.wikipedia.org/wiki/AVX512 https://www.mersenneforum.org/showth...d+multiply+add

What would commercially drive design and production of wider native word length support? Full 128-bit native math instructions do not seem necessary in engineering, or most of finance, although they may be useful in cryptography. Signed 64-bit ints are sufficient to represent the US national debt to the penny, up to ~3000 times its current size.

Commercial processors hit clock-rate stagnation ~15 years ago. GaAs has been tried as an alternative to silicon. Lately the buzz is about graphene. What's kept processor power moving is dual-socket or quad-socket boards, multiple memory channels, and many cores per socket, along with increasing SIMD width and extensive hardware caching. GPUs have hundreds or thousands of cores and multiple types of memory.

Going to higher exponents means not only larger FFT lengths to process, but intrinsically sequential processing of more iterations for LL or PRP tests. Going to a 128-bit or wider memory address space or a 128-bit OS won't help that.

Last fiddled with by kriesel on 20210520 at 11:13 
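The 64-bit headroom claim is easy to sanity-check; a quick sketch, assuming a US national debt of roughly $28 trillion (its approximate mid-2021 level):

```python
# How many multiples of the US national debt, counted in pennies,
# fit in a signed 64-bit integer?
INT64_MAX = 2**63 - 1                      # largest signed 64-bit integer
debt_pennies = 28_000_000_000_000 * 100    # ~$28 trillion in pennies (assumed figure)

headroom = INT64_MAX // debt_pennies       # how many times the debt still fits
print(headroom)  # on the order of 3000
```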

20210520, 18:00  #16  
Nov 2020
Massachusetts, USA
25_{10} Posts 
Four Mersenne prime exponents read as calendar dates:

107 (January 07)
127 (January 27)
521 (May 21)
607 (June 07)

As another bonus, two of these four dates have been the discovery dates of our beloved Mersenne primes:

M49* = M(74207281), discovered January 07, 2016
M37 = M(3021377), discovered January 27, 1998 
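The date reading can be checked mechanically; a small sketch, splitting a 3-digit exponent into a 1-digit month and a 2-digit day (the exponents and dates are those from the post above):

```python
import datetime

# The 3-digit Mersenne prime exponents mentioned above.
exponents = [107, 127, 521, 607]

def as_date(p):
    """Read a 3-digit exponent as month (first digit) + day (last two), if valid."""
    month, day = divmod(p, 100)
    try:
        return datetime.date(2021, month, day)  # the year is arbitrary
    except ValueError:
        return None

for p in exponents:
    d = as_date(p)
    print(p, "->", d.strftime("%B %d") if d else "not a date")
```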

20210521, 20:52  #17  
"David Kirkby"
Jan 2021
Althorne, Essex, UK
2^{6}·7 Posts 
As you say, the data set is very small. I can well believe what axn says when he says it is pure chance. The GIMPS server has finally woken up to the fact that my dual-Xeon machine is fairly quick, and is now giving me some category 0 exponents to test. But I've actually started testing some exponents by picking ones which are twin primes, with the twin below the Mersenne exponent. This may be either:

* A waste of time, as category 0 exponents are smaller than the category 3 ones I can get by manual allocation.
* Worthwhile, if there is a connection.

Manual exponents were around the 108 million mark, whereas the category 0/1 exponents are around 103 million. I think the time to complete the PRP test rises approximately as the square of the exponent, so the manual ones take about 15% longer. I think there's at least a 15% chance that there's a connection between twin and Mersenne primes, so perhaps spending 15% longer on the calculation is not so stupid.

Dave

Last fiddled with by drkirkby on 20210521 at 20:53 
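One way to generate such candidates is a direct primality filter. A sketch only, assuming nothing about GIMPS assignment rules (a real selection would also need each exponent to be unfactored and untested); the Miller-Rabin bases used here are deterministic for n below about 3.2e9, which covers current exponents:

```python
def is_prime(n):
    """Deterministic Miller-Rabin for n < 3,215,031,751 (bases 2, 3, 5, 7)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def twin_exponents(lo, hi):
    """Primes p in [lo, hi) whose lower twin p-2 is also prime."""
    return [p for p in range(lo | 1, hi, 2) if is_prime(p) and is_prime(p - 2)]

print(twin_exponents(100, 200))  # -> [103, 109, 139, 151, 181, 193, 199]
```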

20210521, 21:09  #18  
"Robert Gerbicz"
Oct 2005
Hungary
1,531 Posts 
And likely there are infinitely many p for which Mp is a Mersenne prime and p+2 is also prime [the same is true for p-2]. The heuristic calculation, using the known(!) Mersenne exponents, for the expected number of twin primes on each side: Code:
v=[2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281, 3217, 4253, 4423, 9689, 9941, 11213, 19937, 21701, 23209, 44497, 86243, 110503, 132049, 216091, 756839, 859433, 1257787, 1398269, 2976221, 3021377, 6972593, 13466917, 20996011, 24036583, 25964951, 30402457, 32582657, 37156667, 42643801, 43112609, 57885161, 74207281, 77232917, 82589933];
sum(i=2,length(v),1/log(v[i])*2*0.66016)
%3 = 10.132893358113782274515813355313262115
To finish the "proof": we are expecting that the n-th Mersenne prime exponent satisfies exp(c0*n) < p < exp(c1*n) with c0 > 0, and here the sum of 1/log(exp(c*n)) ~ sum of 1/n is a divergent series. 
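The same heuristic sum can be reproduced outside PARI/GP; a sketch in Python, using the exponent list from the post:

```python
import math

# Known Mersenne prime exponents (as of May 2021), from the post above.
v = [2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203,
     2281, 3217, 4253, 4423, 9689, 9941, 11213, 19937, 21701, 23209, 44497,
     86243, 110503, 132049, 216091, 756839, 859433, 1257787, 1398269,
     2976221, 3021377, 6972593, 13466917, 20996011, 24036583, 25964951,
     30402457, 32582657, 37156667, 42643801, 43112609, 57885161, 74207281,
     77232917, 82589933]

# Twin-prime heuristic: a prime p has probability ~ 2*C2/log(p) of having
# a twin on a given side, with the twin-prime constant C2 ~ 0.66016.
expected_twins = sum(2 * 0.66016 / math.log(p) for p in v[1:])  # skip p = 2
print(expected_twins)  # ~10.13, matching the PARI/GP result
```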

20210521, 21:36  #19  
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
17D6_{16} Posts 
Last fiddled with by kriesel on 20210521 at 21:42 

20210521, 23:37  #21  
"David Kirkby"
Jan 2021
Althorne, Essex, UK
2^{6}×7 Posts 
The GHz-days credit for the more relevant PRP tests of exponents is given on the GIMPS website, e.g. https://www.mersenne.ca/exponent/104059807. Those numbers fit a square law closely over the range of interest to me.

I noticed there was an error in my calculation above: I was pushing the cube button on my calculator, not the square. (I was using my iPhone, and struggle to see the scientific calculator on that.) After using a calculator I can see a bit more easily:

(108/103)^2.0 = 1.09944
(108/103)^2.1 = 1.10467

So for all practical purposes, the difference is negligible whether one assumes a power of 2 as I did, or the "about 2.1" stated on a page you link.

Just checking the GIMPS website for the nearest exponent above 108 million, https://www.mersenne.ca/exponent/108000043 = 442.950 GHz-days, and above 103 million, https://www.mersenne.ca/exponent/103000039 = 410.105 GHz-days. The ratio of credits given is 442.950/410.105 = 1.08009. So my estimate of 1.09944, based on assuming a power of 2.0, is actually closer to the credit given than I would have got using a power of 2.1.

Last fiddled with by drkirkby on 20210521 at 23:44 
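The arithmetic above is quick to reproduce:

```python
# Reproduce the figures quoted above: the exponent ratio raised to two
# candidate powers, versus the ratio of GHz-days credits from mersenne.ca.
square_law = (108 / 103) ** 2        # ~1.09944
power_2p1 = (108 / 103) ** 2.1       # ~1.10467
credit_ratio = 442.950 / 410.105     # ~1.08009

print(round(square_law, 5), round(power_2p1, 5), round(credit_ratio, 5))
```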

20210522, 00:06  #22  
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
25DA_{16} Posts 
Once you read about power analysis, you will start to appreciate the problem with any statement of the kind "I compared two points and now I can ..."

TL;DR version: No, t ~ c p^2 is wrong. Worse yet, t ~ c p^2.1 is also wrong, but "less" wrong. t ~ c p^2 log p is the least wrong of the three. 
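The log p factor has a concrete source: each PRP/LL iteration multiplies p-bit numbers via an FFT at a cost roughly proportional to p log p, and a test takes about p iterations, giving t ~ c p^2 log p overall. A sketch of the predicted time ratio between two exponents under each model (the constant c cancels in the ratio); the two exponents are the ones quoted earlier in the thread:

```python
import math

def time_ratio(p1, p2, model):
    """Predicted t(p1)/t(p2) under a given scaling model."""
    if model == "p^2":
        return (p1 / p2) ** 2
    if model == "p^2.1":
        return (p1 / p2) ** 2.1
    if model == "p^2 log p":
        return (p1 / p2) ** 2 * math.log(p1) / math.log(p2)
    raise ValueError(model)

p1, p2 = 108_000_043, 103_000_039  # exponents compared earlier in the thread
for model in ("p^2", "p^2.1", "p^2 log p"):
    print(model, round(time_ratio(p1, p2, model), 5))
```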
