mersenneforum.org - TF vs. LL: Where are the breakeven points for MMs?

2012-10-31, 08:14   #1
aketilander

"Åke Tilander"
Apr 2011
Sandviken, Sweden

2×283 Posts

TF vs. LL: Where are the breakeven points for MMs?

If you ask the question "for how long would it be rational to continue TF instead of doing an LL where the double Mersennes are concerned?", the simple answer is "forever". But if we insist on an estimate of how long this "forever" is, we could extrapolate the breakeven points used by prime95. There would be so many IFs involved in doing something like that, so I simply omit all of them. When I do a simple extrapolation I get the following breakeven points:

MM61: 173 bits (k = 2^111 = 2.6 E33)
MM89: 254 bits (k = 2^164 = 2.3 E49)
MM107: 307 bits (k = 2^199 = 8.0 E59)
MM127: 365 bits (k = 2^237 = 2.2 E71)

I.e. for MM127 it would be better to continue trial factoring up to factors of size 365 bits, which equals a k-value of 2.2*10^71. For a billion-digit Mersenne the breakeven point would be 86 bits, and for a 10-billion-digit one it would be 96 bits, using the same way of extrapolating.
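
For reference, the k-values above follow from the factor form of Mersenne numbers: any factor of 2^p - 1 is q = 2kp + 1, so a TF depth of b bits corresponds to k ~ 2^b / (2p). A minimal Python sketch checking the list (the bit depths themselves are just my extrapolation, taken as given here):

Code:
# Convert a trial-factoring bit depth into the corresponding k, using
# the factor form q = 2*k*p + 1 for factors of the Mersenne number 2^p - 1.
# The bit depths are the extrapolated breakeven points quoted above;
# only the k values are recomputed here.
breakevens = {61: 173, 89: 254, 107: 307, 127: 365}  # MMn -> TF depth in bits

for n, bits in breakevens.items():
    p = 2**n - 1               # MMn = 2^p - 1, where p = 2^n - 1 is itself prime
    k = 2**bits // (2 * p)     # q = 2kp + 1 ~ 2^bits  =>  k ~ 2^bits / (2p)
    print(f"MM{n}: {bits} bits -> k ~ 2^{k.bit_length() - 1} ~ {float(k):.1e}")
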
2012-10-31, 08:30   #2
retina
Undefined

"The unspeakable one"
Jun 2006
My evil lair

2²·1,499 Posts

Perhaps we should wait until a program exists that can actually do an LL-test on these numbers, and then we can compute breakeven points. Until we know how long an LL-test will take, there is not really any way to do the computations now.
2012-10-31, 08:46   #3
aketilander

"Åke Tilander"
Apr 2011
Sandviken, Sweden

2×283 Posts

Quote:
 Originally Posted by retina Perhaps we should wait until a program exists that can actually do an LL-test on these numbers, and then we can compute breakeven points. Until we know how long an LL-test will take, there is not really any way to do the computations now.
Yes, it's more like giving some figures and hard facts to support the feeling that it is too far out of reach to ever be able to do an LL, so I fully agree with you. I think it would not be too difficult to write a program that could do the calculations, but of course it would never ever finish, so it would be useless.

And yes, that was one of my IFs (if we had a program as efficient as prime95 to do an LL ...).
When you think about it, all these IFs are like a never-ending chain.

Last fiddled with by aketilander on 2012-10-31 at 08:55

2012-11-04, 05:13   #4
retina
Undefined

"The unspeakable one"
Jun 2006
My evil lair

2²·1,499 Posts

Quote:
 Originally Posted by aketilander ... that was one of my IFs (if we had a program as efficient as prime95 to do an LL ...
Looking at that statement made me wonder whether it would be possible.

If we assume that we want to do an LL-test on MM61:

I estimate that (1) a mantissa precision of at least about 57 bits would be needed, and (2) an FFT length of about 2^58.

For (1), this rules out any chance of using double precision (53-bit mantissa only) and would push us to use at least extended precision (64-bit mantissa). Combining (1) and (2) means a memory usage on the order of 2^65 bytes. Because of (1) we can't use the SSE or AVX units in existing CPUs, so we need to use the non-parallelised x87 FPU. And because of (2) we need to use an address width of more than 64 bits.

I am not aware of any existing system that has such a memory size available, or a register width available to address 65 bits. If we are forced to use HDDs or SSDs to store the huge data then this would add a considerable performance bottleneck. Future technologies might change the figures above, e.g. using quad-precision arithmetic would reduce the FFT length, but probably not enough to get the memory usage below 2^64 bytes.
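
As a back-of-the-envelope check, here is the same estimate in a few lines of Python; the 16 bytes per extended-precision FFT point and the 4x-8x overhead factor are rough assumptions, not measurements:

Code:
import math

# Rough memory estimate for an LL-test on MM61, following the
# assumptions above: FFT length 2^58, extended-precision (80-bit)
# values padded to 16 bytes, and a 4x-8x overhead factor for tables,
# buffers and housekeeping. Order-of-magnitude only.
p = 2**61 - 1                    # MM61 has about 2^61 bits
fft_len = 2**58                  # assumed FFT length
print(f"bits per FFT word: ~{p / fft_len:.0f}")

data = fft_len * 16              # raw FFT data: 2^62 bytes
for overhead in (4, 8):
    print(f"overhead {overhead}x: ~2^{math.log2(data * overhead):.0f} bytes")
# -> about 2^64 to 2^65 bytes
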

2012-11-04, 10:36   #5
aketilander

"Åke Tilander"
Apr 2011
Sandviken, Sweden

2×283 Posts

Quote:
 Originally Posted by retina I am not aware of any existing system that has such a memory size available, or a register width available to address 65 bits. If we are forced to use HDDs or SSDs to store the huge data then this would add a considerable performance bottleneck. Future technologies might change the figures above, e.g. using quad-precision arithmetic would reduce the FFT length, but probably not enough to get the memory usage below 2^64 bytes.
And if we could solve these problems, it would then "only" take something like more than 4.8*10^19 years to do the actual LL-calculation for MM61, if I estimated it correctly, which I probably did not.

So we can safely continue TF and need never think about other possibilities for showing the compositeness of the MMs.

[I based my estimate on the fact that the LL time using prime95 grows roughly in proportion to x^2 in the exponent (actually it grows even a little faster than that). I set the time to do an LL on exponent 332,192,831 to 1 year. So it is a very rough estimate, but it is sufficient to show that it is impossible and will remain so forever. So I get (MM61/332192831)^2*(1 year). If I try to estimate the growth more precisely (using x^2.0405) I get a bigger number, more like 12.1*10^19 years.]
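
The same back-of-the-envelope estimate in a few lines of Python (the 1-year baseline at exponent 332,192,831 is my assumption from above, and the exponent 2.0405 is my rough fit):

Code:
# Extrapolating prime95 LL-test run time from an assumed 1-year
# baseline at exponent 332,192,831, using time ~ x^e.
baseline_exp = 332_192_831       # assumed to take 1 year
mm61_exp = 2**61 - 1             # the exponent of MM61

for e in (2.0, 2.0405):
    years = (mm61_exp / baseline_exp) ** e
    print(f"growth x^{e}: ~{years:.3g} years")
# -> ~4.82e19 years and ~1.21e20 years respectively
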

So, as you say, using double precision MM61 is too large for an LL. What is the largest Mersenne number we would be able to do an LL on using double precision (theoretically speaking, not taking the time into account), do you think?

2012-11-04, 10:56   #6
axn

Jun 2003

2·3²·269 Posts

Quote:
 Originally Posted by retina using quad-precision arithmetic would reduce the FFT length, but probably not enough to get the memory usage below 2^64 bytes.

Show your work.

2012-11-04, 12:56   #7
retina
Undefined

"The unspeakable one"
Jun 2006
My evil lair

2²·1,499 Posts

Quote:
 Originally Posted by axn Show your work.
With quad precision and 57 guard bits we get ~2^6 bits stored per mantissa. From 2^61 total bits we then need an FFT length of 2^55. At 2^5 bytes per FFT point that comes to 2^60 bytes. Overhead, housekeeping, tables, buffers etc. add about a factor of 2^2 to 2^3, giving ~2^62 to 2^63 bytes. I see my guess above was rather on the high side.
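
The same arithmetic as a sketch (the 2^5 bytes per quad-precision FFT point and the overhead factor are the same rough assumptions as before):

Code:
# Reworked memory estimate with quad precision (113-bit mantissa):
# 113 - 57 guard bits leaves about 2^6 bits of the number per FFT word.
total_bits = 2**61               # size of MM61 in bits, approximately
bits_per_word = 2**6             # mantissa minus guard bits, rounded
bytes_per_point = 2**5           # assumed storage per quad-precision point

fft_len = total_bits // bits_per_word     # 2^55 points
data = fft_len * bytes_per_point          # 2^60 bytes of raw FFT data
low, high = 4 * data, 8 * data            # 2^2 to 2^3 overhead factor
print(f"FFT length 2^{fft_len.bit_length() - 1}, "
      f"memory ~2^{low.bit_length() - 1} to 2^{high.bit_length() - 1} bytes")
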

At current tech levels I wonder if the trade-off of using QP in software emulation would be worth the lower memory footprint. Or would using an NTT be a better option overall for really large computations?
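
For the record, an NTT is just an FFT over a finite field, so the convolution is exact and needs no guard bits at all; the price is modular arithmetic instead of hardware floating point. A toy sketch of NTT-based multiplication (the modulus 998244353 = 119·2^23 + 1 with primitive root 3, and the base-10 digits, are purely illustrative; a computation at MM61 scale would need a far larger modulus and transform length):

Code:
MOD = 998244353   # 119 * 2^23 + 1, a prime suitable for NTTs up to length 2^23
ROOT = 3          # primitive root modulo MOD

def ntt(a, invert=False):
    """In-place iterative Cooley-Tukey NTT over Z/MOD; len(a) must be a power of 2."""
    n = len(a)
    # bit-reversal permutation
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # butterfly passes
    length = 2
    while length <= n:
        w = pow(ROOT, (MOD - 1) // length, MOD)
        if invert:
            w = pow(w, MOD - 2, MOD)          # inverse root for the inverse transform
        for start in range(0, n, length):
            wn = 1
            for k in range(start, start + length // 2):
                u = a[k]
                v = a[k + length // 2] * wn % MOD
                a[k] = (u + v) % MOD
                a[k + length // 2] = (u - v) % MOD
                wn = wn * w % MOD
        length <<= 1
    if invert:
        n_inv = pow(n, MOD - 2, MOD)          # scale by n^-1 mod MOD
        for i in range(n):
            a[i] = a[i] * n_inv % MOD

def multiply(x, y):
    """Multiply two numbers given as little-endian base-10 digit lists."""
    n = 1
    while n < len(x) + len(y):
        n <<= 1
    fx, fy = x + [0] * (n - len(x)), y + [0] * (n - len(y))
    ntt(fx); ntt(fy)
    prod = [fx[i] * fy[i] % MOD for i in range(n)]
    ntt(prod, invert=True)
    # exact carry propagation over the convolution result
    carry, out = 0, []
    for d in prod:
        carry += d
        out.append(carry % 10)
        carry //= 10
    while carry:
        out.append(carry % 10); carry //= 10
    while len(out) > 1 and out[-1] == 0:      # strip leading zeros
        out.pop()
    return out

print(multiply([3, 2, 1], [4, 5, 6]))  # 123 * 654 = 80442 -> [2, 4, 4, 0, 8]
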
