mersenneforum.org
2012-10-31, 08:14   #1
aketilander ("Åke Tilander", Apr 2011, Sandviken, Sweden)

TF vs. LL: Where are the breakeven points for MMs?

If you ask the question "for how long would it be rational to continue TF instead of doing an LL test where the double Mersennes are concerned?", the simple answer is "forever". But if we insist on an estimate of how long this "forever" is, we can extrapolate the breakeven points used by prime95. There would be so many IFs involved in doing something like that that I simply omit all of them.

When I do a simple extrapolation I get the following breakeven points:

MM61:  173 bits (k = 2^111 ≈ 2.6*10^33)
MM89:  254 bits (k = 2^164 ≈ 2.3*10^49)
MM107: 307 bits (k = 2^199 ≈ 8.0*10^59)
MM127: 365 bits (k = 2^237 ≈ 2.2*10^71)

I.e. for MM127 it would be better to continue trial factoring up to factors of 365 bits in size, which equals a k-value of 2.2*10^71.

For a billion-digit Mersenne number the breakeven point would be 86 bits, and for a 10-billion-digit one it would be 96 bits, using the same way of extrapolating.
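Since any factor q of MMp = 2^Mp - 1 (with Mp = 2^p - 1 prime) has the form q = 2*k*Mp + 1, a TF depth of b bits corresponds to k ≈ 2^(b-1)/Mp. A minimal sketch that reproduces the k-values above (an illustration, not part of the original extrapolation):

Code:
# Bit-depth -> k conversion for double Mersennes (illustrative sketch).
# Any factor q of MMp = 2^Mp - 1, with Mp = 2^p - 1 prime, has the form
# q = 2*k*Mp + 1, so TF to b bits tests k up to roughly 2^(b-1) / Mp.

def k_at_bit_depth(p, bits):
    """Approximate largest k reached when TF of MM(p) goes to `bits` bits."""
    mp = 2**p - 1                  # the (prime) Mersenne exponent of MMp
    return 2**(bits - 1) // mp     # from q = 2*k*mp + 1 ~ 2^bits

for p, bits in [(61, 173), (89, 254), (107, 307), (127, 365)]:
    print(f"MM{p}: {bits} bits -> k ~ {k_at_bit_depth(p, bits):.1e}")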
2012-10-31, 08:30   #2
retina ("The unspeakable one", Jun 2006, My evil lair)

Perhaps we should wait until a program exists that can actually do an LL test on these numbers, and then we can compute breakeven points. Until we know how long an LL test would take, there is not really any way to do the computation now.
2012-10-31, 08:46   #3
aketilander ("Åke Tilander", Apr 2011, Sandviken, Sweden)

Quote:
Originally Posted by retina
Perhaps we should wait until a program exists that can actually do an LL test on these numbers, and then we can compute breakeven points. Until we know how long an LL test would take, there is not really any way to do the computation now.
Yes, it's more like giving some figures and hard facts to support the feeling that an LL test is forever out of reach, so I fully agree with you. I think it would not be too difficult to write a program that could do the calculation, but of course it would never finish, so it would be useless.

And yes, that was one of my IFs (If we had a program as efficient as prime95 to do an LL ...).
When you think about it, all these IFs are like a never-ending chain.

2012-11-04, 05:13   #4
retina ("The unspeakable one", Jun 2006, My evil lair)

Quote:
Originally Posted by aketilander
... that was one of my IFs (If we had a program as efficient as prime95 to do an LL ...
Looking at that statement made me wonder if that was possible.

If we assume that we want to do an LL test on MM61:

I estimate (1) a mantissa precision of about 57 bits minimum would be needed, and (2) an FFT length of about 2^58.

For (1) this rules out any chance of using double precision (53-bit mantissa only) and would push us to use at least extended precision (64-bit mantissa). Combining (1) and (2) gives a memory usage on the order of 2^65 bytes. Because of (1) we can't use the SSE or AVX units in existing CPUs, so we need to use the non-parallelised x87 FPU. And because of the resulting memory usage we need an address width of more than 64 bits.

I am not aware of any existing system that has such a memory size available, or a register width available to address 65 bits. If we are forced to use HDDs or SSDs to store the huge data set then this would add a considerable performance bottleneck. Future technologies might change the figures above; e.g. using quad-precision arithmetic would reduce the FFT length, but probably not enough to get the memory usage below 2^64 bytes.
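To make the arithmetic explicit, here is a back-of-envelope sketch (the 8-bits-per-word packing and 16-bytes-per-value storage are assumed constants consistent with the estimates above, not stated figures):

Code:
# Back-of-envelope sketch of the extended-precision memory estimate.
# Assumed constants: ~8 bits packed per FFT word (64-bit mantissa minus
# ~56 guard bits) and 16 bytes per stored 80-bit extended value (padded).

total_bits    = 2**61                         # bits in an MM61 LL residue
bits_per_word = 8                             # assumed packing density
fft_length    = total_bits // bits_per_word   # = 2^58, matching (2)
raw_bytes     = fft_length * 16               # = 2^62 bytes of raw FFT data

print(f"FFT length = 2^{fft_length.bit_length() - 1}")       # 2^58
print(f"raw data   = 2^{raw_bytes.bit_length() - 1} bytes")  # 2^62
# A few-times overhead for twiddle tables and buffers lands this in the
# 2^64..2^65 byte region estimated above.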
2012-11-04, 10:36   #5
aketilander ("Åke Tilander", Apr 2011, Sandviken, Sweden)

Quote:
Originally Posted by retina
I am not aware of any existing system that has such a memory size available, or a register width available to address 65 bits. If we are forced to use HDDs or SSDs to store the huge data set then this would add a considerable performance bottleneck. Future technologies might change the figures above; e.g. using quad-precision arithmetic would reduce the FFT length, but probably not enough to get the memory usage below 2^64 bytes.
And if we could solve these problems, it would then "only" take something like more than 4.8*10^19 years to do the actual LL calculation for MM61, if I estimated it rightly, which I probably did not.

So we can safely continue TF and never need to think about other possibilities for showing the compositeness of the MMs.

[I based my estimate on the fact that the LL time using prime95 grows roughly proportionally to p^2 in the exponent p (actually it grows even a little faster than that). I set the time to do an LL on exponent 332,192,831 to 1 year. So it is a very rough estimate, but it is sufficient to show that it is impossible and will be so forever. I get (MM61/332192831)^2 * (1 year). If I try to estimate the growth more accurately (using p^2.0405), I get a bigger number, around 12.1*10^19 years.]
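The extrapolation is easy to reproduce (a sketch of the same arithmetic, calibrated as in the note above):

Code:
# Sketch of the run-time extrapolation: LL time ~ p^alpha, calibrated to
# 1 year at the reference exponent p = 332,192,831 as in the note above.

p_ref = 332_192_831          # reference exponent, assumed to take 1 year
mm61  = 2**61 - 1            # exponent of MM61

for alpha in (2.0, 2.0405):  # simple and refined growth exponents
    years = (mm61 / p_ref) ** alpha
    print(f"alpha = {alpha}: ~{years:.1e} years")   # ~4.8e19 and ~1.2e20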

So, as you say, using double precision MM61 is too large for an LL test. What is the largest M we would be able to do an LL on using double precision (theoretically speaking, not taking the time into account), do you think?
2012-11-04, 10:56   #6
axn (Jun 2003)

Quote:
Originally Posted by retina
using quad-precision arithmetic would reduce the FFT length, but probably not enough to get the memory usage below 2^64 bytes.
Show your work.
2012-11-04, 12:56   #7
retina ("The unspeakable one", Jun 2006, My evil lair)

Quote:
Originally Posted by axn
Show your work.
With quad precision and 57 guard bits we get ~2^6 bits stored per mantissa word. From 2^61 total bits we then need an FFT length of 2^55. At 2^5 bytes per FFT point that comes to 2^60 bytes. Overhead, housekeeping, tables, buffers etc. add about a factor of 2^2 to 2^3, giving ~2^62 to 2^63 bytes. I see my guess above was rather on the high side.
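The same arithmetic in code form (a short sketch using only the constants quoted above):

Code:
# Sketch of the quad-precision estimate, using the constants quoted above.

bits_per_word = 2**6                    # ~ quad mantissa (113 bits) - 57 guard bits
fft_length    = 2**61 // bits_per_word  # = 2^55 FFT points for MM61
raw_bytes     = fft_length * 2**5       # 2^5 bytes per point -> 2^60 bytes

for overhead in (2**2, 2**3):           # housekeeping/tables/buffers factor
    total = raw_bytes * overhead
    print(f"overhead {overhead}x -> 2^{total.bit_length() - 1} bytes")  # 2^62, 2^63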

At current tech levels I wonder whether the trade-off of using QP in software emulation would be worth the lower memory footprint, or whether using an NTT would be a better option overall for really large computations.