#1
"Åke Tilander"
Apr 2011
Sandviken, Sweden
2·283 Posts
If you ask the question "for how long would it be rational to continue TF instead of doing an LL where the double Mersennes are concerned?", the simple answer is "forever". But if we insist on an estimate of how long this "forever" is, we could extrapolate the breakeven points used by prime95. There would be so many IFs involved in doing something like that, so I simply omit all of them.
When I do a simple extrapolation I get the following breakeven points:

MM61: 173 bits (k = 2^111 ≈ 2.6E33)
MM89: 254 bits (k = 2^164 ≈ 2.3E49)
MM107: 307 bits (k = 2^199 ≈ 8.0E59)
MM127: 365 bits (k = 2^237 ≈ 2.2E71)

That is, for MM127 it would be better to continue trial factoring up to factors of 365 bits, which corresponds to a k-value of about 2.2·10^71. For a billion-digit Mersenne the breakeven point would be 86 bits, and for a ten-billion-digit one it would be 96 bits, using the same way of extrapolating.
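A minimal sketch of how such an extrapolation might be set up, assuming the TF breakeven depth grows roughly linearly in log2 of the exponent. The anchor pairs in the code are illustrative placeholders, not prime95's actual breakeven table:

```python
# Illustrative extrapolation of TF breakeven depths to double-Mersenne exponents.
# The anchor pairs (exponent, breakeven bits) are made-up placeholders standing
# in for prime95's real breakeven table.
import math

anchors = [
    (20_000_000, 71),
    (50_000_000, 74),
    (100_000_000, 77),
    (332_192_831, 81),
]

xs = [math.log2(p) for p, _ in anchors]
ys = [float(b) for _, b in anchors]

# Least-squares fit: breakeven_bits = a * log2(p) + b
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def breakeven_bits(p):
    """Extrapolated TF breakeven depth in bits for Mersenne exponent p."""
    return a * math.log2(p) + b

for name, p in [("MM61", 2**61 - 1), ("MM89", 2**89 - 1),
                ("MM107", 2**107 - 1), ("MM127", 2**127 - 1)]:
    # factor of 2*k*p + 1: factor bits ~ bits(k) + bits(p) + 1
    print(f"{name}: ~{breakeven_bits(p):.0f} bits, "
          f"k ~ 2^{breakeven_bits(p) - math.log2(p) - 1:.0f}")
```

Fed with the real breakeven table and the right scaling law, something along these lines would produce figures of the kind quoted above.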
#2
Undefined
"The unspeakable one"
Jun 2006
My evil lair
1A18₁₆ Posts
Perhaps we should wait until a program exists that can actually do an LL test on these numbers, and then we can compute breakeven points. Until we know how long an LL test will take, there is not really any way to do the computation now.
#3
"Åke Tilander"
Apr 2011
Sandviken, Sweden
2·283 Posts
Quote:
And yes, that was one of my IFs (if we had a program as efficient as prime95 to do an LL ...). When you think about it, all these IFs are like a never-ending chain.

Last fiddled with by aketilander on 2012-10-31 at 08:55
#4
Undefined
"The unspeakable one"
Jun 2006
My evil lair
1A18₁₆ Posts
Quote:
If we assume that we want to do an LL test on MM61, I estimate that (1) a mantissa precision of at least about 57 bits would be needed, and (2) an FFT length of about 2^58 would be needed.

(1) rules out any chance of using double precision (53-bit mantissa only) and pushes us to at least extended precision (64-bit mantissa). Combining (1) and (2) means a memory usage on the order of 2^65 bytes. Because of (1) we can't use the SSE or AVX units in existing CPUs, so we need to use the non-parallelised FPU; and because of (2) we need an address width of more than 64 bits. I am not aware of any existing system that has such a memory size available, or a register width able to address 65 bits. If we are forced to use HDDs or SSDs to store the huge data set, that would add a considerable performance bottleneck.

Future technologies might change the figures above, e.g. using quad-precision arithmetic would reduce the FFT length, but probably not enough to get the memory usage below 2^64 bytes.
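For a rough check of that memory figure, here is the arithmetic spelled out; the 16 bytes per FFT point and the overhead factor of 8 are my own assumptions, picked only so the total lands near the quoted 2^65 bytes:

```python
import math

# Back-of-the-envelope memory estimate for an LL test on MM61 with an FFT of
# length 2^58 (the estimate above). Per-point size and overhead are assumptions.
fft_length = 2**58        # estimated FFT length
bytes_per_point = 16      # one extended-precision value padded to 16 bytes (assumption)
overhead_factor = 8       # tables, buffers, housekeeping (assumption)

total_bytes = fft_length * bytes_per_point * overhead_factor
print(f"~2^{math.log2(total_bytes):.0f} bytes (~{total_bytes / 2**60:.0f} EiB)")
# -> ~2^65 bytes, i.e. about 32 EiB, far beyond any 64-bit address space
```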
#5
"Åke Tilander"
Apr 2011
Sandviken, Sweden
1000110110₂ Posts
Quote:
So we can safely continue TF and need never think about other possibilities for showing the compositeness of the MMs.

[I based my estimate on the fact that the LL time using prime95 grows roughly in proportion to the square of the exponent (actually it grows even a little faster than that). I set the time to do an LL on exponent 332,192,831 to 1 year. So it is a very rough estimate, but it is sufficient to show that it is impossible and will be so forever. So I get (M61 / 332192831)^2 × (1 year), which is about 4.8·10^19 years. If I try to estimate the growth a bit better (using x^2.0405), I get a bigger number, around 12.1·10^19 years.]

So, as you say, using double precision MM61 is too large for an LL. What is the largest M we would be able to do an LL on using double precision (theoretically speaking, not taking the time into account), do you think?
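The two figures in brackets can be reproduced in a couple of lines, on the same assumptions (LL time scaling as p^2 or p^2.0405, normalised to one year at p = 332,192,831):

```python
# Rough LL-time extrapolation for MM61, on the assumptions stated above.
p_ref = 332_192_831        # exponent assumed to take one year for an LL test
p_mm61 = 2**61 - 1         # exponent of MM61, i.e. M61

ratio = p_mm61 / p_ref
print(f"p^2      scaling: ~{ratio**2:.1e} years")        # ~4.8e19 years
print(f"p^2.0405 scaling: ~{ratio**2.0405:.1e} years")   # ~1.2e20 years
```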
#6
Jun 2003
2²×3²×151 Posts
#7
Undefined
"The unspeakable one"
Jun 2006
My evil lair
2³·5·167 Posts
With quad precision and 57 guard bits we get ~2^6 bits stored per mantissa. From 2^61 total bits we then need an FFT length of 2^55. At 2^5 bytes per FFT point that comes to 2^60 bytes. Overhead, housekeeping, tables, buffers etc. add roughly a factor of 2^2 to 2^3, giving ~2^62 to 2^63 bytes. I see my guess above was rather on the high side.
At current tech levels I wonder if the trade-off of doing QP in software emulation would be worth the lower memory footprint? Or whether an NTT would be a better option overall for really large computations?
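The quad-precision arithmetic above, written out; the 113-bit mantissa (IEEE binary128) is my assumption for "quad precision", the rest mirrors the figures quoted:

```python
import math

# Quad-precision memory estimate for an LL test on MM61, following the post.
# The 113-bit mantissa (IEEE binary128) is an assumption.
mantissa_bits = 113
guard_bits = 57
bits_per_word = 2 ** round(math.log2(mantissa_bits - guard_bits))  # ~2^6 data bits per word

total_bits = 2**61                        # ~number of bits in an MM61 LL residue
fft_length = total_bits // bits_per_word  # 2^55 points
bytes_per_point = 2**5                    # 32 bytes per FFT point
raw_bytes = fft_length * bytes_per_point  # 2^60 bytes for the data alone

for overhead in (2**2, 2**3):             # housekeeping, tables, buffers
    print(f"overhead x{overhead}: ~2^{math.log2(raw_bytes * overhead):.0f} bytes")
# -> ~2^62 to ~2^63 bytes
```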