mersenneforum.org  

2019-02-13, 03:22   #1
dragonbud20

understanding P-1/TF and the finding of factors

This is sort of a multi-part question; primarily I'm curious about TF and P-1 and how they fit into the project.

I guess to start: when we're talking about TF, what do the bit levels mean? I assume it means we're looking for factors somehow in a range defined by the bit level, but beyond that I don't know.

From there, how do P-1 tests compare to TF? Which makes the most sense to do first? Also, what do the bounds given in a P-1 test mean?

I'm also interested generally in how these tests work, so if there's a good source for understanding this stuff, a link would be very much appreciated.
2019-02-13, 04:00   #2
potonono

https://www.mersenne.org/various/math.php has some useful information.

A bit level is just how far we take 2 to the power of some number. Trial factoring to 2^63, for example, means the candidate Mersenne number has been tested against every possible prime factor from 2 up to 2^63. If we want to trial factor to the next bit level, it means all the candidate prime factors above 2^63 and up to 2^64 are tried against the candidate. If you look up information on binary numbers, you'll find that the number of candidate factors to try roughly doubles with each bit level. I'm sure I'll be corrected if I didn't say that right.

So let's say you have a candidate Mersenne prime exponent like 82,589,933, i.e. the number (2^82589933)-1. Trial factoring was taken up to 2^75. Trial factoring can rule out all the candidates that have relatively small factors, but this example candidate didn't have any factors up to 2^75. Trial factoring much higher isn't as efficient as other tests, but it is certainly done by those who are so inclined.
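To put a rough number on that doubling, here is a quick back-of-the-envelope sketch in Python (my own illustration, not anything the GIMPS software actually runs). It counts, for the 82,589,933 example, how many candidate factors land in each bit level; candidate factors have the special form 2*k*p + 1, which is explained further down the thread.

Code:
# Rough sketch (not GIMPS code): count the candidate factors of the form
# 2*k*p + 1 that fall in each bit level for p = 82589933.  The count about
# doubles every time the bit level goes up by one, which is why each new
# bit level takes roughly twice as long to clear.

p = 82589933
for bits in (63, 64, 65):
    lo, hi = 1 << bits, 1 << (bits + 1)
    # number of k values with lo < 2*k*p + 1 <= hi
    count = (hi - 1) // (2 * p) - (lo - 1) // (2 * p)
    print(bits, "->", count)

# prints roughly 5.58e10, 1.12e11, 2.23e11 -- about double each time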

P-1 factoring uses a different method; I would have to defer to others for an explanation beyond the above link. Trial factoring is normally done first, then P-1.

If no factors are found with either of these methods (TF / P-1), the next step is an LL or PRP test. Those final tests won't find any factors, but they will return a result indicating whether the candidate number is prime or not.
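For the curious, the Lucas-Lehmer test itself is short enough to sketch in a few lines of Python. This is only a toy version (the real LL and PRP programs use FFT-based multiplication and many other optimizations), but it shows what the test actually computes.

Code:
# Toy Lucas-Lehmer test (fine for small exponents only).  M(p) = 2^p - 1 is
# prime exactly when the sequence s -> s^2 - 2, started at 4 and reduced
# mod M(p), reaches 0 after p - 2 steps (p an odd prime).

def lucas_lehmer(p):
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19, 23, 31) if lucas_lehmer(p)])
# -> [3, 5, 7, 13, 17, 19, 31]; 11 and 23 give composite Mersenne numbers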

2019-02-13, 16:24   #3
GP2

Mathematics tells us that for Mersenne numbers, which are of the form 2^p − 1, every factor must be of the form 2 × k × p + 1 for some integer k.

Trial factoring means we want to test all the possible values of k. First k=1, then k=2, and so on to infinity.

But actually, for any given value of p, we can very efficiently rule out about 80% of the possible k values, so we really only need to test the other 20% or so.
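If you want to see roughly where that figure comes from, here is a quick experiment of my own in Python. It uses two cheap facts about candidate factors q = 2 × k × p + 1: q must be 1 or 7 mod 8 (a known property of factors of Mersenne numbers), and there is no point testing a q that is divisible by a tiny prime, since its own prime factors would already have been tried.

Code:
# My own quick check of the "about 80% ruled out" figure.

p = 82589933                     # any large prime exponent behaves the same
trials = 10**6
survivors = 0
for k in range(1, trials + 1):
    q = 2 * k * p + 1
    if q % 8 in (1, 7) and all(q % s for s in (3, 5, 7, 11, 13)):
        survivors += 1
print(survivors / trials)        # about 0.19, i.e. roughly 80% of k ruled out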

Also, we can't test an infinite number of k values, so we have to decide on a limit where we will stop. We express this limit in terms of the bit length.

Every number can be expressed in base 2 (binary). For instance, 13 = 8 + 4 + 0 + 1, so we can write it as 1101 in binary:
1 × 2^3 (= 8) +
1 × 2^2 (= 4) +
0 × 2^1 (= 0) +
1 × 2^0 (= 1)

Each binary digit is a "bit". So 1101 is 4 bits long.

So if you do trial-factoring (TF) to a bit level of, say, 75, that means you have exhaustively searched for all factors that can be written with 75 bits or less. That is, all factors up to 2^75 − 1, which is 37778931862957161709567 in base 10.
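In Python terms, for anyone who wants to poke at the bit-length idea (nothing GIMPS-specific here, just ordinary integer operations):

Code:
print(bin(13))                    # 0b1101
print((13).bit_length())          # 4
print(2**75 - 1)                  # 37778931862957161709567
print((2**75 - 1).bit_length())   # 75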

Trial factoring gets twice as hard each time the bit level increases by one. For example, searching for factors that can be written with exactly 75 bits takes twice as much time as searching for factors with 74 bits. So it gets exponentially harder as the bit level increases, and at some point you have to give up.
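To make that concrete, here is a deliberately naive trial-factoring loop in Python. It is purely illustrative (the real GIMPS programs sieve candidates far more aggressively and run on GPUs), but it shows the idea: step through the k values, skip candidates that obviously cannot work, test the rest, and stop at the chosen bit level. The two calls at the bottom use small, well-known factorizations.

Code:
# Naive trial-factoring sketch (my own illustration, not GIMPS code).
# Factors of 2^p - 1 have the form q = 2*k*p + 1, must be 1 or 7 mod 8,
# and q divides 2^p - 1 exactly when 2^p mod q == 1.

def trial_factor(p, bit_level):
    """Look for a factor of 2^p - 1 below 2^bit_level."""
    limit = 1 << bit_level
    q = 2 * p + 1                                  # k = 1
    while q < limit:
        if q % 8 in (1, 7) and all(q % s for s in (3, 5, 7, 11, 13)):
            if pow(2, p, q) == 1:                  # q divides 2^p - 1
                return q
        q += 2 * p                                 # next k
    return None

print(trial_factor(11, 8))    # 23, since 2^11 - 1 = 2047 = 23 * 89
print(trial_factor(67, 28))   # 193707721, after a second or two of looping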

P−1

P−1 ("P minus one") uses an entirely different method.

Consider the k in 2 × k × p + 1: it's just an integer. So if we know of a factor, we can easily calculate its k value. And then k itself can be expressed as a product of factors.

The P−1 method is very good at finding factors whose k value is itself the product of a lot of small factors (in this case, we say k is "smooth"). It's very bad at finding factors for which the k value is either prime or the product of some small factors and at least one very big factor.

Consider 2^45,913,831 − 1, which has the factor 1309673971497300048703624441 = 2 × 14262303351437827620 × 45913831 + 1

So for this factor k = 14262303351437827620 = 2^2 × 3 × 5 × 11 × 397 × 499 × 523 × 547 × 593 × 643

This k is extremely smooth, which means the factor 1309673971497300048703624441 can be found instantly with P−1 despite the fact that it's 91 bits long.
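If you want to experiment with the idea, below is a bare-bones stage 1 of the P−1 method in Python. It is only a sketch of the principle (nothing like the optimized arithmetic that Prime95 or GMP-ECM use), and it runs on a much smaller classic example: 2^67 − 1 = 193707721 × 761838257287, where the factor 193707721 = 2 × 1445580 × 67 + 1 has the smooth k = 1445580 = 2^2 × 3^3 × 5 × 2677.

Code:
# Bare-bones stage 1 of P-1 (my own sketch).  For N = 2^p - 1, every factor
# q satisfies q = 2*k*p + 1, so q - 1 = 2*k*p.  If k is B1-smooth, then
# q - 1 divides E = 2*p times the product of all prime powers up to B1,
# so 3^E is 1 mod q and gcd(3^E - 1, N) exposes q.

from math import gcd, isqrt

def small_primes(limit):
    """All primes up to limit, via a simple sieve."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, isqrt(limit) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return [i for i, flag in enumerate(sieve) if flag]

def pminus1_stage1(p, B1, base=3):
    """Try to find a factor of 2^p - 1 with P-1 stage 1 and bound B1."""
    N = (1 << p) - 1
    a = pow(base, 2 * p, N)        # the 2*p part of q - 1 is guaranteed
    for q in small_primes(B1):
        e = q
        while e * q <= B1:         # largest power of q not exceeding B1
            e *= q
        a = pow(a, e, N)
    g = gcd(a - 1, N)
    return g if 1 < g < N else None

# k = 2^2 * 3^3 * 5 * 2677 is 2700-smooth, so B1 = 2700 should be enough:
print(pminus1_stage1(67, 2700))    # 193707721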

So the two methods complement each other: P−1 can sometimes find very large factors that would be impossible to find with TF on today's computers, but it will also sometimes fail to find small factors that would be easily found with TF. Unlike TF, the P−1 method can't be used to exhaustively search for all factors smaller than a certain size.

ECM

There is also a third method called ECM. It can find factors beyond the reach of TF and P−1, but only with a lot of effort.

So much effort, in fact, that if you can't find a factor with TF or P−1, it makes more sense just to give up on finding factors and do an LL test or PRP test. These tests allow us to prove that 2^p − 1 is composite for some given p, without actually finding a factor.

But people sometimes still do ECM anyway because it's interesting to find factors, even though doing so doesn't actually make any contribution towards finding the next Mersenne prime.
2019-02-13, 16:30   #4
GP2

Quote:
Originally Posted by dragonbud20
which makes the most sense to do first?
Traditionally we do TF first, then P−1.

But you could do it the other way around; it makes no difference.

For searches in the ranges that we are doing, the overlap between the two methods is smaller than most people realize. That is, only a relatively modest fraction of the factors that we find by one method could have been found by the other method, and vice versa.
2019-02-14, 03:11   #5
kriesel

See also

https://www.mersenneforum.org/showpo...7&postcount=12
https://www.mersenneforum.org/showpo...23&postcount=6
2019-02-16, 01:46   #6
dragonbud20

Thank you very much, guys. This has helped me learn a bunch about the project.