#1 | "Ram Shanker" | May 2015 | Delhi
Hi all,

Lately I have been banging my head over an alternate implementation of the P-1 test. Having checked quite a few factors, I can't recall seeing any high power (say >10) among the prime factors of "k" in 2kp. The small primes, especially 2, 3 and 5, sometimes appear up to the 7th or 8th power, but larger primes rarely exceed the 3rd power. So would it not be justified, in a probabilistic way, to use fewer powers when calculating E? (Notation as in http://www.mersenne.org/various/math.php.) I know you then couldn't guarantee that a full P-1 test was done on an exponent, but the reduction in powers could be backed by the probabilistic distribution of data from tens of millions of known factors. :)

While we are at it, am I allowed to scrape the primenet server for known factors / factoring status?

I am yet to understand why the B2 bound works.
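For reference, here is a minimal sketch (my own code, not GIMPS's) of stage 1 of P-1 on a Mersenne number, with E built exactly as described on the mersenne.org page linked above: every prime q ≤ B1 contributes its largest power not exceeding B1, and the known factors 2 and p of f - 1 come along for free.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

def pminus1_stage1(p, B1, a=3):
    """Stage 1 of P-1 on M_p = 2^p - 1 with bound B1.

    E is the product of all maximal prime powers q^e <= B1; a factor
    f = 2kp + 1 of M_p is revealed whenever f - 1 divides 2*p*E."""
    N = (1 << p) - 1
    E = 1
    for q in primes_up_to(B1):
        qe = q
        while qe * q <= B1:        # largest power of q not exceeding B1
            qe *= q
        E *= qe
    x = pow(a, 2 * p * E, N)       # one big modular exponentiation
    g = math.gcd(x - 1, N)
    return g if 1 < g < N else None  # None: no factor, or every factor at once
</imports>

For example, `pminus1_stage1(11, 3)` finds the factor 23 of M_11 = 2047 (since 23 - 1 = 2·11 needs no help from E at all), while raising B1 to 7 pulls in 89 as well (89 - 1 = 2^3·11), so the gcd collapses to N and the sketch reports None.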
#2 | Nov 2003
Quote:
...very rapidly. We do not expect high powers of primes to appear very often. The phrase "justified, in a probabilistic way" is meaningless without an objective function. What objective function do you propose? What optimization do you propose? There is always a possibility that we may fail to find a factor p | N when p - 1 is divisible by a high power. This is the nature of the algorithm. The same applies to ECM. It is not necessary to pre-calculate E to run the algorithm.

Quote:
What are you measuring? We don't need empirical data. The distribution of the prime factors of p - 1 is well known theoretically.

Quote:
As for understanding B2, one can only read descriptions of the algorithm. I can help, but without knowing what it is that you don't understand, I can't give any specific information.
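To make the "well known theoretically" remark concrete: by Dirichlet's theorem on primes in arithmetic progressions, primes are equidistributed among the invertible residue classes mod q^k, so a fraction 1/φ(q^k) of primes f satisfy q^k | f - 1. A quick empirical check (my own script, not from the thread):

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

primes = primes_up_to(200_000)
for q in (2, 3, 5):
    for k in range(1, 6):
        qk = q ** k
        predicted = 1 / (qk - qk // q)           # 1 / phi(q^k)
        observed = sum((f - 1) % qk == 0 for f in primes) / len(primes)
        print(f"{q}^{k} | f-1: observed {observed:.5f}, predicted {predicted:.5f}")
```

High powers are correspondingly rare: 2^8 divides f - 1 for only about one prime in 128, which is why big exponents almost never show up among the factors the OP inspected.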
#3 | Nov 2003
#4 | (loop (#_fork)) | Feb 2006 | Cambridge, England
#5 | "Ram Shanker" | May 2015 | Delhi
Quote:
Ok. I will study more on that one. Thanks.
#6 | Sep 2009
#7 | Nov 2003
Quote:
Nor does it explain why one needs to involve primenet. It is not needed to run P-1. It might be needed for some particular project, but your post gave no indication of that. It is also unclear why one needs to download known factors.

Last fiddled with by R.D. Silverman on 2015-12-09 at 16:57
#8 | "Ram Shanker" | May 2015 | Delhi
Quote:
I didn't know the distribution of the factors of p-1 was well known THEORETICALLY. It can (?) be established numerically in the following way. Say we collect a million known factors of Mersenne numbers. Suppose the highest power of 2 observed in the factorizations of 2kp is, say, 7 (picking a random value here; it could be something else). With that observation we could compute only a^(2^7) instead of a^(2^floor(log2 B1)). It would introduce a "chance" of missing some factors, hence I said "probabilistic", but hey, we are skipping quite a few cycles.
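What the post proposes, as I read it, is a capped stage-1 exponent. A sketch (the function names are mine, not GIMPS code) comparing the standard E with a capped one shows how little work the cap actually saves:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

def build_E(B1, cap=None):
    """Standard E: each prime q <= B1 contributes q^floor(log_q B1).
    With cap set, no prime's exponent may exceed cap (the proposal above)."""
    E = 1
    for q in primes_up_to(B1):
        e, qe = 1, q
        while qe * q <= B1:
            qe *= q
            e += 1
        E *= q ** (min(e, cap) if cap is not None else e)
    return E

B1 = 100_000
full, capped = build_E(B1), build_E(B1, cap=3)
saved = full.bit_length() - capped.bit_length()
print(f"log2 E: {full.bit_length()} -> {capped.bit_length()} (about {saved} squarings saved)")
```

Only the few primes below B1^(1/4) ever get an exponent above 3, so at B1 = 100000 the cap shaves roughly 50 squarings off about 144,000: a speedup of a few hundredths of a percent, which foreshadows the cost/benefit objection raised later in the thread.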
#9 | Nov 2003
Quote:
It does not answer: What problem are you solving? What optimization are you doing?

Quote:
You say we "could skip to only a^(2^7)". We could also sing hymns, paint pictures, jump on a trampoline, run a marathon, or only skip to a^(2^3). We COULD do anything we want. But unless you state an OBJECTIVE, saying "we could ..." is meaningless.

Quote:
Once again, what is your OBJECTIVE? Your statements so far are meaningless and vacuous.

Last fiddled with by R.D. Silverman on 2015-12-09 at 17:20
#10 | "Ram Shanker" | May 2015 | Delhi
OBJECTIVE: Reduce the number of computations required to perform P-1 factorization.

HOW: By reducing the powers raised to during the modular-exponentiation phase.

MATH: Establish the distribution of the factors of p-1 (and hence an upper limit on the powers of the small primes) by observing the trend among the collected factors of other Mersenne numbers. As you stated, it is well known theoretically. Consider this exploration only a personal effort.
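A toy version of the proposed experiment (my own script; the known factors here are found by trial division over candidates f = 2kp + 1, not downloaded from primenet):

```python
def prime_factor_exponents(n):
    """Factor n by trial division; return {prime: exponent}."""
    fac, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            fac[d] = fac.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        fac[n] = fac.get(n, 0) + 1
    return fac

found = []
for p in (11, 23, 29, 37, 41, 43, 47, 53, 59, 71, 73, 79, 83, 97):
    M = (1 << p) - 1
    for k in range(1, 2000):
        f = 2 * k * p + 1                  # every factor of M_p has this form
        if f < M and M % f == 0:
            e = max(prime_factor_exponents(f - 1).values())
            found.append((p, f, e))
            print(f"p={p}: factor {f}, largest prime-power exponent in f-1: {e}")
print("largest exponent seen:", max(e for _, _, e in found))
```

The trend the post describes is real: across these small cases the largest exponent is 5 (from 4513 | M_47, with 4513 - 1 = 2^5·3·47), so a cap of 3 on the power of 2 would already have cost us that factor. Whether the saved cycles are worth such misses is exactly the objection raised in the replies.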
#11 | Nov 2003
Quote:
Or even better: reduce the exponents to 0. This will save even more time. There are many things one can do to reduce the computation, but they all have a COST associated with them. I see no discussion of this cost in your arguments. The cost is, of course, a reduction in the likelihood of finding a factor. But unless you specify how the probability changes as a function of the reduction in time, and unless you specify some objective for your probabilities, the discussion is meaningless. One can, for example, reduce the time needed to ZERO: do nothing. This gives a zero probability of succeeding, but nothing in your statements places any kind of constraint on what probability you are willing to accept.

Quote:
And it will fail to "establish the distribution of factors". It is just mindless numerology. Perhaps you might take a university-level course on probability and statistics? It will allow you to gain some understanding of what you are trying (in vain) to propose. You should also take at least a semester of optimization methods / operations research.

Would someone else weigh in here? I do not seem to be getting through.
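To put rough numbers on that cost, under the standard heuristic that the exponent of 2 in f - 1 exceeds c for about one prime f in 2^c (these are my own back-of-the-envelope figures, not a real optimization):

```python
import math

B1 = 10**6
cap = 7                                  # proposed cap on the exponent of 2
full_exp = int(math.log2(B1))            # exponent of 2 in the standard E
bits_saved = full_exp - cap              # squarings skipped in stage 1
total_bits = int(B1 / math.log(2))       # log2(E) ~ B1 / ln 2 by the PNT
miss_prob = 2.0 ** -cap                  # heuristic P(exponent of 2 in f-1 > cap)
print(f"time saved: {bits_saved} of ~{total_bits} squarings "
      f"({bits_saved / total_bits:.2e}); chance of a miss: {miss_prob:.4f}")
```

Capping at 7 forfeits roughly 0.8% of the factors stage 1 would otherwise find, to save about one part in 10^5 of the stage-1 work; any sensible objective function rejects that trade.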
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
|---|---|---|---|---|
| Probabilistic primality tests faster than Miller-Rabin? | mathPuzzles | Math | 14 | 2017-03-27 04:00 |
| probabilistic number theory | wildrabbitt | Math | 57 | 2015-09-17 18:26 |
| Elementary Literature on probabilistic aspects of prime searching | hhh | Math | 7 | 2014-03-18 14:37 |
| time complexity of probabilistic factoring algorithms | koders333 | Factoring | 1 | 2005-12-13 13:58 |
| ASM Optimization | Cyclamen Persicum | Hardware | 4 | 2004-05-26 07:51 |