#1
JuanTutors
Mar 2004
1034₈ Posts
This is a question about prefactoring before either a DC or a first-time test.

1. Suppose M was TF'd to 2^n, the B1 value used for P-1 factoring is known, and an LL/PRP test was done on M. Before doing a DC on M, one could take that B1 value, presieve the candidate factors between 2^n and 2^(n+1) (removing those that P-1 would already have found), do a second round of TF, and then do the DC. My question is: is that beneficial time-wise at all? I'm not sure what fraction of the candidates between 2^n and 2^(n+1) are B1-smooth, how long the presieving would take computationally, or how that compares to the time a DC takes.

2. I guess the same question holds for doing a second TF after the P-1 factoring but before the first-time LL/PRP test.
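To make the proposed presieve concrete, here is a minimal sketch (my own illustration, not GIMPS code; `is_smooth` and `tf_candidates` are hypothetical names). A factor q = 2kp+1 of M_p is found by P-1 stage 1 (whose exponent already includes p itself) roughly when k is B1-smooth, so a post-P-1 TF pass can skip those k:

```python
def is_smooth(n, bound):
    """True if every prime factor of n is <= bound (1 counts as smooth)."""
    d = 2
    while d * d <= n:
        while n % d == 0:
            n //= d
        d += 1
    return n <= bound

def tf_candidates(p, k_lo, k_hi, b1):
    """Candidate factors q = 2*k*p + 1 of M_p, for k in [k_lo, k_hi), that
    pass the q = +/-1 (mod 8) test and are NOT already covered by a
    completed P-1 run with bound b1 (i.e. k is not b1-smooth)."""
    for k in range(k_lo, k_hi):
        q = 2 * k * p + 1
        if q % 8 not in (1, 7):   # factors of Mersenne numbers are +/-1 mod 8
            continue
        if is_smooth(k, b1):      # P-1 would already have reported this q
            continue
        yield q
```

For example, with p = 11, both actual factors of M_11 = 2047 = 23 × 89 arise from smooth k (k = 1 and k = 4), so the sieve skips them, which is the intended behavior: a completed P-1 run would already have found them.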
#2
Bemusing Prompter
"Danny"
Dec 2002
California
5·479 Posts
Part 44 of this post may be of interest to you: https://mersenneforum.org/showpost.p...23&postcount=6
#3
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2×3²×7×43 Posts
Quote:
#4
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2·3²·7·43 Posts
Quote:

The partial overlap of TF and P-1 provides some partial double-check between factoring methods. It has little percentage impact on overall GIMPS throughput.

Last fiddled with by kriesel on 2019-08-01 at 17:39
#5
Mar 2004
21C₁₆ Posts
Wow, I had a feeling the numbers would be in that ballpark. The 25% I expected, but I didn't expect the 0.19% to be so big, and I didn't expect the transition to be such an enormous amount of work.

What about something as simple as this: say a computer (likely a GPU) is doing TF from scratch, i.e. testing candidate factors 2p+1, 4p+1, .... What if it first runs an extremely fast P-1 test with B1=97 before trial factoring? That should take at most a couple of seconds of computation. The computer could then eliminate all k values that are smooth for B1=97. This B1 value need not be reported, nor its residue stored. Does that speed anything up?

I chose B1=97 somewhat arbitrarily. Some math could be done to find an optimal initial B1 value, but I suspect that B1 would be exceedingly small, maybe closer to 1000.

Last fiddled with by JuanTutors on 2019-08-01 at 18:16
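For scale, the "extremely fast P-1 test" is essentially a single modular exponentiation. Here is a toy stage-1 sketch (my own, assuming base 3, naive trial-division primality testing, and the Mersenne exponent folded in through the hypothetical `extra` parameter; real clients like Prime95 do this far more efficiently):

```python
from math import gcd

def pminus1_stage1(N, B1, extra=1):
    """Toy P-1 stage 1: returns a nontrivial factor of N, or None."""
    a = 3
    for q in range(2, B1 + 1):
        if any(q % d == 0 for d in range(2, int(q**0.5) + 1)):
            continue                 # q is composite, skip it
        qe = q
        while qe * q <= B1:          # largest power of q not exceeding B1
            qe *= q
        a = pow(a, qe, N)
    a = pow(a, extra, N)             # fold in the Mersenne exponent p
    g = gcd(a - 1, N)
    return g if 1 < g < N else None
```

For M_11 = 2047 = 23 × 89, the factor 23 has 23 − 1 = 2 · 11, so even B1 = 2 finds it once p = 11 is included: `pminus1_stage1(2047, 2, extra=11)` returns 23.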
#6
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
12452₈ Posts
Read the rest of the trial factoring concepts, not just #44, and the related material. A very small B1 is a waste of time.

Or, go ahead and code what you propose, test it, and announce it.

Last fiddled with by kriesel on 2019-08-01 at 18:53
#7
Mar 2004
540₁₀ Posts
I just read a paper on it. If one were to sieve out 128-smooth numbers from k=1 to k=2^56, one would save about 1 second per year on average, not counting the sieving step, which makes the net savings negative. Only about 1% of numbers between 1 and 2^25 are 97-smooth, and about 1.6% are 128-smooth. I don't think the time spent presieving and the time saved not factoring ever break even.
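Those smoothness fractions are easy to spot-check at a smaller limit with a greatest-prime-factor sieve (a rough sketch of mine; 2^16 is used instead of 2^25 just to keep the run short, which pushes the fractions somewhat higher than the quoted 1% and 1.6%):

```python
def smooth_fraction(limit, bound):
    """Fraction of 1..limit whose prime factors are all <= bound."""
    gpf = list(range(limit + 1))      # gpf[n] ends up as n's greatest prime factor
    for p in range(2, limit + 1):
        if gpf[p] == p:               # p is prime (no smaller prime touched it)
            for m in range(p, limit + 1, p):
                gpf[m] = p            # ascending primes overwrite smaller ones
    return sum(1 for n in range(1, limit + 1) if gpf[n] <= bound) / limit

print(smooth_fraction(1 << 16, 97), smooth_fraction(1 << 16, 128))
```

As a sanity check, 34 of the first 100 positive integers are 5-smooth, so `smooth_fraction(100, 5)` returns 0.34.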
Last fiddled with by JuanTutors on 2019-08-01 at 22:29 |
Similar Threads

Thread | Thread Starter | Forum | Replies | Last Post
What Bounds to choose, and what are Bounds | 144 | Information & Answers | 5 | 2017-03-15 13:36
Question on P-1 bounds | NBtarheel_33 | Math | 1 | 2016-05-09 13:10
Extending P-1 Bounds | TObject | Software | 4 | 2012-10-10 17:42
ECM bounds | Unregistered | Information & Answers | 4 | 2011-07-17 20:18
Optimal ECM bounds | henryzz | GMP-ECM | 14 | 2011-06-09 17:04