mersenneforum.org  

Old 2002-09-10, 21:08   #1
svempasnake
 
Aug 2002

3·7 Posts
Default Does the LL test's factorization save or waste CPU time?

Hi, when I run an LL test on a ~16M exponent, the times needed on my home PC are roughly these:
Factoring up to 66 bits: a bit more than ½ day (maybe something like 14-16 hours).
LL testing: about 10 days (or maybe 11.5, or thereabouts).
And it says the chance of finding a factor is 5%.
Now, assuming that 5% is a good estimate, the factoring seems to me like well-spent time: it would save about 5% × ~22 days (the double check included) ≈ roughly 1 day, i.e. more than the time spent.
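A quick sanity check of that arithmetic (a rough sketch using the estimates above, not measured values; the variable names are mine):

[code:1]
# Rough cost/benefit check of pre-LL factoring, using the poster's estimates.
factoring_days = 16 / 24      # ~14-16 hours spent factoring
ll_days        = 11           # one LL test, roughly 10-11.5 days
p_factor       = 0.05         # reported chance of finding a factor
tests_saved    = 2            # a factor saves the first-time test AND the double check

expected_saving = p_factor * tests_saved * ll_days   # ~1.1 days
print(expected_saving > factoring_days)              # True: factoring pays off on average

# Break-even: factoring is worthwhile whenever
#   p_factor > factoring_days / (tests_saved * ll_days)  ~ 3%
[/code:1]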

However, I wonder if the stated ~5% estimate is much too optimistic, for two main reasons:
1. Theoretically: If the exponent I'm working with has already been factored to 64 bits, and my factoring would only find a factor of 65 or 66 bits, the chance would be way below 5%, since the pre-factoring work has already drained away about 64/66 of that chance (and I have only something like 2/66 * 5% = a very small chance left?).

2. Practically: I have been browsing through the cleared-exponents file. I didn't do this too systematically, but to my eye there seem to be remarkably few recent successful factorings, especially in the 65-66 bit range. Maybe 1 out of 100-200 or something like that; I'd need a closer count for a more accurate figure.
I also browsed the recently finished 33M exponents and noticed that these particular exponents seem to break the pattern. There seems to be a good proportion (like 10-20 or so out of 100) of recent successful factorizations in connection with the LL tests up there. I sense that the relatively successful factoring up there is merely "thanks" to a lack of pre-factoring?

3. Just a curiosity (?): After my 20+ LL tests, no factor has been found yet. :( (I know, this doesn't tell us much at all! But I'm usually lucky in lotteries. 8) )

Can you please tell me if my speculations are incorrect? Actually I hope they are, but in that case, where did I go wrong?

On the other hand, if it were true that "5% is much too optimistic", then I think overall throughput could gain some juicy percentage points from drastically changing Prime95's factoring criteria. Maybe simply by having the currently hidden feature "SkipTrialFactoring" activated by default.
Old 2002-09-11, 00:26   #2
binarydigits
 
Aug 2002

2²×13 Posts
Default

I am almost certain that you are talking about P-1 factoring: 16M exponents are only trial factored up to 65 bits. The status will show 66 after the P-1 completes, but the P-1 could find a factor of 80 bits or more (if it finds one at all). IIRC, the server actually stores 65.5 to flag the exponent as P-1 completed, but that displays as 66 on the reports.
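For anyone else fuzzy on what P-1 actually does, here is a minimal, illustrative Python sketch of stage 1 on M_p = 2^p - 1 (toy bounds, nothing like Prime95's optimized code). The point is that P-1 is limited by the smoothness of q - 1 rather than by the size of the factor q, which is why it can turn up an 80-bit factor that trial factoring would never reach:

[code:1]
from math import gcd

def small_primes(limit):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def p_minus_1_stage1(p, B1):
    """Toy P-1 stage 1 on M_p = 2^p - 1 with bound B1 (illustration only)."""
    M = (1 << p) - 1
    # Every factor q of M_p has the form q = 2*k*p + 1, so 2*p always divides
    # q - 1; fold that into the exponent up front.
    x = pow(3, 2 * p, M)
    for q in small_primes(B1):
        e = q
        while e * q <= B1:        # raise each prime to its largest power <= B1
            e *= q
        x = pow(x, e, M)
    g = gcd(x - 1, M)
    return g if 1 < g < M else None   # g == M would mean every factor was smooth

# M_67 has the factor 193707721, and 193707720 = 2^3 * 3^3 * 5 * 67 * 2677 is
# 2700-smooth, so even a tiny bound should catch it:
print(p_minus_1_stage1(67, 2700))     # expected: 193707721
[/code:1]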

...But I have been wondering the same thing. I have P4s that take a day and a half to do the P-1 test, then just 9 days for the LL test. Perhaps that's because I give them so much memory to use. I have slower machines that spend only a few hours on P-1, but with much less memory allocated.
Old 2002-09-11, 00:52   #3
Prime95
P90 years forever!
 
 
Aug 2002
Yeehaw, FL

17·487 Posts
Default

Barring a bug in the code, 5% is accurate. The estimation code does take into account how much trial factoring was done. Just for fun, you can change the 64 in "Test=exponent,64,0" and see how the bounds and percentages change.
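For instance (illustrative only; if I'm remembering the old worktodo.ini format right, the middle field is the number of bits the exponent has already been trial factored to, and the last field flags whether P-1 has been done):

[code:1]
Test=exponent,64,0
Test=exponent,66,0
Test=exponent,66,1
[/code:1]

Changing the middle field changes the bounds and percentages prime95 computes, and setting the last field to 1 tells it that P-1 has already been done.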

I've been lucky in factoring (I'll trade you for your lottery luck). I don't have firm stats, but I think I find slightly more than my fair share.

TPR recently did P-1 on a big range. I wonder if they kept stats on how many factors were found.
Old 2002-09-11, 00:53   #4
Prime95
P90 years forever!
 
 
Aug 2002
Yeehaw, FL

17×487 Posts
Default

Quote:
Originally Posted by binarydigits
I have P4s that take a day and a half to do the P-1 test, then just 9 days for the LL test. Perhaps that's because I give it so much memory to use. I have slower machines that only spend a few hours on P-1s but with much less memory allocated.
Make sure you haven't given prime95 so much memory that it is thrashing. Nothing could be worse than swapping these temporaries out to disk!
Old 2002-09-11, 00:58   #5
garo
 
 
Aug 2002
Termonfeckin, IE

2⁴·173 Posts
Default TPR P-1 effort

We did P-1 on 1373 exponents and found 55 factors, for a success rate of 4.006%. The vast majority of the exponents were between 8.2M and 8.25M (trial factored to 63 bits), with some over 8.25M that required factoring to 64 bits.
Old 2002-09-11, 01:21   #6
Xyzzy
 
 
Aug 2002

2·3²·13·37 Posts
Default

I just recently P-1 factored around 100 exponents and got close to 13 factors... But then I did another run of 100 and got 2... Most of the factors were found in P-1 stage 1, not stage 2... For this test I let Prime95 use 920MB of memory, and it actually used 800MB... I now have 16 "DF" factors found... :)

http://www.teamprimerib.com/rr1/usCL-Xyzzy.htm
Old 2002-09-11, 03:10   #7
roy1942
 
Aug 2002

47 Posts
Default Re: Does the LL test's factorization save or waste CPU time?

Quote:
Originally Posted by svempasnake
1. Theoretically: If the exponent I'm working with already been factored to 64 bits, and my factoring only would find a factor of 65 or 66 bits, the chance would be way below 5%, since the pre factoring work already drained out about 64/66 of that chance (and I have only like 2/66 * 5% = very small chance left? )
I agree with the earlier replies that you are probably talking about the P-1 factoring. But your paragraph about the trial factoring, quoted above, counts the work wrong. Each bit range has twice as many numbers as the previous range, and about the same as all the previous ranges added together. Thus if you do 65 and 66 bits, the previous pre-factoring work (1-64 bits) has only 'drained out' 1/4 of the chance, not 64/66.

(Actually, this isn't completely accurate either, since the density of prime numbers decreases with size)
Old 2002-09-11, 03:15   #8
Prime95
P90 years forever!
 
 
Aug 2002
Yeehaw, FL

17×487 Posts
Default Re: Does the LL test's factorization save or waste CPU time?

Quote:
Originally Posted by roy1942
Thus if you do 65 and 66 bits, the previous prefactoring work (1-64 bits) has only 'drained out' 1/4 of the chance, not 64/66.
Actually, the original post was right on this point. Even though there are twice as many trial factors when going to the next bit level, the chance that they divide the Mersenne number decreases. Thus, it really is the case that factoring from 2^64 to 2^65 only has 1 chance in 65 of finding a factor.
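To put numbers on that heuristic (a small illustrative sketch, not Prime95's actual estimation code): if the chance of a factor between 2^n and 2^(n+1) is about 1/(n+1), then going from 64 to 66 bits gives

[code:1]
def chance_of_factor(from_bits, to_bits):
    """Chance that trial factoring from 2^from_bits up to 2^to_bits finds a factor,
    assuming none was found below 2^from_bits and a 1/(n+1) chance per bit level."""
    p_no_factor = 1.0
    for n in range(from_bits, to_bits):
        p_no_factor *= 1 - 1 / (n + 1)
    return 1 - p_no_factor

print(chance_of_factor(64, 66))   # (64/65)*(65/66) telescopes: 1 - 64/66 ~ 3.0%
[/code:1]

which telescopes to exactly the 2/66 figure from the original post.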
Old 2002-09-11, 15:33   #9
svempasnake
 
Aug 2002

21₁₀ Posts
Default

Quote:
Originally Posted by binarydigits
status will show 66 after completing the P-1, but it could find a factor of 80 bits or more
Thanks for all the replies and comments! So in summary, where I went wrong was in reading the 66 too literally as "trial factored to 66 bits". Now I have a better idea of what P-1 actually does. Thanks again! /S-O
Old 2002-09-12, 06:25   #10
dswanson
 
 
Aug 2002

310₈ Posts
Default Re: Does the LL test's factorization save or waste CPU time?

Quote:
Originally Posted by Prime95
Quote:
Originally Posted by roy1942
Thus if you do 65 and 66 bits, the previous prefactoring work (1-64 bits) has only 'drained out' 1/4 of the chance, not 64/66.
Actually, the original post was right on this point. Even though there are twice as many trial factors when going to the next bit level, the chance that they divide the Mersenne number decreases. Thus, it really is the case that factoring from 2^64 to 2^65 only has 1 chance in 65 of finding a factor.
A couple of years ago I trial-factored a large range of numbers to a depth of 2^57. This post prompted me to do a statistical analysis of the results files to see how well George's formula matches up with empirical data. Here are my results:

[code:1]
                  |      Factoring range (2^n to 2^(n+1))
Mersenne number   |   52-53   53-54   54-55   55-56   56-57
==================|========================================
                  | Candidates at start of factoring range
15470000-16000000 |   15671   15372   15075   14807   14559
17000000-18000000 |   21597   22300   24091   24197   27200
18000000-19000000 |       0       0       0       0   27276
19000000-20000000 |       0       0       0       0   26921
------------------|----------------------------------------
Total             |   37268   37672   39166   39004   95956
------------------|----------------------------------------
                  |
==================|========================================
                  | Candidates factored within factoring range
15470000-16000000 |     299     297     268     248     281
17000000-18000000 |     388     403     417     439     495
18000000-19000000 |       0       0       0       0     480
19000000-20000000 |       0       0       0       0     428
------------------|----------------------------------------
Total             |     687     700     685     687    1684
------------------|----------------------------------------
                  |
==================|========================================
                  | Percent of candidates factored within range
15470000-16000000 |   1.91%   1.93%   1.78%   1.67%   1.93%
17000000-18000000 |   1.80%   1.81%   1.73%   1.81%   1.82%
18000000-19000000 |                                   1.76%
19000000-20000000 |                                   1.59%
------------------|----------------------------------------
Total             |   1.84%   1.86%   1.75%   1.76%   1.75%
------------------|----------------------------------------
Predicted*        |   1.89%   1.85%   1.82%   1.79%   1.75%
------------------|----------------------------------------
*per George's 1/(n+1) formula
[/code:1]
(The increasing number of candidates in the 17M-18M range is because the numbers in this range had already been factored to varying depths below 2^57 before I started. All other ranges started with the numbers factored to the same depth: 2^52 for the 15.47M-16M range, and 2^56 for the 18M-20M range.)

For this data set, George's formula looks pretty darn good. :D But there are some trends in the results that make me suspicious of generalizing the conclusion. In particular, the percentage of numbers factored within a range decreases as the M-number increases. But George's formula predicts that this percentage should be a constant. I suspect that if I had a similar data set in the vicinity of say 30-33M that George's formula would look much worse, at least in the 2^52 to 2^57 range.

If I were really ambitious I'm sure I could piece together a much more thorough analysis by combining the data in the factors and nofactors files. Are there any enterprising souls out there who'd like to take on this challenge...?
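In case anyone wants a starting point, here is a rough sketch of that kind of tally. It assumes a plain-text export with one "exponent,factor" pair per line; the real factors/nofactors files may well be formatted differently, so the parsing would need adjusting:

[code:1]
from collections import Counter

def tally_factors(path, exp_lo, exp_hi):
    """Count found factors per trial-factoring bit level for one exponent range."""
    counts = Counter()
    with open(path) as fh:
        for line in fh:
            exponent, factor = (int(x) for x in line.strip().split(","))
            if exp_lo <= exponent < exp_hi:
                counts[factor.bit_length()] += 1   # 2^(n-1) <= factor < 2^n -> level n
    return counts

# Compare with the 1/(n+1) heuristic: crossing from 2^n to 2^(n+1) should
# factor roughly 1/(n+1) of the candidates still standing.
for n in range(52, 57):
    print(f"{n}-{n + 1}: predicted {1 / (n + 1):.2%}")
[/code:1]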
Old 2002-09-13, 01:04   #11
binarydigits
 
Aug 2002

2²×13 Posts
Default

Before taking on any statistical analysis of the factor data, you should be aware that the number of bits recorded in the found-factors database is not always correct. I complained about this a couple of years ago, but it did no good. And I just came across another obvious example: a recently completed P-1 test reported a "65-bit" factor of a 16M exponent. I thought that was strange, as the exponent had already been TF'd to 66 bits. The factor is 40498974660725214481, which is more than 9% higher than 2^65. It appears the database is "rounding" the results, but that is the equivalent of saying that the decimal number 109 is a 2-digit number because it is only 9% higher than 10^2.

At any rate, anyone counting the number of found factors between 2^65 and 2^66 from the reported bit counts would get this one wrong. I realize this one was found by P-1 and not by TF, but I am pretty sure the database does the same thing with factors found by TF, because back when I complained about it we were not doing P-1 tests.
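For what it's worth, classifying a factor from its own value rather than from the reported bit count avoids the problem entirely; a tiny sketch using the factor above:

[code:1]
def tf_bit_level(factor):
    """Bit level at which trial factoring would find this factor:
    the n with 2^(n-1) <= factor < 2^n."""
    return factor.bit_length()

f = 40498974660725214481
print(tf_bit_level(f))            # 66: the factor lies between 2^65 and 2^66
print(f > 2 ** 65, f < 2 ** 66)   # True True
[/code:1]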