mersenneforum.org > Factoring Projects > Lone Mersenne Hunters
2003-05-28, 06:24   #12
Boulder

Quote:
Originally Posted by trif
As an example, markr is currently engaged in taking the 33.6M range to 64 bits. This is the next range destined to go to Primenet for LL testing in the 10M digit range, but these go so slowly that there might well be enough time to factor them completely to 67 bits before they go to Primenet, although it would need more than one P4. Other P4 owners with the factoring bug might be interested.
This might be interesting. I could help with a small range and trial-factor to 67 bits. I just have the factoring bug. :D I have nothing against using PrimeNet; it's just that I'd like to participate in removing the numbers before they even reach PrimeNet.

In the meantime, I guess I'll start factoring those numbers available on PrimeNet. That means, garo, that you can remove the range from the list. I'll check markr's progress from time to time and see if he wants any help beyond the 64-bit mark. I'll also send the results I've got to George; there were a couple of factors found.

Thanks to everyone for answering :D
2003-05-28, 13:35   #13
markr

The whole 33.6M range is now done to 2^62, although this is not in nofactor yet. You would be welcome to any of them, either starting from 2^62 or once some reach 2^64. (I have started factoring to 2^64, with one machine working from each end of the range.) I certainly wouldn't miss a few: it will take me over 3 months to do them all to 2^64! Just let me know.

Having different machines doing the work they are best suited to really appeals to me. When I learned that AMDs are great for factoring to 2^62, while P4s are better at LL or at factoring above 2^64, I switched the two home AMDs from LL to LMH. The P4s I have access to do LL.
2003-05-28, 18:30   #14
garo

markr and Boulder,
I'd be happy to help you guys coordinate this. What markr could do is send Boulder a list of, say, 10 exponents once they are done to 64 bits, or maybe even post them here so Boulder can start on them. By the time he is done with those 10, markr will probably have another 50 to send. I coordinated an effort like this within Team_Prime_Rib, so if you need any help just shout. And markr, it may be a good idea to send George all the 2^62 results anyway so that they show up in the nofactors this Sunday.
2003-05-30, 00:15   #15
markr

The first few have been done to 2^64. Here are the worktodo lines I made for them, if Boulder or anyone else wants to take them further. I think the recommended limit for exponents of this size is 2^68, but this is LMH, so it's up to you. Any you take should probably be posted in this forum so there's no doubling-up of effort. Perhaps a new topic?

Factor=33600023,64
Factor=33600103,64
Factor=33600139,64
Factor=33600179,64
Factor=33600271,64
Factor=33600319,64
Factor=33600337,64
Factor=33600421,64
Factor=33600439,64
Factor=33600451,64
Factor=33600547,64
Factor=33600571,64
Factor=33600643,64
Factor=33600727,64
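
Lines like these are easy to generate mechanically. The sketch below (my own helper for illustration, not a GIMPS tool; `worktodo_lines` is a hypothetical name) emits a `Factor=exponent,bits` line for every prime exponent in a range, in the same format as the list above:

```python
def is_prime(n):
    """Deterministic trial division; fast enough for exponents of this size."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def worktodo_lines(lo, hi, bits):
    """'Factor=' worktodo lines for all prime exponents in [lo, hi]."""
    return [f"Factor={p},{bits}" for p in range(lo, hi + 1) if is_prime(p)]
```

Only prime exponents matter here, since 2^p-1 can only be prime when p is prime; composite exponents are never in the worktodo list.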

I sent the results for these this morning via the manual pages. (Thanks, cheesehead, for wising us up about this! :D ) Results for the whole 33.6M range to 2^62 were emailed to George on Wednesday.
2003-05-30, 07:59   #16
Boulder

I'm able to begin the trial-factoring on Monday; I'm off my computer for the rest of the week. I should probably take a few numbers first and see how long it takes to TF them to 2^68. My computer isn't on 24/7, and I sometimes have to use the CPU time for other purposes too. I try to keep Prime95 running for about 8 hours a day, so it might take a week to complete each exponent.
2003-06-01, 17:11   #17
Boulder

OK, I started trial-factoring the first five exponents. I'll TF to 2^68 as suggested.
2003-06-04, 14:22   #18
Mr. P-1

Quote:
Originally Posted by Boulder
Quote:
Originally Posted by trif
As an example, markr is currently engaged in taking the 33.6M range to 64 bits. This is the next range destined to go to Primenet for LL testing in the 10M digit range, but these go so slowly that there might well be enough time to factor them completely to 67 bits before they go to Primenet, although it would need more than one P4. Other P4 owners with the factoring bug might be interested.
This might be interesting. I could help with a small range and trial-factor to 67 bits. I just have the factoring bug. :D I have nothing against using PrimeNet; it's just that I'd like to participate in removing the numbers before they even reach PrimeNet.
May I speak against this plan? I suggest instead that you go with your original idea of doing P-1 factoring, provided you can allow it enough memory. (I suggest 256MB minimum; 1GB is cool; 3GB+ is smoking! Believe me, the program will use it!) There are a couple of reasons for this:

Firstly, while P4s are OK at trial factoring above 64 bits, they're GREAT for P-1 testing. Secondly, the automatic P-1 attached to LL assignments is one of the less well optimised parts of the programme. Quite simply, it's done at the wrong time, and often by the wrong machines (i.e. those with insufficient memory).

Last time I checked, about one third of all P-1s were missing stage 2 entirely, and many others were done to minimal limits, because of low memory allowances. Also, there is an extension to the algorithm which finds even more factors (given identical B1 and B2), which only kicks in if you have lots of memory. So you could be finding many factors which would otherwise never be found. Another consideration is that the stage 2 algorithm runs faster when given a lot of memory, so an hour of CPU time spent P-1 testing on your P4 could save two hours[1] on an otherwise identical machine with less memory which ultimately gets the LL assignment.

The optimal point at which to do a P-1 test is not at the end of trial factorisation, but before the last bit or two.

Another reason for doing P-1s is that you can find *much* larger factors than with TF. My largest finds have been 98 and 102 bits. Up to 80 bits is common. All factors are equally valuable to GIMPS, but I personally like finding the large ones.

My suggestion for the 33.6M range would be for someone else to take them to 65 bits, then for you to do the P-1. Then someone else could finish off the last two bits, if desired.

I've been doing P-1s exclusively for some time now on a humble Duron 800 with 512MB. I'm looking at building a new P4-based system within the next few weeks, with as much memory as I can fit on the board, and would be very happy to join in this effort.

[1] That's a guesstimate. I haven't done any tests.

Regards

Daran
2003-06-04, 17:11   #19
Mr. P-1

Quote:
Originally Posted by cheesehead
You don't have to go through Primenet to do P-1 factoring.

Just manually add the

[code:1]Pminus1=eeeeeeee,B1,B2,0,0[/code:1]
command to your worktodo.ini file
Better perhaps to use

[code:1]Pfactor=eeeeeeee,bits,has_been_LLed_once[/code:1]

where bits is the number of bits to which the exponent will be factored (and not the number of bits to which it has already been factored), and has_been_LLed_once is either 0 or 1.

With this worktype, the program will choose B1 and B2 optimally.
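
For concreteness, the two work types would look like this in worktodo.ini. The exponent is one from markr's list earlier in the thread; the Pminus1 bounds are purely illustrative placeholders, not recommended values:

```
Pminus1=33600023,500000,5000000,0,0
Pfactor=33600023,64,0
```

In the Pfactor form, 64 is the trial-factoring bit level and the trailing 0 means the exponent has not yet been LL-tested; the program then chooses B1 and B2 itself.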

[...]

Quote:
However, absence of an exponent in the PMINUS1.TXT file does not necessarily mean it needs P-1 factoring! It might already have been factored. Whenever a Mnumber is factored by any means, it is removed from the PMINUS1.TXT file (and nofactor.cmp). So be sure to check the factors.cmp file to see whether the Mnumber you're considering has already been factored!!
Actually, there are very few exponents above the leading edge for LL tests which have had a P-1, so in practice it would be easier to obtain a list of exponents from nofactor.cmp, then exclude any which you find in PMINUS1.TXT.

Quote:
Also, it's best to avoid Mnumbers that someone else has just recently L-Led (i.e., LL-completed Mnumbers not far behind the ranges currently being assigned by Primenet). If your P-1 run factors a Mnumber, then whoever has L-Led it will lose GIMPS (but not Primenet) credit for that L-L work.
This could be a way for an aggressive team to attack another's producer credit.

Regards

Daran
2003-06-04, 19:47   #20
smh

Quote:
The optimal point at which to do a P-1 test is not at the end of trial factrisation, but before the last bit or two.
I seem to remember that this was implemented: P-1 before the last bit of trial factoring. I don't know whether this was for first-time tests or double checks.
2003-06-04, 20:22   #21
cheesehead

Quote:
Originally Posted by Mr. P-1
[code:1]Pfactor=eeeeeeee,bits,has_been_LLed_once[/code:1]

where bits is the number of bits to which the exponent will be factored (and not the number of bits to which it has been factored)
No, I'm afraid that is a misinterpretation of the meaning of the "bits" parameter.

Undoc.txt says
Quote:
You can do P-1 factoring by adding lines to worktodo.ini:
Pfactor=exponent,how_far_factored,has_been_LL_tested_once
which I read as meaning "how far it has already been trial-factored".

To check my interpretation, I examined the source code in Prime95 module commonc.c, which parses both the "Pfactor" and "Pminus1" lines from the worktodo.ini file. According to my examination, the "bits" field of "Pfactor" lines means the same thing as the "bits" field in "Test" and "DoubleCheck" lines, namely the number of bits to which the number has already been trial-factored.

Besides, since P-1 factoring progress is not measured in bits, it wouldn't make sense for the command for a P-1 run to specify some future bit level of trial factoring which has not already occurred.

It does make sense for the "Pfactor" line to specify the number of bits to which trial factoring has already been performed, because that information affects the choice of optimal (relative to L-L tradeoff) B1 and B2 limits.
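
The reading above can be made concrete with a toy parser (my own sketch for illustration, not Prime95's actual commonc.c parser; `parse_worktodo` is a hypothetical name). The point it encodes: in "Test", "DoubleCheck", and "Pfactor" lines alike, the field after the exponent is the bit level to which trial factoring has already been done:

```python
def parse_worktodo(line):
    """Split a worktodo.ini line into work type and numeric fields.

    For Test, DoubleCheck, Pfactor, and Factor lines, the second field
    is interpreted the same way: the trial-factoring bit level already
    reached for that exponent.
    """
    kind, _, rest = line.partition("=")
    fields = [int(x) for x in rest.split(",")]
    if kind in ("Test", "DoubleCheck", "Pfactor", "Factor"):
        return {"type": kind, "exponent": fields[0], "tf_bits_done": fields[1]}
    raise ValueError("unrecognized work type: " + kind)
```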
2003-06-05, 02:05   #22
Mr. P-1

Quote:
Originally Posted by cheesehead
Quote:
Originally Posted by Mr. P-1
[code:1]Pfactor=eeeeeeee,bits,has_been_LLed_once[/code:1]

where bits is the number of bits to which the exponent will be factored (and not the number of bits to which it has been factored)
No, I'm afraid that is a misinterpretation of the meaning of the "bits" parameter.

Undoc.txt says
Quote:
You can do P-1 factoring by adding lines to worktodo.ini:
Pfactor=exponent,how_far_factored,has_been_LL_tested_once
which I read as meaning "how far it has already been trial-factored".
That's what it says. The tacit assumption, however, is that P-1 is done after TF is complete, so the two values are the same. The relevant question is not 'what does the parameter mean?' but 'what is the optimal value to give it?'

Quote:
To check my interpretation, I examined the source code in Prime95 module commonc.c, which parses both the "Pfactor" and "Pminus1" lines from the worktodo.ini file. According to my examination, the "bits" field of "Pfactor" lines means the same thing as the "bits" field in "Test" and "DoubleCheck" lines, namely the number of bits to which the number has already been trial-factored.
Ditto.

Quote:
Besides, since P-1 factoring progress is not measured in bits, it wouldn't make sense for the command for a P-1 run to specify some future bit level of trial factoring which has not already occurred.

It does make sense for the "Pfactor" line to specify the number of bits to which trial factoring has already been performed, because that information affects the choice of optimal (relative to L-L tradeoff) B1 and B2 limits.
I'm very familiar with the relevant section of the code.

Function guess_pminus1_bounds estimates the probability that the P-1 calculation will find a factor longer than the 'bits' parameter, and assumes that, if such a factor is found, 2.03 or 1.03 LL tests will be saved, according to the value of the last parameter.

But this is only true if the factor is longer than the maximum depth to which the exponent will be factored. If a factor smaller than this is found, then all that is saved is the extra time taken to TF to that factor. That's not insignificant - it's the benefit that this optimisation is intended to gain - but it's not as much as an LL.

If the code were perfectly optimised, then the TF code would know whether P-1 had been done, and would reduce the TF limits accordingly, while the P-1 code would be aware that more TF was needed, how much more would be done (given this reduction) and would optimise accordingly. Given that TF is done to preset limits, a locally optimised P-1 would go to very slightly deeper limits when performed earlier, but the difference is probably within the margins of accuracy of the current estimate.

Here's another way of looking at it: if you hold the P-1 limits fixed, then no matter what order you do the P-1 and TF in, the probability of not finding a factor is the same, and therefore the probability of finding one is the same. The total amount of work done will also be the same if you don't find a factor. All that changing the order affects is the total amount of time spent factoring in the case where you do find one, and that difference is too small to significantly increase the optimal P-1 depth.

There is another consideration which militates in the direction of doing shallower P-1s. The guess_pminus1_bounds code assumes that the machine doing the P-1 is the same machine that will go on to do the LL, and aims to maximise the expected clearance rate (by factor or LL) for that machine. The trade-off is between doing more P-1 and less LL per unit time, and doing less P-1 and more LL. However, a machine which is dedicated to P-1 aims to replace the average P-1 effort which would be made on many different machines with a deeper and more efficient one. The trade-off is between doing more P-1 on fewer exponents, and doing less P-1 on more. This is a quite different optimisation problem.

Suppose you double the time spent P-1 factoring each exponent. The law of diminishing returns means that you would less than double the probability of finding a factor, but as your throughput would be halved, your overall exponent clearance rate (i.e. factors found per unit time) would fall. Similarly, if you halve the time spent on each exponent, you will have more than half the probability of finding a factor per exponent, and will double your throughput, so your clearance rate would increase. Against that is the fact that twice as many other machines would not be doing P-1 searches themselves and finding factors, so their clearance rate would fall.

The optimal depth is the one that maximises n*(p-pav), where n is the number of exponents you can test per unit time, p is your probability of finding a factor, and pav is the average probability for GIMPS clients in general (bearing in mind that some don't do stage 2 due to lack of memory, and some older ones don't do P-1 at all).

The upshot of all this is that a dedicated P-1 engine should optimally do shallower tests on more exponents. Quantifying this is a project that I've had on my todo list for some time, but I have never gotten around to it. In the meantime, I'm reluctant to recommend setting the 'bits' parameter to more than the 'will do to' level, but it certainly shouldn't be set lower.
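
The n*(p-pav) trade-off above can be sketched numerically. This is a toy model of my own with made-up constants (it is not Prime95's guess_pminus1_bounds, and `surplus_rate` and `best_time_per_exponent` are hypothetical names): success probability is modelled as p(t) = c*ln(1 + t/t0), showing diminishing returns in the time t spent per exponent, and throughput as n(t) = 1/t.

```python
import math

def surplus_rate(t, c=0.02, t0=1.0, p_av=0.03):
    """Extra factors found per unit time relative to the average client:
    n(t) * (p(t) - p_av) with n(t) = 1/t and p(t) = c*ln(1 + t/t0)."""
    return (c * math.log(1.0 + t / t0) - p_av) / t

def best_time_per_exponent(c=0.02, t0=1.0, p_av=0.03):
    """Grid-search the per-exponent effort that maximises the surplus."""
    grid = [0.1 * k for k in range(1, 1000)]
    return max(grid, key=lambda t: surplus_rate(t, c, t0, p_av))
```

With these arbitrary constants, the optimum is a finite and fairly modest effort per exponent: spending much longer on each one raises p too slowly to compensate for the lost throughput, which is the "shallower tests on more exponents" conclusion in words.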

Regards

Daran
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2023, Jelsoft Enterprises Ltd.

This forum has received and complied with 0 (zero) government requests for information.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ.
