#12
May 2003
3·13 Posts
Quote:
In the meantime, I guess I'll start factoring those numbers available on PrimeNet. Which means, garo, that you can remove the range from the list. I'll check markr's progress from time to time and see if he wants any help beyond the 64-bit marker. I'll also send the results I've got to George; there were a couple of factors found. Thanks to everyone for answering :D
#13
"Mark"
Feb 2003
Sydney
3×191 Posts
All of the 33.6M range is now done to 2^62, although this is not in nofactor yet. You would be welcome to any of them, either starting from 2^62, or when some reach 2^64. (I have started factoring to 2^64, starting one machine from each end of the range.) I certainly wouldn't miss some - it will take me over 3 months to do them all to 2^64! Just let me know.
Having different machines do the work they are best suited to really appeals to me. When I learned that AMDs are great for factoring to 2^62, and P4s are better at LL or at factoring above 2^64, I changed the two home AMDs from LL to LMH. The P4s I have access to do LL.
#14
Aug 2002
Termonfeckin, IE
2⁴·173 Posts
markr and Boulder,
I'd be happy to help you guys coordinate this. What markr could do is send Boulder a list of, say, 10 exponents once they are done to 64 - or maybe even post them here - and then Boulder can start on them. By the time he is done with those 10, markr will probably have another 50 to send. I coordinated an effort like this within Team_Prime_Rib, so if you need any help just shout. And markr, it may be a good idea to send George all the 2^62 results anyway so that they show up in the nofactor file this Sunday.
#15
"Mark"
Feb 2003
Sydney
23D₁₆ Posts
The first few have been done to 2^64. Here are the worktodo lines I made for them, if Boulder or anyone else wants to take them further. I think the recommended limit for exponents of this size is 2^68, but this is LMH so it's up to you. Any that are taken should probably be posted to this forum so there's no doubling-up of effort. Perhaps a new topic?
[code:1]Factor=33600023,64
Factor=33600103,64
Factor=33600139,64
Factor=33600179,64
Factor=33600271,64
Factor=33600319,64
Factor=33600337,64
Factor=33600421,64
Factor=33600439,64
Factor=33600451,64
Factor=33600547,64
Factor=33600571,64
Factor=33600643,64
Factor=33600727,64[/code:1]
I sent the results for these this morning via the manual pages. (Thanks cheesehead for wising us up about this! :D ) Results for the whole 33.6M range to 2^62 were emailed to George on Wednesday.
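For anyone who wants to script this kind of hand-off, here is a minimal sketch that turns a batch of exponents into Factor= lines. The five-exponent batch and the helper name are made up for illustration; the 64-bit level is simply the mark the exponents above have reached.
[code:1]# Minimal sketch: emit Prime95 worktodo "Factor=" lines for a batch of
# exponents that have already been trial-factored to a given bit level.
# The exponents below are the first few quoted above; factor_lines() is
# just an illustrative helper, not part of any GIMPS tool.

def factor_lines(exponents, done_to_bits):
    """Return one 'Factor=exponent,bits' line per exponent."""
    return ["Factor={},{}".format(p, done_to_bits) for p in exponents]

for line in factor_lines([33600023, 33600103, 33600139, 33600179, 33600271], 64):
    print(line)[/code:1]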
#16
May 2003
39₁₀ Posts
I'm able to begin the trial factoring on Monday; I'm off my computer for the rest of the week. I should probably take a few numbers first and see how long it takes to TF them to 2^68. My computer isn't on 24/7 and I sometimes have to use the CPU time for other purposes too. I try to keep Prime95 running for about 8 hours a day, so it might take a week to complete each exponent.
#17
May 2003
3×13 Posts
OK, I started trial-factoring the first five exponents. I'll TF to 2^68 as suggested.
#18
Jun 2003
7·167 Posts
Quote:
Firstly, while P4s are OK at trial factoring above 64 bits, they're GREAT for P-1 testing.

Secondly, the automatic P-1 attached to LL assignments is one of the less well optimised parts of the programme. Quite simply, it's done at the wrong time, and often by the wrong machines (i.e. those with insufficient memory). Last time I checked, about one third of all P-1s were missing stage 2 entirely, and many others were done to minimal limits because of low memory allowances. Also, there is an extension to the algorithm which finds even more factors (given identical B1 and B2), which only kicks in if you have lots of memory. So you could be finding many factors which otherwise would never be found.

Another consideration is that the stage 2 algorithm runs faster when given a lot of memory, so an hour of CPU time spent P-1 testing on your P4 could save two hours[1] on an otherwise identical machine with less memory which ultimately gets the LL assignment. The optimal point at which to do a P-1 test is not at the end of trial factorisation, but before the last bit or two.

Another reason for doing P-1s is that you can find *much* larger factors than with TF. My largest finds have been 98 and 102 bits. Up to 80 bits is common. All factors are equally valuable to GIMPS, but I personally like finding the large ones.

My suggestion for the 33.6M range would be for someone else to take them to 65 bits, then for you to do the P-1. Then someone else could finish off the last two bits, if desired.

I've been doing P-1s exclusively for some time now on a humble Duron 800 with 512MB. I'm looking at building a new P4-based system within the next few weeks with as much memory as I can fit on the board, and would be very happy to join in this effort.

[1] That's a guesstimate. I haven't done any tests.

Regards

Daran
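For anyone new to the method under discussion, here is a minimal sketch of stage 1 of the P-1 algorithm as applied to a Mersenne number. This is the textbook Pollard p-1 method, not Prime95's implementation, and the exponent and bound in the example at the end are chosen purely for illustration.
[code:1]# Minimal sketch of stage 1 of Pollard's P-1 factoring applied to a
# Mersenne number 2^p - 1. Textbook method only, not Prime95's code.
from math import gcd, isqrt

def small_primes(limit):
    """Primes up to limit via a simple sieve."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, isqrt(limit) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return [i for i, flag in enumerate(sieve) if flag]

def pminus1_stage1(p, B1):
    """Look for a factor of N = 2^p - 1 using stage 1 bound B1.

    Computes 3^E mod N, where E is the product of all prime powers <= B1,
    then takes gcd(3^E - 1, N). A prime factor q of N falls out whenever
    q - 1 is B1-smooth, i.e. all its prime-power divisors are <= B1.
    """
    N = (1 << p) - 1
    x = 3
    for q in small_primes(B1):
        qk = q
        while qk * q <= B1:  # largest power of q not exceeding B1
            qk *= q
        x = pow(x, qk, N)
    g = gcd(x - 1, N)
    return g if 1 < g < N else None

# Made-up example: 193707721 divides 2^67 - 1, and 193707721 - 1 factors
# as 2^3 * 3^3 * 5 * 67 * 2677, so a stage 1 bound of a few thousand
# should already expose that factor.
print(pminus1_stage1(67, 3000))[/code:1]
Stage 2, which Daran mentions, then extends this to factors q where q - 1 is B1-smooth apart from one extra prime between B1 and B2; that is the part that benefits most from extra memory.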
#19
Jun 2003
1169₁₀ Posts
Quote:
[code:1]Pfactor=eeeeeeee,bits,has_been_LLed_once[/code:1]
where bits is the number of bits to which the exponent will be factored (and not the number of bits to which it has already been factored), and has_been_LLed_once is either 0 or 1. With this worktype, the program will choose B1 and B2 optimally. [...]
Quote:
Quote:
Regards

Daran
#20
"Sander"
Oct 2002
52.345322,5.52471
29×41 Posts
Quote:
#21
"Richard B. Woods"
Aug 2002
Wisconsin USA
2²×3×641 Posts
Quote:
Undoc.txt says Quote:
To check my interpretation, I examined the source code in Prime95 module commonc.c, which parses both the "Pfactor" and "Pminus1" lines from the worktodo.ini file.

According to my examination, the "bits" field of "Pfactor" lines means the same thing as the "bits" field in "Test" and "DoubleCheck" lines, namely the number of bits to which the number has already been trial-factored.

Besides, since P-1 factoring progress is not measured in bits, it wouldn't make sense for the command for a P-1 run to specify some future bit level of trial factoring which has not already occurred. It does make sense for the "Pfactor" line to specify the number of bits to which trial factoring has already been performed, because that information affects the choice of optimal (relative to L-L tradeoff) B1 and B2 limits.
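So, under that reading, a hypothetical worktodo.ini entry for one of the 33.6M exponents discussed above - already trial-factored to 2^64 and with no LL test done yet - would look like this (purely illustrative, not an actual assignment):
[code:1]Pfactor=33600023,64,0[/code:1]
i.e. exponent 33600023, trial-factored to 64 bits so far, not yet LL tested once.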
#22
Jun 2003
2221₈ Posts
Quote:
Quote:
Quote:
Function guess_pminus1_bounds estimates the probability that the P-1 calculation will find a factor longer than the 'bits' parameter, and assumes that, if such a factor is found, then 2.03 or 1.03 LL tests will be saved, according to the value of the last parameter. But this is only true if the factor is longer than the maximum depth to which the exponent will be factored. If a factor smaller than this is found, then all that is saved is the extra time taken to TF to that factor. That's not insignificant - it's the benefit that this optimisation is intended to gain - but it's not as much as an LL.

If the code were perfectly optimised, then the TF code would know whether P-1 had been done, and would reduce the TF limits accordingly, while the P-1 code would be aware that more TF was needed, how much more would be done (given this reduction), and would optimise accordingly. Given that TF is done to preset limits, a locally optimised P-1 would go to very slightly deeper limits when performed earlier, but the difference is probably within the margins of accuracy of the current estimate.

Here's another way of looking at it: if you hold the P-1 limits fixed, then no matter what order you P-1 and TF in, the probability of not finding a factor is the same, therefore the probability of finding one is the same. Also, the total amount of work done will be the same if you don't find a factor. All that changing the order affects is the total amount of time spent factoring in the case where you do find one. This is too small significantly to increase the optimal P-1 depth.

There is another consideration which militates in the direction of doing shallower P-1s. The guess_pminus1_bounds code assumes that the machine which is doing the P-1 is the same machine as will go on to do the LL, and aims to maximise the expected clearance rate (by factor or LL) for that machine. The trade-off is between doing more P-1 and less LL per unit time, and doing less P-1 and more LL. However, a machine which is dedicated to P-1 aims to replace the average P-1 effort which would be made on many different machines with a deeper and more efficient one. The trade-off is between doing more P-1 on fewer exponents, and doing less P-1 on more. This is a quite different optimisation problem.

Suppose you double the time spent P-1 factoring each exponent. The law of diminishing returns means that you would less than double the probability of finding a factor, but as your throughput would be halved, your overall exponent clearance rate (i.e. factors found per unit time) would fall. Similarly, if you halve the time on each exponent, you will have more than half the probability of finding a factor per exponent, and would double your throughput; your clearance rate would increase. Against that is the fact that twice as many other machines would not be doing P-1 searches themselves and finding factors, so their clearance rate would fall. The optimal depth is the one that maximises n*(p-pav), where n is the number of exponents you can test per unit time, p is your probability of finding a factor, and pav is the average probability for GIMPS clients in general (bearing in mind that some don't do stage 2 due to lack of memory, and some older ones don't do P-1 at all).

The upshot of all this is that a dedicated P-1 engine optimally should do shallower tests on more exponents. Quantifying this is a project that I've had on my todo list for some time, but have never gotten around to it.

In the meantime, I'm reluctant to recommend setting the 'bits' parameter to more than the 'will do to' level, but it certainly shouldn't be set lower.

Regards

Daran
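To make the n*(p-pav) trade-off concrete, here is a toy numerical sketch. Every figure in it is invented for illustration - the diminishing-returns curve, the baseline probability pav and the CPU budget have nothing to do with Prime95's real numbers - but it shows how maximising n*(p-pav) picks out an optimum depth.
[code:1]# Toy model of the n*(p - pav) argument above. All figures are invented:
# the diminishing-returns curve, the baseline pav and the CPU budget are
# illustrative only, not Prime95 or GIMPS data.
import math

def p_find(hours):
    """Assumed chance of finding a factor after `hours` of P-1 on one exponent.

    Shaped only so that doubling the effort less than doubles the chance.
    """
    return 0.05 * math.log1p(hours)

PAV = 0.03       # assumed average success rate of ordinary clients' P-1 work
BUDGET = 1000.0  # total CPU-hours available to the dedicated P-1 machine

def extra_factors(hours_per_exponent):
    """Factors found beyond what ordinary clients would have found anyway."""
    n = BUDGET / hours_per_exponent  # exponents handled within the budget
    return n * (p_find(hours_per_exponent) - PAV)

for h in (1, 2, 4, 8, 16, 32, 64):
    print(f"{h:3d} h/exponent -> {extra_factors(h):5.1f} extra factors")
# With these made-up numbers the best depth is fairly shallow (around
# 2-4 hours per exponent), which is the point of the argument.[/code:1]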