[QUOTE=James Heinrich;276373]Sorry, I still don't think I follow. What text on the page and/or buttons is misunderstandable?[/QUOTE]On the button: "Get best assignments"
But it doesn't "get" or "actually reserve or create" any assignments. Better would be "List exponents available for best assignment" |
[QUOTE=cheesehead;276482]On the button: "Get best assignments"[/QUOTE]Easily changed -- it now says "Get best available exponents".
|
Thank you!
|
This is probably a really silly/nooby question, so I am sorry in advance.
I am doing a bunch of P-1 under the impression that it is what the project needs the most. However, I notice I am getting assigned a lot of P-1 over 60000000. Isn't that really quite far ahead of where the LL testing is? Is P-1 not really an issue anymore, or is something else happening? Right now here are some of the P-1's I am assigned: 60171019,60171113,60211537,60282577,60284299,60284681 I am just curious. Thanks! |
[QUOTE=KyleAskine;276521]This is probably a really silly/nooby question, so I am sorry in advance.
I am doing a bunch of P-1 under the impression that it is what the project needs the most. However, I notice I am getting assigned a lot of P-1 over 60000000. Isn't that really quite far ahead of where the LL testing is? Is P-1 not really an issue anymore, or is something else happening? Right now here are some of the P-1's I am assigned: 60171019,60171113,60211537,60282577,60284299,60284681 I am just curious. Thanks![/QUOTE] Primenet doesn't always hand out assignments as low as we'd want it to. There are plenty of exponents within or just ahead of the current test range that have not had P-1. If you want to focus on these, take test assignments and manually edit them into P-1 assignments. |
That is unusual. I have been getting 54 and 55M exponents to P-1. Send me a PM. I have some smaller P-1 exponents that will be more useful than the 60Ms you have.
|
My one machine that gets PrimeNet-assigned P-1s currently has one 58M and three 60M assignments, so I'm seeing the same behaviour.
|
I just found two P-1 factors in old exponents that should've already been found: [url=http://mersenne-aries.sili.net/exponent.php?exponentdetails=6802123]M6,802,123[/url] and [url=http://mersenne-aries.sili.net/exponent.php?exponentdetails=6888719]M6,888,719[/url].
If you look at those links, you'll see they both have factors that should've been found with the original P-1 bounds. Wonder why they weren't...? |
Everything I have been assigned since 10/30 has been over 60M.
Maybe this is because the <60M exponents have just been reclassified as not having finished TF, according to PrimeNet??? |
[QUOTE=James Heinrich;276585]I just found two P-1 factors in old exponents that should've already been found: [url=http://mersenne-aries.sili.net/exponent.php?exponentdetails=6802123]M6,802,123[/url] and [url=http://mersenne-aries.sili.net/exponent.php?exponentdetails=6888719]M6,888,719[/url].
If you look at those links, you'll see they both have factors that should've been found with the original P-1 bounds. Wonder why they weren't...?[/QUOTE] Perhaps false reporting? Any way to check who the original user was? |
Looks like the change in TF assignments caused this to happen. If you go to the manual reservations page, and ask for exponents in the 50-55M range, you will get smaller P-1 assignments.
|
[QUOTE=garo;276629]Looks like the change in TF assignments caused this to happen. If you go to the manual reservations page, and ask for exponents in the 50-55M range, you will get smaller P-1 assignments.[/QUOTE]
I thought he meant that when these were P-1'd years ago, the factors should have been found, but weren't. I think he's doing low P-1 to find factors for already DC'ed exponents, just for the heck of it. He found factors that should have been found years ago. |
[QUOTE=Dubslow;276634]I thought he meant that when these were P-1'd years ago, the factors should have been found, but weren't.[/QUOTE]I did. You and [i]garo[/i] are on different conversations. :smile:
[i]garo[/i]'s talking about the fact that P-1 assignments are now handed out >60M, even though there are apparently-available ones in 55-60M. George did mention he'd recently fiddled with the assignment code, but he didn't delve into specifics as to what was changed or why. And yes, my idea of fun (and the reason behind the [url=http://mersenne-aries.sili.net/p1small.php]P1small page[/url]) is to do P-1 on exponents that have either had no P-1 done, or had it done so poorly it's not very useful. I work on a mixture of old stuff (DC'd), mid stuff (L-L'd once) and future stuff (not yet L-L'd but someone did a "bad" P-1); it's quite possible I'll be working on 8M, 48M and 80M at the same time. |
I don't understand why each category within itself is not just programmed to hand out the lowest available test numbers.
|
They are, but not to people we know here. The reason we're doing it here is because we know we can get it done (in the TF case) inside of a week. When PrimeNet assigns it, chances are overwhelming that it's assigned to someone who doesn't even look at the program and doesn't care like we do, and so long run times are common.
Right? |
I just think that Primenet now considers that the numbers below 58M or 60M should have more TF before they get sent to P-1; otherwise, that is generally how it works (the lowest numbers are handed out). It is only when you force it to give you something lower than 56M that it relents. Of course this is all 100% speculation, so you should take it with a grain of salt.
Anyway, I have taken my four PCs which have at least 1 core doing P-1 off of Primenet, and fed them all a lot of lower P-1s to keep them busy for a while. This seemed like a pain, so I hope this issue is fixed soon so I can put them back on. |
[QUOTE=Jwb52z;276725]I don't understand why each category within itself is not just programmed to hand out the lowest available test numbers.[/QUOTE]I'm sure they all are ... but one has to be as picky as PrimeNet in defining the "category".
|
[QUOTE=Jwb52z;276725]I don't understand why each category within itself is not just programmed to hand out the lowest available test numbers.[/QUOTE]
That's because Primenet is a system which has evolved over the years. If we were designing it from scratch we would surely do it differently. But we're not, and George has better things to do with his time than to be constantly finagling the server. |
[QUOTE=Jwb52z;276725]I don't understand why each category within itself is not just programmed to hand out the lowest available test numbers.[/QUOTE]
It is. But if I request work for 365 days, I would get all the smallest available exponents, say a hundred of them. Some 500 other guys do the same. So 50 thousand exponents are gone, and you will not hear anything about most of them for about a year. Then you come and request "honest" work of one exponent, which, of course, will be 50 thousand "steps" higher. Then I change my mind, because 365 days is a lot of time, and return most of the exponents to the server. Then your friend comes and gets an assignment smaller than yours (from the list I returned). In fact, a hard disk driver is designed to allocate the lowest contiguous space available to your files. But no matter what you do, after six months of using your computer (creating and deleting files) you will have a total mess on your disk: plenty of files spread over the whole surface of the disk, two sectors here, two there, two bytes I don't know where, and two bits lost entirely..:smile:. That is why disk defragmenters and garbage-collector drivers were invented. Here, if you want to always have the lowest assignments, and not step on other people's toes, then [B]you have to become the garbage collector yourself[/B]. |
[QUOTE=LaurV;276748]It is. But if I request work for 365 days, I would get all the smallest available exponents, say a hundred of them. Some 500 other guys do the same. So 50 thousand exponents are gone, and you will not hear anything about most of them for about a year. Then you come and request "honest" work of one exponent, which, of course, will be 50 thousand "steps" higher. Then I change my mind, because 365 days is a lot of time, and return most of the exponents to the server. Then your friend comes and gets an assignment smaller than yours (from the list I returned).
In fact, a hard disk driver is designed to allocate the lowest contiguous space available to your files. But no matter what you do, after six months of using your computer (creating and deleting files) you will have a total mess on your disk: plenty of files spread over the whole surface of the disk, two sectors here, two there, two bytes I don't know where, and two bits lost entirely..:smile:. That is why disk defragmenters and garbage-collector drivers were invented. Here, if you want to always have the lowest assignments, and not step on other people's toes, then [B]you have to become the garbage collector yourself[/B].[/QUOTE] That is such an amazingly awesome analogy. Even I get it better, and I thought I got it! Thanks! |
[QUOTE=garo;276629]Looks like the change in TF assignments caused this to happen. If you go to the manual reservations page, and ask for exponents in the 50-55M range, you will get smaller P-1 assignments.[/QUOTE]
Or, in order to get exponents that have had no LL done at all (thus saving two tests in case a factor is found), pick them in the 55-56M range. I tried the 50-55M range and all the exponents I got had 1 LL test already done. That is in fact to be expected, as the wavefront is now sweeping through 54-55M. When using the Manual Test Page to get assignments, make sure you are logged in, otherwise the exponents will be registered to the generic "Anonymous" account. |
"Let George do it"
[QUOTE=Mr. P-1;276743]That's because Primenet is a system which has evolved over the years. If we were designing it from scratch we would surely do it differently. But we're not, and George has better things to do with his time than to be constantly finagling the server.[/QUOTE]
(George is the British nickname for autopilot). I hope everyone here appreciates George's indefatigable efforts, notably tweaking Prime95 and posting here. But couldn't one of us help out with tweaking Primenet when the consensus of opinion is that it would be desirable? David |
I'd volunteer, 'cept I don't actually know any programming or other stuff.
|
[QUOTE=Dubslow;276804]I'd volunteer, 'cept I don't actually know any programming or other stuff.[/QUOTE]
Just what the doctor ordered:smile: |
[QUOTE=Dubslow;276804]I'd volunteer, 'cept I don't actually know any programming or other stuff.[/QUOTE]
Sign me up for "other stuff". |
[QUOTE=James Heinrich;276585]I just found two P-1 factors in old exponents that should've already been found: [url=http://mersenne-aries.sili.net/exponent.php?exponentdetails=6802123]M6,802,123[/url] and [url=http://mersenne-aries.sili.net/exponent.php?exponentdetails=6888719]M6,888,719[/url].
If you look at those links, you'll see they both have factors that should've been found with the original P-1 bounds. Wonder why they weren't...?[/QUOTE]Just found another one where the previous P-1 apparently didn't find the factor for some reason: [url=http://mersenne-aries.sili.net/exponent.php?exponentdetails=6853937]M6,853,937[/url]. |
I have a P-1 question... I was running P95 with 2.5 gig available and figured I had more available so switched to 3.0 gig available and noticed that the P-1 changed bounds when the memory was increased. It was B1=3290000, B2=76492500 at 2.5 and is now B1=3320000, B2=8300000 and when I calculate the amount of time left to finish, the completion time has jumped by over 20% (36 vs 45 days). Does this change in memory after 12% completion and subsequent bound change cause a problem in the P-1? Is it normal for more memory to cause longer times?
|
[QUOTE=bcp19;276950]I have a P-1 question... I was running P95 with 2.5 gig available and figured I had more available so switched to 3.0 gig available and noticed that the P-1 changed bounds when the memory was increased. It was B1=3290000, B2=76492500 at 2.5 and is now B1=3320000, B2=8300000 and when I calculate the amount of time left to finish, the completion time has jumped by over 20% (36 vs 45 days). Does this change in memory after 12% completion and subsequent bound change cause a problem in the P-1?[/QUOTE]No. During stage 1, as long as progress hasn't reached the lower B1 yet, extending it to a higher B1 is mostly just a matter of keeping on doing what it's been doing, for longer.
If your calculation had reached stage 2, extending B1 would require discarding all the stage 2 work so far and going back to extend stage 1. So, IIRC without looking at the source code (it's been a while), prime95 will not change the stage 1 bound if you bump up the available memory after stage 2 starts, but might extend B2. (I could be wrong.) [quote]Is it normal for more memory to cause longer times?[/quote]It is when prime95 is choosing its own bounds (Pfactor= or Test= in worktodo). In that case, it tries to optimize the probability of finding a factor, and going to higher bounds will do that, at the cost of a longer run time. If, instead, the user specified the bounds (Pminus1= instead of Pfactor= or Test= in the worktodo), then allocating more memory will allow prime95 to use more stage 2 auxiliary work areas to eliminate some duplication of calculations, which will speed it up. (If you're wondering why prime95 doesn't use the extra memory for more stage 2 workareas when it's choosing its own bounds -- well, it does ... but it also raises the B2 even more than that, so that it winds up spending more total time running with the extra (and thus, faster) workareas. There are tradeoffs involved.) |
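For anyone curious what "stage 1 to bound B1" actually computes, here is a minimal, unoptimized sketch of P-1 stage 1 in Python. It is emphatically not how prime95 implements it (no FFT multiplication, no clever exponent batching), but the mathematics is the same: a factor q of N turns up whenever q-1 is B1-smooth, i.e. all its prime-power factors are at most B1.

```python
from math import gcd

def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

def pminus1_stage1(N, B1, base=3):
    """Pollard P-1 stage 1: raise `base` to every prime power <= B1 mod N,
    checking gcd(a - 1, N) along the way."""
    a = base % N
    for q in primes_upto(B1):
        qk = q
        while qk * q <= B1:     # include the full prime power q^k <= B1
            qk *= q
        a = pow(a, qk, N)
        g = gcd(a - 1, N)
        if 1 < g < N:
            return g
    return None

# Classic demo: M67 = 2^67 - 1 has the factor 193707721, and
# 193707721 - 1 = 2^3 * 3^3 * 5 * 67 * 2677 is 2677-smooth,
# so stage 1 with B1 = 3000 finds a nontrivial factor quickly.
N = 2 ** 67 - 1
print(pminus1_stage1(N, 3000))
```

Raising B1 just means more prime powers in the loop, which is exactly why extending it mid-stage-1 is "keep doing what it's been doing, for longer"; stage 2 (not shown) then catches factors where q-1 is B1-smooth except for one prime between B1 and B2.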
Thanks for the explanation, I'm still a bit new to this and don't really understand the processes involved that well. I'll leave it with the higher memory then.
|
[QUOTE=cheesehead;276957]No. During stage 1, as long as progress hasn't reached the lower B1 yet, extending it to a higher B1 is mostly just a matter of keeping on doing what it's been doing, for longer.
If your calculation had reached stage 2, extending B1 would require discarding all the stage 2 work so far and going back to extend stage 1. So, IIRC without looking at the source code (it's been a while), prime95 will not change the stage 1 bound if you bump up the available memory after stage 2 starts, but might extend B2. (I could be wrong.)[/QUOTE] You are. In fact, B1 is fixed from the start of the calculation. If you change your memory setting during stage 1, it will calculate a different bound, but it will still - without telling you - use the original bound stored in the save file. As soon as stage 1 is complete, the stage 2 bound becomes fixed. Again, if you restart stage 2, whether due to a change in available memory or for any other reason, new bounds will be calculated, but the program will still revert to those stored in the save file. This time it will tell you if these bounds are different. [QUOTE]It is when prime95 is choosing its own bounds (Pfactor= or Test= in worktodo). In that case, it tries to optimize the probability of finding a factor, and going to higher bounds will do that, at the cost of a longer run time.[/QUOTE] Yes. With more memory, the cost per iteration is reduced, which means that it is worthwhile doing more iterations. The overall effect is to increase the running time, though this is worth it because you have a greater chance of finding factors. |
[QUOTE=Mr. P-1;276968]You are. In fact, B1 is fixed from the start of the calculation. If you change your memory setting during stage 1, it will calculate a different bound, but it will still - without telling you - use the original bound stored in the save file.[/QUOTE]I keep thinking I saw some code that did a "catch-up" (going back to include all the prime powers between old and new B1s) when B1 was increased.
|
[QUOTE=cheesehead;276973]I keep thinking I saw some code that did a "catch-up" (going back to include all the prime powers between old and new B1s) when B1 was increased.[/QUOTE]
I have on occasion seen a message similar to "New B1 value ignored. Using B1 from save file instead" in my worker windows on startup with a new memory allocation. |
[QUOTE=Mr. P-1;276968]You are. In fact, B1 is fixed from the start of the calculation. If you change your memory setting during stage 1, it will calculate a different bound, but it will still - without telling you - use the original bound stored in the save file.
As soon as stage 1 is complete, the stage 2 bound becomes fixed. Again, if you restart stage 2, whether due to a change in available memory or for any other reason, new bounds will be calculated, but the program will still revert to those stored in the save file. This time it will tell you if these bounds are different. Yes. With more memory, the cost per iteration is reduced, which means that it is worthwhile doing more iterations. The overall effect is to increase the running time, though this is worth it because you have a greater chance of finding factors.[/QUOTE] You just went and confused me again :/ If the bounds are not changed, even though the program restarts the worker and reports them changed, the variations I saw then make no sense. Worker 1 is running a P-1 stage 1 on a 322M exponent; Workers 2, 3 & 4 were doing ECMs. Worker 1 was completing .21-.22% every 6700 sec. Worker 3 switches to 60M TF 8 min into the 150 min between W1's outputs, and Worker 2 switches to a P-1 on a 52M exp 66 min in; Worker 1 shows 6550 sec, and the next iteration Worker 1 completes in 6389 sec. Using the above times and %'s, at 6700 sec per 'tick' the entire run should take 36.9 days; at 6389, 35.2 days. As expected, switching from ECMs to TF/P-1 on other workers showed a drop in time spent per tick. Worker 4 switches from Curve 2 Stage 2 on ECM of F22 to Curve 3 Stage 1 after 3 'ticks' from Worker 1, and time per 'tick' on W1 drops to around 5850 sec. Memory is changed from 2.5 to 3.0 gigs available and W1 restarts. W1 now shows .13-.14% completion every 5750-5820 sec. Using 5785 sec and .14% completion, this shows a total run time of 47.82 days. The extended-bound theory would explain the extension of the completion time, but I am at a loss if, as you say, it doesn't change anything. |
Does anyone here have any idea why my account would now show a TF where I'd only ever told it to "Do what seems best" or "P-1" specifically? I've never witnessed my client do a TF, ever, and I don't remember seeing this show up in my account information before today.
|
Do you happen to know which exponent it says you TF'd? It's [i]possible[/i] that PrimeNet somehow misinterpreted a found factor as coming from TF rather than P-1, although this is far less likely with automatic results submission than manual submission, and usually happens the other way (TF factor misinterpreted as P-1) if it did. Of course, it's also possible that "Whatever makes sense" actually did assign you a TF assignment at some point. Checking results.txt may shed some light on the matter.
|
I know my main Desktop PC did a big run of TFs a while ago when I had it on 'Whatever makes sense'. I thought it was cool at the time because I liked the relatively quick turnaround for these, but now that I know more, it was probably a waste of time.
I think that Primenet probably shouldn't hand out TF unless it is an old computer (too old to DC), or someone specifically requests it. But I could be wrong. If you want to see the numbers I got assigned you can look at the blue here: [url]http://mersenne-aries.sili.net/index.php?showuserexponents=kyleaskine&usercompid=339[/url] It was mostly in late September. |
On that old computer, the thought is either DCs or ECM if it doesn't have enough memory to do P-1. We're not unhappy about the TF effort, it's just that GPUs, even my relatively cheap GTX440, are significantly more effective than CPUs at it.
Or just retire it, get it an ubuntu or xubuntu disk, and use it for everything else you do on the computer..... |
[QUOTE=Christenson;277057]On that old computer, the thought is either DCs or ECM if it doesn't have enough memory to do P-1. We're not unhappy about the TF effort, it's just that GPUs, even my relatively cheap GTX440, are significantly more effective than CPUs at it.
Or just retire it, get it an ubuntu or xubuntu disk, and use it for everything else you do on the computer.....[/QUOTE] I don't know if you are talking to me or someone else, but I have an i5-2500k with 16gig of ram. This computer is certainly not old. And yes, once I learned more I started dedicating two cores to P-1 for the cause. I may add a third as soon as the current LL's that it is doing on two cores finish. |
[QUOTE=James Heinrich;277047]Do you happen to know which exponent it says you TF'd? It's [i]possible[/i] that PrimeNet somehow misinterpreted a found factor as coming from TF rather than P-1, although this is far less likely with automatic results submission than manual submission, and usually happens the other way (TF factor misinterpreted as P-1) if it did. Of course, it's also possible that "Whatever makes sense" actually did assign you a TF assignment at some point. Checking results.txt may shed some light on the matter.[/QUOTE]I may have made a slight mistake, but not for the reason you might guess. I looked at my lifetime stats and it shows TF work having been done, so that means it must have been done sometime during the 4.0 server days, as there is no record of a number in my results page for TF and it says 4.0 results are not included. Now, that's been so long ago that I don't really remember ever letting a computer back then do TF, but I guess it's possible that I did. I'll try to be more careful with what I am reading before I ask a question next time.
|
[QUOTE=KyleAskine;277060]I don't know if you are talking to me or someone else, but I have an i5-2500k with 16gig of ram. This computer is certainly not old.
And yes, once I learned more I started dedicating two cores to P-1 for the cause. I may add a third as soon as the current LL's that it is doing on two cores finish.[/QUOTE] Carelessness strikes me....you mentioned "old computers" somewhere in your post and I thought you had one...I certainly have more than my share, including one running W95 on my desk at work, in active use to urn a certain NoHau in-circuit emulator for an 8051.[Their software, like many, actually got worse in the user-interface department with the coming of Windows] Get your desktop a nice GPU (whatever the power supply will support), run mfaktc on it, and find lots of factors and complete DCs..... |
[QUOTE=Christenson;277100]Carelessness strikes me....you mentioned "old computers" somewhere in your post and I thought you had one...I certainly have more than my share, including one running W95 on my desk at work, in active use to urn a certain NoHau in-circuit emulator for an 8051.[Their software, like many, actually got worse in the user-interface department with the coming of Windows]
Get your desktop a nice GPU (whatever the power supply will support), run mfaktc on it, and find lots of factors and complete DCs.....[/QUOTE] Actually, I have two HD6950s in xFire. I don't factor with them with mfakto for a few reasons: 1) I wouldn't be able to run P95 anymore (or at most, have 1 worker, because of all the instances of mfakto I would require). 2) AMD's don't factor as well as nVidias (or so I have been led to believe) 3) According to my Kill-a-Watt, when both GPU's are going full bore, I use something like 400W more than when I don't. That would add up on the power bill. |
I'd suggest running 1 instance of mfakto anyway. One instance will give you one idle and one half-loaded GPU, which should add less to the power bill, but still let you use 3 cores for P-1 while providing a tremendous boost to your overall throughput.
|
mfakt can be run on Radeons now?
|
[QUOTE=lorgix;277105]mfakt can be run on Radeons now?[/QUOTE]NVIDIA / [b]C[/b]UDA: [url=http://www.mersenneforum.org/showthread.php?t=12827]mfakt[b]c[/b][/url] [url=http://www.mersennewiki.org/index.php/Mfaktc]wiki[/url]
AMD / [b]O[/b]penCL: [url=http://www.mersenneforum.org/showthread.php?t=15646]mfakt[b]o[/b][/url] [url=http://www.mersennewiki.org/index.php/Mfakto]wiki[/url] |
[QUOTE=James Heinrich;277106]NVIDIA / [B]C[/B]UDA: [URL="http://www.mersenneforum.org/showthread.php?t=12827"]mfakt[B]c[/B][/URL] [URL="http://www.mersennewiki.org/index.php/Mfaktc"]wiki[/URL]
AMD / [B]O[/B]penCL: [URL="http://www.mersenneforum.org/showthread.php?t=15646"]mfakt[B]o[/B][/URL] [URL="http://www.mersennewiki.org/index.php/Mfakto"]wiki[/URL][/QUOTE] Thanks! Apparently my GPU doesn't support OpenCL 1.1 or newer anyway. |
[QUOTE=James Heinrich;277103]I'd suggest running 1 instance of mfakto anyway. One instance will give you one idle and one half-loaded GPU, which should add less to the power bill, but still let you use 3 cores for P-1 while providing a tremendous boost to your overall throughput.[/QUOTE]
I will definitely think about it. I have a 5870 in the other room that I run one instance of mfakto on, and that card gives me probably more throughput (as measured in GHz/h) than all of my other 6 PC's combined (though 3 of those pc's are very old). |
[QUOTE=petrw1;277004]I have on occasion seen a message similar to "New B1 value ignored. Using B1 from save file instead" in my worker windows on startup with a new memory allocation.[/QUOTE]
So have I, but only during stage 2. |
[QUOTE=bcp19;277005]You just went and confused me again :/ If the bounds are not changed, even though the program restarts the worker and reports them changed, the variations I saw then make no sense.
Worker 1 is running a P-1 stage 1 on a 322M exponent; Workers 2, 3 & 4 were doing ECMs. Worker 1 was completing .21-.22% every 6700 sec. Worker 3 switches to 60M TF 8 min into the 150 min between W1's outputs, and Worker 2 switches to a P-1 on a 52M exp 66 min in; Worker 1 shows 6550 sec, and the next iteration Worker 1 completes in 6389 sec. Using the above times and %'s, at 6700 sec per 'tick' the entire run should take 36.9 days; at 6389, 35.2 days. As expected, switching from ECMs to TF/P-1 on other workers showed a drop in time spent per tick.[/QUOTE] There are two things happening here. First, without changing the bounds, the algorithm runs faster with more memory. Secondly, both P-1 and ECM are memory-bandwidth-hungry algorithms. If both are running at the same time, they will compete for the available memory bandwidth, slowing down both. |
[QUOTE=James Heinrich;277106]NVIDIA / [b]C[/b]UDA: [url=http://www.mersenneforum.org/showthread.php?t=12827]mfakt[b]c[/b][/url] [url=http://www.mersennewiki.org/index.php/Mfaktc]wiki[/url]
AMD / [b]O[/b]penCL: [url=http://www.mersenneforum.org/showthread.php?t=15646]mfakt[b]o[/b][/url] [url=http://www.mersennewiki.org/index.php/Mfakto]wiki[/url][/QUOTE] I remember reading somewhere that you aren't supposed to report no factor results from mfakto? |
[QUOTE=Mr. P-1;277129]There are two things happening here.
First, without changing the bounds, the algorithm runs faster with more memory. Secondly, both P-1 and ECM are memory-bandwidth-hungry algorithms. If both are running at the same time, they will compete for the available memory bandwidth, slowing down both.[/QUOTE] So, if I understand what you are saying, when the extra memory was made available, the ECM grabbed more causing the restarted P-1 to actually be using less memory and therefore go slower? Hmm, is there a recommended type of work for each core to minimize problems like this? I notice my other system seems to run faster with core 1 and 3 doing LL/DC and core 2 and 4 doing TF (with 4 cores running LL the time per iteration was ~.090 for all 4, with the LL/TF/LL/TF setup the LL's are at ~.060). I know when the P-1 finishes and core 1 moves onto LL it won't be as much of a memory hog (plus the ECM will be done), would running a P-1 be better on core 2, 3 or 4? From my other system it seems 1/2 and 3/4 are kinda linked, so any thoughts would be appreciated. |
[QUOTE=Dubslow;277150]I remember reading somewhere that you aren't supposed to report no factor results from mfakto?[/QUOTE]That would be kind of pointless... where did you read that?
|
Ah, the GPU FAQ PDF guide that Brain created, available in the GPU FAQ threads.
|
[QUOTE=James Heinrich;277154]That would be kind of pointless... where did you read that?[/QUOTE]
Agree... if this is really the case, that would probably mean it isn't worth anyone's time to run it, since 98% of the factors will have to be rechecked by mfaktc |
[QUOTE=Dubslow;277150]I remember reading somewhere that you aren't supposed to report no factor results from mfakto?[/QUOTE]
That has since been resolved by bdot I believe. Check [URL="http://mersenneforum.org/showpost.php?p=272797&postcount=132"]http://mersenneforum.org/showpost.php?p=272797&postcount=132[/URL] and [url]http://mersenneforum.org/showpost.php?p=272868&postcount=136[/url] So mfakto 0.09 should be the latest version and fixed. |
[QUOTE=bcp19;277152]So, if I understand what you are saying, when the extra memory was made available, the ECM grabbed more causing the restarted P-1 to actually be using less memory and therefore go slower?[/QUOTE]
I should clarify what I said before. With the same bounds, and other things being equal, [i]stage 2[/i] should run faster with more memory, however the effect is minor, unless you're close to the minimum. The available memory doesn't effect the speed of stage 1, once the B1 bound has been fixed at the start of the run. The second issue I mentioned (which is probably the more significant one in causing the effects you have mentioned), doesn't depend much at all upon the specific amount of memory any particular thread is using. Rather it depends upon the nature of the work that thread is doing. [QUOTE]Hmm, is there a recommended type of work for each core to minimize problems like this?[/QUOTE] How your system will perform under various workloads is dependent upon your hardware, including the type and speed of your processor and memory and whether and how you overclock. GIMPS worktypes fall into three categories: Low Bandwidth: TF - all types. Medium Bandwidth: LL - all types, ECM Stage 1, P-1 Stage 1 High Bandwidth: ECM Stage 2, P-1 Stage 2. I wouldn't recommend doing TF at all on a CPU any more. GPUs are so much faster at this type of work, that doing it with a CPU is a waste of a core. What I would recommend you do is put MaxHighMemWorkers=1, into local.txt. (You need to shut down P95 before you make changes to local.txt or they will be reverted.) Then run the program with P-1 on all four cores. As soon as you have one core running stage 2, note the timings of both the stage 2 core and the stage 1 cores. Change MaxHighMemWorkers to 2. Wait for a second core to go to stage 2, and again note the timings. Decide if you are willing to take the hit. If yes, then run ECM/P-1 on all four cores, with MaxHighMemWorkers equal to 2. If not then run ECM/P-1 on two cores, and LLs/doublechecks on the other two, with MaxHighMemWorkers equal to 1. This assumes you have high memory available all the time. 
If you don't then you are likely to quickly accumulate a backlog of uncompleted stage 2. Even if you do, with twice as many cores running P-1 as MaxHighMemWorkers, you will slowly accumulate uncompleted stage 2. Clear the backlog by occasionally running an LL test on one of your P-1 cores. [QUOTE]would running a P-1 be better on core 2, 3 or 4? From my other system it seems 1/2 and 3/4 are kinda linked, so any thoughts would be appreciated.[/QUOTE] I've never noticed it making a difference which core does a particular type of work. |
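For reference, the setting described in the post above is just a plain key=value line in local.txt. A hypothetical fragment might look like the following; the Memory value is only an example, and the exact syntax of the memory settings varies between Prime95 versions, so treat this as a sketch rather than a template.

```
MaxHighMemWorkers=1
Memory=2500
```

Remember to shut Prime95 down before editing the file, as noted above, or the changes will be overwritten.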
[QUOTE=bcp19;277152]From my other system it seems 1/2 and 3/4 are kinda linked, so any thoughts would be appreciated.[/QUOTE]What CPU do you have? It almost sounds like you're describing a hyperthreaded dual-core.
|
[QUOTE=James Heinrich;277201]What CPU do you have? It almost sounds like you're describing a hyperthreaded dual-core.[/QUOTE]
Intel Core 2 Quad Q8200 @2.33 GHz 64 bit Vista |
Not hyperthreaded.
[url]http://ark.intel.com/products/36547/Intel-Core2-Quad-Processor-Q8200-(4M-Cache-2_33-GHz-1333-MHz-FSB)[/url] |
[QUOTE=bcp19;277206]Intel Core 2 Quad Q8200[/QUOTE]Ah, I thought so. The early Intel Quads (including your [url=http://en.wikipedia.org/wiki/Yorkfield_%28microprocessor%29#Yorkfield-6M]Q8200[/url] and my slightly older [url=http://en.wikipedia.org/wiki/Kentsfield_%28microprocessor%29#Kentsfield]Q6600[/url]) are actually dual-dual-core CPUs rather than true quad-cores:[quote]Analogous to the Pentium D branded CPUs, the Kentsfields comprise two separate silicon dies (each equivalent to a single Core 2 duo) on one MCM (multi-chip module)[/quote][quote]Yorkfield-6M ... are made from two Wolfdale-3M like cores, so they have a total of 6 MB of L2 cache, with 3 MB shared by two cores. They are used in Core 2 Quad Q8xxx with 4 MB cache enabled...[/quote]
|
[QUOTE=Mr. P-1;277198]I wouldn't recommend doing TF at all on a CPU any more. GPUs are so much faster at this type of work, that doing it with a CPU is a waste of a core. What I would recommend you do is put MaxHighMemWorkers=1, into local.txt. (You need to shut down P95 before you make changes to local.txt or they will be reverted.) Then run the program with P-1 on all four cores. As soon as you have one core running stage 2, note the timings of both the stage 2 core and the stage 1 cores.
[/QUOTE] I understand your arguments, but if I have 4 cores running LL, I can complete 8 LL's in approx 80 days, whereas if I use LL/TF/LL/TF, in the same 80 days I can complete 6 LL's and ~160 TF (with the estimated 1% factor-found rate this saves 1.6 LL and 1.6 DC). It just seems to be a more efficient use of the CPUs.

I tried testing an LL/ECM/LL/TF and an LL/P-1/LL/TF, but in each of those, core 1 would run ~.090 per iteration while core 3 was at .063, and the TF was about 2 seconds slower (237 to 239 sec per .14%) during S1. Interestingly, during S2 (on the ECM), core 1 would drop to .084 while core 3 had a few extra blips at .064 and core 4 jumped to about 248 sec. The ECM took 11 hours to run compared to 9 hours with all 4 cores doing ECM. The P-1 I did not run to completion; I moved the assignment to another machine as it was looking like a 3-4 day run. And the only time all 4 cores ran P-1 was when I first started and got 4 LL's that needed P-1, and I have no idea how long they took vs. P-1 and something else. |
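[Editor's note: the expected-value arithmetic behind the "saves 1.6 LL and 1.6 DC" figure above works out as follows. This is a sketch; the ~1% factor rate and the 160-TF throughput are the post's own estimates, not measured values.]

```python
# Trade-off sketch: two cores on TF for 80 days instead of LL.
tf_completed = 160        # TF assignments finished in 80 days (from the post)
factor_rate = 0.01        # estimated ~1% of TF runs find a factor

factors_found = tf_completed * factor_rate
ll_saved = factors_found  # each factor removes the need for one first-time LL
dc_saved = factors_found  # ...and for one double-check of that exponent

print(ll_saved, dc_saved)
```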
[QUOTE=Dubslow;277150]I remember reading somewhere that you aren't supposed to report no factor results from mfakto?[/QUOTE]
[QUOTE=James Heinrich;277154]That would be kind of pointless... where did you read that?[/QUOTE] [QUOTE=delta_t;277177]That has since been resolved by bdot I believe. Check [URL]http://mersenneforum.org/showpost.php?p=272797&postcount=132[/URL] and [URL]http://mersenneforum.org/showpost.php?p=272868&postcount=136[/URL] So mfakto 0.09 should be the latest version and fixed.[/QUOTE] The "point" which comes to mind is that mfakto might have been missing some factors, and we would like to be as confident as possible in the assertion "M(y) has no factors below 2^x". David |
[QUOTE]I understand your arguments, but if I have 4 cores running LL, I can complete 8 LL's in approx 80 days, where if I use the LL/TF/LL/TF in the same 80 days I can complete 6 LL's and ~160 TF (with the extimated 1% factor found this saves 1.6LL and 1.6DC). It just seems to be a more efficient use of the CPU's.[/QUOTE]
It may seem to be more efficient, but it actually isn't. GIMPS has an excess of TF capacity. Those 1.6 LLs and 1.6 DCs will be saved anyway, possibly by a GPU. The only difference you're making is to save them less efficiently than they might be saved otherwise. By contrast, GIMPS has a shortage of P-1 and LL capacity. LL is simply a bottleneck - the more machines we have doing this kind of work, the faster the project proceeds. P-1 is even more valuable. About half of all LL machines do not have sufficient memory to do stage 2 P-1. Many stage 2 factors are factors which would not otherwise be found, and thus represent LLs and DCs really saved. Even if the factors you find are factors which would otherwise be found, or if you don't find factors, the project benefits from having these computations completed more efficiently by a machine with plentiful memory. |
[QUOTE=Mr. P-1;277784]Even if the factors you find are factors which would otherwise be found, or if you don't find factors, the project benefits from having these computations completed more efficiently by a machine with plentiful memory.[/QUOTE]
Please forgive me for this "plug", but I'd like to bring to the attention of all P-1 Workers that the "GPU to 72" Tool is now making available low exponents (with no LL work yet done) which have been TFed to high levels (72 bits) by GPU's which need a P-1 run. Please see [URL="http://gpu.mersenne.info/account/getassignments/p-1/"]http://gpu.mersenne.info/account/getassignments/p-1/[/URL]. (If you don't have an account at the site, you'll have to create one before being able to be assigned work.) |
[QUOTE=chalsall;277789]Please forgive me for this "plug", but I'd like to bring to the attention of all P-1 Workers that the "GPU to 72" Tool is now making available low exponents (with no LL work yet done) which have been TFed to high levels (72 bits) by GPU's which need a P-1 run.
Please see [URL="http://gpu.mersenne.info/account/getassignments/p-1/"]http://gpu.mersenne.info/account/getassignments/p-1/[/URL]. (If you don't have an account at the site, you'll have to create one before being able to be assigned work.)[/QUOTE] Nice! I just started doing TF there a few days ago and love it. I just grabbed 15 P-1's too for my three boxes that do that. Hopefully you will get the first P-1's in from me in a couple days! |
[QUOTE=Mr. P-1;277784]It may seem to be more efficient, but it actually isn't. GIMPS has an excess of TF capacity. Those 1.6LLs and 1.6DC will be saved anyway, possibly by a GPU. The only difference you're making is to save them less efficiently than they might be saved otherwise.
By contrast, GIMPS has a shortage of P-1 and LL capacity. LL is simply a bottleneck - the more machines we have doing this kind of work, the faster the project proceeds. P-1 is even more valuable. About half of all LL machines do not have sufficient memory to do stage 2 P-1. Many stage 2 factors are factors which would not otherwise be found, and thus represent LLs and DCs really saved. Even if the factors you find are factors which would otherwise be found, or if you don't find factors, the project benefits from having these computations completed more efficiently by a machine with plentiful memory.[/QUOTE] Your idea of efficiency and mine are a bit different. The Core 2 Quad has some sort of bottleneck compared to the i7, in that the i7 can run 4 LL's with only a minor slowdown (.066 to .070 per iteration) whereas the Quad bogs down badly (.060 to .090). Someone said the Quad is actually a Dual-Dual core, whatever that means, but I am guessing that each 'Dual' shares either L1 or L2 cache, which causes this bottleneck. With an LL/TF per 'Dual' the cores are not fighting and seem to run more efficiently. So when I said 'more efficient' I also meant that the CPUs were running at their nominal optimum speed on the tasks given them, rather than fighting for resources. In my case, running P-1 affects the 'shared' Dual the same as running 2 LL's, so 'efficiency' suffers. So for me, a single core being able to complete 3 LL's in an LL/TF 'Dual' vs that same core completing 2 LL's in an LL/LL or LL/P-1 'Dual' is more efficient.

I have recently upgraded my gaming system GPU and installed the old one in the Quad, so I can now continue running cores 1 and 3 on LL while devoting cores 2 and 4 to the GPU running Mfaktc, which will keep the 'efficiency' I was referring to while helping with the GPU to 72 project at the same time. The beauty of this project is that any effort is a step forward. There are many compelling arguments for each aspect of it... 
LL - Pro: 100% proof of primality, very small possibility of finding a prime. Con: Very slow.
P-1 - Pro: Best method %-wise of finding a factor. Con: Memory intensive, multiple instances running S2 concurrently reduce efficiency, high impact on other cores on 'Dual' CPU systems, 0% chance of finding a prime.
TF - Pro: Low impact on 'Dual' CPU systems, fairly fast (esp. on GPUs). Con: Low % of factors found, inefficient on CPUs at higher bit levels, 0% chance of finding a prime. |
[QUOTE=Mr. P-1;277784]By contrast, GIMPS has a shortage of P-1 and LL capacity. LL is simply a bottleneck - the more machines we have doing this kind of work, the faster the project proceeds.[/QUOTE]
Is there reason to be concerned about DC's, where the factors currently being assigned in that area are only half the size of the ones that LL crunchers are getting? Or, not really? Rodrigo |
[QUOTE=Rodrigo;277811]Is there reason to be concerned about DC's, where the factors currently being assigned in that area are only half the size of the ones that LL crunchers are getting? Or, not really?[/QUOTE]
I assume that by "factors" you mean exponents. Otherwise I don't understand the question. I see no reason to be "concerned" about anything about the project. Whether you would view it to be desirable to do more DC, or less, relative to first time LL depends upon whether you think it more, or less, important to verify the status of Mersenne Numbers, than it is to find new Mersenne primes. There's no objective answer to that question. |
[QUOTE=Mr. P-1;277129]There are two things happening here.
First, without changing the bounds, the algorithm runs faster with more memory. Secondly, both P-1 and ECM are memory-bandwidth-hungry algorithms. If both are running at the same time, they will compete for the available memory bandwidth, slowing down both.[/QUOTE] As bcp19 has just reminded me, there is more going on here than just memory bandwidth contention. There is also cache contention. |
[QUOTE=Mr. P-1;277816]I assume that by "factors" you mean exponents. Otherwise I don't understand the question.[/QUOTE]
Yes, I meant exponents. That's what I get for typing at lunchtime. [QUOTE=Mr. P-1;277816]I see no reason to be "concerned" about anything about the project. Whether you would view it to be desirable to do more DC, or less, relative to first time LL depends upon whether you think it more, or less, important to verify the status of Mersenne Numbers, than it is to find new Mersenne primes. There's no objective answer to that question.[/QUOTE] The reason I asked is that you'd said that GIMPS has a "shortage" of LL (and P-1) capacity. Noting that the exponents currently being assigned to DC are much smaller than the LLs currently being assigned, I was curious as to whether, by the same token, we could also say that there is a shortage of DC capacity. Yes? No? Seeking to understand better... Rodrigo |
[QUOTE=Rodrigo;277819]Yes, I meant exponents. That's what I get for typing at lunchtime.
The reason I asked is that you'd said that GIMPS has a "shortage" of LL (and P-1) capacity. Noting that the exponents currently being assigned to DC are much smaller than the LLs currently being assigned, I was curious as to whether, by the same token, we could also say that there is a shortage of DC capacity. Yes? No? Seeking to understand better... Rodrigo[/QUOTE] You mean "liquid lunch" I assume. LL is what this project is all about. If there is a "shortage" we need to seduce some more participants. Like India or China. David |
I think shortage of LL is about as subjective as whether or not DC is too slow. I personally don't see a shortage in LL, but rather think of the LL work being done as setting how important everything else is. The P-1 to LL ratio of work being completed is lower than is optimal, which is why we say there's a shortage of P-1. It really is each to his own here.
|
[QUOTE=Dubslow;277825]I think shortage of LL is about as subjective as whether or not DC is too slow. I personally don't see a shortage in LL, but rather think of the LL work being done as setting how important everything else is. The P-1 to LL ratio of work being completed is lower than is optimal, which is why we say there's a shortage of P-1. It really is each to his own here.[/QUOTE]
I think we are all singing from the same hymn-sheet. If a CPU gets an LL assignment with inadequate P-1, it is little trouble (even a refreshing change) to do it before the test. But inadequate TF requires a GPU. BTW I understand the interest in factoring low mersennes, but the "effort" >100M seems to me to be completely pointless. David |
[QUOTE=davieddy;277826]BTW I understand the interest in factoring low mersennes,
but the "effort" >100M seems to me to be completely pointless.[/QUOTE] Of course it is. But let us please be honest... At the end of the day the entire GIMPS project is pointless... Except, of course, for the advances in algorithms, optimization of code, and distributed computing methodologies that this project requires. And I have already explained (many times) why I have my cluster trial factoring above >100M. But to say again: as a sys-admin / network admin, I find it useful to have the machines I am responsible for generate a small but regular amount of traffic which is a function of the CPU power available to them (as opposed to, say, a Worm or Virus). This has helped me solve many issues over the years. |
[QUOTE=chalsall;277789]Please forgive me for this "plug", but I'd like to bring to the attention of all P-1 Workers that the "GPU to 72" Tool is now making available low exponents (with no LL work yet done) which have been TFed to high levels (72 bits) by GPU's which need a P-1 run.
Please see [URL="http://gpu.mersenne.info/account/getassignments/p-1/"]http://gpu.mersenne.info/account/getassignments/p-1/[/URL]. (If you don't have an account at the site, you'll have to create one before being able to be assigned work.)[/QUOTE] Great - nice work as usual, chalsall! I'll grab a little P-1 work in a few days when my supplies run low. Doing P-1 after GPU-TF makes so much sense! |
[QUOTE=davieddy;277826]BTW I understand the interest in factoring low mersennes, but the "effort" >100M seems to me to be completely pointless.[/QUOTE]Is [URL="https://www.eff.org/awards/coop"]$150,000[/URL] pointless???:ouch:
|
[QUOTE=Uncwilly;277856]Is [URL="https://www.eff.org/awards/coop"]$150,000[/URL] pointless???:ouch:[/QUOTE]
It'll be worthless by the time a 100M digit prime is found.

More to the point, how much use is TFing the b*****s to, say, 68 bits? Chocolate teapot IMO. |
I was looking at low level exponents checking the amount of P-1 done (similar to what James H is doing but in the 7M range) and set up to run P-1's on the ones that had lower B1/B2 bounds. Since the bounds most often used seem to be 85000,1593750 I skipped all the exponents at or above these bounds. 90+% of these were TF'd to ^63. Since the GPU is fairly quick on these (8 min I believe), I set my worktodo with TF to ^64 in between the GPU to 72 exponents I am running so it cranks out 6-8 TF a day.
Now, I'm wondering if anyone could help me to understand this if possible: 7000163,63,85000,1593750 shows this exponent was TF'd to ^63 and the B1/B2 bounds were such that I did not send it to P-1, yet the GPU running ^63 to ^64 on this exponent came up with 12771553988326268647 as a factor. I'll admit my math capability is no where near understanding P-1 and TF, but from what I have read on here so far it seemed to me that the P-1 should have found this factor that going another bit level found. Can anyone give me a low level explanation why this happened? |
Never mind.
|
[QUOTE=bcp19;278329]I was looking at low level exponents checking the amount of P-1 done (similar to what James H is doing but in the 7M range) and set up to run P-1's on the ones that had lower B1/B2 bounds. Since the bounds most often used seem to be 85000,1593750 I skipped all the exponents at or above these bounds. 90+% of these were TF'd to ^63. Since the GPU is fairly quick on these (8 min I believe), I set my worktodo with TF to ^64 in between the GPU to 72 exponents I am running so it cranks out 6-8 TF a day.
Now, I'm wondering if anyone could help me to understand this if possible: 7000163,63,85000,1593750 shows this exponent was TF'd to ^63 and the B1/B2 bounds were such that I did not send it to P-1, yet the GPU running ^63 to ^64 on this exponent came up with 12771553988326268647 as a factor. I'll admit my math capability is no where near understanding P-1 and TF, but from what I have read on here so far it seemed to me that the P-1 should have found this factor that going another bit level found. Can anyone give me a low level explanation why this happened?[/QUOTE] I am no expert on the guts of P-1, but according to James' page here, you would have had to have B2 be around 12M (the size of the biggest factor of k). 1.5M won't get the job done. [URL="http://mersenne-aries.sili.net/exponent.php?exponentdetails=7000163"]http://mersenne-aries.sili.net/exponent.php?exponentdetails=7000163[/URL] |
[QUOTE=bcp19;278329]it seemed to me that the P-1 should have found this factor that going another bit level found.[/QUOTE]TF and P-1 typically find different types of factors; the ones that are "easy" to find by one method are not necessarily "easy" for the other method.
TF is easy to understand: try each prime number between 2^63 and 2^64 and see if it divides into 2^exponent-1. If it does, it's a factor. P-1 factors are only found when they meet certain criteria. The so-called "k-value" of any P-1 factor is the prime factorization of the discovered factor minus one, with 2 and the Mersenne exponent removed. Trying to clarify that with an example:[quote]M7000163 has a factor: 12771553988326268647 12771553988326268647 - 1 = 12771553988326268646 factorize(12771553988326268646) = [b]2[/b] × 3^4 × 7^2 × 19 × [b]7000163[/b] × 12096811 k = [color=blue]3^4[/color] × 7^2 × 19 × [color=blue]12096811[/color][/quote]For P-1 to find the factor, B2 and B1 need to be greater than or equal to the largest and second-largest prime powers respectively, which means that the minimum bounds for P-1 to find the factor are B1=81 (that is, 3^4) and B2=12,096,811 You can see these numbers, plus some pretty graphs to try and help visualize that, on my site: [url]http://mersenne-aries.sili.net/7000163[/url] The factor you found isn't very "smooth", which makes it more difficult (as in larger bounds are needed) to find with P-1 and therefore more likely to show up with TF. Compare with [url=http://mersenne-aries.sili.net/6877433]this relatively large factor[/url] I found yesterday which was easily found with P-1 but would take a current GPU a few billion years to TF. |
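[Editor's note: the smoothness criterion above can be made concrete with a small sketch (not from the thread or James's site; naive trial division, which is fast here because the cofactor shrinks as factors are divided out) that derives the minimum P-1 bounds for a known factor.]

```python
def pminus1_min_bounds(factor, exponent):
    """Minimum (B1, B2) for P-1 to find `factor` of M(exponent).

    Any factor f of 2^p - 1 has the form f = 2*k*p + 1, so we factor
    k = (f-1)/(2p) into prime powers.  B2 must cover the largest prime
    power; B1 must cover everything else (i.e. the second-largest)."""
    k = (factor - 1) // (2 * exponent)
    powers, d = [], 2
    while d * d <= k:                 # trial division, dividing out as we go
        if k % d == 0:
            pw = 1
            while k % d == 0:
                k //= d
                pw *= d
            powers.append(pw)         # full prime power, e.g. 3^4 = 81
        d += 1
    if k > 1:
        powers.append(k)              # leftover prime
    powers.sort()
    b2 = powers[-1]
    b1 = powers[-2] if len(powers) > 1 else 1
    return b1, b2

# M7000163's factor from the post: k = 3^4 * 7^2 * 19 * 12096811
print(pminus1_min_bounds(12771553988326268647, 7000163))  # (81, 12096811)
```

This reproduces the B1=81, B2=12,096,811 figures quoted above.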
[QUOTE=garo;278334]There was an error in the P-1 test?[/QUOTE]There was no error. You can clearly see on [url=http://mersenne-aries.sili.net/7000163]the P-1 graph[/url] that the factor lies outside the bounds used for the P-1 test.
|
[QUOTE=James Heinrich;278340]
The factor you found isn't very "smooth", which makes it more difficult (as in larger bounds are needed) to find with P-1 and therefore more likely to show up with TF. Compare with [url=http://mersenne-aries.sili.net/6877433]this relatively large factor[/url] I found yesterday which was easily found with P-1 but would take a current GPU a few billion years to TF.[/QUOTE] [URL="http://mersenne-aries.sili.net/exponent.php?factordetails=16861341412139695521233727186051728863"]http://mersenne-aries.sili.net/exponent.php?factordetails=16861341412139695521233727186051728863[/URL] is by far my favorite example. All of the GPU's in the world would never find this, but it is almost trivial for P-1 to discover. |
[QUOTE=James Heinrich;278340]TF and P-1 typically find different types of factors; the ones that are "easy" to find by one method are not necessarily "easy" for the other method.
TF is easy to understand: try each prime number between 2^63 and 2^64 and see if it divides into 2^exponent-1. If it does, it's a factor. P-1 factors are only found when they meet certain criteria. The so-called "k-value" of any P-1 factor is the prime factorization of the discovered factor minus one, with 2 and the Mersenne exponent removed. Trying to clarify that with an example:For P-1 to find the factor, B2 and B1 need to be greater than or equal to the largest and second-largest prime powers respectively, which means that the minimum bounds for P-1 to find the factor are B1=81 (that is, 3^4) and B2=12,096,811 You can see these numbers, plus some pretty graphs to try and help visualize that, on my site: [URL]http://mersenne-aries.sili.net/7000163[/URL] The factor you found isn't very "smooth", which makes it more difficult (as in larger bounds are needed) to find with P-1 and therefore more likely to show up with TF. Compare with [URL="http://mersenne-aries.sili.net/6877433"]this relatively large factor[/URL] I found yesterday which was easily found with P-1 but would take a current GPU a few billion years to TF.[/QUOTE] Thank you for that explanation, I am beginning to understand P-1 a bit better. I think the other one I was trying to learn from was actually TOO simplified (read too small a number to P-1), which ended up confusing me more than it helped. Looks like it is a good idea to both expand the P-1 bounds AND extend the TF a few extra bits in finding factors in the lower ranges. So, if an exponent was P-1'd to say 30000/600000 and you want to extend it to 85000/1500000, is there a 'shortcut' to extending it or does it still have to do all the work again? |
[QUOTE=bcp19;278357]So, if an exponent was P-1'd to say 30000/600000 and you want to extend it to 85000/1500000, is there a 'shortcut' to extending it or does it still have to do all the work again?[/QUOTE]If you have the P-1 save file(s) then not all the work needs to be repeated. If you have the stage 2 savefile (from when the first P-1 was complete) you could extend it to 30000/1500000 by simply continuing. But if you increase B1, you need a stage 1 savefile to continue from, [i]and[/i] most/all of the stage 2 work will need to be re-done after extending B1. You can't just extend B1 from 30k -> 85k and continue B2 starting at 600k (where stage 2 ended on the first try), because you could potentially miss a factor that requires B2 <= 600k but also B1 > 30k (which would not have been found in the first attempt).
Short answer: If you're planning to methodically work P-1 to progressively higher bounds it can be done somewhat efficiently to minimize wasted work, but it's not particularly easy. If you want to redo P-1 that someone else did but to higher bounds, then you're out of luck, just start fresh. |
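[Editor's note: the "missed factor" case described above can be sketched as a toy predicate (an illustration, not Prime95's actual logic; the example k is hypothetical): every prime power of a factor's k must fit under B1, except possibly the single largest, which only needs to fit under B2.]

```python
def found_by_pminus1(k_prime_powers, b1, b2):
    """Would P-1 with bounds (b1, b2) find a factor f of M(p) whose
    k = (f-1)/(2p) has the given prime-power factorization?"""
    pw = sorted(k_prime_powers)
    # largest prime power only needs B2; all the rest must fit under B1
    return pw[-1] <= b2 and all(q <= b1 for q in pw[:-1])

# Hypothetical k = 40009 * 500009: both pieces exceed B1 = 30000.
k_powers = [40009, 500009]
print(found_by_pminus1(k_powers, 30000, 600000))    # False: missed at 30000/600000
print(found_by_pminus1(k_powers, 85000, 1500000))   # True: found at 85000/1500000
```

This is why resuming stage 2 at 600k after raising B1 would skip exactly this kind of factor.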
I kind of had that feeling from what I had read, thank you for clarifying lt for me.
|
Save files
undoc.txt
"By default P-1 work does not delete the save files when the work unit completes. This lets you run P-1 to a higher bound at a later date. You can force the program to delete save files by adding this line to prime.txt: KeepPminus1SaveFiles=0" I am a little confused by this information in undoc.txt. I think that by default the P-1 save files are deleted? At least I cannot find any save files from my completed P-1s. If I want to keep all save files should I add: "KeepPminus1SaveFiles=1"? Can this variable (KeepPminus1SaveFiles) have other values than 0 and 1, and if so what is the significance of the other values? |
Before v26(?) it was normal for the savefiles to be deleted after the assignment completed. Since v26.? the [i]KeepPminus1SaveFiles[/i] option has been added, with the default value of keeping the savefiles; if you don't want to keep them you should add [color=blue]KeepPminus1SaveFiles=1[/color] to [i]prime.txt[/i]
It's an on/off switch (1 means they're kept, 0 means they're deleted), no other values make sense. |
[QUOTE=aketilander;278469]undoc.txt
"By default P-1 work does not delete the save files when the work unit completes. This lets you run P-1 to a higher bound at a later date. You can force the program to delete save files by adding this line to prime.txt: KeepPminus1SaveFiles=0" I am a little confused by this information in undoc.txt. I think that by default the P-1 save files are deleted? At least I cannot find any save files from my completed P-1s. If I want to keep all save files should I add: "KeepPminus1SaveFiles=1" Can this variable (KeepPminus1SaveFiles) have other values then 0, 1 and if so what is the significance of the other values?[/QUOTE] There are two kinds of p-1 worktodo lines: [CODE]Pfactor=k,b,n,c,how_far_factored,num_primality_tests_saved Pminus1=k,b,n,c,B1,B2 for example Pfactor=1,2,2700067,-1,61,10 Pminus1=1,2,2700067,-1,150000,3000000[/CODE] Pfactor lines tell the client to work out bounds itself, Pminus1 lines specify the bounds. Primenet hands out Pfactor lines. I think the KeepPminus1SaveFiles option must apply to p-1 work done with Pminus1 lines, not Pfactor lines. My prime.txt files do not have "KeepPminus1SaveFiles" in them, I have Pfactor lines, and the save files go away when the work is done. A long time ago I tried using Pminus1 lines and I had to delete the save files manually. |
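[Editor's note: the two field layouts described above can be sketched with a toy parser (an illustration, not Prime95 code; field meanings as given in the post).]

```python
def parse_worktodo(line):
    """Split a Pfactor=/Pminus1= worktodo line into named fields."""
    worktype, _, rest = line.partition("=")
    f = [int(x) for x in rest.split(",")]
    k, b, n, c = f[0], f[1], f[2], f[3]   # common k*b^n+c fields; n is the exponent
    if worktype == "Pfactor":
        # client picks B1/B2 itself from TF depth and primality tests saved
        return {"type": worktype, "exponent": n,
                "how_far_factored": f[4], "tests_saved": f[5]}
    if worktype == "Pminus1":
        # bounds are specified explicitly
        return {"type": worktype, "exponent": n, "B1": f[4], "B2": f[5]}
    raise ValueError("unrecognized worktype: " + worktype)

print(parse_worktodo("Pfactor=1,2,2700067,-1,61,10"))
print(parse_worktodo("Pminus1=1,2,2700067,-1,150000,3000000"))
```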
[QUOTE=James Heinrich;278471] if you don't want to keep them you should add [COLOR=blue]KeepPminus1SaveFiles=1[/COLOR] to [I]prime.txt[/I][/QUOTE]
Copy/Paste typo? |
[QUOTE=LaurV;278478]Copy/Paste typo?[/QUOTE]Oh... oops :redface:
That should of course be [quote]if you don't want to keep them you should add [color=blue]KeepPminus1SaveFiles=[/color][color=red]0[/color] to prime.txt[/quote] |
[QUOTE=James Heinrich;276848]Just found another one where the previous P-1 apparently didn't find the factor for some reason: [url=http://mersenne-aries.sili.net/exponent.php?exponentdetails=6853937]M6,853,937[/url].[/QUOTE]Just keeping track for myself, I found two more P-1 factors that should have already been found based on previous P-1 bounds:
[url]http://mersenne-aries.sili.net/6853967[/url] [url]http://mersenne-aries.sili.net/6854297[/url] |
Those exponents are too close to be a coincidence. Probably somebody was running P95 with hardware errors. (Now that I think about it, how much does P95 do in terms of error checking for non-LL work?)
|
[QUOTE=Dubslow;279257]Those exponents are too close to be a coincidence. Probably somebody was running P95 with hardware errors. (Now that I think about it, how much does P95 do in terms of error checking for non-LL work?)[/QUOTE]
Might not be hardware error, if you look at the P-1 work done in the lower ranges, you'll see some completed to 85K/1.5M while others are 30K/30K. If someone new started doing P-1 work with not much memory available, they'd probably have gotten several exponents in a row and not have done good P-1 work. |
[QUOTE=bcp19;279278]Might not be hardware error, if you look at the P-1 work done in the lower ranges, you'll see some completed to 85K/1.5M while others are 30K/30K. If someone new started doing P-1 work with not much memory available, they'd probably have gotten several exponents in a row and not have done good P-1 work.[/QUOTE]
The point is the factors should have been found with the bounds reported [I]for these specific exponents[/I]. (They're in the pages James linked to.) If there was no error in hardware or elsewhere, that is. |
[QUOTE=markr;279288]The point is the factors should have been found with the bounds reported [I]for these specific exponents[/I].[/quote]Exactly. To list those exponents (that I've found so far) in one place for easy reference:
[url=http://mersenne-aries.sili.net/6802123]M6,802,123[/url], [url=http://mersenne-aries.sili.net/6853937]M6,853,937[/url], [url=http://mersenne-aries.sili.net/6853967]M6,853,967[/url], [url=http://mersenne-aries.sili.net/6854297]M6,854,297[/url], [url=http://mersenne-aries.sili.net/6888719]M6,888,719[/url] Unfortunately there's no way to know from PrimeNet's limited data from that time whether these factors were all not-found by the same computer / user / software or during a certain time period. A large number of the exponents in this range were P-1'd with abnormally low bounds so I'm redoing those P-1s anyhow, and I should find all the factors that should've been found (and plenty more outside the original P-1 bounds). |
Should I be concerned that all the exponents I've completed that are above 60 million give the exact same amount of credit for completing them?
|
Music to my ears
[QUOTE=Jwb52z;279339]Should I be concerned that all the exponents I've completed that are above 60 million give the exact same amount of credit for completing them?[/QUOTE]
That's fine... as long as it was zero (to 3 sig figs). Nothing x something = nothing.

60M is no man's land ATM.

David

PS There be Dragons |