[QUOTE=Jwb52z;279339]Should I be concerned that all the exponents I've completed that are above 60 million give the exact same amount of credit for completing them?[/QUOTE]Is it a reasonable amount of credit, something pretty close to 5GHz-days?
How much of a range do the exponents in question cover; are they all pretty close to each other? |
All of them, according to my results page on my account, have a credit of exactly "5.3087 GHz Days", if I'm reading this correctly. They are fairly close to each other, but I would think that there would be a small difference since it goes out to 4 decimal places.
|
[QUOTE=Jwb52z;279358]All of them, according to my results page on my account, have a credit of exactly "5.3087 GHz Days", if I'm reading this correctly. They are fairly close to each other, but I would think that there would be a small difference since it goes out to 4 decimal places.[/QUOTE]
What are the bounds used? If they all have the same bounds, then the credit will be the same. Minor differences in bound = minor differences in score. |
[QUOTE=Jwb52z;279358]I would think that there would be a small difference since it goes out to 4 decimal places.[/QUOTE]Actually no: The credit is based solely on FFT size and bounds:[code]// $timing is from a precalculated lookup table based on exponent size
return ( $timing * ( 1.45 * $B1 + 0.079 * ($B2 - $B1) ) / 86400.0 );[/code]All exponents between M58,520,000 and M60,940,000 should use FFT size of 3200K, and so if the [i]same bounds[/i] are chosen for each exponent then the credit will be exactly the same. Of course, the credit for an L-L test does vary directly with the exponent, so it's quite possible that Prime95 would choose different bounds and therefore give different credit for similar assignments, but if the exponents are close enough together it's reasonable to assume that identical bounds would be chosen and therefore identical credit given. |
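For the curious, the quoted server code translates directly. Here's a sketch in Python; note the timing constant below is a made-up placeholder, since the real precalculated per-FFT-size lookup table isn't shown:

```python
# P-1 credit formula from the quoted server code. TIMING_3200K is a
# hypothetical stand-in for the lookup-table value for the 3200K FFT
# size; only the structure of the formula is taken from the post.
TIMING_3200K = 0.05  # illustrative only, not the real table value

def p1_credit(timing, b1, b2):
    """Credit (GHz-days) for a P-1 run: depends only on FFT size (via
    the timing constant) and the bounds B1/B2, not on the exponent."""
    return timing * (1.45 * b1 + 0.079 * (b2 - b1)) / 86400.0

# Two different exponents in the same FFT range, run to identical
# (hypothetical) bounds, necessarily earn identical credit:
assert p1_credit(TIMING_3200K, 600_000, 12_000_000) == \
       p1_credit(TIMING_3200K, 600_000, 12_000_000)

# Raising either bound raises the credit:
assert p1_credit(TIMING_3200K, 600_000, 13_000_000) > \
       p1_credit(TIMING_3200K, 600_000, 12_000_000)
```

This is why identical bounds give byte-for-byte identical credit, right down to the fourth decimal place.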
[QUOTE=Jwb52z;279358]All of them, according to my results page on my account, have a credit of exactly "5.3087 GHz Days", if I'm reading this correctly. They are fairly close to each other, but I would think that there would be a small difference since it goes out to 4 decimal places.[/QUOTE]
deleted EDIT: James' post showed mine to be inaccurate :/ |
[QUOTE=James Heinrich;279372]Actually no: The credit is based solely on FFT size and bounds:[code]// $timing is from a precalculated lookup table based on exponent size
return ( $timing * ( 1.45 * $B1 + 0.079 * ($B2 - $B1) ) / 86400.0 );[/code]All exponents between M58,520,000 and M60,940,000 should use FFT size of 3200K and so if the [i]same bounds[/i] are chosen for each exponent then the credit will be exactly the same. Of course, the credit for a L-L test does vary directly with the exponent so it's quite possible that Prime95 would choose different bounds and therefore give different credit for similar assignments, but if the exponents are close enough together it's reasonable to assume that identical bounds would be chosen and therefore identical credit given.[/QUOTE]I'll remember that for when/if I do another LL instead of P-1 again sometime. :) |
Some "newbie" questions.
Whether or not a factor is found by P-1 is determined purely by
the bounds, right? The amount of memory available just affects the speed. Right? If we decide on the optimum TF and P-1 bounds, it is of marginal importance when we do the P-1, since most of the time no factor will be found. So GPUs should go all the way on TF without waiting for the P-1 which won't get done anyway. Right? David |
[QUOTE=James Heinrich;279372]Actually no: The credit is based solely on FFT size and bounds:[code]// $timing is from a precalculated lookup table based on exponent size
return ( $timing * ( 1.45 * $B1 + 0.079 * ($B2 - $B1) ) / 86400.0 );[/code]All exponents between M58,520,000 and M60,940,000 should use FFT size of 3200K and so if the [I]same bounds[/I] are chosen for each exponent then the credit will be exactly the same. Of course, the credit for a L-L test does vary directly with the exponent so it's quite possible that Prime95 would choose different bounds and therefore give different credit for similar assignments, but if the exponents are close enough together it's reasonable to assume that identical bounds would be chosen and therefore identical credit given.[/QUOTE] I've actually seen the fourth digit vary. |
[QUOTE=davieddy;279745]Whether or not a factor is found by P-1 is determined purely by the bounds, right? The amount of memory available just affects the speed. Right?[/quote]For [i]fixed bounds[/i], the amount of available memory just affects the speed, not the chance of finding a factor. Of course, the speed differences with different amounts of memory usually dictate the choosing of different bounds for optimal efficiency, but that's another discussion.

[QUOTE=davieddy;279745]If we decide on the optimum TF and P-1 bounds, it is of marginal importance when we do the P-1, since most of the time no factor will be found. So GPUs should go all the way on TF without waiting for the P-1 which won't get done anyway. Right?[/QUOTE]I assume you're referring to the concept of leaving a bitlevel or two of TF for [i]after[/i] P-1. My opinion is that (especially considering the relative surplus of TF power now available) the split-TF approach could safely be abandoned: do TF up to a GPU-reasonable level (e.g. 2^72, per [i]chalsall[/i]'s new project's name). After that, assuming no factors found, then us P-1'ers come in and try to do a decent test before the exponent gets unleashed on the L-L'ers (who may or may not do a reasonable P-1, or any at all).

It's possible that the last bit of TF could obviate the P-1 test, or vice versa; overall I don't think it makes much difference which is done first, although I suspect the GPU-TFs are still cheaper and should be done first, because of the lower cost of GPU-TF. Throwing out some numbers for 60M-range exponents: it takes about 5 GHz-days for a good P-1 (~4.5% chance of a factor) and 15 GHz-days to TF from 2^68 to 2^72 (~5.75%). So TF gives a somewhat higher chance of a factor for 3x the apparent effort, but that translates to considerably [i]less[/i] real effort since GPUs bring 5x-50x the power of CPUs. |
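Putting those numbers side by side (these are just the ballpark figures quoted above for 60M-range exponents, not PrimeNet's actual probability model):

```python
# Rough efficiency comparison using the figures from the post above.
# All numbers are ballpark estimates, not official PrimeNet values.
p1_prob, p1_cost = 0.045, 5.0    # ~4.5% factor chance, ~5 GHz-days P-1
tf_prob, tf_cost = 0.0575, 15.0  # ~5.75% chance, 2^68 -> 2^72 TF

p1_eff = p1_prob / p1_cost       # chance of factor per GHz-day
tf_eff = tf_prob / tf_cost

# On paper, P-1 is the better per-GHz-day deal...
assert p1_eff > tf_eff

# ...but a GPU delivers roughly 5x-50x the throughput of a CPU, so in
# wall-clock terms the TF is cheap. Even at the conservative 5x end,
# the 15 GHz-days of TF cost less than the 5 GHz-days of CPU P-1:
gpu_speedup = 5
assert tf_cost / gpu_speedup < p1_cost
```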
Many thanks James.
Exactly my thoughts. But the LL tester intent on finishing it (regrettably a minority, apparently) would be barmy not to do P-1 first if it hadn't been done already. David PS: the "overlap" of P-1 and TF is ~30% |
[QUOTE=James Heinrich;279748]My opinion is that (especially considering the relative surplus of TF power now available) the split-TF approach could safely be abandoned: Do TF up to a GPU-reasonable level (e.g. 2^72 per [i]chalsall[/i]'s new project's name). After that, assuming no factors found, then us P-1'ers come in and try and do a decent test before the exponent gets unleashed on the L-L'ers (who may or may not do a reasonable P-1 (or any at all)).[/QUOTE]
James et al... If I may give some (very) raw data of what I know:

[CODE]mysql> select count(*) from GPU where Status<9 and P1=1;
+----------+
| count(*) |
+----------+
|    15539 |
+----------+
1 row in set (0.02 sec)

mysql> select count(*) from GPU where Status<9 and P1=1 and B1=B2;
+----------+
| count(*) |
+----------+
|     5254 |
+----------+
1 row in set (0.01 sec)[/CODE]

This tells me that approximately one third of those LL and DC candidates currently held by "GPU to 72" which had P-1 work done on them were done "poorly". As in, the B1 was the same as B2, which (if I understand it correctly) means phase 2 of the P-1 process was not done because of lack of memory.

From this, then, would it be desirable for some P-1 workers to rework such candidates before releasing them to LL / DC workers? We have had many P-1 workers complain they couldn't get P-1 work which had already been TFed to 2^72. And, to my surprise, the project has a [B][I]lot[/I][/B] of very serious P-1 fire-power. |
[QUOTE=chalsall;279756]As in, the B1 was the same as B2 which (if I understand it correctly) means phase 2 of the P-1 process was not done because of lack of memory.[/quote]Correct. If insufficient memory is available, only stage1 will be done [i]but[/i] it will have a much higher B1 chosen to compensate (such that the probability vs effort remains proportionate).
[QUOTE=chalsall;279756]From this, then, would it be desirable of some P-1 workers to rework such candidates before releasing them to LL / DC workers?[/QUOTE]No, not for this definition of "poorly". There's "good" (B2 > B1 and plenty of RAM available for good bounds), ~= 5-6% probability. There's "adequate" (B2=B1 due to lack of RAM), ~= 3-4% probability. And then there's "poor", where the user overrode the default bounds and picked something silly. For example, some of the [url=http://mersenne-aries.sili.net/p1small.php?worst=1]worst offenders[/url]. I've cleaned up most of the worst offenders over the last few years, things like B1=30,B2=100 and nonsense like that. Unfortunately PrimeNet accepts any P-1 as being "done", even if it has an infinitesimal probability (I'd prefer to see PrimeNet reject any no-factor P-1 result that has <2% chance of factor to prevent these follies).

If there's a surplus of P-1 workers, give them exponents that have been LL'd but not DC'd and not P-1'd (I'm not sure if you look for these currently?). If there's none of that, it's a better use of resources to do normal pre-LL P-1 rather than spend the same 5 GHz-days for a 1-2% incremental chance of factor over the previous P-1. |
[QUOTE=James Heinrich;279757]And then there's "poor", where the user overrode the default bounds and picked something silly. For example, some of the [url=http://mersenne-aries.sili.net/p1small.php?worst=1]worst offenders[/url]. I've cleaned up most of the worst offenders over the last few years, things like B1=30,B2=100 and nonsense like that. Unfortunately PrimeNet accepts any P-1 as being "done", even if it has an infinitesimal probability (I'd prefer to see PrimeNet reject any no-factor P-1 result that has <2% chance of factor to prevent these follies).[/QUOTE]
Thanks for that clarification James. [QUOTE=James Heinrich;279757]If there's a surplus of P-1 workers, give them exponents that have been LL'd but not DC'd and not P-1'd (I'm not sure if you look for these currently?).[/QUOTE] The system has never encountered a DC candidate which hasn't been P-1'd. [QUOTE=James Heinrich;279757]If there's none of that, better use of resources to do normal pre-LL P-1 rather than spend same 5GHz-days for a 1-2% incremental chance of factor over the previous P-1.[/QUOTE] That's what is happening now for those workers who are willing to work on candidates TFed to less than 72 "bits". But they have to explicitly ask for such work by changing the "Minimum TF level" from 72 to 71 on the request form. Otherwise they're told "Sorry -- no such work available". |
[QUOTE=davieddy;279745]Whether or not a factor is found by P-1 is determined purely by
the bounds, right? The amount of memory available just affects the speed. Right?[/QUOTE]Usually right, but not always. James's explanation is fine as far as it goes, but doesn't mention the Brent-Suyama extension. When there's a [U]lot[/U] of available memory (more than even some dedicated P-1ers allocate) for P-1 stage 2, prime95 will use the Brent-Suyama extension to extend the factor search much higher than the specified B2, at the cost of much memory space. (If I find a simple explanation, I'll add a link. Alex Kruppa has written a thesis with "[t]he main ideas of the Brent-Suyama extension ...". See [URL]http://www.mersenneforum.org/showthread.php?t=8190[/URL]) |
[QUOTE=chalsall;279759]The system has never encountered a DC candidate which hasn't been P-1'd.[/quote]There's [url=http://mersenne-aries.sili.net/p1small.php?prob=2&min=44700000&max=44800000&onlystage1=0&ignorenop1=0&showassigned=0]hundreds of them[/url] (change the dropdown from "ignore LL tests left" to "1 LL test left" to see the counts and ranges).
[QUOTE=chalsall;279759]Otherwise they're told "Sorry -- no such work available".[/QUOTE]A single-click link to re-post the form with the max bitlevel reduced by one would be a good idea, methinks. Or possibly leave the max bitlevel field blank by default, and only restrict it to that if explicitly entered, otherwise return from the whole pool in descending order of bitlevel, that way if a 72-bit assignment is available then great, otherwise it will automatically fall back to 71 or 70 or whatever. |
[QUOTE=James Heinrich;279763]A single-click link to re-post the form with the max bitlevel reduced by one would be a good idea, methinks.[/QUOTE]
Was "1-click" not patented by Amazon.com? I would hate to be sued by the "big boys". (That's meant to be funny, and serious, at the same time.) I appreciate your suggestion. It's a good one. |
[QUOTE=James Heinrich;279763]There's [url=http://mersenne-aries.sili.net/p1small.php?prob=2&min=44700000&max=44800000&onlystage1=0&ignorenop1=0&showassigned=0]hundreds of them[/url] (change the dropdown from "ignore LL tests left" to "1 LL test left" to see the counts and ranges).[/QUOTE]
I don't disagree with you. I've simply said my system has never encountered any. These statements are not in opposition. (If I may say again, I really enjoy working with really smart and attentive people. :smile:) |
How hard would it be to implement something in P95 to say "no B1-smooth k, do stage 2"? As far as I know stage 2 doesn't need the stage 1 memory file as long as the result is known.
|
[QUOTE=James Heinrich;279763]There's [url=http://mersenne-aries.sili.net/p1small.php?prob=2&min=44700000&max=44800000&onlystage1=0&ignorenop1=0&showassigned=0]hundreds of them[/url] (change the dropdown from "ignore LL tests left" to "1 LL test left" to see the counts and ranges).[/QUOTE]
I believe he is referring to DC's at the current wavefront; aka the GPU to 72 range; aka 25M-30M. Granted, some were done to poor bounds or with no stage 2, but all have some P-1. |
OK the "Newbie" bit was a jest
[QUOTE=davieddy;279745]
If we decide on the optimum TF and P-1 bounds, it is of marginal importance when we do the P-1, since most of the time no factor will be found. So GPUs should go all the way on TF without waiting for the P-1 which won't get done anyway. Right? David[/QUOTE] Nice example: I'd bagged some 45M expo [URL="http://www.youtube.com/watch?v=i5Tiqv4Irjs"]after midnight[/URL], TFed to 68, no P-1, and asked Eric Clapton to do some TF while I did P-1. No thought from either of us that one should go first. He just asked me to let him know if I'd found a factor, and vice versa:smile: David |
[QUOTE=petrw1;279774]I believe he is referring to DC's at the current wavefront; aka the GPU to 72 range; aka 25M-30M.
Granted some are to poor bounds or with no stage 2 but all have some P-1.[/QUOTE]Yes they do, cause I already did all the ones that didn't already have P-1 done. :smile: |
I know it would take some changes that George doesn't have time for, unless I misunderstand how it works, but I really wish I could get P-1 assignments that aren't in what someone else called "no man's land", ones closer to the current first-time LL wavefront, to help clear out some of those candidates and save time. The problem is, I don't want to have to do it manually because I'm a bit stupid when it comes to that kind of thing.
|
[QUOTE=Jwb52z;279782]I really wish I could get some P-1s that aren't in what someone else called "no man's land" ... I don't want to have to do it manually[/QUOTE]Just set your preferred worktype to P-1 and you'll automatically get P-1 assignments a little ahead of the L-L curve. P-1 assignments are slightly ahead of the LL wavefront to give a little room for the [url=http://www.mersenneforum.org/showthread.php?t=16211]GPU-TF enthusiasts[/url] to quickly clean up the just-in-front-of-the-wave space, but that's definitely a manual thing (although not that complicated, just a little copy-paste every few days).
|
[QUOTE=James Heinrich;279819]Just set your preferred worktype to P-1 and you'll automatically get P-1 assignments a little ahead of the L-L curve. P-1 assignments are slightly ahead of the LL wavefront to give a little room for the [url=http://www.mersenneforum.org/showthread.php?t=16211]GPU-TF enthusiasts[/url] to quickly clean up the just-in-front-of-the-wave space, but that's definitely a manual thing (although not that complicated, just a little copy-paste every few days).[/QUOTE]
If he doesn't mind the copy and paste, then the GPU to 72 tool is looking for P-1 help; it's a little closer "into" the wavefront than just setting the worktype to do P-1. |
[QUOTE=Jwb52z;279782]"no man's land"[/QUOTE]
This IS what you currently get when taking primenet's P-1: exps beyond 60M, not just "a little ahead of the L-L curve". I had to change all my P-1 workers from auto-primenet to G72 manual. I'm not excited about that - though I'm happy about GPU-to-72 providing useful P-1 tasks. |
[QUOTE=davieddy;279776] and asked Eric Clapton to do some TF while I did P-1. [/QUOTE]
Davieddy, you´re really [URL="http://www.youtube.com/watch?v=vUSzL2leaFM&ob=av2e"]wonderful tonight [/URL]... that must have been quite a duet. |
[QUOTE=lycorn;279888]Davieddy, you´re really [URL="http://www.youtube.com/watch?v=vUSzL2leaFM&ob=av2e"]wonderful tonight [/URL]... that must have been quite a duet.[/QUOTE]
Peaches and Cream |
[QUOTE=lycorn;279888][URL="http://www.youtube.com/watch?v=vUSzL2leaFM&ob=av2e"]wonderful tonight [/URL][/QUOTE]
I've always had two objections to this song. The first is minor: it's too girly and soppy for my taste. The other one is that the song is about Patti Boyd. I don't think his mate the late George Harrison ever forgave him for stealing her from him. David |
It´s not one of my favourites either. But it´s one of his classics, and besides I couldn´t help using it for my (light hearted) comment. Hope you didn´t mind...
|
[QUOTE=James Heinrich;279819]Just set your preferred worktype to P-1 and you'll automatically get P-1 assignments a little ahead of the L-L curve. P-1 assignments are slightly ahead of the LL wavefront to give a little room for the [url=http://www.mersenneforum.org/showthread.php?t=16211]GPU-TF enthusiasts[/url] to quickly clean up the just-in-front-of-the-wave space, but that's definitely a manual thing (although not that complicated, just a little copy-paste every few days).[/QUOTE]My preferences are set to get P-1 work and what's currently being sent out now are work units over 60 million, nearly to 61 million in my case.
|
[QUOTE=James Heinrich;279291]Exactly. To list those exponents (that I've found so far) in one place for easy reference:[/QUOTE]
Found two more (bold): [url=http://mersenne-aries.sili.net/6802123]M6,802,123[/url], [url=http://mersenne-aries.sili.net/6853937]M6,853,937[/url], [url=http://mersenne-aries.sili.net/6853967]M6,853,967[/url], [url=http://mersenne-aries.sili.net/6854297]M6,854,297[/url], [url=http://mersenne-aries.sili.net/6888719]M6,888,719[/url], [b][url=http://mersenne-aries.sili.net/6935129]M6,935,129[/url], [url=http://mersenne-aries.sili.net/6937501]M6,937,501[/url][/b] |
[QUOTE=Jwb52z;279339]Should I be concerned that all the exponents I've completed are above 60 million?[/QUOTE]
This has been fixed. You'll now get the smallest exponent that has reached the GPU TF limits. Currently around 56.2M. |
Is there any way to do only Stage 2 P-1 work? That is, tell P95 that stage 1 was completed to some bound B1, and then choose a B2 bound and do stage 2? I think there are a lot of candidates that got only stage 1 factoring...
|
[QUOTE=Prime95;280022]This has been fixed. You'll now get the smallest exponent that has reached the GPU TF limits. Currently around 56.2M.[/QUOTE]Thank you. I hope it wasn't too much work to do this. I appreciate it and I'm probably not alone in that.
|
[QUOTE=Dubslow;280055]Is there any way to do only Stage 2 P-1 work? That is, tell P95 that stage 1 was to completed to some bound B1, and then choose a B2 bound and do stage 2? I think there are a lot of candidates that got only stage 1 factoring...[/QUOTE]Stage 2 processing starts with the number result from stage 1.
So, if you have the save file from stage 1, you can use that to do only stage 2 work. If you don't have that save file, you have to do stage 1 over again before stage 2. |
[QUOTE=cheesehead;280079]Stage 2 processing starts with the number result from stage 1.
So, if you have the save file from stage 1, you can use that to do only stage 2 work. If you don't have that save file, you have to do stage 1 over again before stage 2.[/QUOTE]This has me wondering if it's possible to retrieve the stage 1 results number from the database somewhere so this could be done on its own. |
[QUOTE=Jwb52z;280113]This has me wondering if it's possible to retrieve the stage 1 results number from the database somewhere so this could be done on its own.[/QUOTE]Not realistically. The stage1 result that's needed isn't a simple 16-character checksum or anything, it's a good chunk of data (take a look at the size of the tempfiles as you work on P-1), anywhere from 1MB to 100MB, although typically around 5-10MB or so for current assignments. It's not considered practical to either transfer this data after each completion to the PrimeNet server, nor to store all that data.
|
Everyone probably knows this except me... what is We4?
When a P-1 run is complete, on the status line I see B1= and B2= which I know are the bounds, E=nn which is the Brent-Suyama extension, and at the end there is We4: 8-digit hex number.
What is the We4? Is it part of the assignment key? Chuck |
[QUOTE]What is the We4? Is it part of the assignment key?
Chuck[/QUOTE][url]http://www.mersenneforum.org/showpost.php?p=253957&postcount=137[/url] |
[QUOTE=Chuck;280148]When a P-1 run is complete, on the status line I see B1= and B2= which I know are the bounds, E=nn which is the Brent-Suyama extension, and at the end there is We4: 8-digit hex number.[/QUOTE]
On this topic, I've been running things like (Prime95-generated B1, B2): ..., M50xxxxxx completed P-1, B1=590000, B2=15487500, E=12, ... and note that the B1 and B2 values are listed on the exponent status page. Is the E value saved somewhere as well? Does it matter? |
[QUOTE=S34960zz;280158]the B1 and B2 values are listed on the exponent status page. Is the E value saved somewhere as well? Does it matter?[/QUOTE]As far as I know, the B1/B2 bounds are recorded by PrimeNet but the E value is not. The E value represents how much (and if at all) the Brent-Suyama extension was used. With generous amounts of memory this allows P-1 to find a factor that is outside the normal P-1 factor range. It's therefore possible that a factor could be found with certain bounds by using the extension, but not found if the extension wasn't used. If a factor was found, it's apparent whether the B-S extension was necessary to find it; if no factor was found, there's no way to know whether the extension was used if that data wasn't recorded. But no, it doesn't matter a whole lot.
A question of my own: can someone explain in simple terms how does the Brent-Suyama extension translate into (small) increased chance of finding a factor beyond the normal bounds? I assume it's nothing so simple as being equivalent to a higher B2? |
[QUOTE=James Heinrich;280162]A question of my own: can someone explain in simple terms how does the Brent-Suyama extension translate into (small) increased chance of finding a factor beyond the normal bounds? I assume it's nothing so simple as being equivalent to a higher B2?[/QUOTE]
Recall that stage 1 computes S=3[sup]E[/sup], where E is the product of prime powers less than B1. Then by Fermat's Little Theorem, a prime p | S-1 if p-1 | E.

A naive stage 2 would then compute T=S[sup]q[/sup] = 3[sup]E*q[/sup] for successive primes q in the range (B1,B2]. Then p | T-1 if p-1 | q*E.

Slicker would be to note that all primes > 3 are of the form 6k+/-1. Suppose instead that we compute T=S[sup](6k)[sup]2[/sup]-1[/sup] = 3[sup]E*(6k-1)*(6k+1)[/sup] whenever one of 6k+1 or 6k-1 is prime. If both are prime, then we get to include two for the price of one. Even if only one is prime, the other may be a multiple of some other prime > B1, so with a bit of planning we may be able to skip that prime on the way up, and thus again get two for the price of one. This is called "prime pairing".

Brent-Suyama instead computes T=S[sup](6k)[sup]e[/sup]-1[/sup], where e is a small even number > 2. (6k)[sup]e[/sup]-1 = (6k-1)*(6k+1)*(a higher-order polynomial in k). The algorithm will then find a factor p if the largest prime factor of p-1 is 6k-1, 6k+1, or a factor of the higher-order polynomial evaluated at some k between B1/6 and B2/6, and all other factors of p-1 are below B1. It is in this latter case, when the relevant prime factor of the higher-order polynomial happens to be > B2, that the additional factors are found.

Note that there is nothing special about the number 6, other than that it is a primorial. A larger primorial increases the proportion of primes that pair up, at the cost of having to consider a larger number of congruence classes "relatively prime" to it, and consequently a larger memory requirement. Prime95 uses 30, 210, and 2310, depending upon the available memory. |
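The two-stage machinery described above can be sketched end-to-end on toy numbers. This is a naive illustration (one exponentiation per stage-2 prime, no prime pairing, no Brent-Suyama, and the factors and bounds are made-up demo values), nothing like Prime95's actual implementation:

```python
# Toy two-stage P-1 on small numbers, following the description above.
from math import gcd

def primes_upto(n):
    """Plain sieve of Eratosthenes."""
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if flags[i]:
            flags[i * i :: i] = [False] * len(range(i * i, n + 1, i))
    return [i for i, f in enumerate(flags) if f]

def p_minus_1(n, b1, b2):
    """Stage 1: raise 3 to every prime power <= B1.
    Stage 2: for each prime q in (B1, B2], test S^q - 1 (naively, one
    modular exponentiation per prime; real stage-2 code is cleverer)."""
    s = 3
    for p in primes_upto(b1):
        pk = p
        while pk * p <= b1:          # largest power of p not exceeding B1
            pk *= p
        s = pow(s, pk, n)
    g = gcd(s - 1, n)
    if 1 < g < n:
        return g, 1
    acc = 1
    for q in primes_upto(b2):
        if q > b1:
            acc = acc * (pow(s, q, n) - 1) % n
    g = gcd(acc, n)
    return (g, 2) if 1 < g < n else (None, None)

# Demo factors: p - 1 = 160620 = 2^2 * 3 * 5 * 2677 is B1-smooth except
# for the single prime 2677, which lies in the stage-2 range (B1, B2].
# q - 1 = 2 * 3 * 166667 has a prime factor beyond B2, so only p is
# findable here.
p, q = 160621, 1000003
factor, stage = p_minus_1(p * q, 100, 3000)
assert factor == p and stage == 2    # stage 1 misses it, stage 2 gets it
```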
Did you get all that from the source, or ...? Did you code it yourself into P95? Who did? George? akruppa? Brent and Suyama?
Otherwise very interesting :) |
[QUOTE=Mr. P-1;280258]Note that there is nothing special about the number 6, other than that it is a primorial. A larger primorial increases the proportion of primes that pair up, at the cost of having to consider a larger number of congruence classes "relatively prime" to it, and consequently a larger memory requirement. Prime95 uses 30, 210, and 2310, depending upon the available memory.[/QUOTE]I would assume that these equate to the E=__ values in Prime95 output of E=4, E=6, E=12... but I can't quite figure out how that makes sense? 4#=6; 6#=30; 12#=2310. 210 would be a good value to use, but that's 10# and I've never seen E=10 in any results...?
Otherwise, thanks for the explanation (despite it being completely beyond my ken). I've immortalized it in the wiki since there was no article for that. Please make any corrections there if I messed anything up: [url]http://www.mersennewiki.org/index.php/Brent-Suyama_extension[/url] |
[QUOTE=James Heinrich;280263] S=3E[/QUOTE]
Should be S=3^E (don't have a wiki account) And then a period at the end of the second line Edit: Oops, again: T=Sq = 3E*q should be T=S^q = 3^(E*q) I do believe that wiki software has a way of making super/subscripts somewhere, to prettify it. |
[QUOTE=Dubslow;280267]Should be S=3^E (don't have a wiki account). And then a period at the end of the second line.[/QUOTE]Changed.
|
[QUOTE=James Heinrich;280268]Changed.[/QUOTE]
There are still several exponents not superscripted: [quote]A naïve stage 2 would then compute T=Sq = 3E*q ... Brent-Suyama instead computes T=S(6k)e-1[/quote] All in all, I'd prefer that the wiki page were written by an expert, i.e., not me. |
Better you than nothing. akruppa?
|
[QUOTE=James Heinrich;280263]I would assume that these equate to the E=__ values in Prime95 output of E=4, E=6, E=12... but I can't quite figure out how that makes sense? 4#=6; 6#=30; 12#=2310. 210 would be a good value to use, but that's 10# and I've never seen E=10 in any results...?[/QUOTE]
No, E in the output equates to e in my explanation - the Brent-Suyama exponent. The small primorial is d in the source code, but isn't directly indicated in the output. You can see its effect, though: [tex]\phi[/tex](30)=8 [tex]\phi[/tex](210)=48 [tex]\phi[/tex](2310)=480 Where [tex]\phi[/tex](x) is Euler's totient function - the number of congruences (mod x) which are relatively prime to x. |
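Those totient values are easy to verify with a throwaway check (the three primorials being the products of the first 3, 4 and 5 primes):

```python
# Verifying the phi values quoted above for Prime95's stage-2
# primorials d = 30, 210, 2310.
from math import gcd

def phi(n):
    """Euler's totient by brute-force count; fine at this size."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

assert 30 == 2 * 3 * 5 and 210 == 30 * 7 and 2310 == 210 * 11
assert phi(30) == 8       # residue classes to track per block of 30
assert phi(210) == 48
assert phi(2310) == 480
```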
So you're saying that in T=S[sup](z*k)[sup]e[/sup]-1[/sup]
e=4,6,12 (or whatever we see in the results line) and z=30,210,2310 ? Then e would have to be even, otherwise you can't factor the square out... |
[QUOTE=Mr. P-1;280281]There are still several exponents not superscripted[/QUOTE]Indeed, sorry. I think I've got them all now. Not perfectly pretty, but hopefully no longer incorrect.
|
[QUOTE=Dubslow;280288]So you're saying that in T=S[sup](z*k)[sup]e[/sup]-1[/sup]
e=4,6,12 (or whatever we see in the results line) and z=30,210,2310 ?[/QUOTE] Very nearly. In fact T=S[sup](z*k)[sup]e[/sup]-y[sup]e[/sup][/sup] Where z is one of 30, 210, 2310 and y loops over all the positive integers < z/2 which are relatively prime to z. [QUOTE]Then e would have to be even, otherwise you can't factor the square out...[/QUOTE] Yes, e must be even. |
[QUOTE=James Heinrich;280296]Indeed, sorry. I think I've got them all now. Not perfectly pretty, but hopefully no longer incorrect.[/QUOTE]
This is wrong: [quote]Prime95 uses 30 (6#), 210 (10#), or 2310 (12#) (depending upon the available memory) and will be noted in the results.txt output in the form E=4, E=6, or E=12 respectively.[/quote] It makes more sense to think of the primorial as 5#, 7#, or 11#, i.e., the product of the first 3, 4, or 5 prime numbers. This quantity is not related to the E parameter recorded in the results file. All in all the article looks really badly written. |
[QUOTE=Mr. P-1;280300]This is wrong:
All in all the article looks really badly written.[/QUOTE]Wrong part removed. Feel free to apply for a wiki account and rewrite the article. :smile: |
[QUOTE=James Heinrich;279291]To list those exponents (that I've found so far) in one place for easy reference:[/quote]
Found one more (bold), but this time in another range: [url=http://mersenne-aries.sili.net/6802123]M6,802,123[/url], [url=http://mersenne-aries.sili.net/6853937]M6,853,937[/url], [url=http://mersenne-aries.sili.net/6853967]M6,853,967[/url], [url=http://mersenne-aries.sili.net/6854297]M6,854,297[/url], [url=http://mersenne-aries.sili.net/6888719]M6,888,719[/url], [url=http://mersenne-aries.sili.net/6935129]M6,935,129[/url], [url=http://mersenne-aries.sili.net/6937501]M6,937,501[/url], [b][url=http://mersenne-aries.sili.net/8855257]M8,855,257[/url][/b] |
Requesting more P-1 workers...
Hey all...
This is a request for more P-1 fire power over at [URL="http://gpu.mersenne.info/"]GPU to 72[/URL] ("Not just for GPUs any more"). The system currently has 736 low candidates which have been trial factored to 72 "bits". These are awaiting P-1 work before being released back to PrimeNet for LL assignment. If we don't get some more P-1 workers, we're going to have to start releasing them back to PrimeNet, where they may not get the optimal P-1 attention. Thanks. |
I'm giving her all she's got, Captain!
|
I'll put a few more in queue. It will be a bit before I can give it another worker.
|
[QUOTE=James Heinrich;280785]Found one more (bold), but this time in another range:
[URL="http://mersenne-aries.sili.net/6802123"]M6,802,123[/URL], [URL="http://mersenne-aries.sili.net/6853937"]M6,853,937[/URL], [URL="http://mersenne-aries.sili.net/6853967"]M6,853,967[/URL], [URL="http://mersenne-aries.sili.net/6854297"]M6,854,297[/URL], [URL="http://mersenne-aries.sili.net/6888719"]M6,888,719[/URL], [URL="http://mersenne-aries.sili.net/6935129"]M6,935,129[/URL], [URL="http://mersenne-aries.sili.net/6937501"]M6,937,501[/URL], [B][URL="http://mersenne-aries.sili.net/8855257"]M8,855,257[/URL][/B][/QUOTE] Is there a way to find out which program was used previously (probably prime95), and which version? So that the old version could be run again to see if it really does not find the factor? Then, the next step would be to find out when the program was changed to fix P-1, and then rerun all older [SIZE=2]NF-PM1 [/SIZE]results ... |
[QUOTE=Bdot;281005]Is there a way to find out which program was used previously (probably prime95), and which version?[/QUOTE]Very unlikely. Unless there's some old log files lying around (which I doubt) that information wasn't captured back then. If even the date was recorded that would be a start, but that's unknown from back then.
As for GPU-to-72 P-1, I should be able to throw a new system at it by the end of the week. |
Admiral's daughter
[QUOTE=garo;280996]I'm giving her all she's got, Captain![/QUOTE]
I shall return. |
Doesn't P-1 stage 2 use FFTs? Surely they stand a chance of errors just like LL tests.
|
[QUOTE=henryzz;281369]Doesn't P-1 stage 2 use FFTs? Surely they stand a chance of errors just like LL tests.[/QUOTE]
Both stages (1 & 2) use FFTs. However, far fewer FFT multiplications are involved than in an LL test, so the overall chance of error per test is lower. Additionally, unlike LL, errors that occur in stage 2 (and, to a lesser extent, stage 1) _need not_ prevent the finding of the factor. |
Certainly not enough errors to get bad results for as many exponents as JH has found (if that's what you were referring to...)
|
[QUOTE=Dubslow;281432]Certainly not enough errors to get bad results for as many exponents as JH has found (if that's what you were referring to...)[/QUOTE]
Check out [URL]http://mersenne-aries.sili.net/exponent.php?exponentdetails=7012963[/URL], not only did the original P-1 fail to find it, the TF also failed. |
Today marks the point where I've found 730 P-1 factors in the [url=http://v5www.mersenne.org/report_top_500_P-1/]last 365 days[/url], so I've reached the 2-per-day mark. :smile:
Out of 19,248 tests and 5,939.7 GHz-days, that's a 3.79% success ratio (explained by the fact that most of the work I've done over the last year has been re-doing work that was originally done with a ~2% factor chance instead of the normal ~5%), and 8.136 GHz-days per factor. |
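Those two ratios follow directly from the figures quoted above; a quick sketch using only the post's numbers:

```python
# Figures quoted in the post above (730 factors, 19,248 tests, 5,939.7 GHz-days).
factors_found = 730
tests_run = 19_248
ghz_days_spent = 5_939.7

success_ratio = factors_found / tests_run         # fraction of P-1 runs that found a factor
cost_per_factor = ghz_days_spent / factors_found  # GHz-days spent per factor found

print(f"success ratio:   {success_ratio:.2%}")            # 3.79%
print(f"cost per factor: {cost_per_factor:.2f} GHz-days") # 8.14
```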
Huzzah, James! Huzzah!
|
[QUOTE=James Heinrich;281601]Today marks the point where I've found 730 P-1 factors in the [url=http://v5www.mersenne.org/report_top_500_P-1/]last 365 days[/url], so I've reached the 2-per-day mark. :smile:[/QUOTE]
Nice work, James! :toot: :bow: |
[QUOTE=bcp19;281434]Check out [URL]http://mersenne-aries.sili.net/exponent.php?exponentdetails=7012963[/URL], not only did the original P-1 fail to find it, the TF also failed.[/QUOTE]
That reminded me of a [I]very[/I] old thread about [URL="http://www.mersenneforum.org/showthread.php?t=1425"]missed small factors[/URL]. Back then, 7.019M to 7.06M (and other ranges) was found to have factors that trial factoring missed first time around. I've started checking 7.0M to 7.019M - one found so far, a 55 bit factor of M7013393, the same size as for M7012963. :coffee: |
[QUOTE=James Heinrich;281601]Today marks the point where I've found 730 P-1 factors in the [url=http://v5www.mersenne.org/report_top_500_P-1/]last 365 days[/url], so I've reached the 2-per-day mark. :smile:
Out of 19,248 tests and 5,939.7 GHz-days, that's 3.79% success ratio (explained by the fact that most of the work I've been doing the last year is re-doing work that was done with a ~2% factor chance instead of the normal ~5%), and 8.136GHz-days per factor.[/QUOTE] Wow, congrats!! |
Congratulations, Mr. Heinrich!
|
[QUOTE=James Heinrich;279291]To list those exponents (that I've found so far) in one place for easy reference:[/QUOTE]
3 more: [url=http://mersenne-aries.sili.net/6802123]M6,802,123[/url], [url=http://mersenne-aries.sili.net/6853937]M6,853,937[/url], [url=http://mersenne-aries.sili.net/6853967]M6,853,967[/url], [url=http://mersenne-aries.sili.net/6854297]M6,854,297[/url], [url=http://mersenne-aries.sili.net/6888719]M6,888,719[/url], [url=http://mersenne-aries.sili.net/6935129]M6,935,129[/url], [url=http://mersenne-aries.sili.net/6937501]M6,937,501[/url], [b][url=http://mersenne-aries.sili.net/6961751]M6,961,751[/url][/b], [b][url=http://mersenne-aries.sili.net/6984797]M6,984,797[/url][/b], [i][url=http://mersenne-aries.sili.net/7012963]M7,012,963[/url][/i], [b][url=http://mersenne-aries.sili.net/8289409]M8,289,409[/url][/b], [url=http://mersenne-aries.sili.net/8855257]M8,855,257[/url] |
[QUOTE=markr;281723][QUOTE=bcp19;281434]Check out [URL]http://mersenne-aries.sili.net/exponent.php?exponentdetails=7012963[/URL], not only did the original P-1 fail to find it, the TF also failed.[/QUOTE]
That reminded me of a [I]very[/I] old thread about [URL="http://www.mersenneforum.org/showthread.php?t=1425"]missed small factors[/URL]. Back then, 7.019M to 7.06M (and other ranges) was found to have factors that trial factoring missed first time around. I've started checking 7.0M to 7.019M - one found so far, a 55 bit factor of [URL="http://mersenne-aries.sili.net/exponent.php?exponentdetails=7013393"]M7013393[/URL], the same size as for M7012963. :coffee:[/QUOTE] 7.0M to 7.019M is now checked to 2^59, and I'll take it to 2^60. [URL="http://mersenne-aries.sili.net/exponent.php?exponentdetails=7018901"]M7018901[/URL] has a 58 bit factor. It was assigned to Carsten Kossendey for P-1 (since March): I don't usually poach but since P-1 would be very unlikely to find a factor with k = 43 x 258029413 I hope he won't mind too much. |
[QUOTE=markr;282119]
[URL="http://mersenne-aries.sili.net/exponent.php?exponentdetails=7018901"]M7018901[/URL] has a 58 bit factor. It was assigned to Carsten Kossendey for P-1 (since March): I don't usually poach but since P-1 would be very unlikely to find a factor with k = 43 x 258029413 I hope he won't mind too much.[/QUOTE]So, that's your "justification" for not asking Kossendey's permission to render his P-1 useless, or for not at least having the courtesy to notify him that you were about to poach?

Is it that you assign no value to Kossendey's time, so that it is of no consequence to [I]you[/I] that Kossendey might have wanted to do something else instead of a useless P-1? Is that why you saw no reason to communicate with him before you proceeded to poach -- that while your own time has value, someone else's doesn't? However, it's admirable that you publicly admit your poachery.

Poaching seems always to involve a conclusion that one's own desires and impatience are somehow more important than the desires, time and effort of someone else who has gone to the trouble of properly getting an assignment from PrimeNet. What is the "somehow" by which a poacher arrives at that conclusion? |
[QUOTE=cheesehead;282133]So, that's your "justification" for not asking Kossendey's permission to render his P-1 useless, or for not at least having the courtesy to notify him that you were about to poach?
Is it that you assign no value to Kossendey's time, so that it is of no consequence to [I]you[/I] that Kossendey might have wanted to do something else instead of a useless P-1? Is that why you saw no reason to communicate with him before you proceeded to poach -- that while your own time has value, someone else's doesn't? However, it's admirable that you publicly admit your poachery.

Poaching seems always to involve a conclusion that one's own desires and impatience are somehow more important than the desires, time and effort of someone else who has gone to the trouble of properly getting an assignment from PrimeNet. What is the "somehow" by which a poacher arrives at that conclusion?[/QUOTE]

What's the big fuss? He did TF while it was reserved for P-1. Didn't know that extending TF was considered poaching... |
[QUOTE=diamonddave;282134]Didn't know that extending TF was considered poaching...[/QUOTE]As is common with poachery justifications, that leaves out something important.
"Didn't know that extending TF was considered poaching..." -- It wasn't [I]only[/I] extending TF. It was extending TF [I]while someone else had a legitimate assignment[/I] (as you yourself acknowledged in your preceding sentence, but not this one). [I]That overlap with a proper assignment[/I] is what distinguishes poaching from legitimate non-overlapping work. Why did you leave out that overlapping-an-assignment factor and try to pretend that the bare TF extension was what was accused to be poachery? |
I don't see anything wrong in trying to extend the TF horizon for exponents assigned to other people for P-1 or ECM, but I see no motivation either. You gain nothing, except the fact that you WASTE your processor time. Of course, if you know the "other people" (as opposed to them being "anonymous" users or guys who are not members here), it should be better to ask first. That is polite, and you can avoid later discussions like this one here.

My motivation is very simple: doing TF for exponents with lots of P-1 and ECM done on them is a waste of time. WASTE of time. Most of the time you will end up with NO FACTORS. Some of the time you could end up with a factor that would not be found by P-1 or ECM (like in the current case), and only very seldom, but VERY seldom, will you find a "reasonable" factor that would render futile the work of the other guy. Most of the time it is YOUR work that is futile. It is YOUR processor time that gets wasted. If anyone loves to waste his CPU cycles, be my guest.

Additionally, P-1 assignments are done with P95, which is clever enough to unreserve the exponent when the work is not needed anymore, provided work on that exponent has not yet started. If the work is in progress... well, there it is very painful, because P95 will not unreserve any exponent which has work done on it, so for the original cruncher it would be a double frustration: one, he lost the time already spent on that exponent, and two, if he does not find out that the work is not needed anymore, he will keep losing time up to the completion of the work, doing something which nobody needs any longer. In this case, someone should tell the legitimate cruncher, so he can stop his work if it is already started and move on to some other assignment. |
[QUOTE=cheesehead;282135]"Didn't know that extending TF was considered poaching..."
-- It wasn't [I]only[/I] extending TF. It was extending TF [I]while someone else had a legitimate assignment[/I][/QUOTE] cheesehead, Interesting question. Had Kossendey reported any progress on that P-1? If not, then how long can one hang on to an assignment before it may acceptably be, um, pre-empted by somebody else? You asked markr, [QUOTE]Is it that you assign no value to Kossendey's time, so that it is of no consequence to [I]you[/I] that Kossendey might have wanted to do something else instead of a useless P-1?[/QUOTE] OK, so if Kossendey felt that, over a nine-month period, anything else was better to do than the P-1 he got assigned to him, then can we really begrudge someone else taking over the exponent? Don't get me wrong -- I'm not trying to be argumentative here, just trying sincerely to get a better handle on the situation. I'm in no way in favor of poaching, I simply wish to see how and in what circumstances the concept is or isn't applicable. Thanks! Rodrigo |
[QUOTE=Rodrigo;282139]cheesehead,
Interesting question. Had Kossendey reported any progress on that P-1?[/QUOTE]PrimeNet releases assignments for which there's been no progress report within a certain time frame. Since the assignment still exists, we may deduce that a progress report [U]has[/U] been made within the time limits set by PrimeNet and the user's preferences (UnreserveDays=, DaysofWork=, etc.). [quote]If not, then how long can one hang on to an assignment before it may acceptably be, um, pre-empted by somebody else?[/quote]There is never a time when an assignment may be "acceptably" poached. If you have a complaint about the length of time someone has had an assignment, the acceptable way to handle that is to communicate directly with the assignee or with George Woltman, not by poaching. [quote]OK, so if Kossendey felt that, over a nine-month period, anything else was better to do than the P-1 he got assigned to him, then can we really begrudge someone else taking over the exponent?[/quote][U]Yes, we can begrudge that[/U], if there was no communicating to politely urge the assignee to hurry up, and then communicating with George Woltman to ask for intervention. If the assignee, or George, releases the assignment, then it's okay for someone else to get it. Otherwise, "taking over" an assigned exponent is just arrogant poaching. [quote]Don't get me wrong -- I'm not trying to be argumentative here, just trying sincerely to get a better handle on the situation. I'm in no way in favor of poaching, I simply wish to see how and in what circumstances the concept is or isn't applicable.[/quote]It's rather simple: When an exponent is assigned to someone else, and PrimeNet does not allow overlapping simultaneous assignments (as it does with ECM), then anyone else who does work on that exponent without having been assigned by PrimeNet is poaching. Anyone who's not willing to abide by PrimeNet assignments should find some other project to work on, and not interfere with GIMPS. |
[QUOTE=LaurV;282137]I don't see anything wrong in trying to extend the TF horizon for exponents assigned to other people for P-1 or ECM,[/QUOTE]Then it seems you don't yet understand the ethics of the GIMPS/PrimeNet system.
The PrimeNet assignment reservation system exists to allow GIMPS participants to contribute without having "toes stepped on". Once you have secured an assignment from PrimeNet, it is your right to proceed without having your effort duplicated or preempted by someone else (as long as your progress meets the PrimeNet standards -- and no would-be poacher has any judgement superior to PrimeNet's algorithms, with the very occasional and exceptional intervention of our project admins, in this regard). (Note: PrimeNet does allow multiple simultaneous ECM assignments to be given to different users for the same exponent, because of the nature of ECM curve randomization. This doesn't apply to any other type of work, and does not apply to overlapping TF or P-1 with each other or with ECM, because non-ECM methods have deterministic, not randomized, initialization. There could be an argument that ECM shouldn't be overlapped, either, but the likelihood of interference between two ECMers has been judged to be far lower than between any other combination and low enough to allow overlap. I'll accept that judgement until someone can show that it's detrimental. No one has ever shown that for non-ECM overlaps.) [quote]but I see no motivation either.[/quote]Well, good -- you don't have the motivation that many poachers have. So, just don't do it. [quote]You gain nothing, except the fact that you WASTE your processor time.[/quote]Have you given proper weight to the wasting of the assignee's processor time? [quote]Of course, if you know the "other people"[/quote]You mean "assignees" here, right? [quote](as opposed to them being "anonymous" users[/quote]If an "anonymous" user has a proper PrimeNet assignment, then s/he's not anonymous as far as PrimeNet is concerned, and no one else has any business interfering with or poaching that user's assignment. [quote] or guys who are not members here)[/quote]By "not members", what do you mean? 
Anyone who has an assignment from PrimeNet is a "member" as far as GIMPS/PrimeNet is concerned. [quote]it should be better to ask first.[/quote]It's [U]always[/U] better to ask first, regardless of your presumed social distance from the assignee. [quote]My motivation is very simple: doing TF for exponents with lots of P-1 and ECM done on them is a waste of time. WASTE of time. Most of the time you will end up with NO FACTORs. Some of the time you could end up with a factor that would not be found by P-1 or ECM (like in the current case) and only very seldom, but VERY seldom, you will find a "reasonable" factor that would render futile the work of the other guy. The most of the time is YOUR work that is futile. It is YOUR processor time that gets wasted.[/quote]... in [I]your[/I] opinion, that is. So, when, if ever, is [I]your[/I] opinion so superior to someone else's opinion as to justify poaching an assignment? [quote]If anyone love to waste his CPU cycles, be my guest.[/quote]So, never poach. |
cheesehead,
Thanks for taking the time to provide this lucid explanation. It makes sense. How many GHz-days should that exponent (M7018901) require to P-1? Rodrigo |
[QUOTE=Rodrigo;282146]
How many GHz-days should that exponent (M7018901) require to P-1? [/QUOTE]That depends rather heavily on the B1 and B2 bounds, doesn't it? |
That's life, my dear. I don't want to go through a flame war with you, but if it is a must, I will.
Why do you take it so hard? I agree that no one should poach. But what we call poaching is different. Doing TF on an exponent assigned for TF to another user, getting a result, and REPORTING that result before the other user does (call him the assignee; I did not know the word), well, THAT is poaching. Same as doing an LL on an exponent already assigned for LL and reporting the result faster than the assignee, and so on. (Hmmm, here I wonder what GIMPS would do with the prize in such a case, if the poacher finds a prime. I think the original assignee should get the prize, and this should be specified in the rules, for exactly the reason of avoiding poaching.)

Nobody stops me from playing with whatever exponent I like, as long as I let the other guy do his job. I could do all your DC exponents faster than you and report them as triple checks AFTER you report them as double checks. I could do your LL assignments in parallel with you and report them AFTER you report yours. Is this poaching? The system allows it, and the server would be happy if I did so. If the system is as "clever" as you assume, it should forbid me (the poacher) from reporting a result for an exponent assigned to another guy. And I would be REALLY REALLY REALLY happy to have that feature!!! You have no idea how many times I did manual reports WITHOUT being logged in (or the server logged me out in between, because of a bad internet connection), and my results ended up credited to anonymous guys.

But in nature (and PrimeNet) things are not like that. The system WILL accept whatever result comes first, and you (doing LL on a 100M-digit exponent for one year already) should be happy if some other guy shows you a factor of it, saving you from another nine years of work. I would be happy if some guy found factors for all the exponents I am DC-ing and LL-ing at this very moment. I would give him a kiss and move on to other LL and DC assignments.

Most probably I would react the same as you did if someone poached my first-time LLs and REPORTED the results before I did. That I would call poaching. And if he found a prime, I would take the gun. :P But getting angry when someone actually SAVES me from some work? C'mon! |
TF and P-1 overlap
A few months ago I was assigned a first-time LL with
no P-1 and TFed only to 68. Eric's GPU did more bits of TF while I did P-1. Although he asked me to let him know if I found a factor (as if I wouldn't), neither of us gave the slightest thought to letting the other go first. Predictably neither of us found one, and the LL residue was not zero.

There was a myth on V4 that the TF wavefront was comfortably ahead of LL assignments. Even if that was once true, the short-sighted notion of postponing the last worthwhile bit or so of TF until P-1 was done [B](not)[/B] hurled this fiction out of the window. There is no reason why 200 new ([B]not re-assigned[/B]) LL assignments/day shouldn't be GPUed to 72. But hardly any are. I (not to mention many others) am fed up with explaining why this screwup is occurring.

[B]FOCUS[/B]

David |
For the sake of completeness:
[code] Sending result to server: UID: ckdo/mother, M7018901 completed P-1, B1=445000, B2=13572500, E=12, We4: B577ECA3, AID: 38FE58CA43BC481FD33D3F698964C12F PrimeNet success code with additional info: Result was not needed. P-1 exponent: 7018901, B1: 445000, B2: 13572500 [strike]CPU credit is 0.3668 GHz-days.[/strike] [/code]No credit. [URL="http://www.youtube.com/watch?v=QR58Heh-STU"]Get your filthy hands off my desert[/URL] ... please? |
[QUOTE=davieddy;282155]There is no reason why 200 new ([B]not re-assigned) [/B]LL
assignments/day shouldn't be GPUed to 72. [/QUOTE] I don't see a day here where there were fewer than 200 exponents factored to 72: [URL="http://gpu.mersenne.info/reports/overall/graph/"]http://gpu.mersenne.info/reports/overall/graph/[/URL] P-1 is, once again, more limiting than TF. There are 250 candidates waiting now for P-1 that have been factored to 72. Plus chalsall says we are releasing some back to PrimeNet. [QUOTE=davieddy;282155] I (not to mention many others) am fed up with explaining why this screwup is occurring. [/QUOTE] Who are these "many" others? I only hear one voice. |
[QUOTE=LaurV;282153]
I agree that no one should poach.[/QUOTE] Do you? Not only this incident, but there was the other case where you tried to poach from a completely different project!! And you admitted you knew about it! [URL="http://mersenneforum.org/showpost.php?p=279785&postcount=121"]http://mersenneforum.org/showpost.php?p=279785&postcount=121[/URL] If you had found a factor the amount of wasted processor time would have been absolutely mammoth from NFS@home. There are tons of numbers out there that have not been assigned to people to do whatever worktype you want to do. Don't be selfish. Think about the people who are working on the exponent. I would be pissed if I turned in a P-1 and was not given credit because someone else already found that factor with TF. Or if I was working on it and someone told me they had already done it. |
[QUOTE=ckdo;282167]For the sake of completeness:[/QUOTE]
My humble apologies. In truth, I was just being lazy. Not a defence, obviously! I said above (way above now) "I don't usually poach": actually I'm quite sure this was my first time, and it didn't feel good. Quite apart from any issue of credit (and at least with primenet v5, if you have the AID, you get the credit, even if someone else submitted first) it meant your machine spent time on this exponent that it could have used to actually further the project. |
[QUOTE=KyleAskine;282172]Do you? Not only this incident, but there was the other case where you tried to poach from a completely different project!! And you admitted you knew about it! [URL="http://mersenneforum.org/showpost.php?p=279785&postcount=121"]http://mersenneforum.org/showpost.php?p=279785&postcount=121[/URL]
If you had found a factor the amount of wasted processor time would have been absolutely mammoth from NFS@home. There are tons of numbers out there that have not been assigned to people to do whatever worktype you want to do. Don't be selfish. Think about the people who are working on the exponent. I would be pissed if I turned in a P-1 and was not given credit because someone else already found that factor with TF. Or if I was working on it and someone told me they had already done it.[/QUOTE] I don't think anyone doing P-1 or TF below 20M is in it for the PrimeNet credits. PrimeNet doesn't (or didn't) handle TF reservations below preset limits, so people extending those limits are out of the loop... I won't do back-flips when I try extending limits and the TF takes all of 5-6 minutes, if I do 500 such tests in a week. Trying to find out whether any of the exponents are reserved for something else like P-1 or DC is a ridiculous notion. I do the TF regardless of DC status (proven or not) or P-1 limits anyway. I do however look for TF activity in the range I work, so that I can avoid clashing with other TFers. Obviously I was alone in those [URL="http://www.mersenne.info/trial_factored_tabular_delta_30/3/10000000/"]ranges[/URL] |
[QUOTE=davieddy;282155]There is no reason why 200 new ([B]not re-assigned) [/B]LL assignments/day shouldn't be GPUed to 72.
But hardly any are. I (not to mention many others) am fed up with explaining why this screwup is occurring. [/QUOTE] [QUOTE=KyleAskine;282171]I don't see a day here where there was under 200 exponents factored to 72: Who are these "many" others? I only hear one voice.[/QUOTE] Your voice is one of the "many". Don't you speaky Engleesh in Fairyland? David PS or read clearly presented data? [URL]http://gpu.mersenne.info/reports/overall/graph/[/URL] |
LaurV, with reference to your lengthy post #956, could I just offer you the following tip?
Because different people will have different ideas about what constitutes poaching (indeed you acknowledge that you and cheesehead have different ideas from each other in that post), it might not be a good idea to justify behaviour which impinges on other people's work on the basis of your own particular ideas. |
[QUOTE=davieddy;282186]Your voice is one of the "many". Don't you speaky Engleesh in Fairyland?[/QUOTE]
Sigh... It's a bit like debating with a mental midget... :sad: Let's look at the empirical evidence, rather than emotional hysteria. As you can clearly see from [URL="http://www.mersenne.info/exponent_status_tabular_delta_7/1/0/"]this chart[/URL], in the last week a total of 1217 LL tests were completed in the 40M to 60M range. Or, 174 a day. During the same temporal period, as KyleAskine pointed out from another report, on not a single day in the last week have we completed fewer than 200 TF jobs to at least 72 bits. (And note that Xyzzy hasn't reported in for a few days; when he does he's going to blow our charts and heuristics out of the water! :smile:) Ergo, at this point in time all new LL assignments from PrimeNet will have been TFed to at least 72 bits. And many, additionally, will have been P1ed "well" (we have garnered an impressive amount of P-1 "fire power"). Also, just for completeness, it should be pointed out that the lower candidates take less time to LL. It doesn't matter that they've been "recycled"; the fact is they have not been LLed. |
Lies, Damn Lies and Statistics
[QUOTE=chalsall;282188]Sigh... It's a bit like debating with a mental midget... :sad:
[/QUOTE] Love it. Which way round or both? David PS Or is this a menage a trois? |
[QUOTE=diamonddave;282176]I don't think anyone doing P-1 or TF below 20M are in it for the PrimeNet credits. PrimeNet doesn't (or didn't) handle TF reservation below preset limits, soo people extending those limits are out of the loop...
I won't do back-flips when I try extending limits and the TF take all of 5-6 minutes. If I do 500 such test in a week. Trying to find if any of the exponents are reserved for something else like P-1 or DC, is a ridiculous notion. I do the TF regardless of DC status (proven or not) or P-1 limits anyway.[/QUOTE] I must be missing something. I am one of those people that occasionally does TF in the low ranges knowing I V-E-R-Y likely will NOT find a factor (though I have found several in the 3M range it was a small percentage). I have other reasons of personal interest. However, from my perspective I do NOT find it difficult to ensure any assignment I take is NOT already assigned. I don't do 500 a week; closer to 100. I start with a query like this (ensuring I exclude currently assigned exponents): [url]http://www.mersenne.org/report_factoring_effort/?exp_lo=400000&exp_hi=499999&bits_lo=0&bits_hi=60&txt=1&exassigned=1&B1=Get+Data[/url] Then copy and paste the exponents I want into worktodo.add(or txt) and with basic editing massage them into Factor= statements. I still do it manually because it doesn't take me more than a minute or two but I know it would also be quite easy to automate the entire process (others have). |
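A minimal sketch of automating that copy-paste-and-massage step, assuming the report yields (exponent, bits-done) pairs; the sample exponents are made up for illustration, and `Factor=exponent,bits_done,bits_target` is the standard Prime95 worktodo TF syntax:

```python
# Build Prime95 worktodo "Factor=" lines from a list of (exponent, bits_done)
# pairs, as copied from the mersenne.org factoring-effort report.
# Sample exponents below are hypothetical.

def factor_lines(entries, target_bits=60):
    """Return worktodo TF lines for entries not yet factored to target_bits."""
    return [
        f"Factor={exponent},{bits_done},{target_bits}"
        for exponent, bits_done in entries
        if bits_done < target_bits  # skip anything already at the target depth
    ]

sample = [(400037, 55), (400079, 60), (400127, 58)]
print("\n".join(factor_lines(sample)))
# Factor=400037,55,60
# Factor=400127,58,60
```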
[QUOTE=chalsall;282188]
Ergo, at this point in time all new LL assignments from PrimeNet will have been TFed to at least 72 bits. And many, additionally, will have been P1ed "well" (we have garnered an impressive amount of P-1 "fire power"). [/QUOTE] Hmmm, the logic is flawed... PrimeNet assigns for LL testing at least 2-4 times the number of completed tests per day. So if 200 tests get completed on a given day, anywhere between 400 and 800 will have been assigned. It's easy to see why when we know that only 25-50% of assignments ever get completed.

I was actually wondering about this problem. Is it better to let the wavefront advance, or should we release our lowest exponents first? Let's look at 2 different exponents: 45M (our lowest available) and 56M (the current wavefront). If both are factored to 71, which one is better to hand out first? Well, they require, respectively, to LL test:

45M - 72.2 GHz-days
56M - 118.6 GHz-days

The effort required to factor from 2^71 to 2^72:

45M - 10.62 GHz-days
56M - 8.54 GHz-days

So the respective bang-for-buck ratios (chance of finding a factor being equal) are:

72.2 / 10.62 = 6.8
118.6 / 8.54 = 13.9

In other words, a 56M factor is worth more and costs less than a 45M factor. We know that Prime95 will never further factor exponents already factored to 71 in those ranges, even when it would have been optimal GPU-wise. If the spider could somehow monitor the number of exponents buffered by PrimeNet, so that it didn't have to advance the wavefront and always released the lowest exponent factored to 71, then I think the system would be optimal! :cool: |
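A few lines reproduce the bang-for-buck comparison, using the LL and TF cost figures quoted in the post:

```python
# Bang-for-buck of one more bit of TF (2^71 -> 2^72), per the post's figures:
# GHz-days of LL work avoided (if a factor is found) per GHz-day of TF spent.
candidates = {
    "45M": {"ll_cost": 72.2,  "tf_cost": 10.62},
    "56M": {"ll_cost": 118.6, "tf_cost": 8.54},
}

for name, c in candidates.items():
    ratio = c["ll_cost"] / c["tf_cost"]
    print(f"{name}: {ratio:.1f} GHz-days of LL saved per GHz-day of TF")
# 45M: 6.8 GHz-days of LL saved per GHz-day of TF
# 56M: 13.9 GHz-days of LL saved per GHz-day of TF
```

The equal-chance assumption mirrors the post: taking an exponent from 71 to 72 bits removes roughly the same fraction of candidates at either size, so only the costs differ.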
[QUOTE=petrw1;282191]
However, from my perspective I do NOT find it difficult to ensure any assignment I take is NOT already assigned. I don't do 500 a week; closer to 100. I start with a query like this (ensuring I exclude currently assigned exponents): [url]http://www.mersenne.org/report_factoring_effort/?exp_lo=400000&exp_hi=499999&bits_lo=0&bits_hi=60&txt=1&exassigned=1&B1=Get+Data[/url] Then copy and paste the exponents I want into worktodo.add(or txt) and with basic editing massage them into Factor= statements. I still do it manually because it doesn't take me more than a minute or two but I know it would also be quite easy to automate the entire process (others have).[/QUOTE] This doesn't reserve the exponent (BTW it's what I use). If I pick a month worth of work in a range, it might not have been assigned when I grabbed it, but that status could have changed in the following month and we are back at square one. If anything I could save them work by reporting a factor. [QUOTE=petrw1;282191] I am one of those people that occasionally does TF in the low ranges knowing I V-E-R-Y likely will NOT find a factor (though I have found several in the 3M range it was a small percentage). I have other reasons of personal interest. 
[/QUOTE] Factors I have found in the low 10M range: [CODE]10158307 2011/12/10 23165442422693669639 10166789 2011/12/09 30505621781735241263 10177859 2011/12/09 19226522870204743351 10167617 2011/12/09 35400464284608355399 10167589 2011/12/09 19220197856513632009 10151821 2011/12/04 22803421415633075479 10155877 2011/12/01 25221410187895323209 10139453 2011/11/09 29742952181715579481 10130039 2011/11/08 19030509425919904001 10142887 2011/11/06 29386253647515292777 10142603 2011/11/06 25725407691049648513 10144187 2011/11/06 19450708470619689793 10113823 2011/11/01 20633240159215740071 10112413 2011/11/01 35143563607950223351 10117351 2011/11/01 36254438196810113591 10118929 2011/10/27 30922526157342469561 10121257 2011/10/22 30363492001559010857 10099007 2011/10/22 31740104071521452119 10065361 2011/10/20 36225268980519504713 10065203 2011/10/20 35943312394665094271 10064423 2011/10/20 26238270662066701129 10058003 2011/10/20 29744837510404835473 10073599 2011/10/18 22005368889459055223 10072957 2011/10/18 18580180956241537247 10050427 2011/10/11 26662465435662261713 10039879 2011/10/11 21897683265828949169 10031323 2011/10/06 31586511777517672033 10019909 2011/10/06 32423630758344997151 10019197 2011/10/06 21296345329286511871 10017109 2011/10/04 19682494866689261801 10021147 2011/10/03 25504600676039398873 10020613 2011/10/03 19584669608749036703 10016263 2011/10/03 31466213554702484423 10044667 2011/09/29 23885927142945697031 10200781 2011/09/27 33482517911745744913 11001721 2011/09/26 22207016025259277761 11000911 2011/09/26 35147657443478046847 10102007 2011/09/13 22427480224957636601 10006201 2011/09/09 28020776722441384279 [/CODE] however it looks like there is now P-1 activity in that range since I started |