[QUOTE=Uncwilly;397823]Not so much. It had more to do with that mirror (and the smaller mirrors) and instruments being out of whack with each other because the company that delivered the mirrors was used to delivering [URL="http://en.wikipedia.org/wiki/KH-11_Kennan#Design"]spy satellites[/URL] that the telescope was derived from. The same grinding program was used for it as the sat's.[/QUOTE]
My understanding (very likely wrong; no "insider" knowledge) is that a planned test on the ground which would have cost ~$1M was skipped for budgetary reasons. |
[QUOTE=TheMawn;397736]Would any of you mind giving me a couple of the exponents that you've found factors for in 74 -> 75? 0/300 is a bit concerning and I'd like to check that my hardware isn't rejecting 75 bits for some reason.[/QUOTE]
In case you want a few more: [CODE]66275567 71627797 71900449 74025781 74031313[/CODE] |
This morning the Workers' Overall Progress report is showing I have done 2 TF >75. I don't recall doing anything this far; it would be an 8-hour job.
|
[QUOTE=Chuck;397911]I don't recall doing anything this far; it would be an 8-hour job.[/QUOTE]
[CODE]
+-----------+----+----+---------------------+---------------------+----------------+
| Exponent  | F  | T  | Assigned            | Completed           | GHzDays        |
+-----------+----+----+---------------------+---------------------+----------------+
| 332350043 | 73 | 78 | 2013-02-27 19:42:34 | 2013-02-28 11:44:32 | 178.4373626709 |
| 332309843 | 73 | 79 | 2013-04-02 12:43:18 | 2013-04-03 19:45:39 | 362.6746215820 |
+-----------+----+----+---------------------+---------------------+----------------+
[/CODE] |
Rats; I forgot about my little fling with the 332M exponents. Sorry about the false alarm.
|
[QUOTE=Chuck;397931]Sorry about the false alarm.[/QUOTE]
No problem. Trivial query. |
[QUOTE=Chuck;397931]Rats; I forgot about my little fling with the 332M exponents. Sorry about the false alarm.[/QUOTE]
I am always up for another fling like that. :grin: |
[QUOTE=chalsall;397773]Just so everyone knows, I've updated the [URL="https://www.gpu72.com/reports/workers/"]Workers' Overall Progress[/URL] report to include TF'ing to 74 and 75 (and >75). As before, you can click on the column headers to see who's doing what to where. For [URL="https://www.gpu72.com/reports/workers/75/"]example, those going to 75[/URL].
I'm afraid it makes the table rather wide, but I know people have been wanting this data exposed.[/QUOTE] Good job. It may be time to do the same for the "factoring [URL="http://www.gpu72.com/reports/factor_percentage/"]percentages[/URL]" and "factoring [URL="http://www.gpu72.com/reports/factoring_cost/"]cost[/URL]" tables, and then we can see if the last bit is really worth it... :razz: And don't worry about the tables' width; they look great, even on my narrow monitors. Which I can't say about [URL="http://www.mersenne.org/report_top_500_custom/?team_flag=0&type=0&rank_lo=1&rank_hi=100&start_date=2000-01-01&end_date=&B1=Get+Report"]Madpoo's table[/URL]; I hate the horizontal scrolling there, especially as you have to scroll down first (or press the "End" key) to be able to scroll right... grrrr... it looks totally shitty, for just 6 characters more... :razz: |
[QUOTE=LaurV;397994]Good job. It may be the time to do the same for the "factoring [URL="http://www.gpu72.com/reports/factor_percentage/"]percentages[/URL]" and "factoring [URL="http://www.gpu72.com/reports/factoring_cost/"]cost[/URL]" tables, and then we can see if the last bit really worth... :razz:[/QUOTE]
You know, somehow I knew it would be you to ask for this.... :razz: Done. |
[QUOTE=chalsall;398012]You know, somehow I knew it would be you to ask for this.... :razz: Done.[/QUOTE]
Pray tell... What are the secrets of the Mighty Chris Halsall and LaurV? Why is their ratio of work saved to work done above 1 where everyone else is dismally below 1? EDIT: Never mind! I figured it out. It's for factoring in the 100M+ digits range. By the way, scroll down to the bottom of [url]https://www.gpu72.com/reports/worker/factors/d8a75f85f90457298bd3c366a8de2410/[/url] and can you explain to me why the DC cost is 10,000 GHz-days flat whereas the LL cost is not? |
[QUOTE=TheMawn;398015]By the way, scroll down to the bottom of [URL]https://www.gpu72.com/reports/worker/factors/d8a75f85f90457298bd3c366a8de2410/[/URL] and can you explain to me why the DC cost is 10,000 GHz-days flat whereas the LL cost is not?[/QUOTE]
The user requested a DCTF assignment but also did P-1 on it and found a factor with it. GPU72 identified it as being found with TF instead of P-1 and gave incorrect credits. [URL]http://www.mersenne.org/report_exponent/?exp_lo=42884713&exp_hi=&full=1[/URL] [edit] Oww, wait, you're looking at the 951M exponents. |
[QUOTE=chalsall;398012]You know, somehow I knew it would be you to ask for this.... :razz: Done.[/QUOTE]
Now, how about the Individual factoring Costs? |
[QUOTE=NickOfTime;398026]Now, how about the Individual factoring Costs?[/QUOTE]
Fine!!! (Man, give a metre, they want the kilometre!!!) :wink: |
[QUOTE=chalsall;398032]Fine!!! (Man, give a metre, they want the kilometre!!!) :wink:[/QUOTE]
In Ro we say "you give him a finger and he takes the whole hand/arm". |
[QUOTE=TheMawn;398015]Pray tell... What are the secrets of the Mighty Chris Halsall and LaurV?[/QUOTE]
I'll tell you, but don't tell these guys here: if I drill a hole in my garden straight down, it goes through the Earth's core and pops out directly into Chris' garden... :w00t: About that "saved work", don't believe him, he is cheating; I saved more work than him. I saved the most work of all, because I don't like to work, I like to rest. Then, when I rest, the work is saved... |
[QUOTE=LaurV;398060]In Ro we say "you give him a finger and he takes all the hand/arm".[/QUOTE]
I heard it said as "Sometimes when you give a man the finger he asks for the whole fist". |
[QUOTE=LaurV;398063]About that "saved work", don't believe him, he is cheating, I saved more work than him. I saved the most work of all, because I don't like to work, I like to rest. Then, when I rest, the work is saved...[/QUOTE]
:D |
[QUOTE=TheMawn;398015]EDIT: Nevermind! I figured it out. It's for factoring in the 100M+ digits range. By the way, scroll down to the bottom of [url]https://www.gpu72.com/reports/worker/factors/d8a75f85f90457298bd3c366a8de2410/[/url] and can you explain to me why the DC cost is 10,000 GHz-days flat whereas the LL cost is not?[/QUOTE]
Yeah... LaurV is correct -- don't trust my "Work Saved" metric! But, in my defence, it wasn't an actual intended cheat, simply a SPE when I was experimenting with having large LMH work done through GPU72, and I forgot to filter out the results. WRT the "DC Saved" on the report you linked to showing 10,000.00 while the "LL Saved" shows large (but different) values, again, a SPE. The "GHzDaysLL" field is defined as "float(20,10) unsigned" in that table, while the "GHzDaysDC" field is defined as "float(14,10) unsigned". I'll look at filtering out my high "saved" values from the database sometime; until then, simply assume I'm cheating.... :smile: P.S. Actually, this brings up a memory... When this was raised before (years ago) James proposed a very elegant equation which would solve the problem. James, please forgive me for this, but can you find the post where this was defined (or repost)? It would probably take as much work for me to implement your suggestion as to filter out "edge" cases. |
[QUOTE=chalsall;398086]James proposed a very elegant equation which would solve the problem. James, please forgive me for this, but can you find the post where this was defined (or repost)? It would probably take as much work for me to implement your suggestion as to filter out "edge" cases.[/QUOTE]I remember that. It was elegant. I'll have to think about what I proposed to try and figure out where I posted it... :unsure:
Google is impressively good sometimes, but not quite to the point of "remember that clever idea I had a while ago?..." :smile: |
[QUOTE=LaurV;398063]I tell you but don't tell to these guys here: if I drill a hole in my garden straight down, it goes through the Earth core and pops directly into Chris garden... :w00t:[/QUOTE]
Yeah, don't tell anyone, but LaurV and I have an arrangement. I send him authentic Bajan food through the tunnel (arriving 42.2 minutes later), and he reciprocates with authentic Thai (I think I'm getting the better part of the deal). The tunnel was relatively easy; the stasis field technology (patent pending) to keep the food at the intended temperature was the difficult part.... |
After some searching I was unable to find the post where I first suggested it. But it was something along the lines of a scaled number that isn't really how much work was saved, but "rewards" finding larger/harder factors equally on any exponent, rather than trivial factors on large exponents. For example, I'm finding 1 factor per second across my 4 GPUs looking in the ~3500M range up to 2[sup]64[/sup], and each factor found saves about 100,000 GHz-days of LL effort (200k if you count DC). But I only put 0.5s of GPU time into finding it.
Some examples:[code]mysql> SELECT `exponent`, `factor`, `factorbits`, POW(2, `factorbits`) / (`exponent` * `exponent`) AS `ratio` FROM `known_factors_000` WHERE CEIL(`factorbits`) = 32 LIMIT 1;
+------------+--------------------------+------------+-------------------+
| exponent   | factor                   | factorbits | ratio             |
+------------+--------------------------+------------+-------------------+
|     183329 |               2610604961 |    31.2817 |         0.0776725 |
|     180503 |            3617977583593 |    41.7183 |       111.0428958 |
|    9255461 |         2586129499585367 |    51.1997 |        30.1890313 |
|     180799 |      3519866543349568537 |    61.6102 | 107677727.6770670 |
|   50000017 |            3615901229407 |    41.7175 |         0.0014463 |
|   50003377 |         2629357074783431 |    51.2236 |         1.0515767 |
|   50008481 |      3034732720866794417 |    61.3963 |      1213.5033475 |
|   50005399 |   3363174795243180938039 |    71.5103 |   1344966.3214317 |
|  150000047 |            2244000703121 |    41.0292 |         0.0000997 |
|  150000667 |         3119505371338871 |    51.4702 |         0.1386396 |
|  150001763 |      4550635694009568991 |    61.9808 |       202.2494818 |
|  159919951 |   2545635882015204168857 |    71.1085 |     99537.2032619 |
|  500001259 |            2397006035647 |    41.1244 |         0.0000095 |
|  500000467 |         4503064205858041 |    51.9998 |         0.0180118 |
|  500001763 |      2741363214012610249 |    61.2496 |        10.9653775 |
|  500000701 |   4705811278909163282143 |    71.9949 |     18822.8023273 |
| 1508205187 |            2569981638649 |    41.2249 |         0.0000011 |
| 1500000107 |         3765639268615583 |    51.7418 |         0.0016735 |
| 1504000241 |      4011556747001189569 |    61.7989 |         1.7734827 |
| 4205000179 |            2287520097377 |    41.0569 |         0.0000001 |
| 4203000167 |         4451052830856007 |    51.9831 |         0.0002519 |
| 4209000109 |      2642363834740997513 |    61.1965 |         0.1491502 |
| 4200003253 |   4559069828180196281983 |    71.9492 |       258.4456111 |
+------------+--------------------------+------------+-------------------+[/code]I don't think this is quite the same as what I had before, but something along the same general idea.
Those who understand math can tweak what I'm using into something that makes more sense; I just played with numbers until I saw something that looked reasonable (monkeys and typewriters, you know...) |
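James's scaled metric from the query above can also be sketched outside SQL. Here is a minimal Python reimplementation (the function name `factor_worth_ratio` is mine for illustration, not anything GPU72 actually uses). Since factorbits = log2(factor), the metric POW(2, factorbits) / exponent² is just factor / exponent²: exponentially more reward for harder (larger) factors, with a quadratic penalty on exponent size so trivial factors on huge exponents no longer dominate.

```python
import math

def factor_worth_ratio(exponent: int, factor: int) -> float:
    """James's scaled metric: 2^factorbits / exponent^2.

    Since factorbits = log2(factor), this equals factor / exponent^2:
    bigger factors score exponentially more, larger exponents are
    penalized quadratically.
    """
    factorbits = math.log2(factor)           # e.g. ~31.2817 for 2610604961
    return 2.0 ** factorbits / (exponent * exponent)

# First and last rows of the table above:
print(factor_worth_ratio(183329, 2610604961))                  # ~0.07767
print(factor_worth_ratio(4200003253, 4559069828180196281983))  # ~258.4
```

The last digits differ slightly from the table, presumably because `factorbits` is stored with limited precision in the database.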
[QUOTE=chalsall;398089]...arriving 42.2 minutes later[/QUOTE]For an average speed of 5km/s. Now that's some fast food! :w00t:
|
[STRIKE]Do you have a rough date? Like 2011-2013 or better?[/STRIKE]
[STRIKE]Was it in this thread?[/STRIKE] Do you remember which thread it was in? Edit: Working on the assumption that it was in this thread, I started searching. Is this it? [QUOTE=James Heinrich;323022]You need some kind of self-balancing metric, perhaps something along the lines of[code]worth = GHd_saved * (GHd_factor / GHd_LL)

// examples:
// 72-bit TF factor on 60M (TF to 2[sup]73[/sup])
value = (133.292 + 133.292 + 15.94) * (11.956 / 133.292) = 89.7

// 72-bit TF factor on 900M (TF to 2[sup]84[/sup])
value = (24825 + 24825 + 4352) * (0.5314 / 24825) = 1.2

// 83-bit TF factor on 900M (TF to 2[sup]84[/sup])
value = (24825 + 24825 + 2176) * (1088 / 24825) = 2271

// 93-bit P-1 factor on 900M (TF to 2[sup]84[/sup])
value = (24825 + 24825 + 0) * (684 / 24825) = 1368[/code]This correctly shows that a 72-bit factor is worth a lot less on larger exponents than on smaller, despite "saving" a lot more LL effort. As can be seen above, it also works well with P-1 factors -- large factors can be found with relatively less effort than TF factors, but the above automatically scales it in what I think is an appropriate manner.[/QUOTE] |
[QUOTE=Dubslow;398096]Edit: Working on the assumption that it was in this thread, I started searching. Is this it?[/QUOTE]
Yup! Thanks!!! |
[QUOTE=James Heinrich;398095]For an average speed of 5km/s. Now that's some fast food! :w00t:[/QUOTE]
We're considering offering franchises. |
[QUOTE=chalsall;398097]Yup! Thanks!!![/QUOTE]
That formula is ... problematic. Because, after simplification, it comes out as (2 + p-1 bonus) * GHd_factor, where p-1 bonus is earned if an exponent is yet to have a P-1, and is of the order of roughly 0.1. So basically, a small number (2-2.1) times GHd_factor. In other words, you are still measuring the work expended in finding the factor, not the work saved! |
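axn's reduction is easy to verify numerically. A quick sketch using the inputs from James's quoted 60M example (GHd_LL = GHd_DC = 133.292, GHd_P1 = 15.94, GHd_factor = 11.956); note the product works out to roughly 2.1 × GHd_factor, exactly as axn observes:

```python
# Sanity check of axn's simplification of James's proposed formula.
ghd_ll, ghd_p1, ghd_factor = 133.292, 15.94, 11.956

ghd_saved = ghd_ll + ghd_ll + ghd_p1       # LL + DC + pending P-1
worth = ghd_saved * (ghd_factor / ghd_ll)  # James's formula as quoted

# Algebraically: (2*L + P) * (F / L) == (2 + P/L) * F
simplified = (2 + ghd_p1 / ghd_ll) * ghd_factor

print(round(worth, 1))  # ~25.3, i.e. about (2 + ~0.12) times GHd_factor
```

So the "worth" is essentially a small constant times the effort spent finding the factor, which is axn's point: it measures work expended, not work saved.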
[QUOTE=axn;398137]That formula is ... problematic.[/QUOTE]I'd forgotten what my original proposal was until [i]Dubslow[/i] found it, but my revised proposal [url=http://www.mersenneforum.org/showpost.php?p=398092&postcount=3513]above[/url] depends on bits and not GHz-days. Feel free to critique it also; I have no sentimental attachment to it, nor am I skilled in math, so I'm sure someone else can propose a similar concept with numbers that make more sense.
|
[QUOTE=James Heinrich;398178]I'd forgotten what my original proposal was until [i]Dubslow[/i] found it, but my revised proposal [url=http://www.mersenneforum.org/showpost.php?p=398092&postcount=3513]above[/url] depends on bits and not GHz-days. Feel free to critique it also, I have no sentimental attachment to it, nor am I skilled in math so I'm sure someone else can propose a similar concept with numbers that make more sense.[/QUOTE]
Well, IMO, work saved is not (and should not be) related to work expended. A straightforward calculation of work saved is good enough. However, in order to avoid people gaming the system by TF'ing too far ahead of the LL wavefront, you can discount the work saved by how far away the exponent is from the current LL wavefront (which advances at ~4M/year). This means that a factor found ahead of the wavefront starts out with a small value of "work saved", but will appreciate over time, eventually reaching the full value (after decades?!). |
[QUOTE=axn;398137]In other words, you are still measuring the work expended in finding the factor, not the work saved![/QUOTE]
OK... Do you (or anyone) have a better suggestion? The goal is simply to not have very large exponents be given a disproportionate amount of "Saved" credit. Although, in reality, the metric is relatively meaningless, and was in fact implemented (as suggested by Dubslow) simply in order to draw Jerry and Mike back to LLTF from DCTF (years ago). |
[QUOTE=axn;398182]However, in order to avoid people gaming the system by TF-ing too far ahead of the LL wavefront, you can discount the work saved by how far away the exponent is from the current LL (@ 4m/year). This means that a factor found ahead of the wavefront starts out with a small value of "work saved", but will appreciate over time, eventually reaching the full value (after decades?!).[/QUOTE]
We cross-posted... What I liked about James' proposal is that it avoided the need to keep track of the "wavefront". This is even more important at the moment, as there are many "fronts" in the LL range, including the P-1 wave in the Cat 4 region (which we're fighting to keep ahead of just going to 74!). |
[QUOTE=chalsall;398183]The goal is simply to not have very large exponents be given a disproportionate amount of "Saved" credit[/QUOTE]
Large exponents lead to more "work saved" not because they are currently at a lower TF depth (although that helps), but because they require a large amount of effort to LL. There is just no getting around that fact. And the work saved _is real_, just not relevant _today_. [QUOTE=chalsall;398184]We cross posted... What I liked about James' proposal is it avoided the need to keep track of the "wave front". This is even more important at the moment as in there are many "fronts" in the LL range, including the P-1 wave in the Cat 4 region (which we're fighting to keep ahead of just going to 74!).[/QUOTE] I think tracking the wavefront is the only real option. Although you don't actually need to "track" it. Pick a start value, say 75M, and a start date, 2015-01-01. Now the computed wavefront is (current date - start date) in years, times 4M/year, plus the start value. So, 3 months into 2015, the wavefront would be at a nominal value of 76M. The exact number doesn't matter, as long as it is close enough. Now any exponent below our computed wavefront gets full credit. Anything above the wavefront gets a discounted credit based on how far away it is from the wavefront (I don't know the correct form of the discount function; perhaps 1/d^3, where d is the distance in a suitable unit). Even this system can be gamed by doing breadth-first TF at the current wavefront. I guess this is what is driving your intuition regarding discounting the bit levels. I don't have any good suggestions. Perhaps factors found at lower bit levels could be penalised by only counting the remaining TF/P-1 work as being saved (under the theory that more TF/P-1 might also have found another factor). Only TF at the highest (optimal) bit level gets the full 2×LL effort as saved. |
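The "computed wavefront" idea above can be sketched directly from the numbers in the post (start value 75M on 2015-01-01, advancing ~4M/year). This is only an illustration: the 1/d^3 discount shape is explicitly a guess in the post, and I've shifted it to 1/(1+d)^3 so the credit is capped at 1 right at the front rather than diverging.

```python
from datetime import date

# Numbers proposed in the post above; the discount shape is a guess there.
START_VALUE = 75_000_000      # nominal wavefront at the chosen start date
START_DATE = date(2015, 1, 1)
RATE_PER_YEAR = 4_000_000     # wavefront advances ~4M exponent per year

def computed_wavefront(today: date) -> float:
    """No tracking needed: extrapolate the wavefront from a fixed anchor."""
    years = (today - START_DATE).days / 365.25
    return START_VALUE + RATE_PER_YEAR * years

def credit_discount(exponent: int, today: date) -> float:
    """Full credit at/below the wavefront; discounted above it."""
    wavefront = computed_wavefront(today)
    if exponent <= wavefront:
        return 1.0
    d = (exponent - wavefront) / 1_000_000  # distance in millions (a chosen unit)
    return 1.0 / (1.0 + d) ** 3             # 1/d^3-style falloff, capped at 1

# Three months into 2015 the nominal wavefront is ~76M:
print(round(computed_wavefront(date(2015, 4, 1)) / 1e6, 2))  # ~75.99
```

A factor found well ahead of the front thus starts out nearly worthless and appreciates as the front catches up, matching the behaviour described above.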
I would not change it. It is getting too complicated. Leave it like it is. First, as axn said, the saved work is _real_; second, at the end of the day, all these credits are just "for fun", only for guys like me to make fun of Sid and Andy, "hey, I saved more work than you" and vice versa :razz:
If ye have time to program, ye should invest it in other, more important things, like bringing back the missing millions of exponents in g-visu, and/or adding the 75 columns to those tables, and/or checking why we sometimes get P-1 assignments from GPU72 when we request first LL, and/or.... |
I'm happy knowing I save roughly 8,000,000,000 GHz-days of work every day in the 3800M range TF'ing up to 2[sup]64[/sup].
:smile: |
[QUOTE=James Heinrich;398207]I'm happy knowing I save roughly 8,000,000,000 GHz-days of work every day in the 3800M range TF'ing up to 2[sup]64[/sup].[/QUOTE]
In a thousand years, George's great * ~50 grandson will thank you... :wink: Any chance you might consider throwing a few THzDays at LLTF'ing? Even going to "just" 74 would help; things are *really* tight at the moment. P.S. You know, it sometimes blows my mind just how much computing power we have as individuals nowadays. I remember implementing a program to render the Mandelbrot set on a Commodore 64 after reading about it in SciAm -- a 6502 @ 1 MHz with 64 KB of RAM. It took over an hour just for the zoomed-out view! |
[QUOTE=chalsall;398210]Any chance you might consider throwing a few THzDays at LLTF'ing? Even going to "just" 74 would help; things are *really* tight at the moment.[/quote]
I'll do an extra 70 73M 72->74 this weekend. [quote]P.S. You know, it sometimes blows my mind just how much computing power we have as individuals now-a-days. I remember implementing a program to render the Mandelbrot set on a Commodore 64 after reading about it in SciAm -- 6502 @ 1 MHz with 64 KB of RAM. Took over an hour just for the zoomed-out view![/QUOTE] My first computer was also a 6502, at 1.023 MHz in an Apple IIe. 128 KB of RAM with the extension card :D |
[QUOTE=chalsall;398210]Any chance you might consider throwing a few THzDays at LLTF'ing?[/QUOTE]Since you asked nicely I'll divert my 580 to do a THz-day worth of work for you. Without that diversion I expected to have the entire 1-4G range TF'd to 2[sup]64[/sup] within 30 days, after 2.5 years of effort, after which I could do some more crunching for GPU72.
|
[QUOTE=James Heinrich;398215]Since you asked nicely I'll divert my 580 to do a THz-day worth of work for you.[/QUOTE]
As Moz from BBC's Ideal would say "Nicely nicely" (read: thanks). |
[QUOTE=chalsall;398210] Even going to "just" 74 would help; things are *really* tight at the moment.
[/QUOTE] Hm... I only get 74 - 75 ... |
[QUOTE=blip;398227]Hm... I only get 74 - 75 ...[/QUOTE]
Which, at the end of the day, is the most important. If you choose "Let GPU72 Decide" it will balance between TF'ing to 74 or 75 based on the demand in the different categories. If you choose any other option (including "What Makes Sense") it will honour the "Pledge" level, although that defaults to 75 bits. If you want to go lower (perfectly acceptable) choose a lower pledge. Please know that we're /really/ close to keeping up with all the other work-types. And it's not the "end of the world" if a few 66Ms get handed out at only 74, but I don't want any >70M candidates handed out for P-1'ing at less than 74, and certainly none handed out for LL'ing with less than 75 bits of TF'ing and a P-1 run already done. OCD anyone? :smile: |
Shouldn't we really be doing P-1 after TF to 75, resources permitting?
|
[QUOTE=Mark Rose;398234]Shouldn't we really be doing P-1 after TF to 75, resources permitting?[/QUOTE]
Possibly. But resources aren't currently permitting. |
Are the assignments with lower bit depths no longer available? I normally factor to 72 or 73 bits, and I wasn't even able to get 74-bit assignments.
|
[QUOTE=ixfd64;398296]Are the assignments with lower bit depths no longer available? I normally factor to 72 or 73 bits, and I wasn't even able to get 74-bit assignments.[/QUOTE]
According to the logs, you asked for the "High" value for the range to be 73, rather than (I presume) 73,000,000 and/or a pledge of 73. [CODE]
| ixfd64 | 2015-03-21 18:38:08 | LF(0) -- N: 100 G: 0 P: 75 L: 72 H: 73 -- n: 100 p: 75 l: 66000000 h: 73 -- A: 0 |
| ixfd64 | 2015-03-21 18:37:47 | LF(0) -- N: 100 G: 0 P: 75 L: 0 H: 73 -- n: 100 p: 75 l: 66000000 h: 73 -- A: 0 |
[/CODE] Nothing has changed; TF'ing to less than 75 is still available. |
Thanks for the information. I'll give it a try again after this batch of 20 75-bit assignments.
|
[QUOTE=ixfd64;398301]Thanks for the information. I'll give it a try again after this batch of 20 75-bit assignments.[/QUOTE]
Thanks. Please understand I truly want to know if I've made a mistake. It happens often.... :smile: |
[QUOTE=chalsall;398229]OCD anyone? :smile:[/QUOTE]
That's CDO. Alphabetical, please. |
1 Attachment(s)
Look at this guy! He is amazing; he not only overtook me, but soon he will overtake himself...
[ATTACH]12419[/ATTACH] :razz: -------- (edit: clarification, the snip is from the TF lifetime top) |
[QUOTE=LaurV;398470]Look to this guy[/QUOTE]
LOL... :smile: For those who don't get the joke, LaurV created an account on Primenet in my name, and has been submitting his TF'ing results under it. Thanks for all the cycles, my friend. BTW, tonight I'll send a special dish of blackened flying fish with some macaroni pie through the tunnel.... |
Ahh! I was wondering why you were doing so much TF with the price of electricity there :D
|
He he, in fact I let the mill run for a while after our "experiment" ended, only to be able to make this joke; I had been planning it for a few weeks :smile:
Yesterday after I took the snip I switched them back. |
[QUOTE=James Heinrich;398215]Since you asked nicely I'll divert my 580 to do a THz-day worth of work for you.[/QUOTE]1,996 GHz-days completed (it was supposed to be 2,009 GHz-d, but a found factor was only worth 13 instead of 26).
Back to the real work... :truck: |
[QUOTE=James Heinrich;398586]1,996 GHz-days completed (was supposed to be 2009GHz-d but a found factor was only worth 13 instead of 26).[/QUOTE]
Thanks much! :smile: |
1 Attachment(s)
Not too bad so far!
:max: |
[QUOTE=Xyzzy;398986]Not too bad so far![/QUOTE]
Wow! Slightly more than twice the average number of factors found per attempt! Lucky you! :smile: |
[QUOTE=Xyzzy;398986]Not too bad so far!
:max:[/QUOTE] I wish mine was as good... TF75: Total Runs 1,811, Factors 10, GHzDays 9,721.5 |
[QUOTE=NickOfTime;399030]I wish mine was as good...[/QUOTE]
Given infinite samples, you should see an approximate [URL="https://www.gpu72.com/reports/factor_percentage/"]1.072%[/URL] success rate going to 75 from 74. But statistics has no memory. YMMV. |
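For anyone curious just how unlucky 10 factors in 1,811 runs actually is, here is a quick binomial sketch (assuming all 1,811 runs were independent 74-to-75 attempts at the quoted 1.072% rate):

```python
import math

# Numbers from the two posts above; simple binomial model.
n, k, p = 1811, 10, 0.01072

mean = n * p                      # expected factors found
sd = math.sqrt(n * p * (1 - p))   # binomial standard deviation

print(round(mean, 1))             # ~19.4 factors expected
print(round((k - mean) / sd, 1))  # ~-2.1 standard deviations below expectation
```

About two standard deviations below expectation: unlucky, but not outrageously so. As noted above, statistics has no memory.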
Hey, I heard you guys could use some extra help working on TF'ing to 74 and 75. I can give 30 days of around 600 GHz-days per day...
Should I just fetch work with "Let GPU72 Decide" or are there specific values I should use? Scott |
[QUOTE=swl551;399041]Hey, I heard you guys could use some extra help working on TFing to 74 and 75. I can give 30 days of around 600ghz per day...
Should I just fetch work with "Let GPU72 Decide" or are there specific values I should use? Scott[/QUOTE] Seems to be the recommended setting. The project thanks you. |
[QUOTE=petrw1;399047]The project thanks you.[/QUOTE]
Indeed. :smile: And yes, "Let GPU72 Decide" or "What makes sense" (the latter to 74 or 75) are both good. Thanks! |
I brought an extra system online good for ~380 GHz-days per day along with some P-1 (it could do more, but cooling is an issue). I should be able to have it on for a while.
break break... For some reason every once in a while I end up with some assignments that are expiring. Is there a way to have GPU72 just put them back into my work queue at the 10 day mark so they get done? A lot of work for little gain, but I hate them sitting for 30+ days when I hardly ever check the assignments page anymore. :smile: |
Assigned to numbers others are LL'ing
Chalsall,
I'm new to GPU72 trial factoring with GPUs. I mostly Fold@Home but send some of my GPU cycles to help out here. Last night I got six 74-to-75 assignments. When I had done the first one, I looked at the history and saw that it had been assigned to "For_Research" to do LL testing by Primenet before I had gotten the assignment for TF'ing from GPU72. I checked the other five and they also show that they are already assigned by PrimeNet to For_Research for LL, so there is little value in my TF'ing those exponents. I have just left them sitting as assignments and am not running them. Maybe this happened because they were expiring at the end of the month, but for a short time both GPU72 and PrimeNet owned them and so both gave them out? Thanks for your help. -Walt |
For Research [I]is[/I] chalsall. Please complete them and submit them.
|
Hi Walt, and welcome!
For Research is one of the names chalsall uses as part of the GPU72 setup. That is, FR obtains assignments from PrimeNet, and then GPU72 distributes them, first, for any needed TF and P-1, before sending them out for LL. In many cases, these assignments have gone out before for LL, but have expired according to the "new" expiry rules. (They've been around for a while, now, though they are tweaked occasionally for optimum results.) It does not sound like you have actual conflicting assignments, but the way GPU72 works might make it seem that way. Carry on, and thanks for joining our merry band! :smile: |
[QUOTE=Walt;399110]I'm new GPUto72 trial factoring with GPU's. I mostly Fold@Home but send some of my GPU cycles to help out here.[/QUOTE]
Thank you very much for joining and helping out -- it's badly needed at the moment!!! :smile: [QUOTE=Walt;399110]Last night I got 6 74 to 75 assignments. When I had done the first one, I looked at the history and saw that it had been assigned to "For_Research" to do LL testing by Primenet before I had gotten the assignment for TF'ing from GPUto72.[/QUOTE] As axn and kladner said, this is fine. Most assignments will appear on Primenet as being "owned" by "GPU Factoring", but a few will appear as being owned by "For Research". The latter are candidates that my "Observing Spider" fetched from Primenet which aren't yet TF'ed to at least 75 bits -- this is because they've been recycled by Primenet, unreserved by the original owner, or just completed a P-1 run by someone through Primenet at 74 bits. GPU72 will never intentionally hand out an assignment owned by someone else. Thanks again for joining our little side project of GIMPS. Please let us know if you have any additional questions or comments. |
[QUOTE=flashjh;399102]For some reason every once in a while I end up with some assignments that are expiring. Is there a way to have GPU72 just put them back into my work queue at the 10 day mark so they get done? A lot of work for little gain, but I hate them sitting for 30+ days when I hardly ever check the assignments page anymore. :smile:[/QUOTE]
Thanks for the additional fire power! WRT automatically adding them back (basically re-issuing), it would be quite a bit of work, partially because GPU72 has no way of knowing if they've actually "gone missing" or are just on a machine which doesn't regularly check in. One possible thing I could do is have an option for an email notice when an assignment is beyond a certain limit (opt-in, of course -- I hate automatic spam). |
Email would be great also, thanks!
|
[QUOTE=chalsall;399127] As axn and kladner said, this is fine. Most assignments will appear on Primenet as being "owned" by "GPU Factoring", but a few will appear as being owned by "For Research". The latter are candidates that my "Observing Spider" fetched from Primenet which aren't yet TF'ed to at least 75 bits -- this is because they've been recycled by Primenet, unreserved by the original owner, or just completed a P-1 run by someone through Primenet at 74 bits. GPU72 will never intentionally hand out an assignment owned by someone else.[/QUOTE]
Hilarious. I completely misunderstood. I had seen the check outs to GPU_factoring and that made sense. But I had not connected the dots that For Research was you checking them out for factoring. Oops. The five remaining ones are back running now. -Walt |
[QUOTE=Walt;399140]Hilarious. I completely misunderstood. I had seen the check outs to GPU_factoring and that made sense. But I had not connected the dots that For Research was you checking them out for factoring. Oops. The five remaining ones are back running now.
-Walt[/QUOTE] It confused me and many others at first, too. It makes sense once you understand how the system works though. |
[QUOTE=chalsall;399128]Thanks for the additional fire power!
WRT automatically adding them back (basically re-issuing), it would be quite a bit of work. Partially because GPU72 has no way of knowing if they've actually "gone missing", or just on a machine which doesn't regularly check in. One possible thing I could do is have an option for an email notice when an assignment is beyond a certain limit (opt-in, of course -- I hate automatic spam).[/QUOTE] Ok, something else is going on with the current batch of exponents that were set to expire. I put them back into the queue without checking to see if they were reported already (my lazy fault) to run and all of them reported as not needed. So it looks like spidey just hasn't seen that they're done already. |
[QUOTE=flashjh;399202]Ok, something else is going on with the current batch of exponents that were set to expire. ... So it looks like spidey just hasn't seen that they're done already.[/QUOTE]
Hmmm... Could you please PM me a few examples? Spidey should observe completion within (at worst) two hours. |
These are the current ones:
66987227 66987269 66987493 66987533 66987631 |
[QUOTE=flashjh;399232]These are the current ones:
66987227 66987269 66987493 66987533 66987631[/QUOTE] Hmmm... Interesting. One thing I notice querying the MySQL database on GPU72 is each of these have been "extended". Most probably a very SPE on my part. Tomorrow I'll drill down further. Thanks for bringing this to my attention (sincerely). |
[QUOTE=chalsall;399245]Hmmm... Interesting.
One thing I notice querying the MySQL database on GPU72 is each of these have been "extended". Most probably a very SPE on my part. Tomorrow I'll drill down further. Thanks for bring this to my attention (sincerely).[/QUOTE] Right, I noticed they were about to expire so I extended them, recreated the work and put them on one of my systems. Should have checked to see if they were done already, but didn't. No problem, no rush... Thanks for checking :-) |
Could there be value in a weekly State-Of-The-Union?
Something as basic as:...just making up stuff for dramatic effect :)
[CODE]LL-TF: 10 days Ahead of the Curve: YELLOW
DC-TF: 30 days Ahead of the Curve: GREEN
P-1:    1 day Ahead of the Curve: RED[/CODE] Then those of us who are flexible in where we put our resources could reallocate accordingly. Because even if I use "Let GPU72 Decide", if I am working on a less necessary work type it doesn't help. |
[QUOTE=petrw1;401354]Something as basic as:...just making up stuff for dramatic effect :)[/QUOTE]
OK... Not a bad idea. It's been a while since I did an "FYI" post. Not sure I can commit to doing it every week; perhaps every fortnight -- it takes a bit of work to interpret all the trends and "signals". I'll put together a "report" at the end of this next Sunday, after Xyzzy and Oliver (most likely) dump their (huge) weekly batch.

Short version right now: we're looking OK for TF'ing ahead in all of the DC and LL categories (can't tell you exactly how many days ahead without a bit of work). This means, for LLTF, at least 74 for Cat 1 and Cat 2, and at least 75 for Cat 3 and Cat 4 (with P-1 done).

What we're still having some difficulty with is staying ahead of the P-1'ing. Many P-1's are being handed out via Primenet at "only" 74, and a few are being handed out via GPU72 (mostly to Oliver) at only 73.

(Just because I feel like gloating, I seem to remember a certain David claiming we couldn't sustain going to 73....) |
[QUOTE=chalsall;401358]Not sure I can commit to doing it every week; perhaps every fortnight -- it takes a bit of work to interpret all the trends and "signals".[/QUOTE]Didn't [url=http://mersenneforum.org/showpost.php?p=288006&postcount=447]someone once say[/url] "[i]Never send a human to do a machine's job[/i]" ?
:whistle: |
[QUOTE=James Heinrich;401362]:whistle:[/QUOTE]
You are an astute observer. Just the way we like it! :smile: |
GPU72 Status...
OK, as requested, here's a snapshot of where we currently stand with regard to TF'ing in front of the various ranges.
[CODE][B]Range      Available  LL'ed 30 Day  LL'ed/Day  Days Ahead  TF'ed/Day[/B]
DC             29017          4225     140.83      206.04      81.42
LL 1 & 2        6788          2212      73.73       92.06       0.00
LL 3            2121          4715     157.17       13.50     190.47
[U]LL 4            3572           595      19.83      180.10      95.47[/U]
LL Totals      12481          7522     250.73      285.66     285.94[/CODE]
Please note that we're comfortably ahead of the DC'ers, so I batched all of the "Categories" together. DC Cat 4 is where the "churners" live, so although many tens of thousands of candidates are currently assigned, approximately 97% of them will be recycled. We /are/ slowly falling behind the DC'ers (by about 59.4 a day), but we have enough of a buffer not to worry about this for quite some time.

Similarly, interestingly, there are very few LL Cat 2 assignments issued, so I combined the Cat 1 and Cat 2 ranges (all already TF'ed to at least 74 bits). Cat 3 is the most worked, and everything being assigned for it and Cat 4 is TF'ed to at least 75 bits (with P-1 done).

As previously mentioned, however, we're currently having difficulty keeping ahead of the P-1'ers. Thus, although we've got a comfortable buffer for LL assignment, please don't think that additional LLTF'ing isn't still critical -- it is!

Further to this, please know that currently "Let GPU72 Decide" is mostly assigning work in the 74M range to 74 bits, which has not yet had a P-1 run done. Once we get a reasonable buffer, this will move back to 75 bits. This is because 74M is far enough ahead of the LL Cat 4 wave-front that "Spidy" can pull its rip-cord and release candidates for P-1'ers without risk of them being assigned to LL workers.

Please let me know if there are any questions or comments. And, as always, thanks for all the cycles everyone! :tu: |
Whoops!!! I made a serious error in my spreadsheet: for the "LL Totals" row I summed the "Days Ahead" column, when I should have extended the per-row calculation down the column (read: it should be "=B7/D7" rather than "=sum(E4:E6)").
The correct (I think) calculations are:
[CODE][B]Range      Available  LL'ed 30 Day  LL'ed/Day  Days Ahead  TF'ed/Day[/B]
DC             29017          4225     140.83      206.04      81.42
LL 1 & 2        6788          2212      73.73       92.06       0.00
LL 3            2121          4715     157.17       13.50     190.47
[U]LL 4            3572           595      19.83      180.10      95.47[/U]
LL Totals      12481          7522     250.73       49.78     285.94[/CODE]
This means we're only ~50 days ahead for LLTF'ing, rather than the ~285 days originally reported. Sorry about the error.

(Just goes to show you: spreadsheets are code, and thus SPE's should be watched out for, as the [URL="http://www.bloomberg.com/bw/articles/2013-04-18/economists-spreadsheet-error-upends-the-debt-debate"]Economics and Policy world discovered the hard way a couple of years ago[/URL]. Edit: [URL="http://www.reuters.com/article/2013/04/18/us-global-economy-debt-herndon-idUSBRE93H0CV20130418"]a second article on this which doesn't downplay the significance as much as the first[/URL].) |
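For anyone who'd rather sanity-check the fix outside a spreadsheet, here's a minimal Python sketch. The figures come from the table above; the function name and layout are mine, not anything from GPU72:

```python
# "Days Ahead" = candidates available / candidates LL'ed per day.
def days_ahead(available, lled_per_day):
    return available / lled_per_day

# Per-category rows from the corrected table: (Available, LL'ed/Day)
rows = {
    "LL 1 & 2": (6788, 73.73),
    "LL 3":     (2121, 157.17),
    "LL 4":     (3572, 19.83),
}

# The totals row must apply the same per-row formula to the summed
# columns (the "=B7/D7" fix), NOT sum the per-row "Days Ahead" values.
total_available = sum(a for a, _ in rows.values())   # 12481
total_rate      = sum(r for _, r in rows.values())   # 250.73

print(round(days_ahead(total_available, total_rate), 2))  # prints 49.78
```

The wrong version (summing the column) gives 92.06 + 13.50 + 180.10 = 285.66, which is how the original ~285-day figure appeared.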
Temporal Worker's Progress reports...
Just to let everyone know, I fixed a couple of SPEs on the temporal Workers Progress reports over the weekend.
There were two issues. First, those who did some work of a particular type (for example, DCTF) but then stopped didn't "slide off" the report. I had a "left outer join" in the query which I thought would take care of this, but for some reason it didn't.

The second issue was that I was using the datediff() function instead of the more appropriate timestampdiff() function. This meant that the totals ran from midnight (UTC) to the current time, rather than over the appropriate time period. This was most evident in the "Last Day" reports.

Please review the reports (for example, [URL="https://www.gpu72.com/reports/workers/dctf/day/"]DCTF over the last Day[/URL] and [URL="https://www.gpu72.com/reports/workers/lltf/day/"]LLTF over the last Day[/URL]) and let me know if you see anything strange. |
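To illustrate the two window semantics outside of SQL, here's a small Python sketch of the difference (the timestamp is made up for the example; this isn't GPU72's actual code):

```python
from datetime import datetime, timedelta, timezone

# Pretend the report is generated at 09:30 UTC.
now = datetime(2015, 6, 1, 9, 30, tzinfo=timezone.utc)

# DATEDIFF()-style "last day": everything since midnight UTC,
# so at 09:30 only 9.5 hours of work gets counted.
midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
since_midnight = now - midnight

# TIMESTAMPDIFF()-style "last day": a true rolling 24-hour window.
rolling = now - (now - timedelta(hours=24))

print(since_midnight)  # prints 9:30:00
print(rolling)         # prints 1 day, 0:00:00
```

Checking the reports just before midnight UTC made the two windows nearly coincide, which is why that workaround gave a full day's totals.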
Thanks! The since-midnight thing was annoying :)
|
Excellent! I was always having to check just before midnight UTC to see a full day's worth of work. Now I can check anytime. :smile:
|
[QUOTE=chalsall;402618]This meant that the totals were from midnight (UTC) to current time, rather than the appropriate time period. [/QUOTE]
Ahaaa... This would make davieddy very proud :razz:; one of the main reasons for his arguments (for which he was banned once) was that one table was sync'd from midnight and the other from midday, hehe... |
[QUOTE=LaurV;402675]Ahaaa... This would make davieddy very proud :razz:, one of the main reasons of his arguments (for which he was banned once) was that one table was sync'd from midnight and the other from midday, hehe...[/QUOTE]
LOL. And he was still wrong. I sincerely hope he's still around (as my late father used to say) "on this bloody ball of wax". |
Workers' progress not working...
The drill-downs under "Workers' Progress" — "Overall work" — "Last day/week/month/quarter" are only returning ten names. This started a couple of days ago.
|
[QUOTE=Chuck;402748]The drill-downs under "Workers' Progress" — "Overall work" — "Last day/week/month/quarter" are only returning ten names. This started a couple of days ago.[/QUOTE]
Damn... Thanks... Fixing one bug caused another.... (Sigh...) |
Am I correct that the account "GPU Factoring" is there to reserve TF/LL work for the users on GPU72? I am wondering because, on the active Assignment Page, I can see hundreds of assignments blocked for many years for LL work, while no one is working on them and they appear not to be available on the site. Is something blocking them that shouldn't be?
Here are some example assignments: [URL="http://www.mersenne.org/report_exponent/?exp_lo=332206453&full=1"]http://www.mersenne.org/report_exponent/?exp_lo=332206453&full=1[/URL]: Could be given out for P-1 or LL, but hasn't been worked on for 3 years. [URL="http://www.mersenne.org/report_exponent/?exp_lo=332212753&full=1"]http://www.mersenne.org/report_exponent/?exp_lo=332212753&full=1[/URL]: Same here ... I could list loads of these. They seem as if they should be released or something. |
[QUOTE=manfred4;402801]Am I correct that the account "GPU Factoring" is there to reserve TF/LL work for the users on GPU72?[/QUOTE]
That is correct. [QUOTE=manfred4;402801]I am wondering because, on the active Assignment Page, I can see hundreds of assignments blocked for many years for LL work, while no one is working on them and they appear not to be available on the site. Is something blocking them that shouldn't be?[/QUOTE] George asked me to bring in those candidates in the 332M range which are not yet appropriately TF'ed. I brought in approximately 3,500 candidates between 332192831 and 332.6M which had not been assigned to registered users, and which were TF'ed to less than 78 bits.

Because of the way I do this, those candidates which are reserved by truly Anonymous (read: non-registered) users via the manual assignment form are transferred to the "GPU Factoring" account. These are what you are seeing -- most of the 3,500 candidates should show a registration date of 2015.05.17 and will be released for assignment once they've been TF'ed to at least 78 bits.

Please be aware that my "Observation Spider" is watching the 332M range, and will release candidates at 77 bits if any are about to be assigned to LL'ers at below that. |
[QUOTE=chalsall;402816]Please be aware that my "Observation Spider" is watching the 332M range, and will release candidates at 77 bits if any are about to be assigned to LL'ers at below that.[/QUOTE]
I have left my machines to finish the assignments that I had given them prior to the [STRIKE]hostile takeover[/STRIKE] assumption of management of the range by GPU72. :grin: A lot of what I am doing is 'poaching' the TF work of exponents assigned to LL workers. I understand that you have not sucked them up. As my worktodo's get thin I will hit up the GPU72 server for new assignments. |
[QUOTE=chalsall;402816]That is correct.
George asked me to bring in those candidates in the 332M range which are not yet appropriately TF'ed. I brought in approximately 3,500 candidates between 332192831 and 332.6M which had not been assigned to registered users, and which were TF'ed to less than 78 bits. Because of the way I do this, those candidates which are reserved by truly Anonymous (read: non-registered) users using the manual assignment form are transferred to the "GPU Factoring" account. These are what you are seeing -- most of the 3,500 candidates should show a registration date of 2015.05.17 and will be released for assignment once they've been TF'ed to at least 78 bits. Please be aware that my "Observation Spider" is watching the 332M range, and will release candidates at 77 bits if any are about to be assigned to LL'ers at below that.[/QUOTE] That one I know about, but I was wondering about those assignments that have been held back for years. Before 2015-05-17 there should be no assignments, since GPU72 did not have any assignments available in that range for some months. Could you have a look at the two I posted with links, and check whether they are assigned/available from your site or blocked for no reason, as I supposed? |
[QUOTE=manfred4;402884]Could you have a look into those two I posted here with links and check, if they are assigned / available from your site or if they are blocked for no reason as I supposed?[/QUOTE]
Please forgive me for this, but connect the dots. 332206453, for example, is at 76 bits. What part of what I said above isn't clear? |
Is 78 the optimal bit level for tests at 100M digits then?
|
[QUOTE=casmith789;402945]Is 78 the optimal bit level for tests at 100M digits then?[/QUOTE]
That is about the point where P-1 should be tried, before doing the final 2-3 bits. Here is a chart that shows the crossover bit levels for an average GPU: [url]http://mersenneforum.org/showpost.php?p=324680&postcount=413[/url] |
[QUOTE=casmith789;402945]Is 78 the optimal bit level for tests at 100M digits then?[/QUOTE]
As indicated by Uncwilly and James, not really -- we should be going deeper. But most prefer to do "breadth first" up there, which is why I only brought in those up to 332.6M at less than 78 bits. This was largely to "hold" those candidates TF'ed to less than 78 so they weren't assigned for P-1'ing or LL'ing.

So everyone knows, this is what "Spidy" observed just now:[CODE]20150526_180503 INFO: Category 332... 20150526_180504 INFO: 100: 332240089,79,1 (725E76DBE4E2A295C5925514DFEC1EF0) -- Keep: 0 20150526_180504 INFO: 100: 332251531,79,1 (8C5AEFC4A9C359D842A9C1F920D34DC1) -- Keep: 0 20150526_180504 INFO: 100: 332290181,79,1 (9AA1FDE57B7E7330548ADD8DD9E3217A) -- Keep: 0 20150526_180504 INFO: 100: 332388613,79,1 (4EC5ECDEB0F09B390194E368D3CEFDC1) -- Keep: 0 20150526_180504 INFO: 100: 332438479,79,1 (3ADB82AF6A934DD71FEED251FBEEDB5D) -- Keep: 0 20150526_180504 INFO: 100: 332442211,79,1 (09A2E6050A4BD6CC4A38888F9D8B1196) -- Keep: 0 20150526_180504 INFO: 100: 332466361,80,1 (AE51CBDDB07EC429197AD04591E4D025) -- Keep: 0 20150526_180504 INFO: 100: 332466451,79,1 (24CC1498870CE1888A513A41EAF363D9) -- Keep: 0 20150526_180504 INFO: 100: 332466457,79,1 (2710DE41DF8C594F0B4FD062D14FC802) -- Keep: 0 20150526_180504 INFO: 100: 332193109,78,0 (439A5C7B00EA224948C36664CF6B2E2D) -- Keep: 0[/CODE] As in, there are a good number of candidates adequately (but not yet optimally) TF'ed, ready for LL and/or P-1 assignment. Further, please know that, except for the occasional surge, there are very few such assignments.

For those doing TF'ing work up there, it would help the project more if you did "depth first". But be aware that taking a candidate up to 78 or so can take a day or so, even on a really good GPU. |
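The "how deep is deep enough" trade-off behind the chart can be sketched as a back-of-envelope calculation. This is only a hedged illustration using the standard GIMPS heuristic (the chance of a factor between 2^b and 2^(b+1) is roughly 1/b, while TF cost doubles per bit); the function name and the cost figures are made up, not GPU72's actual model:

```python
# TF one more bit level when its cost is below the expected LL work saved.
# A found factor eliminates the LL test plus its double-check.
def worth_another_bit(bit, tf_cost, ll_cost, tests_saved=2):
    """tf_cost and ll_cost in the same units (e.g. GHz-days)."""
    expected_saving = (1.0 / bit) * tests_saved * ll_cost
    return tf_cost < expected_saving

# Illustrative (made-up) numbers for a 332M-range exponent:
print(worth_another_bit(77, tf_cost=150, ll_cost=9000))   # prints True
print(worth_another_bit(80, tf_cost=1200, ll_cost=9000))  # prints False
```

With numbers in this ballpark the crossover lands in the high 70s of bits, consistent with the "78 plus a final 2-3 bits after P-1" discussion above.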
Brief surge of DCTF to 72.
Just so everyone knows, we need a little bit of a surge in DCTF to 72 in order to satisfy the churners with a bit of a buffer.
Anyone so inclined is asked to take some candidates from 71 bits to 72, preferably from the lowest exponents. Not yet critical, but welcome. |
How many exponents a day do we need?
|
[QUOTE=Mark Rose;403073]How many exponents a day do we need?[/QUOTE]
For a steady state, about 40. |