I've experienced the same for the last week or so.
I have been traveling and was just this morning able to configure a good portion of the system for LL work, should start seeing those results today.
Thanks for all the resources guys! :smile:
With regards to the slowness being experienced on GPU72... Between approximately 15 and 30 minutes after each hour the system does a bunch of work to update the stats. With the huge amount of results being submitted recently, this has increased the amount of processing going on. To add to this, Google's spider has been fetching many pages and graphs recently, some of which are very expensive to render. I'm working on caching some of the more expensive datasets. |
[QUOTE=wombatman;409390]Just for my reference, how far did you drop your RAM speeds to get stability?[/QUOTE]
Running at 1600MHz (DDR) should be good.
[QUOTE=chalsall;409411]Thanks for all the resources guys! :smile:
With regards to the slowness being experienced on GPU72... Between approximately 15 and 30 minutes after each hour the system does a bunch of work to update the stats. With the huge amount of results being submitted recently, this has increased the amount of processing going on. To add to this, Google's spider has been fetching many pages and graphs recently, some of which are very expensive to render. I'm working on caching some of the more expensive datasets.[/QUOTE] Perhaps block the URLs with the graphs with robots.txt?
[QUOTE=Mark Rose;409425]Perhaps block the URLs with the graphs with robots.txt?[/QUOTE]
Good idea. But, since anyone can access these graphs and reports (and I have found that those URLs mentioned in robots.txt as being off-limits are often of particular interest to non-well-behaved 'bots), I think it would be best to cache the data.
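For readers curious what "caching the more expensive datasets" might look like in practice, here is a minimal sketch of a time-based (TTL) cache. All names and structure here are assumptions for illustration; this is not GPU72's actual code.

```python
import time

class TTLCache:
    """Tiny time-based cache for expensive report queries.

    Illustrative only -- names and structure are assumptions,
    not GPU72's actual implementation.
    """
    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get_or_compute(self, key, compute):
        now = time.time()
        entry = self._store.get(key)
        if entry and entry[0] > now:
            return entry[1]  # still fresh: skip the expensive query
        value = compute()    # e.g. run the big hourly stats query
        self._store[key] = (now + self.ttl, value)
        return value

# Usage: every request within the TTL reuses the cached result, so a
# crawler re-fetching the same graph repeatedly costs one query per hour.
cache = TTLCache(ttl_seconds=3600)
stats = cache.get_or_compute("dctf_stats", lambda: {"results": 2_300_000})
```

The point of caching rather than blocking is that all visitors, well-behaved or not, hit the cheap cached copy instead of triggering the expensive render.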
I'll be the first to admit to being an obsessive stat watcher and graph loader. I will try to restrain myself.
[QUOTE=airsquirrels;409454]I'll be the first to admit to being an obsessive stat watcher and graph loader. I will try to restrain myself.[/QUOTE]
Please don't worry. This is something I need to deal with. I never imagined GPU72 would last as long as it has; a query which when originally written referenced 100,000 records now references over 2,300,000...
[QUOTE=chalsall;409189]Just so everyone knows, all DCTF candidates to 70 are now assigned.
I have adjusted the DCTF assignment page such that if MISFIT or mfloop is the requester and the "pledge" is 70, it is automatically bumped up to 71. This is so no workers are left without work.[/QUOTE] And mine are all completed... on to 70-71.
[QUOTE=chalsall;409411]Thanks for all the resources guys! :smile:
With regards to the slowness being experienced on GPU72... Between approximately 15 and 30 minutes after each hour the system does a bunch of work to update the stats. With the huge amount of results being submitted recently, this has increased the amount of processing going on. To add to this, Google's spider has been fetching many pages and graphs recently, some of which are very expensive to render. I'm working on caching some of the more expensive datasets.[/QUOTE] Tell Google (and crawlers in general) to avoid those pages. mersenne.org is configured to have crawlers ignore a bunch of stuff that's just related to reports on various things which are computationally expensive if they start crawling, and of little benefit for driving traffic from search engines. :smile: Things like the /report_exponent/ pages, or anywhere that only works if you're logged in like /results/ etc. If those are worthwhile to get indexed by a search engine, try just setting up the site in Google Webmaster Tools and configure a custom crawl rate so it's slow enough not to hammer the site and cause issues.
I've always found Google's crawl to be reasonable. I have had issues with Bing and Baidu slamming small sites heavily all at once. That was years ago though.