mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   PrimeNet (https://www.mersenneforum.org/forumdisplay.php?f=11)
-   -   OFFICIAL "SERVER PROBLEMS" THREAD (https://www.mersenneforum.org/showthread.php?t=5758)

Mark Rose 2014-08-17 01:06

[QUOTE=Mark Rose;380482]Edit: Oh, shite... Sorry. In my haste I edited your message, rather than posting my own. I'm currently using Firefox because Chrome has decided to randomly crash, but only the latter can view YouTube videos without installing Flash.[/QUOTE]

Dude. So confusing.

kracker 2014-08-17 03:23

[QUOTE=Mark Rose;380547]Dude. So confusing.[/QUOTE]

[url]http://www.computerworld.com/s/article/9174581/Google_s_Chrome_now_silently_auto_updates_Flash_Player[/url] :whistle:

[QUOTE=retina;380533][wildly offtopic]
SSDs are still too unreliable and have very poor endurance. For short term use, one year or less, after which you intend to throw them away (which is wasteful) then perhaps they have a use but I never use anything for such a short time to justify it. Plus they are harder to securely erase and thus require the use of always on FDE to ensure data security.
[/wildly offtopic][/QUOTE]

That was years ago... they still aren't as reliable as HDDs, I think, but still...

EDIT: For example: Specs on the Intel 730 SSD: [url]http://ark.intel.com/products/81038/Intel-SSD-730-Series-240GB-2_5in-SATA-6Gbs-20nm-MLC[/url]

Madpoo 2014-08-17 03:34

[QUOTE=retina;380533][wildly offtopic]
SSDs are still too unreliable and have very poor endurance. For short term use, one year or less, after which you intend to throw them away (which is wasteful) then perhaps they have a use but I never use anything for such a short time to justify it. Plus they are harder to securely erase and thus require the use of always on FDE to ensure data security.
[/wildly offtopic][/QUOTE]

Yeah... I mean, consumer MLC NAND is rated for maybe 3,000 program/erase cycles. Wear leveling helps and all that, along with having 10% or so of space reserved for replacing failed sections... but still.

Enterprise MLC drives are the current hotness for servers, hitting a sweetish spot of price/performance, but even then they have maybe 30,000 write cycles, and that's only done by picking the best chips out of the bin.

For truly awesome reliability it still boils down to SLC modules with over 100,000 write cycles but they cost quite a few pennies more.

Still, for typical consumer use, 3,000 write cycles and wear leveling still mean you could write and re-write a lot of data each day for years and years before you reach any limits. For typical server use, 30,000 cycles about holds true... I see spinning disks start failing after maybe 5 years, and I'd expect an enterprise MLC to last about as long.
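As a rough back-of-the-envelope sketch (assuming ideal wear leveling; the drive size and daily write volume here are made-up illustrative numbers, not from any datasheet):

```python
# Rough SSD endurance estimate, assuming perfect wear leveling
# spreads writes evenly across every cell. Illustrative numbers only.
capacity_gb = 240        # hypothetical drive capacity
pe_cycles = 3000         # rated program/erase cycles per cell
host_writes_gb_day = 20  # average data written per day

total_writes_gb = capacity_gb * pe_cycles  # total writable before wear-out
lifetime_years = total_writes_gb / host_writes_gb_day / 365

print(f"Total writable: {total_writes_gb / 1000:.0f} TB")  # 720 TB
print(f"Estimated lifetime: {lifetime_years:.0f} years")   # 99 years
```

Even cutting that by an order of magnitude for write amplification and imperfect leveling still leaves roughly a decade of desktop-style writing.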

Where you really start to see the argument fall apart is for really high write applications, like constant SQL writes happening, extremely busy email servers, etc. I wouldn't mind getting some HGST SSDs but their price is kind of incredible.

Anyway, yeah, this is pretty off-topic. :)

Let's just say that for the relatively minor needs of Primenet, it doesn't take *too* much to get it running. Its problem right now is the hammering of the DB on a single pair of old U320 drives. All that data is backed by only 2 GB of RAM, which doesn't help either. I guess any system, even SSD, is going to suffer if it has to page RAM to disk... SSD is fast, but not as fast as DRAM.

Which reminds me... applications that require the very fastest speed and reliability use DRAM boards that back their data to disk in the event of power failure or reboots. Yup... even with SSDs getting better and better, DRAM-based boards are still very popular with the highest-end servers.

That's one reason I like to kit out my SQL servers with enough system RAM to hold most or all of the data in memory... SQL's caching can keep the readable bits in memory and unless we're doing a ton of writes for some reason, it's very fast to query once it reaches a steady state.

Batalov 2014-08-17 03:44

(Perhaps the tuning thread needs spawning?)

Mark Rose 2014-08-17 03:51

[QUOTE=kracker;380551][url]http://www.computerworld.com/s/article/9174581/Google_s_Chrome_now_silently_auto_updates_Flash_Player[/url] :whistle:
[/quote]

I meant about him editing my post.

[quote]
That was years ago... they still aren't as reliable as HDDs, I think, but still...

EDIT: For example: Specs on the Intel 730 SSD: [url]http://ark.intel.com/products/81038/Intel-SSD-730-Series-240GB-2_5in-SATA-6Gbs-20nm-MLC[/url][/QUOTE]

I would say SSDs as a whole are reliable enough. I wouldn't run that drive in a database machine, but it would be excellent for desktop use. Perhaps I'll get one for home. At work, all our MySQL and Cassandra machines are SSD only. Our desktop machines use SSDs for the OS. I've seen more spinning rust fail than SSDs fail over the last two years.

Madpoo 2014-08-17 03:58

[QUOTE=kracker;380551]...they still aren't as reliable as HDDs, I think, but still...
[/QUOTE]

I love my SSD on my laptop and desktop (and the wife's laptop). But you're right, they can and will eventually die. In fact I had one in my laptop that died last year... it was under warranty so I got a brand new one (and the next model up since Kingston retired the model I had).

But that's just an exclamation point on the notion that EVERYONE should be backing up their important data. Yeah, my drive failed but it was all backed up so it was more just the hassle of reinstalling the OS on a new one, but at least my docs and photos and stuff were okay.

I've had too many times in the past where something died and my backups were either broken or, duh, I forgot to make any. That moment when you format what you thought was a spare floppy (remember those days?) only to find out it had the only copy of your term paper... that kind of stuff changes a man. I back up all my home systems to no fewer than 3 separate terabyte-sized drives and keep them in different locations in case of theft/fire/whatever. Maybe I'm paranoid, but I think my wife would kill me if all her photos got lost. :smile:

Let's all be glad George is backing up the Primenet database (and we have copies on the test server now too). It would suck to lose track of which exponents were done, their residues, factors found, etc. How many years of CPU work involved? :) We're talking cloud backups too for the future... don't keep all the eggs in one basket (or two, or even three) if it's important enough.

Madpoo 2014-08-17 04:10

[QUOTE=Mark Rose;380555]I would say SSDs as a whole are reliable enough. I wouldn't run that drive in a database machine, but it would be excellent for desktop use. Perhaps I'll get one for home. At work, all our MySQL and Cassandra machines are SSD only. Our desktop machines use SSDs for the OS. I've seen more spinning rust fail than SSDs fail over the last two years.[/QUOTE]

If I had SSD's on a server, I'd be watching them for signs of failure. At least they report stats on how many bad blocks have been mapped out, and once they use up all of their reserved capacity for that, they really do need to be tossed.

Admins will still use at least RAID 1 no matter what. I mean, I *could* go with RAID 0 or JBOD on a dev box and not lose anything critical, but hey, my time is expensive too and I don't want to spend a day rebuilding a box when I can RAID 1 the thing for an extra couple hundred up front.

Anyway, let's say you have a consumer grade SSD rated at 3,000 write cycles, and let's say you try to maintain a good 20% of free space on the drive, which helps with wear leveling, and it has 10% of capacity reserved for bad block mapping.

You could overwrite the entire contents of the drive several times a day for several years before you really use the thing up, and desktop/laptops don't really do that much writing in general.

Enterprise MLCs like the HGST or Kingston E100 models are rated at 30,000 write cycles; I think HGST, for instance, rates one of their models to write the entire drive contents 25 times a day for 5 years or something before you'd see any degradation.
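To turn a cycle rating into a lifetime at a given number of full drive writes per day (DWPD), a quick sketch, assuming one full drive write costs one program/erase cycle per cell and ignoring write amplification and over-provisioning (so real datasheet figures will differ):

```python
# Lifetime estimate at a given drive-writes-per-day (DWPD) rate,
# assuming one full drive write consumes one P/E cycle per cell.
def lifetime_years(pe_cycles: int, dwpd: float) -> float:
    return pe_cycles / dwpd / 365

print(f"{lifetime_years(30000, 25):.1f} years at 25 DWPD")  # 3.3 years
print(f"{lifetime_years(30000, 10):.1f} years at 10 DWPD")  # 8.2 years
```

Vendors bin chips and tune over-provisioning precisely to stretch those idealized numbers, which is part of what you pay for with enterprise drives.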

The good news is that by the time they do fail, there will probably be something better, plus I only ever expect the server hardware itself to last about 5 years before it's time to put it out to pasture.

Mark Rose 2014-08-17 05:18

[QUOTE=Madpoo;380558]If I had SSD's on a server, I'd be watching them for signs of failure. At least they report stats on how many bad blocks have been mapped out, and once they use up all of their reserved capacity for that, they really do need to be tossed.

Admins will still use at least RAID 1 no matter what. I mean, I *could* go with RAID 0 or JBOD on a dev box and not lose anything critical, but hey, my time is expensive too and I don't want to spend a day rebuilding a box when I can RAID 1 the thing for an extra couple hundred up front.[/QUOTE]

We get around that by having redundancy at the machine level. I can knock any one database server offline and the system stays up. So we use RAID 0. If any part of the hardware fails we launch on new hardware from an image and synchronize the new machine to the cluster (which takes a few hours).

My dev box has no RAID. It basically consists of a large screen for running Chrome, [url=http://konsole.kde.org/]Konsole[/url], and [url=http://kate-editor.org/]Kate[/url]. All my work is remote, even the source code editing. I can do a clean reinstall of everything I need in about 15 minutes. I might spend another 15 minutes setting up mprime (for SoB), mfaktc.exe, and tweaking settings (I should really get around to putting my dotfiles on GitHub...). It's really not worth the expense of RAID.

At home my /home is on a RAID of spinning rust. I'm keen on replacing my / SSD with RAIDed SSDs and putting my /home on them, too. Backup is rsync to a remote machine (having a 175 Mbps symmetric connection is handy).

TheMawn 2014-08-17 22:29

[QUOTE=Batalov;380554](Perhaps the tuning thread needs spawning?)[/QUOTE]

I'd say move everything since we started talking about a new server to an Official "New Server" Thread.

I think the discussion is worth having but this thread was really meant for server problems.

LaurV 2014-08-18 02:11

[QUOTE=TheMawn;380593]I'd say move everything since we started talking about a new server to an Official "New Server" Thread.

I think the discussion is worth having but this thread was really meant for server problems.[/QUOTE]
+1

Prime95 2014-08-18 12:59

The Primenet server will be down starting around 7:00PM EDT. If all goes well, it will be revived on a new temporary home with new IP addresses.

