#23
"Ben"
Feb 2007
3×5×251 Posts
Quote:
If the sand is mostly quartz with a density of 2.65 g/cm^3, then you have around 9433 cm^3 of sand. With a 250 micron grain size you'd only have around 600 Mb of "storage" with 25 kg of sand. If that bag of sand costs a dollar, that's 1.67 nano-dollars per bit. Meanwhile you can get a 6 TB (48 Tb) hard drive for $140 right now... which is 2.9 pico-dollars per bit, over 500x cheaper per bit.
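The arithmetic in that quote checks out. A quick back-of-the-envelope in Python, using the same assumptions (cubic 250 µm grains, one bit per grain, $1 per 25 kg bag, $140 per 6 TB drive):

```python
# Sand-as-storage vs. a hard drive, per the figures quoted above.
# Assumptions: quartz density 2.65 g/cm^3, cubic 250-micron grains,
# one bit per grain, $1 per 25 kg bag, $140 per 6 TB drive.

sand_volume_cm3 = 25_000 / 2.65       # 25 kg of quartz -> ~9434 cm^3
grain_volume_cm3 = 0.025 ** 3         # 250 microns = 0.025 cm, cubed
bits_in_bag = sand_volume_cm3 / grain_volume_cm3   # ~6.0e8, i.e. ~600 Mb

sand_dollars_per_bit = 1.00 / bits_in_bag          # ~1.7e-9 $/bit
hdd_dollars_per_bit = 140.00 / (6e12 * 8)          # 6 TB = 48 Tb -> ~2.9e-12

print(f"sand: {bits_in_bag:.2g} bits, {sand_dollars_per_bit:.2g} $/bit")
print(f"HDD : {hdd_dollars_per_bit:.2g} $/bit "
      f"({sand_dollars_per_bit / hdd_dollars_per_bit:.0f}x cheaper)")
```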

#24
"Jacob"
Sep 2006
Brussels, Belgium
3642₈ Posts
Quote:
Jacob

#25
"Composite as Heck"
Oct 2017
2×5²×19 Posts
Quote:

#26
Undefined
"The unspeakable one"
Jun 2006
My evil lair
6,793 Posts
Quote:

#27
Feb 2016
UK
2⁶×7 Posts
Regardless of the mechanism, I've had both HDs and SSDs die on me. As always, best practice is to have backups of any important data so you're ready when (not if) it happens.

I'm not an average person, and I guess most on this forum won't be either. I've had many SSDs and HDs over the years, though probably still not enough to be statistically meaningful. Still, only two SSDs have ever died on me outright: totally unresponsive, undetectable. One was a SanDisk, and they replaced it under warranty without fuss; I'm pretty sure it was well below its endurance rating. The other was an unknown brand I'd never heard of (and never will again) that I got off eBay for cheap.

I did have one SSD suffer data corruption, but at least in that case I know the cause. In that particular system the M.2 slot sat right under the top PCIe slot, which held a GPU that was running continuously at the time. SMART reported temperatures above the rated maximum. Oops. The drive was fine after cooling down, and that system now has a SATA SSD in a different physical location, far from the GPU.

For now I use SSDs for general-purpose and performance storage; I can't imagine running a modern OS from a HD today. HDs remain for bulk storage and backup copies. My personal data set won't fit on any cloud I can afford, so I still have to hope my house doesn't burn down.

On SSDs going silent: I think that was a requirement for enterprise use, e.g. in RAID or similar. The worst thing to have is a device that "maybe" works; it either works or it doesn't, and then redundancy kicks in and it gets replaced. For consumer devices, I think going read-only is a better fail state, especially for those who don't keep their backups up to date, if at all.

#28
"Mark"
Apr 2003
Between here and the
1110010101101₂ Posts
About 22 years ago I was asked to develop software (based upon someone else's design) to read historical data from tape, per requests that an end user could submit through the software. The estimated cost of the software was about $500,000. Based upon the requirements, I thought that disk would be a much cheaper solution for holding the 1 TB of data that was spread across thousands of tapes. Following my suggestion, they quoted the cost of a large enough RAID array and found it to be cost-prohibitive. Nevertheless, I was able to convince the powers that be that by the time the $500,000 system would be in production (at least a year away), disk would be cheaper than the original solution, because there was a significant human cost for loading and unloading tapes into the silo. I even estimated that if the volume of data grew to 10 TB over 7 years (it was financial data), disk would be an even better bargain over time, as there would be little human cost and quick access to the data (as opposed to multiple days if someone wanted 7 years of financial data).

The software project was scrapped, which made me an object of derision for the person who originally designed it. I quickly became his (and his friends') favorite target, and I left a few months later. On the plus side, I had been working with a DBA to develop my estimates of how much disk my solution would require. He happened to be a close friend of someone working for my current employer, and I was hired because he provided a glowing informal reference. The icing on the cake is that the person who designed the original system was fired a few months after I left for sending pornographic material over company e-mail. Some of his friends were sent packing with him.

#29
Aug 2002
20723₈ Posts
We recently switched to the Btrfs filesystem to see if we ever experience "bit rot". We run a scrub operation every week that compares all of our files with a database of checksums. So far we haven't seen any errors.
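(For the curious: Btrfs stores per-block checksums and verifies them itself during a scrub. As a rough illustration of the same idea at file level, here is a minimal Python sketch; the `checksums.json` database is our own invention, not anything Btrfs actually uses:)

```python
# File-level "scrub": record SHA-256 digests once, re-verify on each run.
# Btrfs does this per block with its own checksums; checksums.json is
# just an illustrative stand-in for that database.
import hashlib, json, os, sys

DB = "checksums.json"

def digest(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def scan(root):
    return {
        os.path.join(dirpath, name): digest(os.path.join(dirpath, name))
        for dirpath, _, names in os.walk(root)
        for name in names
    }

if __name__ == "__main__":
    current = scan(sys.argv[1])
    if os.path.exists(DB):
        with open(DB) as f:
            stored = json.load(f)
        for path, old in stored.items():
            # A mismatch on a file that shouldn't have changed = possible rot.
            if path in current and current[path] != old:
                print(f"CHECKSUM MISMATCH: {path}")
    with open(DB, "w") as f:
        json.dump(current, f)
```

Unlike a real scrub, this naive version can't tell bit rot from a legitimate edit, so it only makes sense for files that shouldn't change.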
We don't worry much about wearing out an SSD. We have everything backed up properly, and we can reinstall our entire environment from scratch in just a few hours. (We tested this!) We use a Samsung 970 "Pro", whose MLC NAND is supposed to be tougher than TLC/QLC NAND. We are in the process of building a new "supercomputer", so we are hoping Samsung releases a PCIe 4.0 980 "Pro" soon.

#30
Bamboozled!
"๐บ๐๐ท๐ท๐ญ"
May 2003
Down not across
2×17×347 Posts
Quote:
One of the four disks suffered infant mortality. It was rapidly replaced under warranty. With that sole exception I have not seen any errors and no data was lost, though redundancy was lost for a period of two days or so. I like ZFS.

#31
Aug 2002
7·1,237 Posts
Interesting Btrfs test: https://zejn.net/b/2017/04/30/single...cy-with-btrfs/

#32
If I May
"Chris Halsall"
Sep 2002
Barbados
2×11²×47 Posts
Quote:
I run SMART logging and monitoring on all storage devices I have control of. SSDs tend not to report much meaningful data through that interface. I also always run at least RAID 1 on anything that might be even the slightest bit important; RAID 6 is my preference. Exercise the kit.

With regards to backups... "Normals" don't even understand the concept (right up until the moment they can't access their stuff). To them, everything is "in the cloud". Sigh...
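For anyone wanting to replicate that kind of logging, a minimal sketch using smartctl's JSON output (requires smartmontools 7.0+ for the -j flag; the device path is a placeholder):

```python
# Poll SMART health/attributes via smartctl's JSON output (smartmontools 7+).
import json
import subprocess

def smart_report(device):
    # smartctl uses its exit status as a bitmask, so don't use check=True.
    out = subprocess.run(
        ["smartctl", "-j", "-H", "-A", device],
        capture_output=True, text=True,
    )
    data = json.loads(out.stdout)
    passed = data.get("smart_status", {}).get("passed")
    print(f"{device}: overall health passed = {passed}")
    # ATA/SATA devices expose an attribute table; NVMe devices instead
    # report "nvme_smart_health_information_log" with different fields.
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        print(f"  {attr['id']:>3} {attr['name']:<26} raw={attr['raw']['string']}")

smart_report("/dev/sda")  # placeholder device path; typically needs root
```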