I usually do not rant here; life is too short to waste on things that disappoint us. There are a few exceptions though, and today I will share my decade-long experience with NASes at home. A NAS (Network-Attached Storage) is essentially a computer that holds multiple drives, usually protected with some level of RAID, and shares storage space over the network. Simple, practical, and secure, you may think. Well, it is anything but safe, and here is why.

Let's begin with the obvious, the easy stuff: you lose a drive. Assuming you have RAID in place, you just pick one of your spare drives off the shelf, swap it with the dead one, and a few days later, after an excruciatingly long rebuild, you are all good. Of course, that is when you have a single faulty drive (in RAID5) and another one doesn't fail on you during the rebuild. To be honest, although I had roughly four HDD failures per NAS over 10 years, I never had a double failure on a NAS. As I said, this is the costly but easy part.

What NAS users usually do not consider is the rest of the system, beyond the drives. And this is what can kill you suddenly, without warning, and almost always with the very same outcome: you lose your data! (And if you were a NAS, you would and should be ashamed!) So, what can go wrong? I experienced the following snafus: motherboard failure, network interface failure, and a slew of system software failures, ranging from loss of configuration to the RAID being dropped. Every one of these translated into 100% data loss. And do not expect any help from the manufacturer to save your data. They will, in the best case, guide you through restoring the NAS to factory defaults, or replace parts still covered by your warranty. But your data, what should really count, is goneski! It happens that all my NASes in service were manufactured by Western Digital. But don't be fooled.
Other manufacturers behaved the same way, with an identical result: I lost my precious data, in other words, my digital life.

If you cannot get help from the vendors, and because the NASes will fail, not always in the nice and expected way, what can you do? I opted for redundancy. Sure, it literally doubles the cost of my storage, but it brings extra peace of mind when the shit hits the fan. And shit did and will hit the fan! As we say, the question is not if but when. Therefore, for each NAS (main), I have an identical NAS (mirror), to which I back up the data from main weekly. I apply redundancy to every component when possible: dual power supplies, dual NICs, each NAS on its own UPS, etc.

Now that I am done ranting, I hope I can make this post useful to you. If you use a NAS, think about this issue before it happens to you. I was hit again just a few days ago, when the mirror NAS of a storage pair lost its network; I am still debating with support whether it is a software or hardware issue. It sucks, but I have peace of mind.

At this point, you may ask: what about backups? First, I am talking about 80+ TB of data, so trivial backup solutions don't work. I even considered Amazon's Glacier cloud backup, but it would cost me the equivalent of a NAS each time I moved my data (and I would have to wait for eons). No more luck with optical media, which degrades over time (not even talking about the cost and time)…

May your data be safe, and have a great weekend!
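The weekly main-to-mirror copy described above is commonly done with rsync; here is a minimal sketch. The paths and the mirror hostname are made-up placeholders (the post does not say which tool or paths are used), and `--dry-run` is left in so nothing is touched until you remove it:

```shell
#!/bin/sh
# Sketch of a weekly main -> mirror NAS sync. Paths and host are examples only.
SRC="/volume1/data/"                       # share on the main NAS (trailing slash matters)
DST="backup@mirror-nas:/volume1/data/"     # same share on the hypothetical mirror NAS
# -a        preserve permissions, timestamps, symlinks
# --delete  propagate deletions so the mirror stays an exact copy of main
# --dry-run preview only; drop it once the output looks right
rsync -a --delete --dry-run "$SRC" "$DST"
```

With `--delete`, the mirror is a true clone rather than an ever-growing archive, which matches the "identical NAS pair" approach; the trade-off is that an accidental deletion on main propagates to the mirror at the next weekly run.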
Last week, two hard drives failed two days apart on a RAID5 NAS. Fortunately it was only a backup NAS…
How do you generate 80 TB of data in 10 years? All my data fits in a few GB.
bart claes good question. It can add up very quickly, in fact. Here are a few examples. I handle the data of the entire family, including all the important administrative stuff; this includes the backups of all our computers. I am old school, so I like to keep everything close to me. We have photographers in the family, and we are all music and movie buffs. As mentioned earlier, I keep my media on the LAN, not in the cloud (probably too old to accept that I am renting media when I paid the "buy" price for it; and why would I reach into my pocket each time I want to rewatch something? I am referring to the cost in time, data, bandwidth, power, server capacity, you name it). Finally, I used to produce a lot of data in the past, less today but still some. There are a few other things too, but I am sure you see how it can add up over a decade and more 🙂
Eric F. Did the second one fail during the rebuild? That is really my nightmare, all the more so since rebuilding a RAID usually takes me a week.
Not even. The first one was flagged as failed, and by the time I ordered a replacement, the second one had died, so I never got to start a rebuild. The RAID was then broken, so no more access to the data. But as I said, fortunately it was only for backups; had it been critical data, it would have been terrible. Two drives failing three days apart seems impossible, and yet… Then again, if it is a defect in a production batch, it makes sense that they fail at the same time.
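The worry in this thread, a second drive dying during the rebuild window, can be put in rough numbers. The figures below are assumptions for illustration (a 4% annualized failure rate per drive, a 3-day rebuild, 3 surviving drives in a 4-drive RAID5), and the model ignores correlated failures such as the bad-batch scenario mentioned above, which is precisely what makes real-world odds worse:

```shell
#!/bin/sh
# Back-of-envelope: chance that at least one of the surviving drives fails
# during the rebuild window, assuming independent failures (an optimistic model).
awk -v afr=0.04 -v rebuild_days=3 -v surviving_drives=3 'BEGIN {
  daily  = afr / 365                      # per-drive daily failure probability
  trials = rebuild_days * surviving_drives
  p = 1 - (1 - daily) ^ trials            # at least one failure in the window
  printf "P(second failure during rebuild) ~ %.4f%%\n", p * 100
}'
# prints a probability around 0.1% under these assumptions
```

Small per-rebuild, but over many rebuilds, many drives, and correlated batch defects, a double failure stops looking impossible.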
NAS redundancy therefore sounds like a good idea to me (with drives from two different vendors?).
Eric F. Bad luck. Ideally, drives from different vendors (brands) would be wise, and the same goes for the NAS itself. The icing on the cake is to place them at different locations (addresses). For the moment, the NASes and drives of each pair are identical.