
RAID-5 interview questions

Top RAID-5 frequently asked interview questions

How do I reactivate my MDADM RAID5 array?

I've just moved house, which involved dismantling my server and reconnecting it. Since doing so, one of my mdadm RAID5 arrays is appearing as inactive:

root@mserver:/tmp# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md1 : active raid5 sdc1[1] sdh1[2] sdg1[0]
      3907023872 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md0 : inactive sdd1[0](S) sdf1[3](S) sde1[2](S) sdb1[1](S)
      3907039744 blocks

unused devices: <none>

It looks to me as though it's found all of the disks but for some reason doesn't want to use them.

So what do the (S) labels mean, and how can I tell mdadm to start using the array again?

[Edit] I just tried stopping and assembling the array with -v:

root@mserver:~# mdadm --stop /dev/md0
mdadm: stopped /dev/md0

root@mserver:~# mdadm --assemble --scan -v
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdf1 is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
mdadm: added /dev/sdd1 to /dev/md0 as 0 (possibly out of date)
mdadm: added /dev/sdb1 to /dev/md0 as 1 (possibly out of date)
mdadm: added /dev/sdf1 to /dev/md0 as 3 (possibly out of date)
mdadm: added /dev/sde1 to /dev/md0 as 2
mdadm: /dev/md0 assembled from 1 drive - not enough to start the array.

...and cat /proc/mdstat looks no different.

[Edit2] Not sure if it helps, but this is the result of examining each disk:

root@mserver:~# mdadm --examine /dev/sdb1

/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2f331560:fc85feff:5457a8c1:6e047c67 (local to host mserver)
  Creation Time : Sun Feb  1 20:53:39 2009
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sat Apr 20 13:22:27 2013
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 6c8f71a3 - correct
         Events : 955190

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       17        1      active sync   /dev/sdb1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       97        2      active sync   /dev/sdg1
   3     3       8       33        3      active sync   /dev/sdc1

root@mserver:~# mdadm --examine /dev/sdd1

/dev/sdd1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2f331560:fc85feff:5457a8c1:6e047c67 (local to host mserver)
  Creation Time : Sun Feb  1 20:53:39 2009
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 2
Preferred Minor : 0

    Update Time : Sat Apr 20 18:37:23 2013
          State : active
 Active Devices : 2
Working Devices : 2
 Failed Devices : 2
  Spare Devices : 0
       Checksum : 6c812869 - correct
         Events : 955205

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8      113        0      active sync   /dev/sdh1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       0        0        1      faulty removed
   2     2       8       97        2      active sync   /dev/sdg1
   3     3       0        0        3      faulty removed

root@mserver:~# mdadm --examine /dev/sde1

/dev/sde1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2f331560:fc85feff:5457a8c1:6e047c67 (local to host mserver)
  Creation Time : Sun Feb  1 20:53:39 2009
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 2
Preferred Minor : 0

    Update Time : Sun Apr 21 14:00:43 2013
          State : clean
 Active Devices : 1
Working Devices : 1
 Failed Devices : 2
  Spare Devices : 0
       Checksum : 6c90cc70 - correct
         Events : 955219

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       97        2      active sync   /dev/sdg1

   0     0       0        0        0      removed
   1     1       0        0        1      faulty removed
   2     2       8       97        2      active sync   /dev/sdg1
   3     3       0        0        3      faulty removed

root@mserver:~# mdadm --examine /dev/sdf1

/dev/sdf1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 2f331560:fc85feff:5457a8c1:6e047c67 (local to host mserver)
  Creation Time : Sun Feb  1 20:53:39 2009
     Raid Level : raid5
  Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
     Array Size : 2930279808 (2794.53 GiB 3000.61 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0

    Update Time : Sat Apr 20 13:22:27 2013
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
       Checksum : 6c8f71b7 - correct
         Events : 955190

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     3       8       33        3      active sync   /dev/sdc1

   0     0       8      113        0      active sync   /dev/sdh1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       8       97        2      active sync   /dev/sdg1
   3     3       8       33        3      active sync   /dev/sdc1

I have some notes which suggest the drives were originally assembled as follows:

md0 : active raid5 sdb1[1] sdc1[3] sdh1[0] sdg1[2]
      2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

[Edit3]

Looking through the logs, it looks like the following happened (based on the Update Time in the --examine results):

  1. sdb and sdf were knocked out some time after 13:22 on the 20th
  2. sdd was knocked out some time after 18:37 on the 20th
  3. the server was shut down some time after 14:00 on the 21st

Given that two disks went down (apparently) simultaneously, I think it should be reasonably safe to assume the array wasn't written to after that point(?), and so it should be relatively safe to force it back into service in the correct order. What's the safest command to do that, and is there a way to do it without writing any changes?
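
For what it's worth, the commonly suggested approach in this situation is a forced assembly; a minimal sketch, assuming the device names from the output above and that the event counters are close enough for mdadm to accept the stale members (verify with --examine first):

root@mserver:~# mdadm --stop /dev/md0
root@mserver:~# mdadm --assemble --force /dev/md0 /dev/sdd1 /dev/sdb1 /dev/sde1 /dev/sdf1
root@mserver:~# mount -o ro /dev/md0 /mnt    # inspect the data read-only before trusting the array

Note that --force is not completely write-free: it updates the event counters in the superblocks of the out-of-date members so the array can start.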


Source: (StackOverflow)

How to calculate the final RAID size of a raid 5 array?

What is the formula for working out the final size of a RAID 5 array, knowing the number of disks and the size of each disk?
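
The usual rule of thumb, assuming all members are the same size (which is how RAID 5 arrays are normally built):

    usable capacity = (number of disks - 1) x capacity of the smallest disk

For example, 4 x 2 TB in RAID 5 gives (4 - 1) x 2 TB = 6 TB usable; one disk's worth of capacity is consumed by the distributed parity.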


Source: (StackOverflow)

Advertisements

4 x 2TB drives, safe to use RAID-5? What is failure rate?

I've read a number of opinions on the risks of running a RAID-5 using 4 x 2TB drives. Apparently the failure rate of the 2TB drives is so high that there is less redundancy than is generally expected.

2TB drives have been on the market for a long time now. Is this opinion well founded?

If relevant, the system is Linux using md/LVM. Alternative suggestions appreciated.
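
For context, a rough back-of-the-envelope illustration of where this opinion comes from, assuming the commonly quoted consumer spec of one unrecoverable read error (URE) per 10^14 bits read (many real drives do considerably better than their spec):

    rebuilding after one failure reads the 3 surviving drives: 3 x 2 TB = 6 TB ≈ 4.8 x 10^13 bits
    expected UREs during the rebuild ≈ 4.8 x 10^13 / 10^14 ≈ 0.48
    P(at least one URE)             ≈ 1 - e^(-0.48) ≈ 38%

A URE hit during a RAID-5 rebuild means the missing data cannot be fully reconstructed, which is the reasoning behind the warnings about large-capacity RAID-5.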


Source: (StackOverflow)

Ubuntu Server 14.04 - RAID5 created with mdadm disappears after reboot

This is my first question on superuser, so if I forgot to mention something, please ask.

I'm trying to set up a home server which will be used as a file and media server. I installed Ubuntu Server 14.04 and now I'm trying to set up a RAID 5 consisting of a total of 5 disks, using mdadm. After the RAID has been created, I am able to use it and I can also access it from other PCs. After rebooting the server, however, the RAID does not show up anymore, and I have also not been able to assemble it.

I have done the following steps:

Create the RAID

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=5 /dev/sda /dev/sdc /dev/sdd /dev/sde /dev/sdf

After the RAID has finished building (watch cat /proc/mdstat), I store the RAID configuration:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Then I removed some parts of the entry in mdadm.conf. The resulting file looks as follows:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
#DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Fri, 14 Mar 2014 23:38:10 +0100
# by mkconf $Id$
ARRAY /dev/md0 UUID=b73a8b66:0681239e:2c1dd406:4907f892

A check of whether the RAID is working (mdadm --detail /dev/md0) returns the following:

/dev/md0:
Version : 1.2
Creation Time : Sat Apr 19 15:49:03 2014
Raid Level : raid5
Array Size : 7813531648 (7451.56 GiB 8001.06 GB)
Used Dev Size : 1953382912 (1862.89 GiB 2000.26 GB)
Raid Devices : 5
Total Devices : 5
Persistence : Superblock is persistent

Update Time : Sat Apr 19 22:13:37 2014
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0

Layout : left-symmetric
Chunk Size : 512K

Name : roembelHomeserver:0  (local to host roembelHomeserver)
UUID : c29ca6ea:951be1e7:ee0911e9:32b215c8
Events : 67

Number   Major   Minor   RaidDevice State
0       8        0        0      active sync   /dev/sda
1       8       32        1      active sync   /dev/sdc
2       8       48        2      active sync   /dev/sdd
3       8       64        3      active sync   /dev/sde
5       8       80        4      active sync   /dev/sdf

As far as I can tell, this all looks good. As the next step, I created the file system:

mke2fs -t ext4 /dev/md0

This results in the following output:

mke2fs 1.42.8 (20-Jun-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=512 blocks
244174848 inodes, 1953382912 blocks
97669145 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
59613 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848, 512000000, 550731776, 644972544, 1934917632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Then I edited fstab by adding the following entry at the end of the file:

/dev/md0    /data    ext4    defaults,nobootwait,no fail     0    2

After mounting the RAID (mount -a), I could then use it, create files, and access it from other PCs...

Now comes the problem:
After rebooting the server (reboot now), the RAID does not exist anymore, i.e.:

  • no /dev/md0
  • empty /proc/mdstat (besides the Personalities)
  • df -h does not show the RAID
  • mdadm --assemble --scan does not do anything

Does anyone have any suggestions? Did I do something wrong?
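
One thing worth checking, as a guess based on the files shown above rather than a confirmed diagnosis: the DEVICE lines in mdadm.conf are commented out, and the ARRAY line's UUID (b73a8b66:...) does not match the UUID that mdadm --detail reports for the running array (c29ca6ea:...), so the boot-time scan may simply never find the array. A sketch of how one might fix that, assuming the array can still be assembled manually from its members:

mdadm --assemble /dev/md0 /dev/sda /dev/sdc /dev/sdd /dev/sde /dev/sdf
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # then delete the stale ARRAY line with the old UUID
update-initramfs -u                               # so the configuration is available at boot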


Source: (StackOverflow)

Can I set up a RAID 5 with a bunch of drives of different sizes?

I currently have three 1TB drives, a couple of 500GB ones, and some 750GB ones. Can I put them all in a RAID 5 configuration, or do they need to be the same size?
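
As a rough sketch of what typically happens with mixed sizes (assuming Linux md or a similarly behaved implementation): every member contributes only as much space as the smallest member, so

    usable ≈ (number of drives - 1) x size of the smallest drive

With, say, 3 x 1TB + 2 x 750GB + 2 x 500GB in a single seven-drive RAID 5, that would be roughly (7 - 1) x 500GB = 3TB, with the extra space on the larger drives left unused unless you partition them and build separate arrays.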


Source: (StackOverflow)

How does parity work on a RAID-5 array?

I'm looking to build a nice little RAID array for dedicated backups. I'd like to have about 2-4TB of space available, as I have this nasty little habit of digitizing everything. Thus, I need a lot of storage and a lot of redundancy in case of drive failure. I'll also essentially be backing up 2-3 computers' /home folders using one of the "Time Machine" clones for Linux. This array will be accessible over my local network via SSH.

I'm having difficulty understanding how RAID-5 achieves parity and how many drives are actually required. One would assume that it needs 5 drives, but I could be wrong. Most of the diagrams I've seen have only confused me further. It seems that this is how RAID-5 works; please correct me, as I'm sure I'm not grasping it properly:

/---STORAGE---\    /---PARITY----\
|   DRIVE_1   |    |   DRIVE_4   |
|   DRIVE_2   |----|     ...     |
|   DRIVE_3   |    |             |
\-------------/    \-------------/

It seems that drives 1-3 appear and work as a single, massive drive (capacity * number_of_drives) and the parity drive(s) back up those drives. What seems strange to me is that I usually see 3+ storage drives in a diagram against only 1 or 2 parity drives. Say we're running four 1TB drives in a RAID-5 array, three for storage and one for parity: we have 3TB of actual storage, but only 1TB of parity!?

I know I'm missing something here; can someone help me out? Also, for my use case, what would be better, RAID-5 or RAID-6? Fault tolerance is the highest priority for me at this point; since it's going to be running over a network for home use only, speed isn't hugely critical.
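
For reference, a small worked example of the piece that usually clears this up: parity is not a copy of the data, it is an XOR computed per stripe, and per stripe it only ever needs to reconstruct one missing chunk:

    stripe chunks:  A = 1011   B = 0110   C = 1100
    parity:         P = A XOR B XOR C = 0001

    if the drive holding B dies:
                    B = A XOR C XOR P = 1011 XOR 1100 XOR 0001 = 0110   (recovered)

So one disk's worth of parity protects against any single-drive failure no matter how many data drives there are (and RAID-5 needs a minimum of 3 drives, not 5); what it cannot do is survive two simultaneous failures, which is exactly what RAID-6's second, independently computed parity block adds.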


Source: (StackOverflow)

Data Recovery from RAID5 disk - Promise Pegasus with Apple OSLion and Thunderbolt connection

I have a Promise Pegasus R6, a 12 TB RAID 5 unit used for major storage purposes. One of the six 2TB hard disks died and the company provided me with a replacement. When the disk was inserted, it installed itself as a separate partition, and their support advised me to delete it and reinstall it. However, during this process the 10 TB partition was deleted. It was silly of me to try this out.

Now Promise says that I need to contact a professional data recovery service. I live in France near Geneva, and a search nearby identified no available services.

Could anyone suggest a method (software or service) for a similar data recovery problem, please?


Source: (StackOverflow)

Should I use "Raid 5 + spare" or "Raid 6"?

What is "Raid 5 + Spare" (excerpt from User Manual, Sect 4.17.2, P.54):

RAID5+Spare: RAID 5+Spare is a RAID 5 array in which one disk is used as spare to rebuild the system as soon as a disk fails (Fig. 79). At least four disks are required. If one physical disk fails, the data remains available because it is read from the parity blocks. Data from a failed disk is rebuilt onto the hot spare disk. When a failed disk is replaced, the replacement becomes the new hot spare. No data is lost in the case of a single disk failure, but if a second disk fails before the system can rebuild data to the hot spare, all data in the array will be lost.


What is "Raid 6" (excerpt from User Manual, Sect 4.17.2, P.54):

RAID6: In RAID 6, data is striped across all disks (minimum of four) and two parity blocks for each data block (p and q in Fig. 80) are written on the same stripe. If one physical disk fails, the data from the failed disk can be rebuilt onto a replacement disk. This RAID mode can support up to two disk failures with no data loss. RAID 6 provides for faster rebuilding of data from a failed disk.


Both "Raid 5 + spare" and "Raid 6" are SO similar ... I can't tell the difference.

When would "Raid 5 + Spare" be optimal?

And when would "Raid 6" be optimal?

The manual dumbs down the different RAID modes with 5-star ratings. "Raid 5 + Spare" only gets 4 stars, but "Raid 6" gets 5 stars. If I were to blindly trust the manual, I would conclude that "Raid 6" is always better. Is "Raid 6" always better?
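
As a rough illustration with four 2 TB disks (the minimum both modes require):

    RAID 5 + spare:  3 active disks -> (3 - 1) x 2 TB = 4 TB usable, 1 disk sitting idle as hot spare
    RAID 6:          4 active disks -> (4 - 2) x 2 TB = 4 TB usable, no idle disk

Capacity comes out the same; the practical difference is that RAID 6 tolerates any two disks failing at the same time, whereas RAID 5 + spare only tolerates a second failure once the rebuild onto the hot spare has completed, which is exactly the window the manual's description warns about.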


Source: (StackOverflow)

Find files affected by bad blocks with md-raid5 and LVM

I've been doing a lot of research on this topic over the last few weeks - and I think I'm close to completing my recovery, as much as is possible at least. To make a long story short, I'll just describe the problem without filling in every tiny technical detail.

Assume you have multiple RAID-5 arrays, each with 8 disks, and have then spanned those together into a single LVM logical volume. One of the disks then dies in one of the arrays, and during rebuild you encounter an unrecoverable read error on a second disk in that array. And of course, there are no backups.

I've already ddrescue'd the data from the drive with the URE onto a new drive; only 5K of data is damaged, all grouped into a very small area of the disk. I am also assuming that once I reassemble that MD device using the ddrescue'd copy, I will multiply the size of my data loss by the number of non-parity drives in my array (so 35K of data loss), as the parity calculations for the stripes using those blocks will be incorrect.

I've read and understand the procedures at http://smartmontools.sourceforge.net/badblockhowto.html for determining which files would be corrupted by a situation like this, but my problem is figuring out exactly which blocks will be corrupt after the md rebuild, to use as input to debugfs. Figuring out all of the offsets where md and LVM store metadata isn't going to be fun either, but I think I can handle that part.

Can I just multiply all of my bad-block numbers by 7 and then assume that the following 6 blocks after each of those will also be bad, and then follow the LVM instructions in the guide linked above?

And to be clear - I'm not concerned with repairing or re-mapping the bad blocks as the guide describes, I've replaced the disk and will be letting md handle that kind of thing. I just want to know what files on the ext4 filesystem have been affected.


Source: (StackOverflow)

How does RAID5 work? [duplicate]

This question already has an answer here:

In a RAID 5 setup, you get 4 TB of usable space out of 3 x 2 TB disks. How is that possible?

Simple-minded as I am, I would think that you need 4TB to store your things and use the remaining 2TB for recovery. But how can I recover 4TB out of only 2TB? If 1kB is gone, I only have 0.5kB to recover from. And if that is sufficient, why not use 0.5kB for storage right from the start?

I know this is a naive question. But what is the answer?
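
A short sketch of the usual resolution of this puzzle: the 2 TB of parity is not a compressed copy of the 4 TB of data; for each stripe it is the XOR of the other two chunks, which is exactly enough to rebuild whichever single chunk goes missing:

    usable   = (3 - 1) x 2 TB = 4 TB
    parity   = 2 TB, holding per stripe: P = D1 XOR D2
    rebuild  = the missing chunk is recomputed as the XOR of the two surviving chunks

It never has to reconstruct more than one drive's worth of data at a time, which is why one drive's worth of parity suffices (and why a second simultaneous failure is fatal).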


Source: (StackOverflow)

Practical RAID Performance?

I've always thought the following to be a general rule of thumb for RAID:

  • RAID 0: Best performance for READ and WRITE from striping, greatest risk
  • RAID 1: Redundant, decent for READ (I believe it can read from different parts of a file from different hard drives), not the best for WRITE
  • RAID 0+1 (01): combines redundancy of RAID 1 with performance of RAID 0
  • RAID 1+0 (10): slightly better version of RAID 0+1
  • RAID 5: good READ performance, bad WRITE performance, redundant

IS THIS ASSUMPTION CORRECT? (And how do they compare to a JBOD setup for R/W IO performance?)

Are certain practical RAID setups better for different applications: gaming, video editing, databases (Access or SQL)?

I was thinking about hard disk drives, but does this apply to solid state drives as well?
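
On the RAID 5 write point specifically, the usual reasoning is the small-write penalty: updating a single chunk in place costs four disk operations instead of one (assuming the write doesn't cover a full stripe and no write-back cache is hiding it):

    read old data, read old parity
    new parity = old parity XOR old data XOR new data
    write new data, write new parity

So random small writes on RAID 5 cost roughly 4x the raw I/O, while reads spread nicely across all members; that is the usual basis for "good READ, bad WRITE".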


Source: (StackOverflow)

Hardware RAID - what happens if the motherboard or controller fails?

I want to set up a RAID 5 array in a machine that has an Intel motherboard with built-in (SATA) RAID. My concern is: what happens if the motherboard fails?

How can I take those hard drives, move them to a different machine with a different motherboard, and restore the RAID array (the OS is not on the RAID array, only data)?

I am assuming there might be a function in the BIOS to back up the RAID config, which means I could keep a spare motherboard of the same type, or possibly other Intel desktop boards would be compatible and able to import the RAID config?

If this is going to be an issue, is software RAID something I should consider instead?

Note: I also found the grayed-out RAID-5 option in Windows 7 Disk Management. Searching Windows Help, I found a page titled "Move Disks to Another Computer" which mentions RAID-5 and says that, of course, you would need to move all the disks in the set to the new computer, then use the Import Foreign Disks option. This seems like exactly what I want, but from what I can find, the Windows 7 RAID-5 option doesn't seem to actually be available? Then why is it a menu choice?
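
For what it's worth, a quick way to sanity-check portability on Linux, assuming the board's onboard RAID is Intel RST/Matrix (firmware RAID that keeps its metadata on the disks themselves, so another board with the same RAID option ROM, or Linux's mdadm, can usually re-assemble the set):

mdadm --examine /dev/sdb       # should report Intel IMSM metadata if the array was created by the option ROM
mdadm --assemble --scan        # mdadm can assemble IMSM (Intel firmware RAID) containers directly

This is only an illustration of where the metadata lives, not a guarantee that a particular replacement board will import the set.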


Source: (StackOverflow)

How to force mdadm to stop RAID5 array?

I have a /dev/md127 RAID5 array that consisted of four drives. I managed to hot-remove them from the array, and currently /dev/md127 does not have any drives:

cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdd1[0] sda1[1]
      304052032 blocks super 1.2 [2/2] [UU]

md1 : active raid0 sda5[1] sdd5[0]
      16770048 blocks super 1.2 512k chunks

md127 : active raid5 super 1.2 level 5, 512k chunk, algorithm 2 [4/0] [____]

unused devices: <none>

and

mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Thu Sep  6 10:39:57 2012
     Raid Level : raid5
     Array Size : 8790402048 (8383.18 GiB 9001.37 GB)
  Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB)
   Raid Devices : 4
  Total Devices : 0
    Persistence : Superblock is persistent

    Update Time : Fri Sep  7 17:19:47 2012
          State : clean, FAILED
 Active Devices : 0
Working Devices : 0
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       0        0        1      removed
       2       0        0        2      removed
       3       0        0        3      removed

I’ve tried to do mdadm --stop /dev/md127 but:

mdadm --stop /dev/md127
mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?

I made sure that it's unmounted (umount -l /dev/md127) and confirmed that it is indeed unmounted:

umount /dev/md127
umount: /dev/md127: not mounted

I've tried to zero the superblock of each drive, and I get (for each drive):

mdadm --zero-superblock /dev/sde1
mdadm: Unrecognised md component device - /dev/sde1

Here's the output of lsof | grep md127:

lsof|grep md127
md127_rai  276       root  cwd       DIR                9,0          4096          2 /
md127_rai  276       root  rtd       DIR                9,0          4096          2 /
md127_rai  276       root  txt   unknown                                             /proc/276/exe

What else can I do? LVM is not even installed so it can't be a factor.


After much poking around, I finally found what was preventing me from stopping the array: it was the Samba process. After service smbd stop I was able to stop the array. It's strange, though, because although the array was mounted and shared via Samba at one point in time, when I tried to stop it, it was already unmounted.
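
For anyone hitting the same "Cannot get exclusive access" error, a sketch of how one might hunt for whatever is still holding the device open before stopping it (lsof only showed md's own kernel thread above, so other views are sometimes more revealing):

fuser -vm /dev/md127            # processes using the device or a filesystem mounted from it
grep md127 /proc/mounts         # any mount the kernel still has recorded
ls /sys/block/md127/holders/    # any device (LVM, dm-crypt, another md) stacked on top

Once nothing holds it open, mdadm --stop /dev/md127 should succeed.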


Source: (StackOverflow)

RAID5 vs RAID4 purpose of floating parity

I still don't get why RAID5 is better than RAID4. I understand that both compute parity bits that are used for recovery if a failure occurs; the only difference is in where those parity bits are stored. I have borrowed the diagrams from the question How does parity work on a RAID-5 array.

A B (A XOR B)
0 0    0
1 1    0
0 1    1
1 0    1

RAID4

Disk1   Disk2   Disk3   Disk4
----------------------------
data1  data1  data1  parity1
data2  data2  data2  parity2
data3  data3  data3  parity3
data4  data4  data4  parity4

Let's say that the first row is:

data1 = 1
data1 = 0
data1 = 1
parity1 = 0 (COMPUTED: 1 XOR 0 XOR 1 = 0)

RAID5

Disk1   Disk2   Disk3   Disk4
----------------------------
parity1 data1   data1   data1   
data2   parity2 data2   data2  
data3   data3   parity3 data3
data4   data4   data4   parity4

Let's say that the first row is:

parity1 = 0 (COMPUTED: 1 XOR 0 XOR 1 = 0)
data1 = 1
data1 = 0
data1 = 1

Scenarios:

1. RAID4 - Disk3 FAILURE:

data1 = 1
data1 = 0
data1 = 1 (COMPUTED: 1 XOR 0 XOR 0 = 1)
parity1 = 0

2. RAID4 - Disk4 (parity) FAILURE:

data1 = 1
data1 = 0
data1 = 1 
parity1 = 0 (COMPUTED: 1 XOR 0 XOR 1 = 0)

etc.

In general: when RAID (4 or 5) uses N disks and one fails, I can take all of the remaining non-failed disks (N-1), XOR their values (since XOR is an associative operation), and I will get the failed value. What is the benefit of storing parity not on a dedicated disk but cycling it across all of the disks? Is there some performance benefit, or what? Thank you.
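
A sketch of the usual answer, in the same style as the diagrams above: redundancy and capacity are identical, but every small write must also update the parity chunk of its stripe, so with RAID 4 the dedicated parity disk takes part in every write in the array, while RAID 5 spreads that load around:

    two independent small writes, one touching stripe 1 and one touching stripe 2

    RAID 4: update data1 on Disk1 -> must also update parity1 on Disk4
            update data2 on Disk2 -> must also update parity2 on Disk4   (Disk4 serializes every write)

    RAID 5: update data1 on Disk2 -> parity1 lives on Disk1
            update data2 on Disk3 -> parity2 lives on Disk2              (no single hot disk)

So rotating the parity is purely a performance measure: it removes the write bottleneck of the dedicated parity disk and evens out wear; it does not change what can be recovered.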


Source: (StackOverflow)

Several questions about software and onboard RAID 5

I have a few questions on how software and onboard RAID 5 compare:

  1. Is there any way to add a disk to an existing RAID 5 array using either software or onboard RAID?

  2. All motherboards I'm interested in come with both SATA2 and SATA3 ports. Using either software or onboard RAID, is it possible to combine disks connected to different ports in the same array?

  3. If I have two operating systems installed (on a disk that does not belong to the array), can I still use software RAID?

Notes:

  • I have read Onboard RAID vs Software RAID. It doesn't cover any of my questions.

  • I know that hardware RAID is better than both options. Sadly, I can't find a single RAID controller card. I've searched all over the country...

  • I know that RAID is not backup. Protecting my data from a single disk's failure is all I want.
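
On question 1, at least for Linux software RAID, adding a disk to an existing array is supported; a minimal sketch with mdadm (the device names are placeholders, and a backup before any reshape is strongly advised):

mdadm --add /dev/md0 /dev/sdX1                 # add the new disk, initially as a spare
mdadm --grow /dev/md0 --raid-devices=4         # reshape the RAID 5 from 3 to 4 members
# after the reshape finishes, grow the filesystem, e.g. resize2fs /dev/md0 for ext4

Whether onboard (firmware) RAID can do the same depends entirely on the vendor's option ROM and utility, so it has to be checked per board.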


Source: (StackOverflow)