
mdadm interview questions

Top mdadm frequently asked interview questions

Should I force-assemble a RAID 5 with 2 of 3 disks using mdadm?

My RAID 5 array (md0) stopped working, and after some poking around I found the following:

/dev/sdc:
    Update Time : Fri Dec 20 05:03:06 2013
         Events : 88
/dev/sdd:
    Update Time : Sun Jun  5 02:00:03 2016
         Events : 3299448
/dev/sde:
    Update Time : Sun Jun  5 19:25:45 2016
         Events : 3299455

Is it safe for my data to force-assemble the array? Should I replace sdc first?
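
For reference, the approach usually suggested with superblocks like these is to assemble from the two members whose event counts nearly match and leave the long-stale sdc out; a sketch only, using the device names above:

# stop any half-assembled array, then force-assemble from the two
# nearly-current members (sdc's event count is years behind)
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdd /dev/sde
# once the data is backed up, the stale disk can be re-added to resync:
# mdadm --manage /dev/md0 --add /dev/sdc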


Source: (StackOverflow)

Kill watch from bash

I need to be able to call this:

watch -n1 cat /proc/mdstat

from bash.

The idea is to watch the array being built (after mdadm --create etc.), and then kill watch once the build process has finished.

#!/bin/bash
#PID=$!
while
progress=$(cat /proc/mdstat |grep -oE 'recovery = ? [0-9]*')
do
    watch -n1 cat /proc/mdstat
    PID=$(pidof watch)
    echo "$PID" >> /mnt/pid
    if (("$progress" >= "100"))
        then
            break
            kill -9 $PID
    fi
done
echo "done" 

But I cannot figure out how to kill watch from bash. I tried PID=$! and PID=$$, and pidof watch both inside the loop and outside it, but I can't assign the correct PID to my variable so that kill -9 $PID works.
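
For what it's worth, a minimal sketch of one way to sidestep the PID problem entirely: skip watch and poll /proc/mdstat from the loop itself, exiting once no resync or recovery is in progress (mdadm --wait /dev/md0 does the same job in a single call):

#!/bin/bash
# sketch: print mdstat once a second until the rebuild finishes,
# so there is no separate watch process to hunt down and kill
while grep -qE 'resync|recovery' /proc/mdstat; do
    cat /proc/mdstat
    sleep 1
done
echo "done"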


Source: (StackOverflow)


ESXi raw LUN mapped to an ESXi Ubuntu guest, creating md0 and exporting over NFS. Bad idea?

So I know how to expose a local disk to an ESXi guest via:

vmkfstools -z /vmfs/devices/disks/t10.ATA___** /vmfs/volumes/datastore1/LocalDisks/

This works great! My thought then would be to create an md0 inside an Ubuntu Server guest and export it via NFS and SMB. NFS would be for the other internal ESXi Linux guests, and SMB for Windows only.

Does this sound like a bad idea? Any special export parameters I should use for NFS?

Currently, to make NFS exports to the other local ESXi guests work, I use

(rw,async,insecure,no_subtree_check,nohide,no_root_squash)

And to mount I use

nosharecache,context="system_u:object_r:httpd_sys_rw_content_t:s0" 0 0
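
For reference, pieced together as full /etc/exports and /etc/fstab entries those options would look roughly as follows; the export path, subnet, server address, and mount point are all assumptions:

# /etc/exports on the Ubuntu guest holding md0 (path and subnet assumed)
/srv/md0  192.168.1.0/24(rw,async,insecure,no_subtree_check,nohide,no_root_squash)

# /etc/fstab on a client guest (server address and mount point assumed)
192.168.1.10:/srv/md0  /mnt/media  nfs  nosharecache,context="system_u:object_r:httpd_sys_rw_content_t:s0"  0 0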

I should note that the ESXi host/datastore is on a separate HD, not part of md0, and that the data will be mostly static. No large DBs or anything, mostly media. The heaviest I/O would be ZoneMinder (a motion detection suite that saves images and compares them constantly).


Source: (StackOverflow)

Can't mount my hard drive [closed]

When I try to mount one of my hard drives, the folder for the hard drive can't be located.

This is the output of the df command:

Filesystem                       1K-blocks      Used Available Use% Mounted on
rootfs                            33000428    119124  32881304   1% /
none                              33000428    119124  32881304   1% /
198.27.85.63:/home/pub/rescue.v7 886788312 250295096 591423904  30% /nfs
198.27.85.63:/home/pub/pro-power 886788312 250295096 591423904  30% /power
198.27.85.63:/home/pub/commonnfs 886788312 250295096 591423904  30% /common
tmpfs                                10240       204     10036   2% /dev
tmpfs                              6600088        72   6600016   1% /run
tmpfs                                 5120         0      5120   0% /run/lock
tmpfs                             13200160         0  13200160   0% /run/shm

This is what I get when I run fdisk -l:

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1  3907029167  1953514583+  ee  GPT

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1  3907029167  1953514583+  ee  GPT

Disk /dev/md3: 1978.9 GB, 1978886193152 bytes
2 heads, 4 sectors/track, 483126512 cylinders, total 3865012096 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md3 doesn't contain a valid partition table

Disk /dev/md2: 21.0 GB, 20970405888 bytes
2 heads, 4 sectors/track, 5119728 cylinders, total 40957824 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md2 doesn't contain a valid partition table

When I try to mount /dev/sda1 with the command mount -o barrier=0 /dev/sda1, it gives me this message:

mount: can't find /dev/sda1 in /etc/fstab or /etc/mtab

How can I fix this so that I can back up all of my stuff?

This is what I get when I try to mount /dev/sdb3:

mount: unknown filesystem type 'linux_raid_member'

Then I tried the command mdadm --assemble --run /mnt /dev/sdb3, but it just gives me this:

mdadm: /dev/sdb3 is busy - skipping

This is my output from cat /proc/mdstat:

md2 : active raid1 sda2[0] sdb2[1]
      20478912 blocks [2/2] [UU]

md3 : active raid1 sda3[0] sdb3[1]
      1932506048 blocks [2/2] [UU]

unused devices: <none>

I can mount md3, but no files appear in the /mnt folder.
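
Since sdb3 is a linux_raid_member, mounting the member partition directly will never work; and if mounting the assembled /dev/md3 gives an empty directory, it is worth checking what md3 actually contains before going further. A sketch, with the mount point as an assumption:

# confirm what sits on top of the array: a plain filesystem, LVM, etc.
file -s /dev/md3
blkid /dev/md3
# if it is a filesystem, mount it read-only on a dedicated mount point
mkdir -p /mnt/md3
mount -o ro /dev/md3 /mnt/md3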


Source: (StackOverflow)

The MD member names are changed on EC2

I created a RAID 1 device with mdadm on an EC2 instance. The mdadm version is v3.3.2.

/sbin/mdadm --create /dev/md1 --level=1 --raid-devices=2  /dev/xvdf /dev/xvdk

This is the output of mdstat:

cat /proc/mdstat 
Personalities : [raid1] 
md1 : healthy raid1 xvdk[1] xvdf[0]
      41594888 blocks super 1.2 [2/2] [UU]

That looks normal: there are two member disks, xvdk and xvdf, in this RAID 1 device.

However, I find that the members of the MD device show up as /dev/sd* in the mdadm -D output:

mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Fri Dec 11 06:29:50 2015
     Raid Level : raid1
     ...

    Number   Major   Minor   RaidDevice State
       0     202       82        0      active sync   /dev/sdf
       1     202      162        1      active sync   /dev/sdk

Then I find these links have been created automatically:

ll /dev/sd*
lrwxrwxrwx. 1 root root 4 Dec 11 06:29 /dev/sdf -> xvdf
lrwxrwxrwx. 1 root root 4 Dec 11 06:29 /dev/sdk -> xvdk

I guess that this is done by mdadm. I have never seen this problem before.

I think there is no need to change the device names of the MD members, because it confuses people. How can I avoid this? Thanks a lot!
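
One way to check whether those /dev/sd* links come from a udev rule rather than from mdadm itself, sketched here as an assumption (many EC2 images ship a rule that aliases xvd* devices as sd*):

# show which rule, if any, is responsible for the sdf alias
udevadm info --query=all --name=/dev/sdf
grep -rl 'xvd' /etc/udev/rules.d/ /lib/udev/rules.d/ 2>/dev/null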


Source: (StackOverflow)

Recover mdadm RAID5: 4 of 5 disks marked as removed

I have a software RAID 5 built from 5 disks. There were some problems with the system that I was trying to resolve with mondorescue. I used its "nuke" type of restore, which led to my RAID 5 array being destroyed. Now mdadm -D /dev/md0 shows that 4 disks are "removed" and one disk is still "active". No superblock is found on the "removed" disks, so mdadm cannot assemble the array. Is there any chance to recover the superblocks on the "removed" disks?

There are no bad blocks on any of the disks; they are all clean. Please help.

mdadm --examine output:

[root@WWW /]# mdadm --examine /dev/sd[bcdef]1
mdadm: No md superblock detected on /dev/sdb1.
mdadm: No md superblock detected on /dev/sdc1.
mdadm: No md superblock detected on /dev/sdd1.
mdadm: No md superblock detected on /dev/sde1.
/dev/sdf1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 50c0e8b8:0d80f11d:be9b2b55:1718bb55
Name : WWW:0 (local to host WWW)
Creation Time : Wed Jan 29 23:08:39 2014
Raid Level : raid5
Raid Devices : 5

Avail Dev Size : 5860268032 (2794.39 GiB 3000.46 GB)
Array Size : 11720534016 (11177.57 GiB 12001.83 GB)
Used Dev Size : 5860267008 (2794.39 GiB 3000.46 GB)
Data Offset : 258048 sectors
Super Offset : 8 sectors
Unused Space : before=257960 sectors, after=5120 sectors
State : clean
Device UUID : 9ed14cc6:2ca75f84:5b934993:63ca71fc

Update Time : Sun Nov 8 05:53:40 2015
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 1844f433 - correct
Events : 14622

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 4
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
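
For reference, when member superblocks are gone the usual last-resort approach is to re-create the array in place with the exact original parameters (taken here from the surviving sdf1 superblock) and --assume-clean, ideally against overlays or copies of the disks first; a sketch only, and the original device order is an assumption:

# DANGEROUS, sketch only: rewrite the superblocks without resyncing,
# using the geometry from sdf1 (RAID 5, 5 devices, 512K chunk,
# left-symmetric, metadata 1.2); sdf1 was "Active device 4", i.e. last
mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=5 \
      --chunk=512 --layout=left-symmetric --metadata=1.2 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
# the data offset (258048 sectors above) must also match; newer mdadm
# accepts --data-offset= if its default differs
# verify read-only before trusting anything:
fsck -n /dev/md0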

Source: (StackOverflow)

RAID5 failure: mdadm: not enough devices to start the array

One disk (sdc) of my RAID 5 array (sda..sdf, 6 disks) went down yesterday morning. I had not backed up any files or array information beforehand. I replaced it with a new one (still named sdc in my system; I will call it NEW_sdc here) and ran:

mdadm --manage /dev/md1 --add /dev/NEW_sdc

I have done this many times before. But last night, after the rebuild (I guess, based on the "Update Time" values listed below), another disk (sda) went down as well. I have no idea how to recover from this.

This morning I found this out, rebooted the system from a LiveUSB, and got the following information.

I guess the array can run on sdb, NEW_sdc, sdd, sde, and sdf in degraded mode, and then I can replace sda or make a backup. But I don't know how.

Before rebooting, I got these details about the array:

/dev/md1:
        Version : 1.2
  Creation Time : Mon Jan  5 11:39:44 2015
     Raid Level : raid5
     Array Size : 9766914560 (9314.46 GiB 10001.32 GB)
  Used Dev Size : 1953382912 (1862.89 GiB 2000.26 GB)
   Raid Devices : 6
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Tue Sep 22 08:43:45 2015
          State : clean, FAILED 
 Active Devices : 4
Working Devices : 5
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : txds:1
           UUID : 3dd22fc6:4226630e:efc3c5dc:909102ef
         Events : 895156

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       8       8       64        1      active sync   /dev/sde
       2       8       80        2      active sync   /dev/sdf
       6       0        0        6      removed
       8       0        0        8      removed
       6       8       16        5      active sync   /dev/sdb

       4       8        0        -      faulty   /dev/sda
       7       8       32        -      spare   /dev/sdc

The live system recognized the RAID array as RAID 0!

/dev/md127:
Version:        1.2
Raid Level:     raid0
Total Devices:      6
Persistence:        Superblock is persistent

State:          inactive

Name:           txds:1
UUID:           3dd22fc6:4226630e:efc3c5dc:909102ef
Events:         890960

Number  Major   Minor   RaidDevice
-   8   0   -   /dev/sda
-   8   16  -   /dev/sdb
-   8   32  -   /dev/sdc
-   8   48  -   /dev/sdd
-   8   64  -   /dev/sde
-   8   80  -   /dev/sdf

I ran "mdadm --stop /dev/md127". Then I examined every disk:

examine on /dev/sda:

Magic:          a92b4efc
Version:        1.2
Feature Map:        0x0
Array UUID:     3dd22fc6:4226630e:efc3c5dc:909102ef
Name:           txds:1
Creation Time:      Mon Jan 5 11:39:44 2015
Raid Level:     raid5
Raid Devices:       6

Avail Dev Size:     3906767024 (1862.89 GiB 2000.26 GB)
Array Size:     9766914560 (9314.46 GiB 10001.32 GB)
Used Dev Size:      3906765824 (1862.89 GiB 2000.26 GB)
Data Offset:        262144 sectors
Super Offset:       8 sectors
Unused Space:       before=262064 sectors, after=1200 sectors
State:          clean
Device UUID:        6bae772e:30a94056:50a346d6:74e17539

Update Time:        Mon Sep 21 21:51:17 2015
Checksum:       cd94df52 - correct
Events:         890960

Layout:         left-symmetric
Chunk Size:     512K

Device Role:        Active device 4
Array State:        AAAAAA ('A'==active, '.'==missing, 'R'==replacing)

examine on /dev/sdb:

Magic:          a92b4efc
Version:        1.2
Feature Map:        0x0
Array UUID:     3dd22fc6:4226630e:efc3c5dc:909102ef
Name:           txds:1
Creation Time:      Mon Jan 5 11:39:44 2015
Raid Level:     raid5
Raid Devices:       6

Avail Dev Size:     3906767024 (1862.89 GiB 2000.26 GB)
Array Size:     9766914560 (9314.46 GiB 10001.32 GB)
Used Dev Size:      3906765824 (1862.89 GiB 2000.26 GB)
Data Offset:        262144 sectors
Super Offset:       8 sectors
Unused Space:       before=262064 sectors, after=1200 sectors
State:          clean
Device UUID:        82028087:4d6bf8b6:b203ac87:8972ea0b

Update Time:        Tue Sep 22 08:53:14 2015
Checksum:       b2c9288 - correct
Events:         895160

Layout:         left-symmetric
Chunk Size:     512K

Device Role:        Active device 5
Array State:        AAA..A ('A'==active, '.'==missing, 'R'==replacing)

examine on /dev/sdc:

Magic:          a92b4efc
Version:        1.2
Feature Map:        0x0
Array UUID:     3dd22fc6:4226630e:efc3c5dc:909102ef
Name:           txds:1
Creation Time:      Mon Jan 5 11:39:44 2015
Raid Level:     raid5
Raid Devices:       6

Avail Dev Size:     3906767024 (1862.89 GiB 2000.26 GB)
Array Size:     9766914560 (9314.46 GiB 10001.32 GB)
Used Dev Size:      3906765824 (1862.89 GiB 2000.26 GB)
Data Offset:        262144 sectors
Super Offset:       8 sectors
Unused Space:       before=262056 sectors, after=1200 sectors
State:          clean
Device UUID:        4a2aad92:b5c46b6f:88024745:8c979bc6

Update Time:        Mon Sep 21 10:58:19 2015
Checksum:       47722b054 - correct
Events:         873820

Layout:         left-symmetric
Chunk Size:     512K

Device Role:        Active device 3
Array State:        AAAAAA ('A'==active, '.'==missing, 'R'==replacing)

examine on /dev/NEW_sdc:

Magic:          a92b4efc
Version:        1.2
Feature Map:        0x8
Array UUID:     3dd22fc6:4226630e:efc3c5dc:909102ef
Name:           txds:1
Creation Time:      Mon Jan 5 11:39:44 2015
Raid Level:     raid5
Raid Devices:       6

Avail Dev Size:     3906767024 (1862.89 GiB 2000.26 GB)
Array Size:     9766914560 (9314.46 GiB 10001.32 GB)
Used Dev Size:      3906765824 (1862.89 GiB 2000.26 GB)
Data Offset:        262144 sectors
Super Offset:       8 sectors
Unused Space:       before=262056 sectors, after=1200 sectors
State:          clean
Device UUID:        5b971390:eb99b77b:fc46a10a:d45e433a

Update Time:        Tue Sep 22 08:53:14 2015
Bad Block Log:      512 entries available at offset 72 sectors - bad blocks present.
Checksum:       89d585e4 - correct
Events:         895160

Layout:         left-symmetric
Chunk Size:     512K

Device Role:        spare
Array State:        AAA..A ('A'==active, '.'==missing, 'R'==replacing)

examine on /dev/sdd:

Magic:          a92b4efc
Version:        1.2
Feature Map:        0x0
Array UUID:     3dd22fc6:4226630e:efc3c5dc:909102ef
Name:           txds:1
Creation Time:      Mon Jan 5 11:39:44 2015
Raid Level:     raid5
Raid Devices:       6

Avail Dev Size:     3906767024 (1862.89 GiB 2000.26 GB)
Array Size:     9766914560 (9314.46 GiB 10001.32 GB)
Used Dev Size:      3906765824 (1862.89 GiB 2000.26 GB)
Data Offset:        262144 sectors
Super Offset:       8 sectors
Unused Space:       before=262064 sectors, after=1200 sectors
State:          clean
Device UUID:        c0f4e974:8c16a6d2:0d88252d:94bbb9c5

Update Time:        Tue Sep 22 08:53:14 2015
Checksum:       738cfd65 - correct
Events:         895160

Layout:         left-symmetric
Chunk Size:     512K

Device Role:        Active device 0
Array State:        AAA..A ('A'==active, '.'==missing, 'R'==replacing)

examine on /dev/sde:

Magic:          a92b4efc
Version:        1.2
Feature Map:        0x0
Array UUID:     3dd22fc6:4226630e:efc3c5dc:909102ef
Name:           txds:1
Creation Time:      Mon Jan 5 11:39:44 2015
Raid Level:     raid5
Raid Devices:       6

Avail Dev Size:     3906767024 (1862.89 GiB 2000.26 GB)
Array Size:     9766914560 (9314.46 GiB 10001.32 GB)
Used Dev Size:      3906765824 (1862.89 GiB 2000.26 GB)
Data Offset:        262144 sectors
Super Offset:       8 sectors
Unused Space:       before=262056 sectors, after=1200 sectors
State:          clean
Device UUID:        8eef85af:81f164ea:f3c95a2c:cddb6340

Update Time:        Tue Sep 22 08:53:14 2015
Bad Block Log:      512 entries available at offset 72 sectors
Checksum:       3fcf3597 - correct
Events:         895160

Layout:         left-symmetric
Chunk Size:     512K

Device Role:        Active device 1
Array State:        AAA..A ('A'==active, '.'==missing, 'R'==replacing)

examine on /dev/sdf:

Magic:          a92b4efc
Version:        1.2
Feature Map:        0x0
Array UUID:     3dd22fc6:4226630e:efc3c5dc:909102ef
Name:           txds:1
Creation Time:      Mon Jan 5 11:39:44 2015
Raid Level:     raid5
Raid Devices:       6

Avail Dev Size:     3906767024 (1862.89 GiB 2000.26 GB)
Array Size:     9766914560 (9314.46 GiB 10001.32 GB)
Used Dev Size:      3906765824 (1862.89 GiB 2000.26 GB)
Data Offset:        262144 sectors
Super Offset:       8 sectors
Unused Space:       before=262064 sectors, after=1200 sectors
State:          clean
Device UUID:        4e7469d8:a9d5e8f4:a916bee4:739ccdb3

Update Time:        Tue Sep 22 08:53:14 2015
Checksum:       a9fbab84 - correct
Events:         895160

Layout:         left-symmetric
Chunk Size:     512K

Device Role:        Active device 2
Array State:        AAA..A ('A'==active, '.'==missing, 'R'==replacing)
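
For what it's worth, with only four members at the latest event count (roles 0, 1, 2 and 5) a six-device RAID 5 cannot start, so the approach usually suggested is a forced assembly that also pulls in the slightly stale sda (role 4, a few thousand events behind); a sketch only, using the device names from the live system:

# force-assemble from the four current members plus the slightly stale sda;
# --force lets mdadm bump sda's event count so 5 of 6 devices can start
mdadm --stop /dev/md127
mdadm --assemble --force --run /dev/md1 \
      /dev/sdd /dev/sde /dev/sdf /dev/sdb /dev/sda
# back up first; only then re-add the spare so the rebuild can restart:
# mdadm --manage /dev/md1 --add /dev/sdc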

Source: (StackOverflow)

RAID 5: no write-intent bitmap for sub-arrays?

I have an Intel Rapid Storage RAID 5 with 4x 6 TB disks (all space used as RAID, on Ubuntu 14.04 with mdadm).

I would like to add a write-intent bitmap so that recovery is faster.

I tried:

 sudo mdadm --grow /dev/md126 --bitmap=internal

which outputs mdadm: Cannot add bitmaps to sub-arrays yet

sudo mdadm --detail --scan

ARRAY /dev/md/imsm0 metadata=imsm UUID=e409a30d:353a9b11:1f9a221a:7ed7cd21
ARRAY /dev/md/vol0 container=/dev/md/imsm0 member=0 UUID=9adaf3f8:d899c72b:fdf41fd1:07ee0399

How can I achieve this and where is the problem? Is the only option an external bitmap file?
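
For reference, the external-bitmap variant mentioned above would look roughly like this; whether an IMSM sub-array accepts it is exactly the open question here, and the file path is an assumption (it must live on a filesystem outside the array):

# sketch: external write-intent bitmap stored outside the array
sudo mdadm --grow /dev/md126 --bitmap=/var/lib/mdadm/md126-bitmap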

Thanks for the help!


Source: (StackOverflow)

Python's shell=True subprocess.Popen keyword argument breaking mdadm command

I'm trying to control mdadm from a Python script, and have a call similar to this:

subprocess.Popen('mdadm --create /dev/md1 --raid-devices=4 /dev/md-0 /dev/md-1 /dev/md-2 /dev/md-3'.split())

mdadm then complains with mdadm: You have listed more devices (5) than are in the array(4)!

When I use shell=True (or os.system), it works just fine. For example:

subprocess.Popen('mdadm --create /dev/md1 --raid-devices=4 /dev/md-0 /dev/md-1 /dev/md-2 /dev/md-3', shell=True)

Why does the call fail without shell=True?

EDIT: Here is the full string that I'm splitting up and passing to subprocess.Popen:

mdadm --create /dev/md10 --name /dev/md/demo --chunk=128K --level=raid6 --size=104857600 $MDADM_r6_OPT --spare-device=0 --raid-devices=8 /dev/mapper/mpathbp2 /dev/mapper/mpathbp3 /dev/mapper/mpathbp4 /dev/mapper/mpathbp5 /dev/mapper/mpathcp2 /dev/mapper/mpathcp3 /dev/mapper/mpathcp4 /dev/mapper/mpathcp5

Source: (StackOverflow)

How to disable mdadm auto assemble when system bootup for CentOS?

I used mdadm --create to create a software RAID device to store some private documents.

I find that when I reboot my system, the RAID device is assembled automatically.

I want to assemble the device manually.

I have removed all the mdadm commands from rc.sysinit and rc.d/*, but it has no effect. Please give me a hand.

OS: CentOS 6.5
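
For reference, on CentOS 6 auto-assembly is usually driven by udev and by the mdadm.conf copied into the initramfs rather than by the rc scripts, so the commonly suggested approach looks roughly like this (a sketch under that assumption, not a verified recipe):

# /etc/mdadm.conf: tell mdadm not to auto-assemble any array
AUTO -all

# rebuild the initramfs so the change is also seen during early boot
dracut -f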


Source: (StackOverflow)

mdadm: array disappearing on reboot, despite correct mdadm.conf

I'm using Ubuntu 13.10 and trying to create a RAID 5 array across 3 identical disks connected to SATA ports on the motherboard. I've followed every guide and used both the built-in Disks GUI app and mdadm at the command line, and despite everything I cannot get the array to persist after reboot.

I create the array with the following command:

root@zapp:~# mdadm --create /dev/md/array --chunk=512 --level=5 \
    --raid-devices=3 /dev/sda /dev/sdb /dev/sdd

Then I watch /proc/mdstat for a while while it syncs, until I get this:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid5 sda1[0] sdd1[3] sdb1[1]
      1953262592 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>

To update the mdadm config file, I run the following:

root@zapp:~# /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf

This adds the essential line to my config file:

ARRAY /dev/md/array metadata=1.2 UUID=0ad3753e:f0177930:8362f527:285d76e7 name=zapp:array

Everything seems correct, but when I reboot, the array is gone!
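
For reference, on Ubuntu the step most often missing after regenerating /etc/mdadm/mdadm.conf is rebuilding the initramfs, since early boot assembles arrays from the copy of the config stored there; a sketch (an assumption about the cause, not a confirmed diagnosis):

# rebuild the initramfs so the new mdadm.conf is used at boot
sudo update-initramfs -u
# after rebooting, check what the kernel and mdadm actually see
cat /proc/mdstat
sudo mdadm --detail --scan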


Source: (StackOverflow)

Do I have to wait for reshaping to finish before calling resize2fs after an mdadm RAID 5 grow?

The title says it all. I have a 3-partition mdadm RAID 5 on Ubuntu. Now I have added a 4th partition. Each partition covers the full physical disk, and each disk is 4 TB in size. The filesystem in use is ext4.

After growing the RAID 5 with mdadm, the wiki says to run an fsck check and then resize.

Do I have to wait for the mdadm reshape to finish for resize2fs to work? It doesn't seem to work otherwise. The RAID is unmounted, of course. I did the fsck -f check and ran resize2fs on the array, but mdadm -D /dev/md0 still shows 4 disks with an array size of 8 TB and a dev size of 4 TB. fdisk -l also shows only a size of 8 TB.

Did I do something wrong? How can I resize the filesystem to include my 4th disk?

fdisk -l output:

Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E434C200-63C3-4CB8-8097-DD369155D797

Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 7814035455 7814033408  3.7T Linux RAID


Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: CAD9D9DA-1C3B-4FC2-95BD-A5F0B55DE313

Device     Start        End    Sectors  Size Type
/dev/sdd1   2048 7814035455 7814033408  3.7T Linux RAID


Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 090E94F6-A461-4B27-BCDF-5B49EC6AFC84

Device     Start        End    Sectors  Size Type
/dev/sdc1   2048 7814035455 7814033408  3.7T Linux RAID


Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: A0285071-2C16-42B4-8567-32CE98147A93

Device     Start        End    Sectors  Size Type
/dev/sde1   2048 7814035455 7814033408  3.7T Linux RAID


Disk /dev/md0: 7.3 TiB, 8001301774336 bytes, 15627542528 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes

mdadm -D /dev/md0 output:

/dev/md0:
        Version : 1.2
  Creation Time : Thu Jan  7 08:23:57 2016
     Raid Level : raid5
     Array Size : 7813771264 (7451.79 GiB 8001.30 GB)
  Used Dev Size : 3906885632 (3725.90 GiB 4000.65 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Mon Jan 11 08:10:47 2016
          State : clean, reshaping 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

 Reshape Status : 47% complete
  Delta Devices : 1, (3->4)

           Name : NAS:0  (local to host NAS)
           UUID : 69ba4b0e:a2427b2a:121cc4e0:5461a8fb
         Events : 10230

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
       3       8       65        2      active sync   /dev/sde1
       4       8       17        3      active sync   /dev/sdb1
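
For reference, the sequence usually suggested here is to let the reshape finish first (the extra capacity generally only becomes available to resize2fs once the reshape completes, and it is at 47% above) and only then grow the filesystem; a sketch assuming /dev/md0 as shown:

# block until the reshape is complete, then check and grow the filesystem
mdadm --wait /dev/md0
e2fsck -f /dev/md0
resize2fs /dev/md0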

Source: (StackOverflow)

Automatically answering yes in a RAID 5 array creation script

I am working on a script to set up a RAID 5 array. I am having trouble inserting auto=yes for when the script asks whether I want to continue creating the array. I tried --auto=yes (http://www.linuxmanpages.com/man8/mdadm.8.php) but am very unsure where to place it.

#!/bin/bash
mdadm mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdc1 /dev/sdd1 /dev/sde1 --spare-devices=1 /dev/sdf1

if [ $? -eq 0 ]; then
    echo OK
else
    echo FAIL
fi
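
For reference, a sketch of how the confirmation prompt is usually suppressed: --run answers the "Continue creating array?" question non-interactively (alternatively, pipe yes into mdadm), and the doubled mdadm in the original line would also need to go; device names as in the question:

#!/bin/bash
# --run suppresses the "Continue creating array?" confirmation prompt
mdadm --create --verbose --run /dev/md0 --level=5 --raid-devices=3 \
      /dev/sdc1 /dev/sdd1 /dev/sde1 --spare-devices=1 /dev/sdf1

if [ $? -eq 0 ]; then
    echo OK
else
    echo FAIL
fi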

Source: (StackOverflow)