Determine the state of a software RAID array (Multiple Device driver, also known as Linux Software RAID) using the mdadm command-line utility.

First case

Determine the MD array status.

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue May 24 17:50:17 2022
        Raid Level : raid1
        Array Size : 960806720 (916.30 GiB 983.87 GB)
     Used Dev Size : 960806720 (916.30 GiB 983.87 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun May 29 22:45:38 2022
             State : active 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : pxe-rescue:0
              UUID : bfdfd734:8476c2c5:160c8d7e:336308bf
            Events : 4749

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2

Determine the MD array status using the exit code.

$ sudo mdadm --detail --test /dev/md0 1>/dev/null
$ echo $?
0

Exit code 0 means that the array is functioning normally.
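
This exit code lends itself to simple scripting. A minimal sketch, assuming /dev/md0 as the array to check:

#!/bin/sh
# Minimal health check based on the mdadm --detail --test exit code.
# /dev/md0 is an assumed array path - adjust it to match your system.
sudo mdadm --detail --test /dev/md0 1>/dev/null 2>&1
status=$?
if [ "$status" -eq 0 ]; then
  echo "md0: array is functioning normally"
else
  echo "md0: array reported a problem (exit code $status)"
fi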

Second case

Determine the MD array status.

$ sudo mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Tue May 24 17:50:17 2022
        Raid Level : raid1
        Array Size : 15813504 (15.08 GiB 16.19 GB)
     Used Dev Size : 15813504 (15.08 GiB 16.19 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun May 29 22:39:54 2022
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : pxe-rescue:1
              UUID : 5d9ec27a:bfb45b7a:812cbe55:fcadf3ca
            Events : 25

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       -       0        0        1      removed

       1       8       19        -      faulty   /dev/sdb3

Determine the MD array status using the exit code.

$ sudo mdadm --detail --test /dev/md1 1>/dev/null
$ echo $?
1

Exit code 1 means that the array is functioning in a degraded state, as at least one device has failed.
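
The degraded state can also be detected from the human-readable output rather than the exit code. A minimal sketch, matching the State line of the array shown above:

$ sudo mdadm --detail /dev/md1 | grep -q degraded && echo "md1: array is degraded"
md1: array is degraded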

Third case

Determine the MD array status.

$ sudo mdadm --detail /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Sun May 29 23:13:51 2022
        Raid Level : raid0
      Raid Devices : 3
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Sun May 29 23:13:51 2022
             State : active, FAILED, Not Started 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

            Layout : original
        Chunk Size : 512K

Consistency Policy : unknown

              Name : milosz.wtf:1  (local to host milosz.wtf)
              UUID : 1cb29db7:daed3f72:f9e99802:20103e93
            Events : 0

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       -       0        0        1      removed
       -       0        0        2      removed

       -       7        8        2      sync   /dev/sdc3

Determine the MD array status using the exit code.

$ sudo mdadm --detail --test /dev/md1 1>/dev/null
$ echo $?
2

Exit code 2 means that the array is unusable due to multiple failed devices.
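
Putting the three cases together, the exit code can drive a small reporting script suitable for monitoring. A sketch, again assuming /dev/md1; the branch for exit code 4 (an error while trying to get information about the device) follows the mdadm(8) manual:

#!/bin/sh
# Report MD array health based on the mdadm --detail --test exit code.
# /dev/md1 is an assumed array path - adjust it to match your system.
sudo mdadm --detail --test /dev/md1 1>/dev/null 2>&1
case "$?" in
  0) echo "array is functioning normally" ;;
  1) echo "array is degraded - at least one device failed" ;;
  2) echo "array is unusable - multiple devices failed" ;;
  4) echo "cannot get information about the device" ;;
  *) echo "unexpected mdadm exit code" ;;
esac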
