<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Dulanjana Ranasinge</title>
    <description>The latest articles on DEV Community by Dulanjana Ranasinge (@dula).</description>
    <link>https://dev.to/dula</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3786573%2F18fbe917-0962-462f-981a-98779b9d996f.jpg</url>
      <title>DEV Community: Dulanjana Ranasinge</title>
      <link>https://dev.to/dula</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dula"/>
    <language>en</language>
    <item>
      <title>Reset Existing RAID 0 &amp; Create RAID 10 Array with mdadm on RHEL9</title>
      <dc:creator>Dulanjana Ranasinge</dc:creator>
      <pubDate>Mon, 23 Feb 2026 16:14:02 +0000</pubDate>
      <link>https://dev.to/dula/reset-existing-raid-0-create-raid-10-array-with-mdadm-on-rhel9-69e</link>
      <guid>https://dev.to/dula/reset-existing-raid-0-create-raid-10-array-with-mdadm-on-rhel9-69e</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The mdadm utility is used to create and manage storage arrays built on Linux's software RAID feature. It gives administrators a great deal of flexibility in organising individual storage devices into logical devices with improved performance or redundancy characteristics.&lt;/p&gt;

&lt;p&gt;This tutorial walks through several RAID configurations that can be set up on a RHEL9 server:&lt;/p&gt;

&lt;p&gt;RAID 0 - Minimum of 2 storage devices; striping for read/write performance and combined capacity.&lt;br&gt;
RAID 1 - Minimum of 2 storage devices; mirroring for redundancy between two devices.&lt;br&gt;
RAID 5 - Minimum of 3 storage devices; redundancy with more usable capacity than mirroring.&lt;br&gt;
RAID 6 - Minimum of 4 storage devices; double redundancy with more usable capacity than mirroring.&lt;br&gt;
RAID 10 - Minimum of 3 storage devices; performance and redundancy combined.&lt;/p&gt;
&lt;h2&gt;
  
  
  Prerequisites
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;SSH access to the RHEL9 server as root or as a user with sudo privileges.&lt;/li&gt;
&lt;li&gt;Two to four spare storage devices, depending on the RAID level.&lt;/li&gt;
&lt;li&gt;The mdadm package installed (dnf install mdadm).&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;
  
  
  Resetting Existing RAID Devices
&lt;/h2&gt;

&lt;p&gt;If you haven't set up any arrays yet, you can skip this section for now. This guide introduces several RAID levels, and if you want to work through each of them you will probably want to reuse the same storage devices after each section. Before testing a new RAID level, come back here and reset the component devices as described below.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Find the active arrays listed in the /proc/mdstat file:
&lt;/li&gt;
&lt;/ul&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /proc/mdstat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;


&lt;p&gt;Output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Personalities : [raid0]
md0 : active raid0 sdc[1] sdb[0]
      10475520 blocks super 1.2 512k chunks

unused devices: &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
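&lt;p&gt;In scripts, the active array names can be pulled out of this output with a quick awk filter. A minimal sketch, run against a copy of the sample output above; on a live system, pipe /proc/mdstat into the same awk instead:&lt;/p&gt;

```shell
# Print the names of active md arrays from /proc/mdstat-style output.
# The printf below stands in for: cat /proc/mdstat
printf 'Personalities : [raid0]\nmd0 : active raid0 sdc[1] sdb[0]\n' |
  awk '/^md[0-9]+ : active/ {print $1}'
```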



&lt;ul&gt;
&lt;li&gt;Unmount the array's filesystem:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;umount /dev/md0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Stop and remove the array:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mdadm --stop /dev/md0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Find the storage devices that were used to build the array (sdb and sdc in this case):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME               SIZE FSTYPE            TYPE MOUNTPOINT
sda                 50G                   disk
├─sda1               1G xfs               part /boot
├─sda2             600M vfat              part /boot/efi
└─sda3            48.4G LVM2_member       part
  └─rootVG-rootLV    4G xfs               lvm  /
sdb                  5G linux_raid_member disk
sdc                  5G linux_raid_member disk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
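&lt;p&gt;On a server with many disks, the RAID member devices can be filtered out of the lsblk output programmatically. A sketch against a small inline sample; on a live system, pipe the output of lsblk -nr -o NAME,FSTYPE into the same awk:&lt;/p&gt;

```shell
# List device paths whose filesystem type is linux_raid_member.
# The printf stands in for: lsblk -nr -o NAME,FSTYPE
printf 'sdb linux_raid_member\nsdc linux_raid_member\nsda \n' |
  awk '$2 == "linux_raid_member" {print "/dev/" $1}'
```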



&lt;ul&gt;
&lt;li&gt;After identifying the devices used to create the array, zero their superblocks; this removes the RAID metadata and resets the devices to normal:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mdadm --zero-superblock /dev/sdb
mdadm --zero-superblock /dev/sdc
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Remove or comment out any entries related to the array in the following files:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/etc/fstab
/etc/mdadm/mdadm.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
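&lt;p&gt;As a sketch of what "commenting out" looks like in practice, the example below disables a matching entry on a throwaway sample file; the same sed, pointed at the files listed above (after taking backups), comments out the old array entries. The sample line is illustrative only:&lt;/p&gt;

```shell
# Demo on a sample file; apply the same edit to the real files
# only after backing them up.
printf '/dev/md0 /mnt/md0 xfs defaults,nofail 0 0\n' > /tmp/fstab.sample
sed -i 's|^/dev/md0|#/dev/md0|' /tmp/fstab.sample   # prefix the entry with '#'
cat /tmp/fstab.sample
```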



&lt;ul&gt;
&lt;li&gt;Rebuild the initramfs so the boot process does not bring the array online:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;dracut -f
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;At this stage the storage devices are ready to be rebuilt as a different array. Reboot the server and start the new build when it is safe to do so.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Create RAID 10 Array
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Find the storage devices that will be used to build the array (sdb, sdc, sdd and sde in this case):
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;NAME               SIZE FSTYPE      TYPE MOUNTPOINT
sda                 50G             disk
├─sda1               1G xfs         part /boot
├─sda2             600M vfat        part /boot/efi
└─sda3            48.4G LVM2_member part
  └─rootVG-rootLV    4G xfs         lvm  /
sdb                  5G             disk
sdc                  5G             disk
sdd                  5G             disk
sde                  5G             disk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;A RAID 10 array is essentially the combination of RAID 0 and RAID 1, giving both high performance and high redundancy. Linux's md driver supports three different layouts called &lt;strong&gt;near, far and offset&lt;/strong&gt;. Detailed information about each layout is in the man page:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;man 4 md
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run the create command with the device name (md0), the RAID level, and the number of devices, as follows.
Note: the --layout=o3 option selects the offset layout with three copies of the data; if no layout or copy count is specified, mdadm defaults to the near layout with two copies.
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mdadm --create --verbose /dev/md0 --level=10 --layout=o3 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;To optimalize recovery speed, it is recommended to enable write-indent bitmap, do you want to enable it now? [y/N]? y
mdadm: chunk size defaults to 512K
mdadm: size set to 5237760K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Verify RAID status:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cat /proc/mdstat
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Personalities : [raid4] [raid5] [raid6] [raid10]
md0 : active raid10 sde[3] sdd[2] sdc[1] sdb[0]
      6983680 blocks super 1.2 512K chunks 3 offset-copies [4/4] [UUUU]
      [===&amp;gt;.................]  resync = 18.2% (1271808/6983680) finish=0.8min speed=115618K/sec
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: &amp;lt;none&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a filesystem on the array:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkfs.xfs /dev/md0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Create a mount point to attach the new filesystem:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir -p /mnt/md0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Mount the filesystem with the following:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mount /dev/md0 /mnt/md0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Check the mount point and space:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;df -h
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Output&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                   4.0M     0  4.0M   0% /dev
tmpfs                      3.2G     0  3.2G   0% /dev/shm
tmpfs                      1.3G  8.6M  1.3G   1% /run
/dev/mapper/rootVG-rootLV  4.0G  1.6G  2.4G  41% /
/dev/sda1                  960M  247M  714M  26% /boot
/dev/sda2                  599M   12K  599M   1% /boot/efi
tmpfs                      653M     0  653M   0% /run/user/0
/dev/md0                   6.6G   80M  6.6G   2% /mnt/md0
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;Run the following so that the array is automatically reassembled during boot:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
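&lt;p&gt;The appended line takes roughly the following form; the host name and UUID here are placeholders, and the values on your system will differ:&lt;/p&gt;

```plaintext
ARRAY /dev/md0 metadata=1.2 name=rhel9:0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
```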



&lt;ul&gt;
&lt;li&gt;Add the mount to /etc/fstab so the filesystem is mounted automatically at boot:
&lt;/li&gt;
&lt;/ul&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;echo '/dev/md0 /mnt/md0 xfs defaults,nofail,discard 0 0' | tee -a /etc/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
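&lt;p&gt;Referencing the array by filesystem UUID is a more robust variant, since md device names can change between boots. A sketch, with a placeholder UUID standing in for the value blkid reports on the real system:&lt;/p&gt;

```shell
# On the real system, obtain the UUID with:  blkid -s UUID -o value /dev/md0
UUID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # placeholder value
echo "UUID=${UUID} /mnt/md0 xfs defaults,nofail 0 0"   # append this to /etc/fstab
```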



&lt;ul&gt;
&lt;li&gt;The RAID 10 array build is complete, and it will mount automatically after a server reboot. Reboot the server and verify the status.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>linux</category>
      <category>cli</category>
      <category>raid</category>
      <category>rhel</category>
    </item>
  </channel>
</rss>
