
How to remove a Drive from RAID 1


While installing a server with three drives and software RAID 1 at SoYouStart and Hetzner, I ran into the problem that all three drives had been set up as RAID members, even though I only wanted two drives in the RAID 1 array.
This tutorial also covers "How to remove a failed drive from software RAID 1".

Let's check the RAID members. Here sda1, sdb1 and sdc1 are members of md1, while sda2, sdb2 and sdc2 are members of md2:

root@server:~# cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid1 sda1[0] sdc1[2] sdb1[1]
2096064 blocks [3/3] [UUU]

md2 : active raid1 sda2[0] sdc2[2] sdb2[1]
114594752 blocks [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
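
If you want more detail than /proc/mdstat provides, mdadm --detail prints the array state and a table of member devices; it is also a handy way to double-check that sdc really is the disk you intend to pull. This is an optional check, and the exact output varies with your mdadm version:

root@server:~# mdadm --detail /dev/md1

root@server:~# mdadm --detail /dev/md2

The device table at the end of each report should list sda1, sdb1 and sdc1 (respectively sda2, sdb2 and sdc2) as "active sync" members.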

We want to remove the drive sdc (its partitions sdc1 and sdc2).
First, we will mark sdc1 and sdc2 as faulty:

root@server:~# mdadm --manage /dev/md1 --fail /dev/sdc1

mdadm: set /dev/sdc1 faulty in /dev/md1

root@server:~# mdadm --manage /dev/md2 --fail /dev/sdc2

mdadm: set /dev/sdc2 faulty in /dev/md2
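
If you look at /proc/mdstat at this point, the partitions we just failed should be marked with (F), roughly like this (md2 will look similar):

root@server:~# cat /proc/mdstat

md1 : active raid1 sda1[0] sdc1[2](F) sdb1[1]
2096064 blocks [3/2] [UU_]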

Now we can easily remove sdc1 and sdc2 from the arrays with mdadm:

root@server:~# mdadm /dev/md1 -r /dev/sdc1

mdadm: hot removed /dev/sdc1 from /dev/md1

root@server:~# mdadm /dev/md2 -r /dev/sdc2

mdadm: hot removed /dev/sdc2 from /dev/md2
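
Optionally, if you want to make sure the removed partitions are never auto-assembled back into these arrays (for example because you plan to reuse the disk elsewhere), you can wipe their RAID metadata. Be careful: --zero-superblock destroys the md superblock on the given partition, so only run it against the disk you just removed:

root@server:~# mdadm --zero-superblock /dev/sdc1

root@server:~# mdadm --zero-superblock /dev/sdc2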

Let's check the RAID status again:

root@server:~# cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid1 sda1[0] sdb1[1]
2096064 blocks [3/2] [UU_]

md2 : active raid1 sda2[0] sdb2[1]
114594752 blocks [3/2] [UU_]
bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>

The [UU_] output means the third member is missing from the array. In general, a failed or missing drive shows up as an underscore, for example [U_] or [_U]; an intact two-disk RAID 1 array shows [UU].
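
If you want a quick way to spot a degraded array from a monitoring script, you can grep /proc/mdstat for an underscore inside the status brackets (a minimal sketch, not part of the original procedure):

root@server:~# grep '\[.*_.*\]' /proc/mdstat && echo "WARNING: degraded RAID array"

At this point in the procedure it will still warn about md1 and md2, because both arrays still expect three members; the next step fixes that.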

So let's tell mdadm the correct number of RAID devices:

root@server:~# mdadm --grow /dev/md1 --raid-devices=2

raid_disks for /dev/md1 set to 2
unfreeze

root@server:~# mdadm --grow /dev/md2 --raid-devices=2

raid_disks for /dev/md2 set to 2
unfreeze

Let's check again.

root@server:~# cat /proc/mdstat

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
md1 : active raid1 sda1[0] sdb1[1]
2096064 blocks [2/2] [UU]

md2 : active raid1 sda2[0] sdb2[1]
114594752 blocks [2/2] [UU]
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
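
One last optional step: if your system keeps an mdadm configuration file (usually /etc/mdadm/mdadm.conf on Debian/Ubuntu, often /etc/mdadm.conf elsewhere), compare its ARRAY lines with the output of mdadm --detail --scan and update them if they differ, then rebuild the initramfs so the boot environment also uses the two-disk layout. The file path and the update-initramfs command depend on your distribution:

root@server:~# mdadm --detail --scan

root@server:~# update-initramfs -u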
