
  CentOS 6 software RAID-1 with EFI


    This guide assumes you are creating a new OS setup. If you already have CentOS installed on disk 1 and want to add disk 2 as a software RAID-1 mirror, refer instead to the official RHEL/CentOS documentation, which explains how to do that.

    In the CentOS 6 setup, select manual volume creation and create identically sized partitions on both drives. The partition type should be "Software RAID". Leave space (200-500 MB) for the /boot/efi mount point on both disks, but select it as a mount point on only one of them. After that you will be able to add RAID devices [create the RAID volumes] from the previously created sda and sdb partitions. It should look something like this:

       Device       mountpoint           type
    --------------------------------------------
    Hard Drives
     /dev/sda
       /dev/sda1  /boot/efi            EFI Boot [vfat]  #EFI mount point
       /dev/sda2  {part0}              Software RAID
       /dev/sda3  {swap}               Software RAID
       /dev/sda4  {part1}              Software RAID
     /dev/sdb
       /dev/sdb1  {same size as sda1}  EFI Boot         #no mount point!
       /dev/sdb2  {size -eq sda part0} Software RAID
       /dev/sdb3  {size -eq sda swap}  Software RAID
       /dev/sdb4  {size -eq sda part1} Software RAID
    

    Note that we have specified only one mount point for the EFI boot partition.

    While creating each volume [RAID device] you will be able to select the partitions to be mirrored, the filesystem type, and the mount point. After you add the RAID devices and create the RAID partitions, the new volumes appear at the top of the list in the CentOS 6 install GUI, so the "picture" will look like this:

        Device       mountpoint          type
    --------------------------------------------
    RAID Devices
       /dev/md0   / {part0}            ext4
       /dev/md1   {swap}               swap
       /dev/md2   /vz {part1}          ext4
    Hard Drives
     /dev/sda
       /dev/sda1  /boot/efi            EFI Boot [vfat]  #EFI mount point
       /dev/sda2  {part0}              Software RAID
       /dev/sda3  {swap}               Software RAID
       /dev/sda4  {part1}              Software RAID
     /dev/sdb
       /dev/sdb1  {same size as sda1}  EFI Boot         #no mount point!
       /dev/sdb2  {size -eq sda part0} Software RAID
       /dev/sdb3  {size -eq sda swap}  Software RAID
       /dev/sdb4  {size -eq sda part1} Software RAID
    


    As you can see, the EFI vfat partition is not mirrored at this point, but we have created the same partition on both sda1 and sdb1; the mount point was configured only for sda1 (/boot/efi). You can press "Next" and setup will continue. Once it is done, log in to your server.
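
    Optionally, first double-check that the partition layout on both disks matches the tables above (a sketch; it assumes the sda/sdb device names used on this page):

    parted /dev/sda unit MiB print   # GPT partition table of the first disk
    parted /dev/sdb unit MiB print   # should mirror sda's layout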

    Check mdstat:

    cat /proc/mdstat 
    

    You should see the resync process. You can watch it live with the watch command:

    watch cat /proc/mdstat 
    

    Once the resync is done, the output will look something like this (note that md1 here is fully synced, while md0 and md2 are still degraded):

    cat /proc/mdstat
    Personalities : [raid1]
    md2 : active raid1 sda4[0]
         1885593600 blocks super 1.1 [2/1] [U_]
         bitmap: 6/15 pages [24KB], 65536KB chunk
    md0 : active raid1 sda2[0]
         51199872 blocks super 1.0 [2/1] [U_]
         bitmap: 1/1 pages [4KB], 65536KB chunk
    md1 : active raid1 sdb3[1] sda3[0]
         16375808 blocks super 1.1 [2/2] [UU]
    unused devices: <none>
    

    Remember to check and resync all RAID partitions. From the output above we can see that md1 is OK, but md0 and md2 still need to be resynced (just add the sdb partitions into the arrays).
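
    For the layout on this page that means adding the matching sdb partitions back into md0 and md2 (a sketch; verify your own partition-to-array mapping with mdadm -D before running it):

    mdadm --manage /dev/md0 -a /dev/sdb2   # mirror of sda2, the / array
    mdadm --manage /dev/md2 -a /dev/sdb4   # mirror of sda4, the /vz array

    The rebuild starts immediately and can be watched with the watch command shown above.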

    Now you need to mirror /boot/efi manually. The next commands copy /boot/efi from sda to sdb and register a backup EFI boot entry:

    dd if=/dev/sda1 of=/dev/sdb1   # raw copy of the EFI system partition, sda -> sdb
    efibootmgr --create --disk /dev/sdb --label "CentOS Backup" --loader "\\EFI\\redhat\\grub.efi"   # backup boot entry on sdb
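
    You can verify that the backup entry was registered by listing the boot entries:

    efibootmgr -v   # lists all EFI boot entries, including "CentOS Backup"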
    

    /boot/efi must be manually resynced every time you update your kernel or edit /etc/grub.conf.
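
    A minimal sketch of a helper script for that (it assumes the sda1/sdb1 layout from this page; adjust the device names if yours differ):

    #!/bin/sh
    # sync-efi.sh - re-mirror the EFI system partition after a kernel/grub update
    dd if=/dev/sda1 of=/dev/sdb1   # raw copy of the ESP from sda to sdb
    sync                           # flush the copy to disk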


    Now you can disable sda in the BIOS/EFI setup and check: the system should boot from sdb without any problems.
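
    To confirm which EFI partition the system actually mounted, check /boot/efi (the df output further down this page shows /dev/sdb1 there; note that device names can shift when a disk is disabled):

    df -h /boot/efi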

    View status of SW RAID:

     mdadm -D /dev/mdN 
    



    Also, as I understand it, in case of one HDD failure you will need to initiate a manual resync from the surviving HDD to the new one after replacement, like this {better refer to the original docs}. To force a manual resync, stop the array first:

    mdadm --stop /dev/mdN
    

    How to perform a resync in case you need it.

    Check the array status:

    mdadm -D /dev/mdN
    mdadm -D /dev/md2 #example
    

    If you see a faulty spare, like:

    Number   Major   Minor   RaidDevice State
         3       8       17        0      active sync   /dev/sdb1
         1       0        0        1      removed
         2       8       33        -      faulty spare   /dev/sdc1
    

    You need to remove and re-add it. If you see only one volume and cannot see the other, simply add the second partition without removing anything (there is nothing to remove).

    sdX1 hot remove:

    root@ubuntumdraidtest:~# mdadm --manage /dev/mdN -r /dev/sdX1
    mdadm: hot removed /dev/sdX1 from /dev/mdN
    

    sdX1 add:

    root@ubuntumdraidtest:~# mdadm --manage /dev/mdN -a /dev/sdX1
    mdadm: added /dev/sdX1
    

    Visualization

    cat /proc/mdstat shows:

    md0 : active raid1 sdc1[2] sdb1[3]
          2095040 blocks super 1.2 [2/1] [U_]
          [>....................]  recovery =  0.4% (9600/2095040) finish=3.6min speed=9600K/sec
    ---omitted---
    



    Unmount the volume first:

    umount /dev/mdN
    umount /dev/sdX1
    mdadm --assemble --run --force --update=resync /dev/mdN /dev/sdX1 /dev/sdY1
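
    If mdadm complains that the device is busy, stop the array first, as noted above:

    mdadm --stop /dev/mdN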
    
    

    NOTE: How do you know how to start the resync correctly? This is simple. Just take a look at the mdstat:

    [root@main /]# cat /proc/mdstat
    Personalities : [raid1]
    md2 : active raid1 sdb4[1]
          1885593600 blocks super 1.1 [2/1] [_U]
          bitmap: 8/15 pages [32KB], 65536KB chunk

    md0 : active raid1 sdb2[1]
          51199872 blocks super 1.0 [2/1] [_U]
          bitmap: 1/1 pages [4KB], 65536KB chunk

    md1 : active raid1 sdb3[1] sda3[0]
          16375808 blocks super 1.1 [2/2] [UU]
    
    [root@main /]# df -h
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/md0         48G   12G   34G  26% /
    tmpfs           7.7G     0  7.7G   0% /dev/shm
    /dev/sdb1       200M  280K  200M   1% /boot/efi
    /dev/md2        1.8T  707G  974G  43% /vz
    
    

    From this output you can see that only the md1 volume is currently synced correctly.

    md2 and md0 are not synced: only the sdbX partitions are present in the output, with no sdaX. You can also note the [_U] flag near the affected volumes instead of the [UU] shown when a volume is healthy.

    Start the resync in this case:

     umount /dev/md2
     umount /dev/sdb4
     mdadm --assemble --run --force --update=resync /dev/md2 /dev/sdb4 /dev/sda4
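
    Then monitor the rebuild progress as before, and wait for the [UU] flags to reappear:

    watch cat /proc/mdstat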
    
    


    Add/remove from md: sda1 hot remove:

    mdadm --manage /dev/mdN -r /dev/sda1
    mdadm: hot removed /dev/sda1 from /dev/mdN
    

    sda1 add:

    mdadm --manage /dev/mdN -a /dev/sda1
    mdadm: added /dev/sda1
    


    For more details about the CentOS 6 software RAID-1 install, refer to the official guide: https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-raid-config.html

