Notes on RAID for Ubuntu

Shorthand notes on how to set up Ubuntu Server on a system with RAID 1

Some notes:

  1. Target system has 12GB of RAM
  2. Target system has two 250GB drives (actually a 320 and a 250; I ignore the extra space for now)
  3. Target system will have a RAID controller and 4 more drives for data.

The dark secret is that the installer disk (for 9.10, anyway) will detect and install onto mdadm RAID if it exists. So the procedure is to partition the disks FIRST, then boot into the Ubuntu installer.

Partition the drives

Create 6 partitions on each drive, like this:

1    1GB  /boot        won't be RAID, so we can boot!
2   10GB  /            where the system will eventually be
3   10GB  not mounted  initial install location
4         EXT          extended partition (container for 5 and 6)
5   24GB  SWAP         won't be RAIDed, so we will have 48GB of swap
6  200GB  /var         will be the RAIDed /var partition

Set the partition type to Linux RAID autodetect (type FD) on partitions 2, 3, and 6 of both drives.
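
One way to set the types without stepping through the fdisk menus is sfdisk; a minimal sketch, assuming an older util-linux sfdisk (as shipped around 9.10) that supports --change-id:

sfdisk --change-id /dev/sda 2 fd
sfdisk --change-id /dev/sda 3 fd
sfdisk --change-id /dev/sda 6 fd
sfdisk --change-id /dev/sdb 2 fd
sfdisk --change-id /dev/sdb 3 fd
sfdisk --change-id /dev/sdb 6 fd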

The partitions are far bigger than they need to be, but the data on this system will live on an NFS server and on another RAID array to be installed later. Making the boot partitions 1GB means that, in a pinch, an entire copy of Linux can be installed there.

Create RAID filesystems

See http://www.linuxfoundation.org/collaborate/workgroups/linux-raid

First convert the second, third, and sixth partitions to RAID 1:

mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm --create /dev/md1 --level=mirror --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm --create /dev/md2 --level=mirror --raid-devices=2 /dev/sda6 /dev/sdb6

Make filesystems on the new RAID devices (or do this in the Ubuntu installer):

mke2fs -j /dev/md0
mke2fs -j /dev/md1
mke2fs -j /dev/md2

Install Ubuntu Server

Select MANUAL when you get to the disk partitioning.

  • Tell it to put /boot on /dev/sda1.
  • Tell it to use the EXT3 filesystem for /dev/md0, and to put / there.
  • Tell it to use EXT3 for /dev/md2, and to put /var there.
  • Ignore the other partitions; they are for later.

Put a copy of /boot on the other hard drive

I have found that drive failures don't happen often enough to really obsess over getting the /boot partition onto RAID; it's too much work. But it's nice to keep a copy on the other hard drive and set it up so you can boot from either one. This makes it possible to get the system going again if you lose drive sda.

mke2fs /dev/sdb1
mount /dev/sdb1 /mnt
cd /boot
tar cvf - . | (cd /mnt; tar xpf -)

You can go ahead and make the second drive bootable from here too if you want.
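
A minimal sketch, assuming the stock GRUB install on 9.10; this puts a boot loader on the second drive's MBR so the BIOS can fall back to it:

grub-install /dev/sdb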

Put a copy of / on the other RAID partition

You can also create a copy of / on the other RAID partition (/dev/md1), so you have a fallback root filesystem.
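
A sketch of the copy, reusing the tar trick from the /boot step; doing it from a rescue disk keeps the source filesystem quiet (assuming / is on /dev/md0 and the spare partition is /dev/md1):

mkdir /tmp/root /tmp/raidroot
mount /dev/md0 /tmp/root
mount /dev/md1 /tmp/raidroot
cd /tmp/root
tar cvf - . | (cd /tmp/raidroot; tar xpf -)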

Edit the /etc/fstab inside the new copy so it mounts the right root device.
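
Use the blkid command to find a UUID for a filesystem, then point the copy's root entry at it; the fstab line below is a placeholder sketch:

blkid /dev/md1
# in /tmp/raidroot/etc/fstab:
# UUID=<uuid-from-blkid>  /  ext3  errors=remount-ro  0  1

You probably can't boot into the copy yet: it also needs a boot menu entry, and possibly an update-initramfs run from inside it.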

Reboot to test.

Loose ends

You can check /proc/mdstat to see if RAID reconstruction is complete.

cat /proc/mdstat

Having the operating system on both /dev/md0 and /dev/md1 gives you a way to make major changes when testing new things and still have a fallback position. You have several options then; for example, a cautious approach is to sync the partitions from a rescue disk before doing anything major. Or you can install a completely new OS onto the spare partition, get it all set up, and make it the new root partition. When you're done testing, you sync back to the old root. Note that all this requires some care in managing /etc/fstab.
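
A sketch of the cautious sync, run from a rescue disk; rsync is an assumption on my part (the tar pipeline above works too), with /dev/md0 as the current root and /dev/md1 as the spare:

mkdir /tmp/cur /tmp/spare
mount /dev/md0 /tmp/cur
mount /dev/md1 /tmp/spare
# -a archive, -H keep hard links, -x stay on one filesystem, --delete drop stale files
rsync -aHx --delete /tmp/cur/ /tmp/spare/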