UbuntuHelp:Installation/SoftwareRAID

How to install Ubuntu onto a Linux Software RAID system.

Note: https://wiki.ubuntu.com/BootDegradedRaid seems to indicate that many or all of the problems mentioned here have been fixed in Intrepid.

Introduction

RAID is a method of using multiple hard drives to act as one, reducing the probability of catastrophic data loss in case of drive failure. RAID is implemented in either software (where the operating system knows about all the drives and actively maintains them) or hardware (where a special controller makes the OS think there is only one drive and maintains the drives 'invisibly').

The RAID support included with current versions of Linux (and Ubuntu) is based on the kernel 'md' driver, managed with the 'mdadm' tool, and works very well, often better than many so-called 'hardware' RAID controllers.

NOTE: Many aftermarket motherboards tout 'hardware RAID', but it really is not. It is a software driver with a slight hardware 'assist' from the motherboard; these systems are known as "FakeRAID" in the Linux community. If you are doing a new install, it is better to use the standard Linux drivers. If you are trying to dual-boot an existing FakeRAID setup, or you insist on using FakeRAID even on a new install, you need to follow these instructions instead: FakeRaidHowto.

Requirements

  • The "Alternate" install CD for *buntu if you're building a desktop system. If you're building a server, the server install CD includes the necessary options. Getting Ubuntu Alternate Install disk
  • At least two hard drives, preferably the same model, size, etc.

After a successful install, you should also manually fix two shortcomings in the default configuration:

  • Install GRUB boot-loader on second drive
  • Update startup script to detect a failed drive

Installing

See How to Burn an ISO. You must use either the Alternate CD image or the Server image to install Ubuntu on RAID. Follow the instructions for an Alternate install until you get to partitioning the disks: How to do a Ubuntu Alternate Install.

Partitioning the disk

These steps describe a two-hard-drive system in a RAID1 configuration (repeat the steps for additional hard drives).

Warning: the /boot filesystem cannot use any software RAID level other than 1 with the stock Ubuntu bootloader. If you want to use some other RAID level for most things, you will need to create separate partitions and make a RAID1 device for /boot.

Warning: this will remove all data on the hard drives. See DrivesAndPartitions for more information.
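
For example, if you want / on RAID5 across three disks but /boot on RAID1 to satisfy the warning above, a layout along the following lines would work; the device names and sizes here are only illustrative:

sda1 + sdb1 + sdc1   ->  md0  RAID1   mounted at /boot   (small, e.g. ~200 MB)
sda2 + sdb2 + sdc2   ->  md1  RAID5   mounted at /       (bulk of the disks)
sda3 + sdb3 + sdc3   ->  md2  RAID5   used as swap       (~1.5x the amount of RAM)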

  1. Select "Manual" as your partition method.
  2. Select your 1st hard drive, and agree to "Create a new empty partition table on this device ?"
  3. Repeat step 2 with your 2nd hard drive.
  4. Select the "FREE SPACE" on the 1st drive then select "Create a new partition"
  5. Select the size (as a suggestion, you normally want the root partition to take the major part of the hard drive, with a swap partition about 1.5 times the amount of RAM).
  6. Select Primary, then Beginning.
  7. Select the "Use as:" by default this is "Ext3 journalling file system" we want to change that to "physical volume for RAID"
  8. Select "bootable flag" and set it to "on"
  9. Select "Done setting up the partition"

 10. Repeat steps 4 to 9 for the 2nd hard drive and the other partitions.
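
If you prefer working from a shell (for example from a rescue or live environment), roughly the same layout can be created non-interactively with parted before running the installer. This is only a sketch; the disk names and the 90% split are assumptions:

# destroy the existing partition table and create RAID member partitions (all data is lost)
parted -s /dev/sda mklabel msdos
parted -s /dev/sda mkpart primary 0% 90%
parted -s /dev/sda mkpart primary 90% 100%
# mark both partitions as RAID members and make the first one bootable
parted -s /dev/sda set 1 raid on
parted -s /dev/sda set 1 boot on
parted -s /dev/sda set 2 raid on
# repeat the same commands for /dev/sdb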

Configuring the RAID

  1. Once you have completed your partitioning, select "Configure Software RAID" on the main "Partition Disks" page.
  2. Select "Yes"
  3. Select "Create new MD drive"
  4. Select RAID1, or the type of RAID you want (RAID0, RAID1, RAID5).
  5. Number of active devices: 2, or the number of hard drives you have.
  6. Number of spare devices: 0.
  7. Select which partitions to use. Generally they will be sda1 and sdb1, or hda1 and hdb1. The numbers will usually match, and the different letters refer to different hard drives.
  8. At this point the installation may become unresponsive; this is the hard drives already syncing. Repeat steps 3 to 7 for each pair of partitions you have created.
  9. Once done, select finish.
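
Behind the scenes, this installer step corresponds roughly to running mdadm by hand. As a sketch for a two-disk RAID1 (the device names are assumptions):

# build a two-device RAID1 array from the first partition of each disk
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1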

Formatting

You now have a list of your hard drives and your RAID devices. We will now format the RAID devices and set their mount points. Treat each RAID device as a local hard drive and format and mount it accordingly.

  1. Select Partition.
  2. Go to "Use as:" and select "Ext3 journalling file system" for your normal partitions, or "swap area" for your swap partition.
  3. If you selected Ext3, then select your mount point; if you only have one Ext3 partition, select /.
  4. Repeat for each RAID partition.

Select "Finish partitioning and write changes to disk" From this point on, your hard drive lights will probably be on continuously. This indicates the array syncing. The system can be used normally and even rebooted while the array syncs.

Boot Loader

Performing the above steps on Jaunty server, I did not experience any problems (described below) with the GRUB bootloader. However, I must admit that I created two RAIDs (a 100MB RAID0 for /boot, and a 240GB RAID0 for /), so by the time the installation completed, the 100MB RAID device had most likely finished syncing and the /boot partition could be found on both physical disks; that may be the reason I did not run into any problems.

Installation continues as normal until you have to install a bootloader. GRUB installation may appear to complete successfully, but it will not actually work. The key is that GRUB can't be installed on the RAID device. However, the individual raw partitions that make up the RAID device look just like ordinary non-RAIDed partitions to GRUB, so you need to unmount the RAID device and install GRUB manually. There are a few ways to go about this, but probably the easiest is to let installation finish and allow the system to begin rebooting. You'll end up at a

GRUB>

prompt. Now

GRUB> find /boot/grub/menu.lst

or if you have a separate /boot partition,

GRUB> find /grub/menu.lst

GRUB should respond with something like

(hd0,0)
(hd1,0)

The above says your menu.lst file was found in the first partition of each of your first two disks. Then, for each (hdX,Y) line you get in response, use this pattern

GRUB> root (hdX,Y)
GRUB> setup (hdX)
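
For example, if GRUB responded with (hd0,0) and (hd1,0) as above, the full sequence is just that pattern applied to each disk:

GRUB> root (hd0,0)
GRUB> setup (hd0)
GRUB> root (hd1,0)
GRUB> setup (hd1)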

When done, hit CTRL-ALT-DEL to reboot the machine again, and everything should work. (It is not clear whether the above steps need to be repeated after each kernel update, to reinstall GRUB on both physical disks.)

Note: I'm not 100% sure the above procedure works, because in my case there was probably already a GRUB hanging around in the MBR of (hd0), and I don't know whether the regular installer procedure will put one there when /boot is on RAID. Without GRUB in the MBR, you wouldn't even get to a GRUB> prompt at boot. If you want to be safe, please see my alternate method at http://techarcana.net/hydra/os-installation/#boot-loader, but if you try this simple approach please let me know whether it works for you.

If for some reason you are able to boot without being dropped into a GRUB> prompt, you'll still want to install GRUB in the MBR (usually) of each drive in the RAID1 array that holds /boot, rather than just the first drive, so that if one drive goes down you can still boot. So even if things seem to be working, it's a good idea to hit Escape at boot to bring up the GRUB menu, drop into a GRUB> prompt, and do the above exercise.

More info for Ubuntu 8.10: GRUB installs correctly. If you have any problems:

  1. Get hold of System Rescue Disk 1.1.5 (www.sysresccd.org)
  2. Boot from the rescue disk
  3. Stop the array, then mount each member partition in turn and install GRUB on its disk:

  mdadm --stop /dev/md0
  mkdir /data
  mount -t ext3 /dev/sda1 /data
  grub-install --recheck --root-directory=/data /dev/sda
  umount /data
  mount -t ext3 /dev/sdb1 /data
  grub-install --recheck --root-directory=/data /dev/sdb
  umount /data

It appears that grub-install will only write to a mounted disk. There is no warning of failure if you run grub-install without first mounting the partition on the device.
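
One quick sanity check that GRUB really landed in a drive's MBR is to look for the GRUB signature string in its first sector (a rough check only; adjust the device name for each disk):

# dump the first 512 bytes of the disk and look for the GRUB stage1 string
sudo dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep GRUB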

Updating startup script

Every time the computer boots, it scans the available hard drives to try to identify any RAID array. Most of the time this is easy and takes place instantaneously. If one of the drives is unusable, however, then the computer needs to operate the remaining drive in 'degraded' mode. If the bad drive failed during regular operation, the computer will have already removed it from the configuration of the array. If the drive had previously been working, however, and failed spontaneously during power-up or boot-up, then the computer needs to figure that out on the fly.

The script which tries to detect a failed drive is part of the 'initramfs' boot image and, as of Ubuntu 8.04, the default code in this script completely fails in that situation. The procedure below adds an additional step so that it will succeed.

1. Update the 'initramfs' boot script,

> gksudo gedit /usr/share/initramfs-tools/scripts/local

2. Find the comment,

# We've given up, but we'll let the user fix matters if they can

3. Just *before* this comment, add the following:

# The following code was added to allow degraded RAID arrays to start
if [ ! -e "${ROOT}" ] || ! /lib/udev/vol_id "${ROOT}" >/dev/null 2>&1; then
    # Try mdadm and allow degraded arrays to start in case a drive has failed
    log_begin_msg "Attempting to start RAID arrays and allow degraded arrays"
    /sbin/mdadm --assemble --scan
    log_end_msg
    # If you use a logical volume on a RAID partition, it is better to wait a few seconds here or the boot will fail!
    sleep 10
fi

3.a) It would be better to place this in your personal configuration, because that is safer across updates. Shell scripts in local-top are run from the local script above. Create the script:

> gksudo gedit /etc/initramfs-tools/scripts/local-top/mdadm

Copy the code from step 3 (# The following code was added to allow degraded RAID arrays to start ...) into it as a shell script, and change the line:

/sbin/mdadm --assemble --scan
into
/sbin/mdadm --examine --scan | /sbin/mdadm --assemble --scan

because you may not have a valid mdadm.conf in the initramfs environment. (A complete example of such a script is sketched after step 5 below.)

4. Save the change and exit the editor.

5. Finally, update the boot image to use the updated script,

> sudo update-initramfs -u
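
Putting the pieces together, the resulting /etc/initramfs-tools/scripts/local-top/mdadm script might look roughly like the sketch below. The PREREQ/prereqs boilerplate is the standard initramfs-tools convention for local-top scripts, and the mdadm line is the substituted one from step 3.a; treat this as an illustration rather than a tested script:

#!/bin/sh
# Sketch of /etc/initramfs-tools/scripts/local-top/mdadm
PREREQ=""
prereqs()
{
    echo "$PREREQ"
}
case "$1" in
    prereqs)
        prereqs
        exit 0
        ;;
esac
. /scripts/functions

# The following code was added to allow degraded RAID arrays to start
if [ ! -e "${ROOT}" ] || ! /lib/udev/vol_id "${ROOT}" >/dev/null 2>&1; then
    # Try mdadm and allow degraded arrays to start in case a drive has failed
    log_begin_msg "Attempting to start RAID arrays and allow degraded arrays"
    /sbin/mdadm --examine --scan | /sbin/mdadm --assemble --scan
    log_end_msg
    # If you use a logical volume on a RAID partition, wait a few seconds or the boot may fail
    sleep 10
fi

Remember to make the script executable (for example, chmod +x /etc/initramfs-tools/scripts/local-top/mdadm) before running update-initramfs.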

Troubleshooting

Swap space doesn't come up; there is an error message in dmesg. Provided the RAID is working fine, this can be fixed with

> sudo update-initramfs -k all -u
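
After rebooting, you can confirm that the swap space is active again by listing the kernel's swap areas:

# the RAID swap device should be listed here once it is working
cat /proc/swaps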

Resources

Using mdadm

Checking the status of your RAID

Two useful commands to check the status are:

cat /proc/mdstat 

This will show output similar to


 Personalities : [raid1] [raid6] [raid5] [raid4] 
md5 : active raid1 sda7[0] sdb7[1]
      62685504 blocks [2/2] [UU]
      
md0 : active raid1 sda1[0] sdb1[1]
      256896 blocks [2/2] [UU]

md6 : active raid5 sdc1[0] sde1[2] sdd1[1]
      976767872 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

From this information you can see that the available personalities on this machine are raid1, raid6, raid5 and raid4, which means this machine is set up to use RAID devices in those configurations.

You can also see from the three example meta devices that there are two RAID1 mirrored meta devices, md0 and md5. md5 is a RAID1 array made up of /dev/sda partition 7 and /dev/sdb partition 7, containing 62685504 blocks, with 2 out of 2 disks available and both in sync. The same can be said of md0, only it is smaller (as you can see from the blocks parameter) and is made up of /dev/sda1 and /dev/sdb1.

md6 is different: it is a RAID5 array, striped across 3 disks. These are /dev/sdc1, /dev/sde1 and /dev/sdd1, with a 64k "chunk" size, which is basically a "write" size. Algorithm 2 is the write algorithm pattern, "left disk to right disk" writing across the array. You can see that all 3 disks are present and in sync.

sudo mdadm --query --detail /dev/md* 

(where * is the md device number)
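
mdadm is also the tool you would use if one of the member disks does fail. As a hedged sketch (the array and partition names are only examples):

# mark the bad member as failed, remove it, then add the replacement disk's partition
sudo mdadm /dev/md0 --fail /dev/sda1
sudo mdadm /dev/md0 --remove /dev/sda1
sudo mdadm /dev/md0 --add /dev/sda1
# watch the rebuild progress
cat /proc/mdstat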