Quick HOWTO : Ch26 : Linux Software RAID/zh

Introduction

The main aims of using a Redundant Array of Independent Disks (RAID) are to improve disk data performance and to provide data redundancy.

RAID can be handled either by the operating system software (software RAID) or by a hardware RAID controller without the operating system being involved (hardware RAID). This chapter explains how to configure software RAID on RedHat/Fedora Linux.

For the sake of simplicity, the chapter focuses on setting up RAID on partitions that hold neither the /boot nor the / (root) filesystem.

RAID Types

Whether hardware- or software-based, RAID can be configured using a number of different standards. Let's take a look at the most popular ones.

Linear RAID

In linear RAID, the RAID controller views the RAID set as a chain of disks. Data is written to the next device in the chain only after the previous one fills up. The aim of linear RAID is to accommodate large filesystems spread over multiple devices, and it provides no data redundancy. A drive failure will corrupt your data.

Linear mode RAID is not supported by Fedora Linux.

RAID 0

With RAID 0, the RAID controller tries to write data evenly across all the disks in the RAID set.

Envision a disk as a plate, and think of the data as a cake. You have four cakes - chocolate, vanilla, cherry and strawberry - and four plates. The initialization process of RAID 0 cuts the cakes into slices and distributes the slices across all the plates. The RAID 0 drivers make the operating system believe that the cakes are intact and placed on one large plate. For example, four 9GB hard disks configured in a RAID 0 set are seen by the operating system to be one 36GB disk.

Like linear RAID, RAID 0 aims to accommodate large filesystems spread over multiple disks, and it provides no data redundancy. The advantage of RAID 0 is data access speed: a file that is spread over four disks can be read four times as fast as one on a single disk. Remember that RAID 0 is often called striping.

RAID 0 can accommodate disks of unequal sizes. When RAID runs out of striping space on the smallest disk, it continues striping using the available space on the remaining disks. When this occurs, data access becomes slower for this portion of the data, because fewer RAID drives are contributing to it. For this reason, RAID 0 is best used with disks of equal size.

RAID 0 is supported by Fedora Linux. Figure 26.1 illustrates the data allocation process in RAID 0.
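As a quick sketch (not part of the chapter's later step-by-step setup), a RAID 0 set can be created under Linux with the mdadm tool. The array name /dev/md0 and the partitions /dev/sdb1 and /dev/sdc1 below are example names only; substitute unused partitions on your own system.

 # Create a two-disk striped (RAID 0) array; device names are illustrative only
 mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
 # Put a filesystem on the new array and check its status
 mkfs.ext3 /dev/md0
 cat /proc/mdstat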

RAID 1

With RAID 1, data is cloned on a duplicate disk. This RAID method is therefore frequently called disk mirroring. Think of telling two people the same story so that if one forgets some of the details you can ask the other one to remind you.

When one of the disks in the RAID set fails, the other one continues to function. When the failed disk is replaced, the data is automatically cloned to the new disk from the surviving disk. RAID 1 also offers the possibility of using a hot standby spare disk that will be automatically cloned in the event of a disk failure on any of the primary RAID devices.

RAID 1 offers data redundancy, without the speed advantages of RAID 0. A disadvantage of software-based RAID 1 is that the server has to send data twice to be written to each of the mirror disks. This can saturate the data buses and increase CPU load. With a hardware-based solution, the server CPU sends the data to the RAID disk controller once, and the disk controller then duplicates the data to the mirror disks. This makes RAID-capable disk controllers the preferred solution when implementing RAID 1.

A limitation of RAID 1 is that the total RAID size in gigabytes is equal to that of the smallest disk in the RAID set. Unlike RAID 0, the extra space on the larger device isn't used.

RAID 1 is supported by Fedora Linux. Figure 26.1 illustrates the data allocation process in RAID 1.

Figure 26-1 RAID 0 And RAID 1 Operation
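As a hedged illustration of the hot standby idea described above, a two-disk mirror with one spare can be assembled with mdadm. The device names /dev/md0, /dev/sdb1, /dev/sdc1 and /dev/sdd1 are placeholders for this sketch, not values prescribed by the chapter.

 # Create a mirrored (RAID 1) array with one hot spare; partition names are examples
 mdadm --create /dev/md0 --level=1 --raid-devices=2 \
       --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1
 # Watch the initial synchronization of the mirror
 cat /proc/mdstat

If one of the mirror disks later fails, the RAID driver automatically rebuilds onto the spare, which is the behavior described above.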


RAID 4

RAID 4 operates like RAID 0 but inserts a special error-correcting or parity chunk on an additional disk dedicated to this purpose.

RAID 4 requires at least three disks in the RAID set and can survive the loss of only a single drive. When a drive fails, the data on it can be recreated on the fly with the aid of the information on the RAID set's parity disk. When the failed disk is replaced, it is repopulated with the lost data using the parity disk's information.

RAID 4 combines the high speed of RAID 0 with the redundancy of RAID 1. Its major disadvantage is that the data is striped, but the parity information is not. In other words, any data written to any section of the data portion of the RAID set must be followed by an update of the parity disk. The parity disk can therefore become a bottleneck. For this reason, RAID 4 isn't used very frequently.

RAID 4 is not supported by Fedora Linux.

RAID 5

RAID 5 improves on RAID 4 by striping the parity data between all the disks in the RAID set. This avoids the parity disk bottleneck, while maintaining many of the speed features of RAID 0 and the redundancy of RAID 1. Like RAID 4, RAID 5 can survive the loss of a single disk only.

RAID 5 is supported by Fedora Linux. Figure 26.2 illustrates the data allocation process in RAID 5.

Linux RAID 5 requires a minimum of three disks or partitions.

Figure 26-2 RAID 5 Operation
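A minimal sketch, assuming three unused partitions with the hypothetical names /dev/sdb1, /dev/sdc1 and /dev/sdd1, of how such a three-disk RAID 5 set could be created with mdadm:

 # Create a three-disk RAID 5 array (the minimum for RAID 5); partition names are examples
 mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
 # Verify that the array is active and watch the parity initialization progress
 cat /proc/mdstat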