
Friday, July 31, 2009

Software RAID

Operating-system-based ("software RAID")
Software implementations are now provided by many operating systems. A software layer sits above the (generally block-based) disk device drivers and provides an abstraction layer between the logical drives (RAIDs) and the physical drives. The most commonly supported levels are:
  • RAID 0 (striping across multiple drives for increased space and performance)
  • RAID 1 (mirroring two drives), along with the nested levels RAID 1+0 and RAID 0+1
  • RAID 5 (data striping with parity)
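
As a rough illustration of what these levels mean at the block level, here is a minimal Python sketch of the logical-to-physical address mapping for RAID 0 and RAID 1. The chunk size and drive names are hypothetical, and real implementations are far more involved; this only shows the shape of the mapping:

    # Minimal sketch of RAID 0/1 address mapping (illustrative only).
    CHUNK = 64 * 1024  # hypothetical 64 KiB chunk size, in bytes

    def raid0_map(offset, drives):
        """RAID 0: chunks are striped round-robin across the drives."""
        chunk_index = offset // CHUNK
        drive = drives[chunk_index % len(drives)]
        # Offset within that drive: full stripes laid down so far,
        # plus the position inside the current chunk.
        local = (chunk_index // len(drives)) * CHUNK + offset % CHUNK
        return drive, local

    def raid1_map(offset, drives):
        """RAID 1: every drive holds a full copy at the same offset."""
        return [(d, offset) for d in drives]

    print(raid0_map(300 * 1024, ["sda", "sdb"]))  # -> ('sda', 176128)
    print(raid1_map(300 * 1024, ["sda", "sdb"]))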
  • FreeBSD supports RAID 0, RAID 1, RAID 3, and RAID 5, and all layerings of the above, via GEOM modules[8][9] and ccd,[10] as well as RAID 0, RAID 1, RAID-Z, and RAID-Z2 (similar to RAID 5 and RAID 6 respectively), plus nested combinations of those, via ZFS.
  • Linux supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6 and all layerings of the above via its md driver[11][12] (see the mdadm sketch after this list).
  • Microsoft's server operating systems support three RAID levels: RAID 0, RAID 1, and RAID 5. Some of Microsoft's desktop operating systems offer partial support; Windows XP Professional, for example, supports RAID 0 (in addition to spanning multiple disks), but only when using dynamic disks and volumes, and it can be made to support RAID 0, 1, and 5 with a simple file patch.[13] RAID functionality in Windows is slower than hardware RAID, but allows a RAID array to be moved to another machine with no compatibility issues.
  • NetBSD supports RAID 0, RAID 1, RAID 4 and RAID 5 (and any nested combination of those like 1+0) via its software implementation, named RAIDframe.
  • OpenBSD aims to support RAID 0, RAID 1, RAID 4 and RAID 5 via its software implementation softraid.
  • OpenSolaris and Solaris 10 support RAID 0, RAID 1, RAID 5 (or the similar "RAID-Z" found only on ZFS), and RAID 6 (and any nested combination of those, like 1+0) via ZFS, and now have the ability to boot from a ZFS volume on both x86 and UltraSPARC. Through SVM, Solaris 10 and earlier versions support RAID 0, RAID 1, and RAID 5 on both system and data drives.
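
On Linux, for instance, the md driver is managed with the mdadm tool. Here is a hedged sketch of creating a two-disk mirror, driven from Python; the device names /dev/sdb1 and /dev/sdc1 are placeholders, the commands must run as root, and they will destroy any data on those partitions:

    # Sketch: create a two-disk RAID 1 on Linux with mdadm.
    # Device names are placeholders -- adjust before running.
    import subprocess

    subprocess.run(
        ["mdadm", "--create", "/dev/md0",
         "--level=1", "--raid-devices=2",
         "/dev/sdb1", "/dev/sdc1"],
        check=True,
    )

    # The kernel reports array state and resync progress here:
    print(open("/proc/mdstat").read())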

Software RAID has advantages and disadvantages compared to hardware RAID. The software must run on a host server attached to the storage, and the server's processor must dedicate processing time to the RAID software. This overhead is negligible for RAID 0 and RAID 1, but can become significant with parity-based arrays, when several arrays are accessed at the same time, or when many disks are involved. Furthermore, all the buses between the processor and the disk controller must carry the extra data required by RAID, which may cause congestion.
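
The parity overhead mentioned above is, at its core, an XOR across the data chunks of every stripe, which in software RAID falls on the host CPU. A small Python illustration follows; the drive count and chunk size are arbitrary, and a pure-Python loop is of course far slower than a kernel implementation, but the shape of the work is the same:

    # The per-stripe XOR that software RAID 5 pushes onto the host CPU.
    def raid5_parity(chunks):
        """XOR all data chunks of a stripe to get the parity chunk."""
        parity = bytearray(len(chunks[0]))
        for chunk in chunks:
            for i, byte in enumerate(chunk):
                parity[i] ^= byte
        return bytes(parity)

    # Three data drives, 64 KiB chunks:
    stripe = [bytes([d]) * 65536 for d in (1, 2, 3)]
    p = raid5_parity(stripe)

    # Any one lost chunk is recoverable by XOR-ing the survivors:
    assert raid5_parity([stripe[0], stripe[2], p]) == stripe[1]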
Another concern with operating-system-based RAID is the boot process. It can be difficult or impossible to set up the boot process so that it fails over to another drive if the usual boot drive fails, and such systems can require manual intervention to make the machine bootable again. There are exceptions: the LILO bootloader for Linux, the loader for FreeBSD,[14] and some configurations of the GRUB bootloader natively understand RAID 1 and can load a kernel from either mirror. If the BIOS recognizes a broken first disk and hands bootstrapping to the next disk, such a system will come up without intervention, but the BIOS might or might not do so as intended. A hardware RAID controller, by contrast, typically has explicit programming to decide that a disk is broken and fall through to the next one.
Hardware RAID controllers can also carry battery-backed cache memory. For data safety in modern systems, a software RAID user may need to turn off the write-back cache on each disk (although some drives have their own battery or capacitors backing the write-back cache, rely on a UPS, or implement atomicity in various other ways). Turning off the write cache carries a performance penalty that can be significant, depending on the workload and on how well the disk system supports command queuing. The battery-backed cache on a hardware RAID controller is one way to get a safe write-back cache.
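
On Linux with (S)ATA drives, the on-disk write-back cache can usually be toggled with hdparm. A short sketch (the device name is a placeholder, root is required, and not every drive honors the setting):

    # Sketch: turn off the disk's write-back cache with hdparm.
    import subprocess

    subprocess.run(["hdparm", "-W", "0", "/dev/sdb"], check=True)  # 0 = off, 1 = on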
Finally, operating-system-based RAID usually uses formats specific to the operating system in question, so it cannot generally be used for partitions that are shared between operating systems as part of a multi-boot setup. However, it does allow RAID disks to be moved from one computer to another with the same operating system or file system type, which can be more difficult with hardware RAID. For example, when one computer uses a hardware RAID controller from one manufacturer and another computer uses a controller from a different manufacturer, the drives typically cannot be interchanged; and if a hardware controller dies before the disks do, the data may be unrecoverable unless a controller of the same type is obtained, unlike with firmware- or software-based RAID.
Most operating-system-based implementations allow RAIDs to be created from partitions rather than entire physical drives. For instance, an administrator could divide an odd number of disks into two partitions per disk, mirror partitions across disks, and stripe a volume across the mirrored partitions to emulate IBM's RAID 1E configuration. Using partitions in this way also allows mixing reliability levels on the same set of disks. For example, one could have a very robust RAID 1 partition for important files and a less robust RAID 5 or RAID 0 partition for less important data. (Some BIOS-based controllers offer similar features, e.g. Intel Matrix RAID.) Using two partitions on the same drive in the same RAID is, however, dangerous:
e.g. #1: Having both halves of a RAID 1 mirror on the same drive will, obviously, make all the data inaccessible if that single drive fails.
e.g. #2: In a RAID 5 array composed of four drives of 250 + 250 + 250 + 500 GB, with the 500 GB drive split into two 250 GB partitions, a failure of that drive removes two members from the array at once, causing all of the data held in the array to be lost.
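
The arithmetic behind e.g. #2 is easy to verify: RAID 5 usable capacity is (n - 1) times the smallest member, and losing one physical drive that hosts two members exceeds the array's single-failure tolerance. A short Python check using the sizes above:

    # RAID 5 usable capacity: (n - 1) * smallest member.
    def raid5_capacity(member_sizes_gb):
        return (len(member_sizes_gb) - 1) * min(member_sizes_gb)

    # Five 250 GB members, but two of them are partitions of one 500 GB drive:
    members = [250, 250, 250, 250, 250]
    print(raid5_capacity(members), "GB usable")  # 1000 GB usable

    # If the 500 GB drive dies, two members vanish at once; RAID 5
    # tolerates only one lost member, so the whole array is lost.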
