https://wiki.archlinux.org/index.php/Installing_with_Fake_RAID
The purpose of this guide is to enable use of a RAID set created by the on-board BIOS RAID controller and thereby allow dual-booting of Linux and Windows from partitions inside the RAID set using GRUB. When using so-called "fake RAID" or "host RAID", the disc sets are reached through device-mapper nodes under /dev/mapper/ rather than as plain disks.
From Wikipedia:
- Operating system-based RAID doesn't always protect the boot process and is generally impractical on desktop versions of Windows. Hardware RAID controllers are expensive and proprietary. To fill this gap, cheap "RAID controllers" were introduced that do not contain a RAID controller chip, but simply a standard disk controller chip with special firmware and drivers. During early stage boot-up, the RAID is implemented by the firmware. When a protected-mode operating system kernel such as Linux or a modern version of Microsoft Windows is loaded, the drivers take over.
- These controllers are described by their manufacturers as RAID controllers, and it is rarely made clear to purchasers that the burden of RAID processing is borne by the host computer's central processing unit -- not the RAID controller itself -- thus introducing the aforementioned CPU overhead which hardware controllers do not suffer from. Firmware controllers often can only use certain types of hard drives in their RAID arrays (e.g. SATA for Intel Matrix RAID, as there is neither SCSI nor PATA support in modern Intel ICH southbridges; however, motherboard makers implement RAID controllers outside of the southbridge on some motherboards). Before their introduction, a "RAID controller" implied that the controller did the processing, and the new type has become known in technically knowledgeable circles as "fake RAID" even though the RAID itself is implemented correctly. Adaptec calls them "host RAID". (Wikipedia: RAID)
Despite the terminology, "fake RAID" via dmraid is a robust software RAID implementation that offers a solid system to mirror or stripe data across multiple disks with negligible overhead for any modern system. dmraid is comparable to mdraid (pure Linux software RAID) with the added benefit of being able to completely rebuild a drive after a failure before the system is ever booted.
History
In Linux 2.4, the ATARAID kernel framework provided support for fake RAID (software RAID assisted by the BIOS). For Linux 2.6 the device-mapper framework can, among other nice things like LVM and EVMS, do the same kind of work as ATARAID in 2.4. Whilst the new code handling the RAID I/O still runs in the kernel, device-mapper is generally configured by a userspace application. It was clear that when using the device-mapper for RAID, detection would go to userspace.
Heinz Maulshagen created the dmraid tool to detect RAID sets and create mappings for them. The controllers supported are (mostly cheap) fake RAID IDE/SATA controllers which contain BIOS functions. Common examples include: Promise FastTrak controllers; HighPoint HPT37x; Intel Matrix RAID; Silicon Image Medley; and NVIDIA nForce.
Information on supported hardware
RAID/Onboard @ Gentoo Linux Wiki
Backup
Back up all important data before proceeding; everything on the partitions used for the RAID set will be destroyed.
Outline
- Preparation
- Boot the installer
- Load dmraid
- Perform traditional installation
- Install GRUB
Preparation
- Open up any needed guides (e.g. Beginners' Guide, Official Arch Linux Install Guide) on another machine. If you do not have access to another machine, print them out.
- Download the latest Arch Linux install image.
- Backup all important files since everything on the target partitions will be destroyed.
Configure RAID sets
- Enter your BIOS setup and enable the RAID controller.
- The BIOS may contain an option to configure SATA drives as "IDE", "AHCI", or "RAID"; ensure "RAID" is selected.
- Save and exit the BIOS setup. During boot, enter the RAID setup utility.
- The RAID utility is usually either accessible via the boot menu (often F8, F10 or CTRL+I) or whilst the RAID controller is initializing.
- Use the RAID setup utility to create preferred stripe/mirror sets.
Boot the installer
See Official Arch Linux Install Guide#Pre-Installation for details.
Load dmraid
Load device-mapper and find RAID sets:
# modprobe dm_mod
# dmraid -ay
# ls -la /dev/mapper/
Example output:
/dev/mapper/control
/dev/mapper/sil_aiageicechah
/dev/mapper/sil_aiageicechah1
If a functioning RAID set is present, device-mapper will have created a node for the set (here sil_aiageicechah, a RAID set on a Silicon Image SATA controller in this example) and, if a partition already exists on the set, a node for its first partition (sil_aiageicechah1).
If there is only one file (/dev/mapper/control), check whether your controller chipset module is loaded with lsmod. If it is, then dmraid does not support this controller or there are no RAID sets on the system (check the RAID BIOS setup again). If that is the case, you may be forced to use software RAID (this means no dual-booted RAID system on this controller). If your chipset module is NOT loaded, load it now. For example:
# modprobe sata_sil
See /lib/modules/`uname -r`/kernel/drivers/ata/ for available drivers.
To test the RAID sets:
# dmraid -tay
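Optionally, you can also inspect what dmraid has detected before going further. This is only a sketch of two commonly used queries; the exact output format varies by controller:
# dmraid -r    # list the block devices that carry RAID metadata
# dmraid -s    # show the discovered RAID sets and their status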
Perform traditional installation
Switch to tty2 and start the installer:
# /arch/setup
Partition the RAID set
- Under Prepare Hard Drive choose Manually partition hard drives since the Auto-prepare option will not find your RAID sets.
- Choose OTHER and type in your RAID set's full path (e.g. /dev/mapper/sil_aiageicechah). Switch back to tty1 to check your spelling.
- Create the proper partitions the normal way (an example of the resulting device nodes is shown after this list).
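For orientation only, reusing the illustrative Silicon Image set name from above and assuming two partitions were created, the resulting device nodes would look roughly like this:
# ls /dev/mapper/
control  sil_aiageicechah  sil_aiageicechah1  sil_aiageicechah2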
Mounting the filesystem
If -- and this is probably the case -- you do not find your newly created partitions under Manually configure block devices, filesystems and mountpoints:
- Switch back to tty1.
- Deactivate all device-mapper nodes:
# dmsetup remove_all
- Reactivate the newly-created RAID nodes:
# dmraid -ay
# ls -la /dev/mapper
- Switch to tty2, re-enter the Manually configure block devices, filesystems and mountpoints menu and the partitions should be available.
Install and configure Arch
Re-activate the installer (tty2) and proceed as normal with the following exceptions:
- Select Packages
- Ensure dmraid is marked for installation
- Configure System
- Add dm_mod to the MODULES line in mkinitcpio.conf. If using a mirrored (RAID 1) array, additionally add dm_mirror
- Add your chipset_module_driver to the MODULES line if necessary
- Add dmraid to the HOOKS line in mkinitcpio.conf, preferably after sata but before filesystems (see the example excerpt after this list)
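As a minimal sketch only (sata_sil is just the example chipset module used earlier and must be replaced with the driver for your controller; dm_mirror is only needed for RAID 1), the relevant mkinitcpio.conf lines could look like this:
MODULES="dm_mod dm_mirror sata_sil"
HOOKS="base udev autodetect pata scsi sata dmraid filesystems"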
Install bootloader
Use GRUB2
Please read GRUB2 for more information about configuring GRUB2. Currently, the latest version of grub-bios is not compatible with fake RAID. You will know you are affected if you get an error like this when you run grub-install:
$ grub-install /dev/mapper/sil_aiageicechah
Path `/boot/grub` is not readable by GRUB on boot. Installation is impossible. Aborting.
In that case you can try an older version of GRUB. Old packages are available from the Arch Rollback Machine (ARM); read Downgrade for more information.
1. Download an old version of the grub packages:
i686:
http://arm.konnichi.com/extra/os/i686/grub2-bios-1:1.99-6-i686.pkg.tar.xz
http://arm.konnichi.com/extra/os/i686/grub2-common-1:1.99-6-i686.pkg.tar.xz
x86_64:
http://arm.konnichi.com/extra/os/x86_64/grub2-bios-1:1.99-6-x86_64.pkg.tar.xz
http://arm.konnichi.com/extra/os/x86_64/grub2-common-1:1.99-6-x86_64.pkg.tar.xz
If you want to be careful, verify these packages against their .sig files.
2. Install the old packages with "pacman -U *.pkg.tar.xz"
3. (Optional) Install os-prober if you have another OS such as Windows.
4. $ grub-install /dev/mapper/sil_aiageicechah
5. $ grub-mkconfig -o /boot/grub/grub.cfg
6. (Optional) Put grub2-bios and grub2-common in the IgnorePkg array in /etc/pacman.conf if you do not want pacman to upgrade them (see the sketch below).
That's all; grub-mkconfig generates the configuration automatically. You can edit /etc/default/grub to adjust the configuration (timeout, colors, etc.) before running grub-mkconfig.
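For step 6, a minimal sketch of the corresponding line in the [options] section of /etc/pacman.conf:
IgnorePkg = grub2-bios grub2-common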
Use GRUB Legacy (Deprecated)
Please read GRUB Legacy for more information about configuring GRUB Legacy. For example, if you created logical partitions (the equivalent of sda5, sda6, sda7, etc.) on the set nvidia_fffadgic that were mapped as:
/dev/mapper device    Linux mount point    GRUB partition number
nvidia_fffadgic5      /                    4
nvidia_fffadgic6      /boot                5
nvidia_fffadgic7      /home                6
then the correct root designation would be (hd0,5) in this example. After saving the configuration file, the GRUB installer will FAIL. However, it will still copy files to /boot. DO NOT GIVE UP AND REBOOT -- just follow the directions below:
- Switch to tty1 and chroot into our installed system:
# mount -o bind /dev /mnt/dev
# mount -t proc none /mnt/proc
# mount -t sysfs none /mnt/sys
# chroot /mnt /bin/bash
- Switch to tty3 and look up the geometry of the RAID set.
In order for cfdisk to find the array and provide the proper C H S information, you may need to start cfdisk with your RAID set as the first argument (i.e. cfdisk /dev/mapper/nvidia_fffadgic):
- The number of Cylinders, Heads and Sectors on the RAID set should be written at the top of the screen inside cfdisk. Note: cfdisk shows the information in H S C order, but grub requires you to enter the geometry information in C H S order.
- Example: 18079 255 63 for a RAID stripe of two 74GB Raptor discs.
- Example: 38914 255 63 for a RAID stripe of two 160GB laptop discs.
- GRUB will fail to properly read the drives; the geometry command must be used to manually direct GRUB:
- Switch to tty1, the chrooted environment.
- Install GRUB on /dev/mapper/raidSet:
# dmsetup mknodes
# grub --device-map=/dev/null
grub> device (hd0) /dev/mapper/raidSet
grub> geometry (hd0) C H S
Exchange C H S above with the proper numbers (be aware: they are not entered in the same order as they are read from cfdisk). If the geometry is entered properly, GRUB will list the partitions found on this RAID set. You can confirm that GRUB is using the correct geometry and verify the proper GRUB root device to boot from by using the grub find command. If you have created a separate boot partition, search for /grub/stage1 with find; if you have no separate boot partition, search for /boot/grub/stage1. Examples:
grub> find /grub/stage1        # use when you have a separate boot partition
grub> find /boot/grub/stage1   # use when you have no separate boot partition
GRUB will report the proper device to designate as the GRUB root below (i.e. (hd0,0), (hd0,4), etc.). Then continue to install the bootloader into the Master Boot Record, changing "hd0" to "hd1" if required.
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
If your partition nodes were created with a "p" before the partition number (raidSetp1, raidSetp2, ...), the following loop creates matching raidSet1, raidSet2, ... symlinks alongside them:
# cd /dev/mapper
# for i in raidSetp*; do ln -s $i ${i/p/}; done
Lastly, if you have multiple dmraid devices with multiple sets of arrays set up (say: nvidia_fdaacfde and nvidia_fffadgic), then create the
/boot/grub/device.map
file to help GRUB retain its sanity when working with the arrays. All the file does is map the dmraid device to a traditional hd#. Using these dmraid devices, your device.map file will look like this:
(hd0) /dev/mapper/nvidia_fdaacfde
(hd1) /dev/mapper/nvidia_fffadgic
And now you are finished with the installation!
# reboot
Troubleshooting
Booting with degraded array
One drawback of the fake RAID approach on GNU/Linux is that dmraid is currently unable to handle degraded arrays, and will refuse to activate. In this scenario, one must resolve the problem from within another OS (e.g. Windows) or via the BIOS/chipset RAID utility. Alternatively, if using a mirrored (RAID 1) array, users may temporarily bypass dmraid during the boot process and boot from a single drive:
- Edit the kernel line from the GRUB menu (an example of the edited line follows this list)
  - Remove references to dmraid devices (e.g. change /dev/mapper/raidSet1 to /dev/sda1)
  - Append disablehooks=dmraid to prevent a kernel panic when dmraid discovers the degraded array
- Boot the system
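As a purely illustrative sketch of that edit (GRUB Legacy syntax; the kernel image path and the /dev/sda1 root device are assumptions and must match your system):
kernel /boot/vmlinuz-linux root=/dev/sda1 ro disablehooks=dmraid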
Error: Unable to determine major/minor number of root device
If you experience a boot failure after a kernel update where the boot process is unable to determine the major/minor number of the root device, this might just be a timing problem (i.e. dmraid -ay might be called before /dev/sd* is fully set up and detected). This can affect both the normal and LTS kernel images. Booting the 'Fallback' kernel image should work. The error will look something like this:
Activating dmraid arrays...
no block devices found
Waiting 10 seconds for device /dev/mapper/nvidia_baaccajap5
Root device '/dev/mapper/nvidia_baaccajap5' doesn't exist attempting to create it.
Error: Unable to determine major/minor number of root device '/dev/mapper/nvidia_baaccajap5'
To work around this problem:
- boot the Fallback kernel
- insert the 'sleep' hook in the HOOKS line of /etc/mkinitcpio.conf after the 'udev' hook like this:
HOOKS="base udev sleep autodetect pata scsi sata dmraid filesystems"
- rebuild the kernel image and reboot (see the command sketch after this list)
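A minimal sketch of the rebuild step, assuming the stock kernel preset is named linux (use linux-lts for the LTS image):
# mkinitcpio -p linux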
dmraid mirror fails to activate
Does everything above work correctly the first time, but then when you reboot dmraid cannot find the array? This is because Linux software RAID (mdadm) has already attempted to mount the fake RAID array during system init and left it in an unmountable state. To prevent mdadm from running, move the udev rule that is responsible out of the way:
# cd /lib/udev/rules.d
# mkdir disabled
# mv 64-md-raid.rules disabled/
# reboot
============================
============================
HOWTO: Linux Software Raid using mdadm
1) Introduction:
Recently I went out and bought myself a second hard drive with the purpose of setting myself up a performance raid (raid0). It took me days and a lot of messing about to get sorted, but once I figured out what I was doing I realised that it's actually relatively simple, so I've written this guide to share my experiences. I went for raid0 because I'm not too worried about losing data, but if you wanted to set up a raid 1, raid 5 or any other raid type then a lot of the information here would still apply.
2) 'Fake' raid vs Software raid:
When I bought my motherboard (the ASRock ConRoeXFire-eSATA2), one of the big selling points was an on-board raid; however, some research revealed that rather than being a true hardware raid controller, this was in fact more than likely what is known as 'fake' raid. I think wikipedia explains it quite well: http://en.wikipedia.org/wiki/Redunda...ependent_disks
Hybrid RAID implementations have become very popular with the introduction of inexpensive RAID controllers, implemented using a standard disk controller and BIOS (software) extensions to provide the RAID functionality. The operating system requires specialized RAID device drivers that present the array as a single block based logical disk. Since these controllers actually do all calculations in software, not hardware, they are often called "fakeraids", and have almost all the disadvantages of both hardware and software RAID.
- Create two identical partitions
- Tell the software what the name of the new raid array is going to be, what partitions we are going to use, and what type of array we are creating (raid 0, raid 1 etc...)
Code:
ubuntu-6.10-alternate-amd64.iso
Code:
sudo apt-get update
sudo apt-get install mdadm
- If (like me) you are going for a performance raid, then you will need to create a separate /boot partition, otherwise grub won't be able to boot - it doesn't have the drivers needed to access raid 0 arrays. It sounds simple, but it took me so long to figure out. (A sketch of one such layout follows this list.)
- If, on the other hand, you are doing a server installation (for example) using raid 1 / 5 and the goal is reliability, then you probably want the computer to be able to boot up even if one of the disks is down. In this situation you need to do something different with the /boot partition again. I'm not sure how it works myself, as I've never used raid 1, but you can find some more information in the links at the end of this guide. Perhaps I'll have a play around and add this to the guide later on, for completeness sake.
- If you are looking for performance, then there isn't a whole load of point creating a raid array for swap space. The kernel can manage multiple swap spaces by itself (we will come onto that later).
- Again, if you are looking for reliability however, then you may want to build a raid partition for your swap space, to prevent crashes should one of your drives fail. Again, look for more information in the links at the end.
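For illustration only (drive names, sizes and filesystems below are assumptions, not part of the original guide), a performance-oriented raid0 layout along the lines described above could look like this:
/dev/sda1                 /boot  ext3    small non-raid partition so grub can boot
/dev/sda5, /dev/sdb5      swap           one per drive, given equal priority later
/dev/sda6 + /dev/sdb6 ->  /dev/md0       raid0 array, mounted as /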
Remember to check which partition numbers you are going to use in which raid arrays - if you forget, go back a few times to return to the partition editor screen where you can see what's going on.
Tell the installer how to use the raid devices
Once you are done, hit finish - you will be taken back to the partitioner where you should see some new raid devices listed. Configure these in the same way you would other partitions - set their mount points, and decide on their filesystem type.
Finish the installation
Once you are done setting up these raid devices (and any swap / boot partitions you decide to keep as non-raid), the installation should run smoothly.
6) Configuring Swap Space
I mentioned before that the linux kernel automatically manages multiple swap partitions, meaning you can spread swap partitions across multiple drives for a performance boost without needing to create a raid array. A slight tweak may be needed however; each swap partition has a priority, and if you want the kernel to use both at the same time, you need to set the priority of each swap partition to be the same. First, type swapon -s to see your current swap usage. Mine outputs the following:
Code:
Filename   Type       Size    Used   Priority
/dev/sda5  partition  979956  39080  -1
/dev/sdb5  partition  979956  0      -2
As you can see, the second swap partition isn't being used at the moment, and won't be until the first one is full. I want a performance gain, so I need to fix this by setting the priority of each partition to be the same. Do this in /etc/fstab, by adding pri=1 as an option to each of your swap partitions. My /etc/fstab file now looks like this:
Code:
# /dev/sda5
UUID=551aaf44-5a69-496c-8d1b-28a228489404 none swap sw,pri=1 0 0
# /dev/sdb5
UUID=807ff017-a9e7-4d25-9ad7-41fdba374820 none swap sw,pri=1 0 0
7) How to do things manually
As I mentioned earlier, if you ever boot into your installation with a live cd, you will need to install mdadm to be able to access your raid devices, so it's a good idea to at least roughly know how mdadm works.
http://man-wiki.net/index.php/8:mdadm has some detailed information, but the important options are simply:
- -A, --assemble Assemble a pre-existing array that was previously created with --create.
- -C, --create Create a new array. You only ever need to do this once; if you try to create arrays with partitions that are part of other arrays, mdadm will warn you.
- --stop Stop an assembled array. The array must be unmounted before this will work.
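Not part of the option list above, but handy alongside it: two status queries (shown as a sketch; both require mdadm to be installed) that show what arrays the kernel currently knows about:
Code:
cat /proc/mdstat
sudo mdadm --detail /dev/md0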
When using --create, the options are:
Code:
mdadm --create md-device --chunk=X --level=Y --raid-devices=Z devices
- -c, --chunk= Specify chunk size in kibibytes. The default is 64.
- -l, --level= Set raid level; options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, multipath, mp, faulty.
- -n, --raid-devices= Specify the number of active devices in the array.
for example:
Code:
mdadm --create /dev/md0 --chunk=4 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
will create a raid0 array /dev/md0 formed from /dev/sda1 and /dev/sdb1, with chunk size 4.
When using --assemble, the usage is simply:
Code:
mdadm --assemble md-device component-devices
for example:
Code:
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
which will assemble the raid array /dev/md0 from the partitions /dev/sda1 and /dev/sdb1. Alternatively you can use:
Code:
mdadm --assemble --scan
and it will assemble any raid arrays it can detect automatically. Lastly,
Code:
mdadm --stop /dev/md0
will stop the assembled array md0, so long as it's not mounted.
If you wish you can set the partitions up yourself manually using fdisk and mdadm from the command line. Either boot up a desktop live cd and apt-get mdadm as described before, or boot up the alternate installer and hit escape until you see a list of the different stages of installation - the bottom one should read execute shell - which will drop you at a shell with fdisk, mdadm and mkfs etc. available. Note that if you ever need to create another raid partition, you create filesystems on it in exactly the same way you would a normal physical partition. For example, to create an ext3 filesystem on /dev/md0 I would use:
Code:
mkfs.ext3 /dev/md0
And to create a swapspace on /dev/sda7 I would use:
Code:
mkswap /dev/sda7
Lastly, mdadm has a configuration file located at
Code:
/etc/mdadm/mdadm.conf
This file is usually automatically generated, and mdadm will probably work fine without it anyway. If you're interested then http://man-wiki.net/index.php/5:mdadm.conf has some more information. And that's pretty much it. As long as you have mdadm available, you can create / assemble raid arrays out of identical partitions. Once you've assembled the array, treat it the same way you would a partition on a physical disk, and you can't really go wrong! I hope this has helped someone! At the moment I've omitted certain aspects of dealing with raids with redundancy (like raid 1 and raid 5), such as rebuilding failed arrays, simply because I've never done it before. Again, I may have a play around and add some more information later (for completeness), or if anyone else running a raid 1 wants to contribute, it would be most welcome.
Other links
The Linux Software Raid Howto: http://tldp.org/HOWTO/Software-RAID-HOWTO.html (this guide refers to a package "raidtools2" which I couldn't find in the Ubuntu repositories - use mdadm instead, it does the same thing)
Quick HOWTO: Linux Software Raid: http://www.linuxhomenetworking.com/w..._Software_RAID
Using mdadm to manage Linux Software Raid arrays: http://www.linuxdevcenter.com/pub/a/...2/05/RAID.html
Ubuntu Fake Raid HOWTO in the community contributed documentation: https://help.ubuntu.com/community/Fa...ght=%28raid%29
============================
https://bugs.launchpad.net/linuxmint/+bug/682315
dmraid and kpart?
Code:
$ sudo dmraid -a y
$ sudo dmraid -r
$ sudo dmraid -l
To fix this problem fast I added "dmraid -ay" at the bottom of the script, so it did the trick for me, but somebody should look deeper into this.
======================
http://forums.linuxmint.com/viewtopic.php?f=46&t=105725
Do you have to use bios raid? Linux raid, mdadm, is recommended. The problem you are having is probably due to your Grub and/or your OS not having dmraid support. This is because the installer only does half the job. Presumably you booted the live CD and installed dmraid, then installed to the raid device and the Grub install failed.
At this point you need to chroot into the raid and install dmraid, update the initramfs and reconfigure grub (https://help.ubuntu.com/community/Grub2 ... ing#ChRoot):
sudo apt-get install dmraid
sudo update-initramfs -u
sudo dpkg-reconfigure grub-pc
---------------------
How-to install LinuxMint or Ubuntu on softraid / fakeraid the simple way:
- Install Linux using the default installer program (let it do the partitioning unless you have specific requirements)
- When grub fails at the end of the install, select install grub manually
- chroot into system installed on raid (fake or soft) https://help.ubuntu.com/community/Grub2/Installing#ChRoot
- grub-install on the TOP level partition of the raid (see the sketch after this list)
- update-grub
- Reboot and enjoy linux
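As a sketch of the grub-install and update-grub steps above (the set name isw_beaicedefg_Volume0 is a made-up example; run these from inside the chroot described in the linked ChRoot guide):
Code:
grub-install /dev/mapper/isw_beaicedefg_Volume0
update-grub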
=======================
http://ubuntuforums.org/showthread.php?t=2089865&page=2
One possibility is a partition table irregularity which confuses the installer. Fairly easily fixed, but in case your drive was not used in a RAID, let's see if this is so. Close the installer and open a terminal. Run this command and post the output:
Code:
sudo fdisk -lu
and then
Code:
sudo parted -l
I have suggested both as you may have GPT partitions and a UEFI BIOS.
----------------------
This is an Intel SRT system. Those tend to be a large hard drive plus a small SSD used to speed Windows up. Each vendor is slightly different, but the Intel part is the same. Some people delete the SRT volume and install Linux to the SSD; others turn off SRT in Windows and have to use the RAID commands to remove the RAID settings.
Then after the install they seem to be able to re-implement the SRT, but I do not think Linux then sees Windows.
Intel Smart Response Technology: http://www.intel.com/p/en_US/support...ts/chpsts/imsm
Intel SRT - Dell XPS: http://ubuntuforums.org/showthread.php?t=2038121
http://ubuntuforums.org/showthread.php?t=2036204
http://ubuntuforums.org/showthread.php?t=2020155
Some info on re-instating: http://ubuntuforums.org/showthread.php?t=2038121
http://ubuntuforums.org/showthread.php?t=2070491
Disable the RAID; for me it was a matter of opening the Intel Rapid Storage management utility and telling it to disable the acceleration or the use of the SSD. If you have a different system, just disable the RAID system, then install Ubuntu. Once installed you can then re-enable it.
You will need to use the dmraid command prior to running the Ubuntu installer so that it will be able to see the partitions on the drive; otherwise, with the raid metadata in place, it will see the drive as part of a raid set and ignore its partitions.
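A minimal sketch of that step, run from the live session before starting the installer (this assumes the dmraid package is already installed in the live environment):
Code:
sudo dmraid -ay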
Try first to remove the dmraid package from Linux.
Code: sudo apt-get remove dmraid