
Tuesday, May 21, 2013

fakeRAID linux


https://wiki.archlinux.org/index.php/Installing_with_Fake_RAID
The purpose of this guide is to enable use of a RAID set created by the on-board BIOS RAID controller and thereby allow dual-booting of Linux and Windows from partitions inside the RAID set using GRUB. When using so-called "fake RAID" or "host RAID", the disc sets are reached from within Linux via device-mapper nodes under /dev/mapper/ (for example, /dev/mapper/sil_aiageicechah).
What is "fake RAID"?
From Wikipedia:
Operating system-based RAID doesn't always protect the boot process and is generally impractical on desktop versions of Windows. Hardware RAID controllers are expensive and proprietary. To fill this gap, cheap "RAID controllers" were introduced that do not contain a RAID controller chip, but simply a standard disk controller chip with special firmware and drivers. During early stage boot-up, the RAID is implemented by the firmware. When a protected-mode operating system kernel such as Linux or a modern version of Microsoft Windows is loaded, the drivers take over.
These controllers are described by their manufacturers as RAID controllers, and it is rarely made clear to purchasers that the burden of RAID processing is borne by the host computer's central processing unit -- not the RAID controller itself -- thus introducing the aforementioned CPU overhead which hardware controllers do not suffer from. Firmware controllers often can only use certain types of hard drives in their RAID arrays (e.g. SATA for Intel Matrix RAID, as there is neither SCSI nor PATA support in modern Intel ICH southbridges; however, motherboard makers implement RAID controllers outside of the southbridge on some motherboards). Before their introduction, a "RAID controller" implied that the controller did the processing, and the new type has become known in technically knowledgeable circles as "fake RAID" even though the RAID itself is implemented correctly. Adaptec calls them "host RAID". (Wikipedia:RAID)
See Wikipedia:RAID or FakeRaidHowto @ Community Ubuntu Documentation for more information.
Despite the terminology, "fake RAID" via dmraid is a robust software RAID implementation that offers a solid system to mirror or stripe data across multiple disks with negligible overhead for any modern system. dmraid is comparable to mdraid (pure Linux software RAID) with the added benefit of being able to completely rebuild a drive after a failure before the system is ever booted.
History
In Linux 2.4, the ATARAID kernel framework provided support for fake RAID (software RAID assisted by the BIOS). For Linux 2.6 the device-mapper framework can, among other nice things like LVM and EVMS, do the same kind of work as ATARAID in 2.4. Whilst the new code handling the RAID I/O still runs in the kernel, device-mapper is generally configured by a userspace application. It was clear that when using the device-mapper for RAID, detection would go to userspace.
Heinz Maulshagen created the dmraid tool to detect RAID sets and create mappings for them. The controllers supported are (mostly cheap) fake RAID IDE/SATA controllers which contain BIOS functions. Common examples include: Promise FastTrak controllers; HighPoint HPT37x; Intel Matrix RAID; Silicon Image Medley; and NVIDIA nForce.
Information on supported hardware
RAID/Onboard @ Gentoo Linux Wiki

Backup

Warning: Backup all data before playing with RAID. Whatever you do with your hardware is your own responsibility. Data on RAID stripes is highly vulnerable to disc failures. Create regular backups or consider using mirror sets. Consider yourself warned!
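One possible approach (purely illustrative; the destination path is an assumption and not part of this guide) is a simple file-level copy of important data to an external drive before touching the RAID setup:
# rsync -aAX /home/ /mnt/backup/home/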

Outline

  • Preparation
  • Boot the installer
  • Load dmraid
  • Perform traditional installation
  • Install GRUB

Preparation

  • Open up any needed guides (e.g. the Beginners' Guide or the Official Arch Linux Install Guide) on another machine. If you do not have access to another machine, print them out.
  • Download the latest Arch Linux install image.
  • Backup all important files since everything on the target partitions will be destroyed.

Configure RAID sets

Warning: If your drives are not already configured as RAID and Windows is already installed, switching to "RAID" may cause Windows to BSOD during boot.[1]
  • Enter your BIOS setup and enable the RAID controller.
    • The BIOS may contain an option to configure SATA drives as "IDE", "AHCI", or "RAID"; ensure "RAID" is selected.
  • Save and exit the BIOS setup. During boot, enter the RAID setup utility.
    • The RAID utility is usually either accessible via the boot menu (often F8, F10 or CTRL+I) or whilst the RAID controller is initializing.
  • Use the RAID setup utility to create preferred stripe/mirror sets.
Tip: See your motherboard documentation for details. The exact procedure may vary.

Boot the installer

See Official Arch Linux Install Guide#Pre-Installation for details.

Load dmraid

Load device-mapper and find RAID sets:
# modprobe dm_mod
# dmraid -ay
# ls -la /dev/mapper/
Warning: The command "dmraid -ay" may fail right after booting the Arch Linux 2011.08.19 release, because that image's initial ramdisk environment does not support dmraid. You can use the older 2010.05 release instead; note that you must then correct the kernel and initrd names in GRUB's menu.lst after installing, as these releases use different naming.
Example output:
/dev/mapper/control            <- Created by device-mapper; if present, device-mapper is likely functioning
/dev/mapper/sil_aiageicechah   <- A RAID set on a Silicon Image SATA RAID controller
/dev/mapper/sil_aiageicechah1  <- First partition on this RAID set
If there is only one file (/dev/mapper/control), check whether your controller chipset module is loaded with lsmod.
If it is loaded, then either dmraid does not support this controller or there are no RAID sets on the system (check the RAID BIOS setup again). If so, you may be forced to use software RAID (which means no dual-booted RAID system on this controller).
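For example, with the Silicon Image controller used in the examples on this page, the check might look like this (the module name is illustrative):
# lsmod | grep sata_sil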
If your chipset module is NOT loaded, load it now. For example:
# modprobe sata_sil
See /lib/modules/`uname -r`/kernel/drivers/ata/ for available drivers. To test the RAID sets:
# dmraid -tay

Perform traditional installation

Switch to tty2 and start the installer:
# /arch/setup

Partition the RAID set

  • Under Prepare Hard Drive choose Manually partition hard drives since the Auto-prepare option will not find your RAID sets.
  • Choose OTHER and type in your RAID set's full path (e.g. /dev/mapper/sil_aiageicechah). Switch back to tty1 to check your spelling.
  • Create the proper partitions the normal way.
Tip: This would be a good time to install the "other" OS if planning to dual-boot. If installing Windows XP to "C:" then all partitions before the Windows partition should be changed to type [1B] (hidden FAT32) to hide them during the Windows installation. When this is done, change them back to type [83] (Linux). Of course, a reboot unfortunately requires some of the above steps to be repeated.
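As a sketch of the type change described above (the device name and partition number are illustrative only, and fdisk is just one tool that can do it):
# fdisk /dev/mapper/sil_aiageicechah
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): 1b
Command (m for help): w
Afterwards, repeat the procedure with hex code 83 to change the partition back to Linux.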

Mounting the filesystem

If -- and this is probably the case -- you do not find your newly created partitions under Manually configure block devices, filesystems and mountpoints:
  • Switch back to tty1.
  • Deactivate all device-mapper nodes:
# dmsetup remove_all
  • Reactivate the newly-created RAID nodes:
# dmraid -ay
# ls -la /dev/mapper
  • Switch to tty2, re-enter the Manually configure block devices, filesystems and mountpoints menu and the partitions should be available.
Warning: NEVER delete a partition in cfdisk to create two partitions with dmraid after Manually configure block devices, filesystems and mountpoints has been set.
(This really screws up the dmraid metadata, and the existing partitions become worthless.)
Solution: delete the array from the RAID BIOS and re-create it to force creation under a new /dev/mapper ID, then reinstall/repartition.

Install and configure Arch

Tip: Utilize three consoles: the setup GUI to configure the system, a chroot to install GRUB, and finally a cfdisk reference since RAID sets have weird names.
  • tty1: chroot and grub-install
  • tty2: /arch/setup
  • tty3: cfdisk for a reference in spelling, partition table and geometry of the RAID set
Leave these programs running and switch between them as needed.
Re-activate the installer (tty2) and proceed as normal with the following exceptions:
  • Select Packages
  • Ensure dmraid is marked for installation
  • Configure System
    • Add dm_mod to the MODULES line in mkinitcpio.conf. If using a mirrored (RAID 1) array, additionally add dm_mirror
    • Add chipset_module_driver to the MODULES line if necessary
    • Add dmraid to the HOOKS line in mkinitcpio.conf, preferably after sata but before filesystems (see the example below)
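For illustration only (a sketch, not taken from any particular system; sata_sil is a placeholder for whichever chipset module your controller needs), the edited lines in mkinitcpio.conf might end up looking like this:
MODULES="dm_mod dm_mirror sata_sil"
HOOKS="base udev autodetect pata scsi sata dmraid filesystems"
Omit dm_mirror if you are not using a RAID 1 set.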

Install bootloader

Use GRUB2

Please read GRUB2 for more information about configuring GRUB2. Currently, the latest version of grub-bios is not compatible with fake RAID. If you get an error like this when you run grub-install:
 $ grub-install /dev/mapper/sil_aiageicechah
 Path `/boot/grub` is not readable by GRUB on boot. Installation is impossible. Aborting.
You can try an older version of GRUB. Older grub packages can be found via ARM Search; read Downgrade for more information.
1. Download an old version of the grub packages:
  i686:
   http://arm.konnichi.com/extra/os/i686/grub2-bios-1:1.99-6-i686.pkg.tar.xz
   http://arm.konnichi.com/extra/os/i686/grub2-common-1:1.99-6-i686.pkg.tar.xz
  x86_64:
   http://arm.konnichi.com/extra/os/x86_64/grub2-bios-1:1.99-6-x86_64.pkg.tar.xz
   http://arm.konnichi.com/extra/os/x86_64/grub2-common-1:1.99-6-x86_64.pkg.tar.xz
  You can verify these packages against their .sig files if you wish.
2. Install these old versions with "pacman -U *.pkg.tar.xz".
3. (Optional) Install os-prober if you have another OS such as Windows.
4. $ grub-install /dev/mapper/sil_aiageicechah
5. $ grub-mkconfig -o /boot/grub/grub.cfg
6. (Optional) Put grub2-bios and grub2-common in the IgnorePkg array of /etc/pacman.conf if you do not want pacman to upgrade them.
That's all; grub-mkconfig will generate the configuration automatically. You can edit /etc/default/grub to adjust the configuration (timeout, colors, etc.) before running grub-mkconfig.
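Put together, the sequence looks roughly like this (a sketch only; it assumes the x86_64 packages listed above were downloaded to the current directory, and the downloaded filenames may differ slightly):
# pacman -U grub2-bios-1:1.99-6-x86_64.pkg.tar.xz grub2-common-1:1.99-6-x86_64.pkg.tar.xz
# pacman -S os-prober
# grub-install /dev/mapper/sil_aiageicechah
# grub-mkconfig -o /boot/grub/grub.cfg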

Use GRUB Legacy (Deprecated)

Warning: You can normally specify default saved instead of a number in menu.lst so that the default entry is the entry saved with the command savedefault. If you are using dmraid do not use savedefault or your array will de-sync and will not let you boot your system.
Please read GRUB Legacy for more information about configuring GRUB Legacy.
Note: For an unknown reason, the default menu.lst will likely be incorrectly populated when installing via fake RAID. Double-check the root lines (e.g. root (hd0,0)). Additionally, if you did not create a separate /boot partition, ensure the kernel/initrd paths are correct (e.g. /boot/vmlinuz-linux and /boot/initramfs-linux.img instead of /vmlinuz-linux and /initramfs-linux.img).
For example, if you created logical partitions (creating the equivalent of sda5, sda6, sda7, etc.) that were mapped as:
  /dev/mapper     |    Linux    GRUB Partition
                  |  Partition      Number
nvidia_fffadgic   |
nvidia_fffadgic5  |    /              4
nvidia_fffadgic6  |    /boot          5
nvidia_fffadgic7  |    /home          6
The correct root designation would be (hd0,5) in this example.
Note: If you use more than one set of dmraid arrays or multiple Linux distributions installed on different dmraid arrays (for example 2 disks in nvidia_fdaacfde and 2 disks in nvidia_fffadgic, and you are installing to the second dmraid array (nvidia_fffadgic)), you will need to designate the second array's /boot partition as the GRUB root. In the example above, if nvidia_fffadgic was the second dmraid array you were installing to, your root designation would be root (hd1,5).
After saving the configuration file, the GRUB installer will FAIL. However it will still copy files to /boot. DO NOT GIVE UP AND REBOOT -- just follow the directions below:
  • Switch to tty1 and chroot into our installed system:
# mount -o bind /dev /mnt/dev
# mount -t proc none /mnt/proc
# mount -t sysfs none /mnt/sys
# chroot /mnt /bin/bash
  • Switch to tty3 and look up the geometry of the RAID set. In order for cfdisk to find the array and provide the proper C H S information, you may need to start cfdisk providing your raid set as the first argument. (i.e. cfdisk /dev/mapper/nvidia_fffadgic):
    • The number of Cylinders, Heads and Sectors on the RAID set should be written at the top of the screen inside cfdisk. Note: cfdisk shows the information in H S C order, but grub requires you to enter the geometry information in C H S order.
Example: 18079 255 63 for a RAID stripe of two 74GB Raptor discs.
Example: 38914 255 63 for a RAID stripe of two 160GB laptop discs.
  • GRUB will fail to properly read the drives; the geometry command must be used to manually direct GRUB:
    • Switch to tty1, the chrooted environment.
    • Install GRUB on /dev/mapper/raidSet:
# dmsetup mknodes
# grub --device-map=/dev/null
grub> device (hd0) /dev/mapper/raidSet
grub> geometry (hd0) C H S
Exchange C H S above with the proper numbers (be aware: they are not entered in the same order as they are read from cfdisk). If geometry is entered properly, GRUB will list partitions found on this RAID set. You can confirm that grub is using the correct geometry and verify the proper grub root device to boot from by using the grub find command. If you have created a separate boot partition, then search for /grub/stage1 with find. If you have no separate boot partition, then search /boot/grub/stage1 with find. Examples:
grub> find /grub/stage1       # use when you have a separate boot partition
grub> find /boot/grub/stage1  # use when you have no separate boot partition
Grub will report the proper device to designate as the grub root below (i.e. (hd0,0), (hd0,4), etc...) Then, continue to install the bootloader into the Master Boot Record, changing "hd0" to "hd1" if required.
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
Note: With dmraid >= 1.0.0.rc15-8, partitions are labeled raidSetp1, raidSetp2, etc., instead of raidSet1, raidSet2, etc. If the setup command fails with "error 22: No such partition", temporary symlinks must be created.[2] The problem is that GRUB still uses an older detection algorithm and is looking for /dev/mapper/raidSet1 instead of /dev/mapper/raidSetp1.
The solution is to create a symlink from /dev/mapper/raidSetp1 to /dev/mapper/raidSet1 (changing the partition number as needed). The simplest way to accomplish this is to:
# cd /dev/mapper
# for i in raidSetp*; do ln -s $i ${i/p/}; done
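A quick way to confirm the links were created (a sketch, assuming the raidSet naming used above):
# ls -la /dev/mapper | grep raidSet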
Lastly, if you have multiple dmraid devices with multiple sets of arrays set up (say: nvidia_fdaacfde and nvidia_fffadgic), then create the /boot/grub/device.map file to help GRUB retain its sanity when working with the arrays. All the file does is map the dmraid device to a traditional hd#. Using these dmraid devices, your device.map file will look like this:
(hd0) /dev/mapper/nvidia_fdaacfde
(hd1) /dev/mapper/nvidia_fffadgic
And now you are finished with the installation!
# reboot

Troubleshooting

Booting with degraded array

One drawback of the fake RAID approach on GNU/Linux is that dmraid is currently unable to handle degraded arrays, and will refuse to activate. In this scenario, one must resolve the problem from within another OS (e.g. Windows) or via the BIOS/chipset RAID utility. Alternatively, if using a mirrored (RAID 1) array, users may temporarily bypass dmraid during the boot process and boot from a single drive:
  1. Edit the kernel line from the GRUB menu
    1. Remove references to dmraid devices (e.g. change /dev/mapper/raidSet1 to /dev/sda1); see the sketch after this list
    2. Append disablehooks=dmraid to prevent a kernel panic when dmraid discovers the degraded array
  2. Boot the system
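For illustration with GRUB Legacy, the temporarily edited kernel line might look something like this (a sketch; the root device and image name are assumptions based on the examples above):
kernel /boot/vmlinuz-linux root=/dev/sda1 ro disablehooks=dmraid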

Error: Unable to determine major/minor number of root device

If you experience a boot failure after a kernel update where the boot process is unable to determine the major/minor number of the root device, this might just be a timing problem (i.e. dmraid -ay might be called before /dev/sd* is fully set up and detected). This can affect both the normal and LTS kernel images. Booting the 'Fallback' kernel image should work. The error will look something like this:
Activating dmraid arrays...
no block devices found
Waiting 10 seconds for device /dev/mapper/nvidia_baaccajap5
Root device '/dev/mapper/nvidia_baaccajap5' doesn't exist attempting to create it.
Error: Unable to determine major/minor number of root device '/dev/mapper/nvidia_baaccajap5'
To work around this problem:
  • boot the Fallback kernel
  • insert the 'sleep' hook in the HOOKS line of /etc/mkinitcpio.conf after the 'udev' hook like this:
HOOKS="base udev sleep autodetect pata scsi sata dmraid filesystems"
  • rebuild the kernel image (see below) and reboot
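On Arch this is typically done from the booted (fallback) system with mkinitcpio (a sketch, assuming the stock linux preset):
# mkinitcpio -p linux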

dmraid mirror fails to activate

Does everything above work correctly the first time, but then after a reboot dmraid cannot find the array? This is because Linux software raid (mdadm) has already attempted to mount the fakeraid array during system init and left it in an unmountable state. To prevent mdadm from running, move the udev rule that is responsible out of the way:
# cd /lib/udev/rules.d
# mkdir disabled
# mv 64-md-raid.rules disabled/
# reboot
============================
HOWTO: Linux Software Raid using mdadm

1) Introduction:
Recently I went out and bought myself a second hard drive with the purpose of setting up a performance raid (raid0). It took me days and a lot of messing about to get sorted, but once I figured out what I was doing I realised that it's actually relatively simple, so I've written this guide to share my experiences. I went for raid0 because I'm not too worried about losing data, but if you want to set up a raid 1, raid 5 or any other raid type then a lot of the information here still applies.

2) 'Fake' raid vs Software raid:
When I bought my motherboard (the ASRock ConRoeXFire-eSATA2), one of the big selling points was an on-board raid; however, some research revealed that rather than being a true hardware raid controller, this was in fact more than likely what is known as 'fake' raid. I think Wikipedia explains it quite well: http://en.wikipedia.org/wiki/Redunda...ependent_disks
Hybrid RAID implementations have become very popular with the introduction of inexpensive RAID controllers, implemented using a standard disk controller and BIOS (software) extensions to provide the RAID functionality. The operating system requires specialized RAID device drivers that present the array as a single block based logical disk. Since these controllers actually do all calculations in software, not hardware, they are often called "fakeraids", and have almost all the disadvantages of both hardware and software RAID.
After realising this, I spent some time trying to get this fake raid to work - the problem is that although the motherboard came with drivers that let Windows see my two 250 GB drives as one large 500 GB raid array, Ubuntu just saw the two separate drives and ignored the 'fake' raid completely. There are ways to get this fake raid working under Linux, however if you are presented with this situation then my advice is to abandon the onboard raid controller and go for software raid instead. I've seen arguments as to why software raid is faster and more flexible, but I think the best reason is that software raid is far easier to set up!

3) The Basics of Linux Software Raid:
For the basics of raid try looking on Wikipedia again: http://en.wikipedia.org/wiki/Redunda...ependent_disks. I don't want to discuss it myself because it's been explained many times before by people who are far more qualified to explain it than I am. I will however go over a few things about software raid: Linux software raid is more flexible than hardware raid or true raid because rather than forming raid arrays between identical disks, the raid arrays are created between identical partitions. As far as I understand, if you are using hardware raid between (for example) two disks, then you can either create a raid 1 array between those disks, or a raid 0 array. Using software raid, however, you could create two sets of identical partitions on the disks, and form a raid 0 array between two of those partitions and a raid 1 array between the other two. If you wanted to, you could probably even create a raid array between two partitions on the same disk (not that you would want to!). The process of setting up a raid array is simple:
  1. Create two identical partitions
  2. Tell the software what the name of the new raid array is going to be, what partitions we are going to use, and what type of array we are creating (raid 0, raid 1 etc...)
Once we have created this array, we format and mount it in much the same way as we would a partition on a physical disk.

4) Which Live CD to use:
You want to download and burn the alternate-install Ubuntu CD of your choosing; for example, I used:
Code:
ubuntu-6.10-alternate-amd64.iso
If you boot up the ubuntu desktop live CD and need to access your raid, then you will need to install mdadm if you want to access any software raid arrays:
Code:
sudo apt-get update
sudo apt-get install mdadm
Don't worry too much about this for now - you will only need it if you ever use the Ubuntu desktop CD to fix your installation; the alternate install CD has the mdadm tools installed already.

5) Finally, let's get on with it!

Boot up the installer
Boot up the alternate install CD and run through the text-based installation until you reach the partitioner, and select "Partition Manually".

Create the partitions you need for each raid array
You now need to create the partitions which you will (in the next step) turn into software raid arrays. I recommend using the space at the start of your disks, or, if your disks are identical, the end. That way, once you've set one disk up, you can just enter exactly the same details for the second disk. The partitioner should be straightforward enough to use - when you create a partition which you intend to use in a raid, you need to change its type to "Linux RAID Autodetect". How you partition your installation is up to you, however there are a few things to bear in mind:
  1. If (like me) you are going for a performance raid, then you will need to create a separate /boot partition, otherwise grub won't be able to boot - it doesn't have the drivers needed to access raid 0 arrays. It sounds simple, but it took me so long to figure out.
  2. If, on the other hand, you are doing a server installation (for example) using raid 1 / 5 and the goal is reliability, then you probably want the computer to be able to boot up even if one of the disks is down. In this situation you need to do something different with the /boot partition again. I'm not sure how it works myself, as I've never used raid 1, but you can find some more information in the links at the end of this guide. Perhaps I'll have a play around and add this to the guide later on, for completeness sake.
  3. If you are looking for performance, then there isn't a whole load of point creating a raid array for swap space. The kernel can manage multiple swap spaces by itself (we will come onto that later).
  4. Again, if you are looking for reliability however, then you may want to build a raid partition for your swap space, to prevent crashes should one of your drives fail. Again, look for more information in the links at the end.
On my two identical 250 GB drives, I created two 1 GB swap partitions, two +150 GB partitions (to become a raid0 array for my /home space), and two +40 GB partitions (to become a raid0 array for my root space), all inside an extended partition at the end of my drives. I then also created a small 500 MB partition on the first drive, which would become my /boot space. I left the rest of the space on my drives for ntfs partitions.

Assemble the partitions as raid devices
Once you've created your partitions, select the "Configure software raid" option. The changes to the partition table will be written to disk, and you will be allowed to create and delete raid devices - to create a raid device, simply select "Create", select the type of raid array you want, and select the partitions you want to use.
Remember to check which partition numbers you are going to use in which raid arrays - if you forget, go back a few times to the partition editor screen, where you can see what's going on.
Tell the installer how to use the raid devices
Once you are done, hit finish - you will be taken back to the partitioner, where you should see some new raid devices listed. Configure these in the same way you would other partitions - set their mount points and decide on their filesystem types.
Finish the installation
Once you are done setting up these raid devices (and any swap/boot partitions you decide to keep as non-raid), the installation should run smoothly.
6) Configuring Swap Space
I mentioned before that the linux kernel automatically manages multiple swap partitions, meaning you can spread swap partitions across multiple drives for a performance boost without needing to create a raid array. A slight tweak may be needed however; each swap partition has a priority, and if you want the kernel to use both at the same time, you need to set the priority of each swap partition to be the same. First, type
Code:
swapon -s
to see your current swap usage. Mine outputs the following:
Code:
Filename                                Type            Size    Used    Priority
/dev/sda5                               partition       979956  39080   -1
/dev/sdb5                               partition       979956  0       -2
As you can see, the second swap partition isn't being used at the moment, and won't be until the first one is full. I want a performance gain, so I need to fix this by setting the priority of each partition to be the same. Do this in /etc/fstab, by adding pri=1 as an option to each of your swap partitions. My /etc/fstab file now looks like this:
Code:
# /dev/sda5
UUID=551aaf44-5a69-496c-8d1b-28a228489404 none swap sw,pri=1 0 0
# /dev/sdb5
UUID=807ff017-a9e7-4d25-9ad7-41fdba374820 none swap sw,pri=1 0 0
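To confirm the change has taken effect (a quick check, not part of the original guide), re-enable swap and list it again; both partitions should now show the same priority:
Code:
sudo swapoff -a && sudo swapon -a
swapon -s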
 7) How to do things manually
As I mentioned earlier, if you ever boot into your installation with a live CD, you will need to install mdadm to be able to access your raid devices, so it's a good idea to at least roughly know how mdadm works.
http://man-wiki.net/index.php/8:mdadm has some detailed information, but the important options are simply:
Code:
  • -A, --assemble Assemble a pre-existing array that was previously created with --create.
  • -C, --create Create a new array. You only ever need to do this once, if you try to create arrays with partitions that are part of other arrays, mdadm will warn you.
  • --stop Stop an assembled array. The array must be unmounted before this will work.
When using --create, the options are:
Code:
mdadm --create md-device --chunk=X --level=Y --raid-devices=Z devices
  • -c, --chunk= Specify chunk size in kibibytes. The default is 64.
  • -l, --level= Set raid level; options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, multipath, mp, faulty.
  • -n, --raid-devices= Specify the number of active devices in the array.
 for example:
Code:
mdadm --create /dev/md0 --chunk=4 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
will create a raid0 array /dev/md0 formed from /dev/sda1 and /dev/sdb1, with chunk size 4. When using --assemble, the usage is simply:
Code:
mdadm --assemble md-device component-devices
for example
Code:
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
which will assemble the raid array /dev/md0 from the partitions /dev/sda1 and /dev/sdb1. Alternatively, you can use:
Code:
mdadm --assemble --scan
and it will assemble any raid arrays it can detect automatically. Lastly,
Code:
mdadm --stop /dev/md0
will stop the assembled array md0, so long as it's not mounted.

If you wish, you can set the partitions up yourself manually using fdisk and mdadm from the command line. Either boot up a desktop live CD and apt-get mdadm as described before, or boot up the alternate installer and hit escape until you see a list of the different stages of installation - the bottom one should read "execute shell" - which will drop you at a shell with fdisk, mdadm, mkfs etc. available. Note that if you ever need to create another raid partition, you create filesystems on it in exactly the same way you would on a normal physical partition. For example, to create an ext3 filesystem on /dev/md0 I would use:
Code:
mkfs.ext3 /dev/md0
And to create a swap space on /dev/sda7 I would use:
Code:
mkswap /dev/sda7
Lastly, mdadm has a configuration file located at
Code:
/etc/mdadm/mdadm.conf
this file is usually automatically generated, and mdadm will probably work fine without it anyway. If you're interested, http://man-wiki.net/index.php/5:mdadm.conf has some more information.

And that's pretty much it. As long as you have mdadm available, you can create and assemble raid arrays out of identical partitions. Once you've assembled the array, treat it the same way you would a partition on a physical disk, and you can't really go wrong! I hope this has helped someone! At the moment I've omitted certain aspects of dealing with raids with redundancy (like raid 1 and raid 5), such as rebuilding failed arrays, simply because I've never done it before. Again, I may have a play around and add some more information later (for completeness), or if anyone else running a raid 1 wants to contribute, it would be most welcome.

Other links:
  • The Linux Software Raid HOWTO: http://tldp.org/HOWTO/Software-RAID-HOWTO.html (this guide refers to a package "raidtools2" which I couldn't find in the Ubuntu repositories - use mdadm instead, it does the same thing)
  • Quick HOWTO: Linux Software Raid: http://www.linuxhomenetworking.com/w..._Software_RAID
  • Using mdadm to manage Linux Software Raid arrays: http://www.linuxdevcenter.com/pub/a/...2/05/RAID.html
  • Ubuntu Fake Raid HOWTO (in the community-contributed documentation): https://help.ubuntu.com/community/Fa...ght=%28raid%29

============================
https://bugs.launchpad.net/linuxmint/+bug/682315
dmraid and kpart?
$ sudo dmraid -a y
$ sudo dmraid -r
$ sudo dmraid -l
To fix this problem fast I added "dmraid -ay" at the bottom of the script, so it did the trick for me, but somebody should look deeper into this.

======================
http://forums.linuxmint.com/viewtopic.php?f=46&t=105725
Do you have to use BIOS raid? Linux raid (mdadm) is recommended. The problem you are having is probably due to your GRUB and/or your OS not having dmraid support. This is because the installer only does half the job. Presumably you booted the live CD and installed dmraid, then installed to the raid device, and the GRUB install failed. At this point you need to chroot into the raid, install dmraid, update the initramfs and reconfigure grub. https://help.ubuntu.com/community/Grub2 ... ing#ChRoot
sudo apt-get install dmraid
sudo update-initramfs -u
sudo dpkg-reconfigure grub-pc
---------------------
How-to install LinuxMint or Ubuntu on softraid / fakeraid the simple way:
  1. Install Linux using the default installer program (let it do the partitioning unless you have specific requirements)
  2. When grub fails at the end of the install select install grub manually
  3. chroot into system installed on raid (fake or soft), https://help.ubuntu.com/community/Grub2/Installing#ChRoot (see the sketch after this list)
  4. grub-install on the TOP level partition of the raid
  5. update-grub
  6. Reboot and enjoy linux
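A minimal sketch of the chroot in step 3, assuming the raid's root partition is /dev/mapper/raidSet1 and it is mounted at /mnt (the device name is illustrative; the bind mounts mirror the chroot sequence used earlier on this page):
sudo mount /dev/mapper/raidSet1 /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt /bin/bash
After running grub-install and update-grub inside the chroot, exit and unmount in reverse order before rebooting.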
=======================
http://ubuntuforums.org/showthread.php?t=2089865&page=2
One possibility is a partition-table irregularity which confuses the installer. It is fairly easily fixed, but first let's check whether your drive was previously used in a RAID. Close the installer and open a terminal. Run this command and post the output:
Code:
sudo fdisk -lu
and then
Code:
sudo parted -l
I have suggested both as you may have GPT partitions and a UEFI BIOS.
----------------------
This is an Intel SRT system. Those tend to be a large hard drive plus a small SSD to speed Windows up. Each vendor is slightly different, but the Intel part is the same. Some people delete the SRT and install Linux to the SSD; others turn off SRT in Windows and have to use the RAID commands to remove the RAID settings.
Then after the install they seem to be able to re-implement the SRT, but I do not think Linux then sees Windows.
  • Intel Smart Response Technology: http://www.intel.com/p/en_US/support...ts/chpsts/imsm
  • Intel SRT - Dell XPS: http://ubuntuforums.org/showthread.php?t=2038121
  • http://ubuntuforums.org/showthread.php?t=2036204
  • http://ubuntuforums.org/showthread.php?t=2020155
  • Some info on re-instating: http://ubuntuforums.org/showthread.php?t=2038121 and http://ubuntuforums.org/showthread.php?t=2070491
Disable the RAID; for me that meant using the Intel rapid-management utility and telling it to disable the acceleration (the use of the SSD). If you have a different system, just disable the RAID system, then install Ubuntu. Once installed you can then re-enable it.
You will need to use the dmraid command prior to running the Ubuntu Installer so that it will be able to see the partitions on the drive because otherwise with the raid metadata in place it will see the drive as part of a raid set and ignore its partitions. 
Try first to remove the dmraid package from Linux.
Code:
sudo apt-get remove dmraid
