Bienvenido! - Willkommen! - Welcome!

Technical blog of Tux&Cía., Santa Cruz de la Sierra, BO
Main blog: Tux&Cía.
Advanced information blog: Tux&Cía.-Información
May the source be with you!
Showing posts with label RAID.

Sunday, June 2, 2013

RAID for SMBs

How to choose the right RAID level: a tutorial
Learn how to choose an effective RAID level by weighing the type of data your applications handle, how critical that data is, and the number of users.
RAID 0
RAID 0 stripes data across all the drives in the RAID group. The advantage of RAID 0 is higher data throughput. The drawback is that, while the lack of redundancy improves performance, a fault or failure in any one disk means total data loss.
RAID 0 is the best choice when maximizing storage performance is paramount, the budget is very tight, and a possible loss of the data would not be a serious problem. For example, temporary files for photo or video editing work well at this level.
RAID 1
RAID 1 mirrors all data from each drive, synchronously, to an exact duplicate drive. If any drive fails, no data is lost. The advantage of RAID 1 is higher multi-user read performance, since both disks can be read at the same time. The drawback is that the storage cost per usable byte doubles, because two drives are needed to hold the same data.
Choose RAID 1 for applications that need a "safety net" (that is, when you cannot afford to lose or corrupt the application's data) together with high-performance random reads. A good example is the read-only database of a brick-and-mortar retail store. RAID 1 is also a good choice for entry-level systems where only two drives are available, such as a small file server.
RAID 10 (i.e. RAID 1+0 and RAID 0+1)
RAID 10 is the combination of RAID 0 and RAID 1. The advantage of RAID 10 is having the redundancy of RAID 1 together with the performance of RAID 0. System performance while a drive is being rebuilt is also noticeably better than with the parity-based RAID levels (i.e. RAID 5 and RAID 6), because the data does not have to be regenerated from parity information; it is simply copied from the surviving mirror drive. The drawback is the cost, which is much higher (typically 60 to 80% more expensive) than the parity-based RAID levels.
There are two kinds of RAID 10. The first is RAID 0+1, in which data is striped across multiple disks and the striped disks are then mirrored to an identical disk group. The second is RAID 1+0, in which the data is mirrored first and the mirrors are then striped across different drives.
Choose RAID 10 for applications that require both the high performance of RAID 0 and the unmatched data protection of RAID 1. Online transaction-processing databases usually fit this profile.
RAID 5
RAID 5 is designed to deliver the performance of RAID 0 with cheaper redundancy, and it is the most common RAID level in most companies. It achieves this by striping blocks of data across the drives and distributing the parity among them; no disk is dedicated exclusively to parity. The advantages of RAID 5 are overlapped read and write operations (that is, more efficient use of the disk drives), which speeds up small writes on a multiprocessing system, and more usable storage than RAID 1 or 10 (since redundancy costs roughly 20% of the storage instead of 50%). Data protection comes from the parity information, which is used to rebuild the data if a drive in the RAID group fails. The drawbacks: a minimum of three (and usually five) disks per RAID group, significantly lower storage-system performance while a drive is being rebuilt, and the possibility of losing all the data in a RAID group if a second drive fails while the first one is still being rebuilt. In addition, read performance tends to be lower than in other RAID modes, because the parity data is spread across every drive.
Choose RAID 5 for the vast majority of applications, as long as the disk drives are not high-capacity SATA drives. SATA drives have shorter duty cycles than SAS or Fibre Channel drives, and lower MTBF ratings. And because SATA drives have large capacities (500 to 1000 GB), rebuild times are very long and degrade controller performance. High-capacity SATA drives also increase the probability of a second drive failing, which would cause total data loss.
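The capacity trade-offs above reduce to simple arithmetic. As a minimal sketch (the helper name and the GB figures are illustrative, not taken from any vendor tool), usable capacity per RAID level can be computed like this:

```shell
# Hypothetical helper: usable capacity, in GB, of N identical disks
# of SIZE GB each, at a given RAID level.
raid_usable() {
  level=$1; n=$2; size=$3
  case $level in
    0)  echo $(( n * size )) ;;         # striping: every byte is usable
    1)  echo $(( size )) ;;             # mirroring: one disk's worth
    5)  echo $(( (n - 1) * size )) ;;   # one disk's worth goes to parity
    10) echo $(( n / 2 * size )) ;;     # mirrored stripes: half is usable
    *)  echo "unsupported level" >&2; return 1 ;;
  esac
}

raid_usable 5 5 500    # five 500 GB disks in RAID 5: 2000 GB usable
raid_usable 1 2 500    # two 500 GB disks in RAID 1: 500 GB usable
```

This matches the overhead figures quoted above: roughly 20% lost to parity in a five-drive RAID 5 versus 50% in RAID 1 or 10.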
[...]

Tuesday, May 21, 2013

fakeRAID linux


https://wiki.archlinux.org/index.php/Installing_with_Fake_RAID
The purpose of this guide is to enable use of a RAID set created by the on-board BIOS RAID controller and thereby allow dual-booting of Linux and Windows from partitions inside the RAID set using GRUB. When using so-called "fake RAID" or "host RAID", the disc sets are accessed through device nodes under /dev/mapper.
What is "fake RAID"?
From Wikipedia:
Operating system-based RAID doesn't always protect the boot process and is generally impractical on desktop versions of Windows. Hardware RAID controllers are expensive and proprietary. To fill this gap, cheap "RAID controllers" were introduced that do not contain a RAID controller chip, but simply a standard disk controller chip with special firmware and drivers. During early stage boot-up, the RAID is implemented by the firmware. When a protected-mode operating system kernel such as Linux or a modern version of Microsoft Windows is loaded, the drivers take over.
These controllers are described by their manufacturers as RAID controllers, and it is rarely made clear to purchasers that the burden of RAID processing is borne by the host computer's central processing unit -- not the RAID controller itself -- thus introducing the aforementioned CPU overhead which hardware controllers do not suffer from. Firmware controllers often can only use certain types of hard drives in their RAID arrays (e.g. SATA for Intel Matrix RAID, as there is neither SCSI nor PATA support in modern Intel ICH southbridges; however, motherboard makers implement RAID controllers outside of the southbridge on some motherboards). Before their introduction, a "RAID controller" implied that the controller did the processing, and the new type has become known in technically knowledgeable circles as "fake RAID" even though the RAID itself is implemented correctly. Adaptec calls them "host RAID".
See Wikipedia:RAID or FakeRaidHowto @ Community Ubuntu Documentation for more information.
Despite the terminology, "fake RAID" via dmraid is a robust software RAID implementation that offers a solid system to mirror or stripe data across multiple disks with negligible overhead for any modern system. dmraid is comparable to mdraid (pure Linux software RAID) with the added benefit of being able to completely rebuild a drive after a failure before the system is ever booted.
History
In Linux 2.4, the ATARAID kernel framework provided support for fake RAID (software RAID assisted by the BIOS). For Linux 2.6 the device-mapper framework can, among other nice things like LVM and EVMS, do the same kind of work as ATARAID in 2.4. Whilst the new code handling the RAID I/O still runs in the kernel, device-mapper is generally configured by a userspace application. It was clear that when using the device-mapper for RAID, detection would go to userspace.
Heinz Maulshagen created the dmraid tool to detect RAID sets and create mappings for them. The controllers supported are (mostly cheap) fake RAID IDE/SATA controllers which contain BIOS functions. Common examples include: Promise FastTrak controllers; HighPoint HPT37x; Intel Matrix RAID; Silicon Image Medley; and NVIDIA nForce.
Information on supported hardware
RAID/Onboard @ Gentoo Linux Wiki

Backup

Warning: Back up all data before playing with RAID. What you do with your hardware is your own responsibility. Data on RAID stripes is highly vulnerable to disc failures. Create regular backups or consider using mirror sets. Consider yourself warned!

Outline

  • Preparation
  • Boot the installer
  • Load dmraid
  • Perform traditional installation
  • Install GRUB

Preparation

  • Open up any needed guides (e.g. the Beginners' Guide or the Official Arch Linux Install Guide) on another machine. If you do not have access to another machine, print them out.
  • Download the latest Arch Linux install image.
  • Backup all important files since everything on the target partitions will be destroyed.

Configure RAID sets

Warning: If your drives are not already configured as RAID and Windows is already installed, switching to "RAID" may cause Windows to BSOD during boot.[1]
  • Enter your BIOS setup and enable the RAID controller.
    • The BIOS may contain an option to configure SATA drives as "IDE", "AHCI", or "RAID"; ensure "RAID" is selected.
  • Save and exit the BIOS setup. During boot, enter the RAID setup utility.
    • The RAID utility is usually either accessible via the boot menu (often F8, F10 or CTRL+I) or whilst the RAID controller is initializing.
  • Use the RAID setup utility to create preferred stripe/mirror sets.
Tip: See your motherboard documentation for details. The exact procedure may vary.

Boot the installer

See Official Arch Linux Install Guide#Pre-Installation for details.

Load dmraid

Load device-mapper and find RAID sets:
# modprobe dm_mod
# dmraid -ay
# ls -la /dev/mapper/
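The check that follows can also be scripted: count how many nodes device-mapper created besides the control node. This is only a sketch; the function name and the directory argument are made up for illustration.

```shell
# Hypothetical helper: count device-mapper nodes other than "control".
# Takes the directory as an argument so it can be tried on any path.
count_raid_nodes() {
  ls "${1:-/dev/mapper}" 2>/dev/null | grep -vc '^control$'
}

if [ "$(count_raid_nodes /dev/mapper)" -eq 0 ]; then
  echo "no RAID sets found - check that the chipset module is loaded"
fi
```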
Warning: "dmraid -ay" may fail when booting the Arch Linux 2011.08.19 release, because that release's initial ramdisk image does not support dmraid. You can use the older 2010.05 release instead; note that you must then correct the kernel and initrd names in GRUB's menu.lst after installing, as these releases use different naming.
Example output:
/dev/mapper/control
/dev/mapper/sil_aiageicechah
/dev/mapper/sil_aiageicechah1
(Here sil_aiageicechah is a RAID set on a Silicon Image SATA controller, and sil_aiageicechah1 is the first partition on that set. If a functioning RAID set is present, device-mapper will have created nodes like these.)
If there is only one file (/dev/mapper/control), check with lsmod whether your controller's chipset module is loaded. If it is, then either dmraid does not support this controller or there are no RAID sets on the system (check the RAID BIOS setup again). In that case you may be forced to use software RAID (which means no dual-booted RAID system on this controller).
If your chipset module is NOT loaded, load it now. For example:
# modprobe sata_sil
See /lib/modules/`uname -r`/kernel/drivers/ata/ for available drivers. To test the RAID sets:
# dmraid -tay

Perform traditional installation

Switch to tty2 and start the installer:
# /arch/setup

Partition the RAID set

  • Under Prepare Hard Drive choose Manually partition hard drives since the Auto-prepare option will not find your RAID sets.
  • Choose OTHER and type in your RAID set's full path (e.g. /dev/mapper/sil_aiageicechah). Switch back to tty1 to check your spelling.
  • Create the proper partitions the normal way.
Tip: This would be a good time to install the "other" OS if planning to dual-boot. If installing Windows XP to "C:" then all partitions before the Windows partition should be changed to type [1B] (hidden FAT32) to hide them during the Windows installation. When this is done, change them back to type [83] (Linux). Of course, a reboot unfortunately requires some of the above steps to be repeated.

Mounting the filesystem

If -- and this is probably the case -- you do not find your newly created partitions under Manually configure block devices, filesystems and mountpoints:
  • Switch back to tty1.
  • Deactivate all device-mapper nodes:
# dmsetup remove_all
  • Reactivate the newly-created RAID nodes:
# dmraid -ay
# ls -la /dev/mapper
  • Switch to tty2, re-enter the Manually configure block devices, filesystems and mountpoints menu and the partitions should be available.
Warning: NEVER delete a partition in cfdisk to create two partitions with dmraid after Manually configure block devices, filesystems and mountpoints has been set; this corrupts the dmraid metadata and renders the existing partitions worthless.
Solution: delete the array in the BIOS and re-create it to force creation under a new /dev/mapper ID, then reinstall and repartition.

Install and configure Arch

Tip: Utilize three consoles: the setup GUI to configure the system, a chroot to install GRUB, and finally a cfdisk reference since RAID sets have weird names.
  • tty1: chroot and grub-install
  • tty2: /arch/setup
  • tty3: cfdisk for a reference in spelling, partition table and geometry of the RAID set
Leave the programs running and switch between them when needed.
Re-activate the installer (tty2) and proceed as normal with the following exceptions:
  • Select Packages
  • Ensure dmraid is marked for installation
  • Configure System
    • Add dm_mod to the MODULES line in mkinitcpio.conf. If using a mirrored (RAID 1) array, also add dm_mirror
    • Add your chipset's module driver to the MODULES line if necessary
    • Add dmraid to the HOOKS line in mkinitcpio.conf, preferably after sata but before filesystems
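Put together, the relevant lines of /etc/mkinitcpio.conf would look something like the following sketch. Here sata_sil stands in for whatever chipset module your controller needs, and dm_mirror is only required for RAID 1 sets.

```shell
# /etc/mkinitcpio.conf (excerpt) - module names are examples; adjust to your chipset
MODULES="dm_mod dm_mirror sata_sil"
HOOKS="base udev autodetect pata scsi sata dmraid filesystems"
```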

Install bootloader

Use GRUB2

Please read GRUB2 for more information about configuring GRUB2. Currently, the latest version of grub-bios is not compatible with fake RAID. If you get an error like this when running grub-install:
 $ grub-install /dev/mapper/sil_aiageicechah
 Path `/boot/grub` is not readable by GRUB on boot. Installation is impossible. Aborting.
You can try an older version of GRUB. Old packages can be found with the ARM Search; read Downgrade for more information.
1. Download an old GRUB package:
  i686:
   http://arm.konnichi.com/extra/os/i686/grub2-bios-1:1.99-6-i686.pkg.tar.xz
   http://arm.konnichi.com/extra/os/i686/grub2-common-1:1.99-6-i686.pkg.tar.xz
  x86_64:
   http://arm.konnichi.com/extra/os/x86_64/grub2-bios-1:1.99-6-x86_64.pkg.tar.xz
   http://arm.konnichi.com/extra/os/x86_64/grub2-common-1:1.99-6-x86_64.pkg.tar.xz
  You can verify these packages against their .sig files if you wish.
2. Install these old packages with "pacman -U *.pkg.tar.xz".
3. (Optional) Install os-prober if you have another OS such as Windows.
4. $ grub-install /dev/mapper/sil_aiageicechah
5. $ grub-mkconfig -o /boot/grub/grub.cfg
6. (Optional) Add grub2-bios and grub2-common to the IgnorePkg array in /etc/pacman.conf if you do not want pacman to upgrade them.
That's all: grub-mkconfig generates the configuration automatically. You can edit /etc/default/grub to adjust the configuration (timeout, colors, etc.) before running grub-mkconfig.

Use GRUB Legacy (Deprecated)

Warning: You can normally specify default saved instead of a number in menu.lst so that the default entry is the entry saved with the command savedefault. If you are using dmraid, do not use savedefault, or your array will de-sync and will not let you boot your system.
Please read GRUB Legacy for more information about configuring GRUB Legacy.
Note: For an unknown reason, the default menu.lst will likely be incorrectly populated when installing via fake RAID. Double-check the root lines (e.g. root (hd0,0)). Additionally, if you did not create a separate /boot partition, ensure the kernel/initrd paths are correct (e.g. /boot/vmlinuz-linux and /boot/initramfs-linux.img instead of /vmlinuz-linux and /initramfs-linux.img).
For example, if you created logical partitions (creating the equivalent of sda5, sda6, sda7, etc.) that were mapped as:
  /dev/mapper     |    Linux    GRUB Partition
                  |  Partition      Number
nvidia_fffadgic   |
nvidia_fffadgic5  |    /              4
nvidia_fffadgic6  |    /boot          5
nvidia_fffadgic7  |    /home          6
The correct root designation would be (hd0,5) in this example.
Note: If you use more than one set of dmraid arrays, or have multiple Linux distributions installed on different dmraid arrays (for example, 2 disks in nvidia_fdaacfde and 2 disks in nvidia_fffadgic, and you are installing to the second dmraid array (nvidia_fffadgic)), you will need to designate the second array's /boot partition as the GRUB root. In the example above, if nvidia_fffadgic was the second dmraid array you were installing to, your root designation would be root (hd1,5).
After saving the configuration file, the GRUB installer will FAIL. However it will still copy files to /boot. DO NOT GIVE UP AND REBOOT -- just follow the directions below:
  • Switch to tty1 and chroot into our installed system:
# mount -o bind /dev /mnt/dev
# mount -t proc none /mnt/proc
# mount -t sysfs none /mnt/sys
# chroot /mnt /bin/bash
  • Switch to tty3 and look up the geometry of the RAID set. In order for cfdisk to find the array and provide the proper C H S information, you may need to start cfdisk providing your raid set as the first argument. (i.e. cfdisk /dev/mapper/nvidia_fffadgic):
    • The number of Cylinders, Heads and Sectors on the RAID set should be written at the top of the screen inside cfdisk. Note: cfdisk shows the information in H S C order, but grub requires you to enter the geometry information in C H S order.
Example: 18079 255 63 for a RAID stripe of two 74GB Raptor discs.
Example: 38914 255 63 for a RAID stripe of two 160GB laptop discs.
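Since cfdisk prints the geometry in H S C order while grub wants C H S, a tiny reordering helper avoids transposition mistakes. This is only a sketch; the function name is made up.

```shell
# Reorder cfdisk's "Heads Sectors Cylinders" into grub's "Cylinders Heads Sectors".
hsc_to_chs() {
  # $1=heads $2=sectors $3=cylinders
  echo "$3 $1 $2"
}

hsc_to_chs 255 63 18079    # prints: 18079 255 63
```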
  • GRUB will fail to properly read the drives; the geometry command must be used to manually direct GRUB:
    • Switch to tty1, the chrooted environment.
    • Install GRUB on /dev/mapper/raidSet:
# dmsetup mknodes
# grub --device-map=/dev/null
grub> device (hd0) /dev/mapper/raidSet
grub> geometry (hd0) C H S
Exchange C H S above with the proper numbers (be aware: they are not entered in the same order as they are read from cfdisk). If geometry is entered properly, GRUB will list partitions found on this RAID set. You can confirm that grub is using the correct geometry and verify the proper grub root device to boot from by using the grub find command. If you have created a separate boot partition, then search for /grub/stage1 with find. If you have no separate boot partition, then search /boot/grub/stage1 with find. Examples:
grub> find /grub/stage1       # use when you have a separate boot partition
grub> find /boot/grub/stage1  # use when you have no separate boot partition
Grub will report the proper device to designate as the grub root below (i.e. (hd0,0), (hd0,4), etc...) Then, continue to install the bootloader into the Master Boot Record, changing "hd0" to "hd1" if required.
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
Note: With dmraid >= 1.0.0.rc15-8, partitions are labeled "raidSetp1", "raidSetp2", etc. instead of "raidSet1", "raidSet2", etc. If the setup command fails with "error 22: No such partition", temporary symlinks must be created.[2] The problem is that GRUB still uses an older detection algorithm and is looking for /dev/mapper/raidSet1 instead of /dev/mapper/raidSetp1.
The solution is to create a symlink from /dev/mapper/raidSetp1 to /dev/mapper/raidSet1 (changing the partition number as needed). The simplest way to accomplish this is to:
# cd /dev/mapper
# for i in raidSetp*; do ln -s $i ${i/p/}; done
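The renaming loop can be rehearsed on dummy files in a scratch directory before touching /dev/mapper; the set names below are made up. Note that ${i/p/} strips the first "p" in the name, so it only behaves as intended when the set name contains no earlier "p"; ${i%p*}${i##*p} would strip the last one instead.

```shell
# Rehearse the symlink renaming on dummy files in a temporary directory.
cd "$(mktemp -d)"
touch raidSetp1 raidSetp2 raidSetp3

# Same loop as above: raidSetp1 -> raidSet1, etc.
for i in raidSetp*; do ln -s "$i" "${i/p/}"; done

ls    # raidSet1 raidSet2 raidSet3 alongside the originals
```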
Lastly, if you have multiple dmraid devices with multiple sets of arrays set up (say: nvidia_fdaacfde and nvidia_fffadgic), then create the /boot/grub/device.map file to help GRUB retain its sanity when working with the arrays. All the file does is map the dmraid device to a traditional hd#. Using these dmraid devices, your device.map file will look like this:
(hd0) /dev/mapper/nvidia_fdaacfde
(hd1) /dev/mapper/nvidia_fffadgic
And now you are finished with the installation!
# reboot

Troubleshooting

Booting with degraded array

One drawback of the fake RAID approach on GNU/Linux is that dmraid is currently unable to handle degraded arrays, and will refuse to activate. In this scenario, one must resolve the problem from within another OS (e.g. Windows) or via the BIOS/chipset RAID utility. Alternatively, if using a mirrored (RAID 1) array, users may temporarily bypass dmraid during the boot process and boot from a single drive:
  1. Edit the kernel line from the GRUB menu
    1. Remove references to dmraid devices (e.g. change /dev/mapper/raidSet1 to /dev/sda1)
    2. Append disablehooks=dmraid to prevent a kernel panic when dmraid discovers the degraded array
  2. Boot the system

Error: Unable to determine major/minor number of root device

If you experience a boot failure after a kernel update where the boot process is unable to determine the major/minor number of the root device, this might just be a timing problem (i.e. dmraid -ay may be called before /dev/sd* is fully set up and detected). This can affect both the normal and LTS kernel images. Booting the 'Fallback' kernel image should work. The error will look something like this:
Activating dmraid arrays...
no block devices found
Waiting 10 seconds for device /dev/mapper/nvidia_baaccajap5
Root device '/dev/mapper/nvidia_baaccajap5' doesn't exist attempting to create it.
Error: Unable to determine major/minor number of root device '/dev/mapper/nvidia_baaccajap5'
To work around this problem:
  • boot the Fallback kernel
  • insert the 'sleep' hook in the HOOKS line of /etc/mkinitcpio.conf after the 'udev' hook like this:
HOOKS="base udev sleep autodetect pata scsi sata dmraid filesystems"
  • rebuild the initramfs image with mkinitcpio and reboot

dmraid mirror fails to activate

Does everything above work correctly the first time, but when you reboot dmraid cannot find the array? This is because Linux software raid (mdadm) has already attempted to assemble the fake RAID array during system init and left it in an unusable state. To prevent mdadm from running, move the responsible udev rule out of the way:
# cd /lib/udev/rules.d
# mkdir disabled
# mv 64-md-raid.rules disabled/
# reboot
============================
HOWTO: Linux Software Raid using mdadm

1) Introduction:
Recently I went out and bought myself a second hard drive with the purpose of setting up a performance raid (raid0). It took me days and a lot of messing about to get sorted, but once I figured out what I was doing I realised that it's actually relatively simple, so I've written this guide to share my experiences. I went for raid0 because I'm not too worried about losing data, but if you wanted to set up raid 1, raid 5 or any other raid type then a lot of the information here would still apply.

2) 'Fake' raid vs Software raid:
When I bought my motherboard (the ASRock ConRoeXFire-eSATA2), one of the big selling points was an on-board raid; however, some research revealed that rather than being a true hardware raid controller, this was in fact more than likely what is known as 'fake' raid. I think Wikipedia explains it quite well: http://en.wikipedia.org/wiki/Redunda...ependent_disks
Hybrid RAID implementations have become very popular with the introduction of inexpensive RAID controllers, implemented using a standard disk controller and BIOS (software) extensions to provide the RAID functionality. The operating system requires specialized RAID device drivers that present the array as a single block based logical disk. Since these controllers actually do all calculations in software, not hardware, they are often called "fakeraids", and have almost all the disadvantages of both hardware and software RAID.
After realising this, I spent some time trying to get this fake raid to work. The problem is that although the motherboard came with drivers that let Windows see my two 250 GB drives as one large 500 GB raid array, Ubuntu just saw the two separate drives and ignored the 'fake' raid completely. There are ways to get this fake raid working under linux, however if you are presented with this situation then my advice is to abandon the onboard raid controller and go for software raid instead. I've seen arguments as to why software raid is faster and more flexible, but I think the best reason is that software raid is far easier to set up!

3) The Basics of Linux Software Raid:
For the basics of raids, try looking on Wikipedia again: http://en.wikipedia.org/wiki/Redunda...ependent_disks. I don't want to discuss it myself because it's been explained many times before by people who are far more qualified to explain it than I am. I will however go over a few things about software raids. Linux software raid is more flexible than hardware raid or fake raid because rather than forming raid arrays between identical disks, the arrays are created between identical partitions. As far as I understand, if you are using hardware raid between (for example) two disks, then you can either create a raid 1 array between those disks, or a raid 0 array. Using software raid, however, you could create two sets of identical partitions on the disks, and form a raid 0 array between two of those partitions and a raid 1 array between the other two. If you wanted to, you could probably even create a raid array between two partitions on the same disk (not that you would want to!). The process of setting up a raid array is simple:
  1. Create two identical partitions
  2. Tell the software what the name of the new raid array is going to be, what partitions we are going to use, and what type of array we are creating (raid 0, raid 1 etc...)
Once we have created this array, we format and mount it in much the same way as a partition on a physical disk.

4) Which Live CD to use:
You want to download and burn the alternate install Ubuntu cd of your choosing; for example, I used:
Code:
ubuntu-6.10-alternate-amd64.iso
If you boot up the ubuntu desktop live CD and need to access your raid, then you will need to install mdadm if you want to access any software raid arrays:
Code:
sudo apt-get update
sudo apt-get install mdadm
Don't worry too much about this for now - you will only need it if you ever use the Ubuntu desktop cd to fix your installation; the alternate install cd has the mdadm tools installed already.

5) Finally, let's get on with it!

Boot up the installer
Boot up the alternate install CD and run through the text based installation until you reach the partitioner, and select "Partition Manually".

Create the partitions you need for each raid array
You now need to create the partitions which you will (in the next step) turn into software raid arrays. I recommend using the space at the start, or if your disks are identical, the end of your disks. That way, once you've set one disk up, you can just enter exactly the same details for the second disk. The partitioner should be straightforward enough to use - when you create a partition which you intend to use in a raid, change its type to "Linux RAID Autodetect". How you partition your installation is up to you, however there are a few things to bear in mind:
  1. If (like me) you are going for a performance raid, then you will need to create a separate /boot partition, otherwise grub won't be able to boot - it doesn't have the drivers needed to access raid 0 arrays. It sounds simple, but it took me so long to figure out.
  2. If, on the other hand, you are doing a server installation (for example) using raid 1 / 5 and the goal is reliability, then you probably want the computer to be able to boot up even if one of the disks is down. In this situation you need to do something different with the /boot partition again. I'm not sure how it works myself, as I've never used raid 1, but you can find some more information in the links at the end of this guide. Perhaps I'll have a play around and add this to the guide later on, for completeness sake.
  3. If you are looking for performance, then there isn't a whole load of point creating a raid array for swap space. The kernel can manage multiple swap spaces by itself (we will come onto that later).
  4. Again, if you are looking for reliability however, then you may want to build a raid partition for your swap space, to prevent crashes should one of your drives fail. Again, look for more information in the links at the end.
On my two identical 250 GB drives, I created two 1 GB swap partitions, two +150 GB partitions (to become a raid0 array for my /home space), and two +40 GB partitions (to become a raid0 array for my root space), all inside an extended partition at the end of my drives. I then also created a small 500 MB partition on the first drive, which would become my /boot space. I left the rest of the space on my drives for ntfs partitions.

Assemble the partitions as raid devices
Once you've created your partitions, select the "Configure software raid" option. The changes to the partition table will be written to disk, and you will be allowed to create and delete raid devices - to create a raid device, simply select "create", select the type of raid array you want to create, and select the partitions you want to use.
Remember to check which partition numbers you are going to use in which raid arrays - if you forget, hit  a few times to bring you back to the partition editor screen where you can see whats going on.
Tell the installer how to use the raid devices
Once you are done, hit finish - you will be taken back to the partitioner, where you should see some new raid devices listed. Configure these in the same way you would other partitions - set their mount points and decide on their filesystem types.
Finish the installation
Once you are done setting up these raid devices (and any swap / boot partitions you decided to keep as non-raid), the installation should run smoothly.
6) Configuring Swap Space
I mentioned before that the linux kernel automatically manages multiple swap partitions, meaning you can spread swap partitions across multiple drives for a performance boost without needing to create a raid array. A slight tweak may be needed however; each swap partition has a priority, and if you want the kernel to use both at the same time, you need to set the priority of each swap partition to be the same. First, type
Code:
swapon -s
to see your current swap usage. Mine outputs the following:
Code:
Filename                                Type            Size    Used    Priority
/dev/sda5                               partition       979956  39080   -1
/dev/sdb5                               partition       979956  0       -2
As you can see, the second swap partition isn't being used at the moment, and won't be until the first one is full. I want a performance gain, so I need to fix this by setting the priority of each partition to be the same. Do this in /etc/fstab, by adding pri=1 as an option to each of your swap partitions. My /etc/fstab file now looks like this:
Code:
# /dev/sda5
UUID=551aaf44-5a69-496c-8d1b-28a228489404 none swap sw,pri=1 0 0
# /dev/sdb5
UUID=807ff017-a9e7-4d25-9ad7-41fdba374820 none swap sw,pri=1 0 0
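Once both fstab entries carry pri=1, `swapon -s` should report the same priority for every partition. As a quick check, the snippet below runs a captured sample of that output through awk and counts the distinct priority values; it is only a sketch against the sample shown above - in real use, pipe the output of `swapon -s` in directly:

```shell
# Count distinct priority values among swap partitions (column 5 of
# `swapon -s` output). One distinct value means the kernel will stripe
# pages across all swap partitions in parallel.
# The sample below mirrors the post's output after setting pri=1.
swapon_output='Filename                                Type            Size    Used    Priority
/dev/sda5                               partition       979956  39080   1
/dev/sdb5                               partition       979956  0       1'

distinct=$(printf '%s\n' "$swapon_output" |
    awk 'NR>1 { if (!seen[$5]++) n++ } END { print n }')

if [ "$distinct" -eq 1 ]; then
    echo "swap priorities match - partitions used in parallel"
else
    echo "swap priorities differ - higher priority fills first"
fi
```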
 7) How to do things manually
As I mentioned earlier, if you ever boot into your installation with a live cd, you will need to install mdadm to be able to access your raid devices, so it's a good idea to at least roughly know how mdadm works. 
http://man-wiki.net/index.php/8:mdadm has some detailed information, but the important options are simply:
Code:
  • -A, --assemble Assemble a pre-existing array that was previously created with --create.
  • -C, --create Create a new array. You only ever need to do this once; if you try to create arrays with partitions that are already part of other arrays, mdadm will warn you.
  • --stop Stop an assembled array. The array must be unmounted before this will work.
When using --create, the options are:
Code:
mdadm --create md-device --chunk=X --level=Y --raid-devices=Z devices
  • -c, --chunk= Specify chunk size in kibibytes. The default is 64.
  • -l, --level= Set raid level, options are: linear, raid0, 0, stripe, raid1, 1, mirror, raid4, 4, raid5, 5, raid6, 6, multipath, mp, faulty.
  • -n, --raid-devices= Specify the number of active devices in the array.
 for example:
Code:
mdadm --create /dev/md0 --chunk=4 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
will create a raid0 array /dev/md0 formed from /dev/sda1 and /dev/sdb1, with a chunk size of 4 KiB. When using --assemble, the usage is simply:
Code:
mdadm --assemble md-device component-devices
for example
Code:
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
which will assemble the raid array /dev/md0 from the partitions /dev/sda1 and /dev/sdb1. Alternatively you can use:
Code:
mdadm --assemble --scan
and it will assemble any raid arrays it can detect automatically. Lastly,
Code:
mdadm --stop /dev/md0
will stop the assembled array md0, so long as it's not mounted.
If you wish, you can set the partitions up manually using fdisk and mdadm from the command line. Either boot up a desktop live cd and apt-get mdadm as described before, or boot up the alternate installer and hit escape until you see a list of the different stages of installation - the bottom one should read "execute shell" - which will drop you at a shell with fdisk, mdadm, mkfs etc. available.
Note that you create filesystems on raid devices in exactly the same way you would on a normal physical partition. For example, to create an ext3 filesystem on /dev/md0 I would use:
Code:
mkfs.ext3 /dev/md0
And to create a swapspace on /dev/sda7 I would use:
Code:
mkswap /dev/sda7
Lastly, mdadm has a configuration file located at
Code:
/etc/mdadm/mdadm.conf
this file is usually automatically generated, and mdadm will probably work fine without it anyway. If you're interested, http://man-wiki.net/index.php/5:mdadm.conf has some more information.
And that's pretty much it. As long as you have mdadm available, you can create / assemble raid arrays out of identical partitions. Once you've assembled the array, treat it the same way you would a partition on a physical disk, and you can't really go wrong! I hope this has helped someone!
At the moment I've omitted certain aspects of dealing with raids with redundancy (like raid 1 and raid 5), such as rebuilding failed arrays, simply because I've never done it before. Again, I may have a play around and add some more information later (for completeness), or if anyone else running a raid 1 wants to contribute, it would be most welcome.
Other links
The Linux Software Raid Howto: http://tldp.org/HOWTO/Software-RAID-HOWTO.html - this guide refers to a package "raidtools2" which I couldn't find in the Ubuntu repositories; use mdadm instead, it does the same thing.
Quick HOWTO: Linux Software Raid: http://www.linuxhomenetworking.com/w..._Software_RAID
Using mdadm to manage Linux Software Raid arrays: http://www.linuxdevcenter.com/pub/a/...2/05/RAID.html
Ubuntu Fake Raid HOWTO, in the community contributed documentation: https://help.ubuntu.com/community/Fa...ght=%28raid%29
============================
https://bugs.launchpad.net/linuxmint/+bug/682315
dmraid and kpart?
$ sudo dmraid -a y 
$ sudo dmraid -r
$ sudo dmraid -l
To fix this problem fast I added "dmraid -ay" at the bottom of the script, so it did the trick for me, but somebody should look deeper into this.
======================
http://forums.linuxmint.com/viewtopic.php?f=46&t=105725
Do you have to use bios raid? Linux raid, mdadm, is recommended.
The problem you are having is probably due to your Grub and/or your OS not having dmraid support. This is because the installer only does half the job. Presumably you booted the live CD and installed dmraid, then installed to the raid device and the Grub install failed. At this point you need to chroot into the raid and install dmraid, update the initramfs and reconfigure grub. https://help.ubuntu.com/community/Grub2 ... ing#ChRoot
sudo apt-get install dmraid
sudo update-initramfs -u
sudo dpkg-reconfigure grub-pc
---------------------
How to install Linux Mint or Ubuntu on softraid / fakeraid the simple way:
  1. Install Linux using the default installer program (let it do the partitioning unless you have specific requirements)
  2. When grub fails at the end of the install, select "install grub manually"
  3. chroot into system installed on raid (fake or soft) https://help.ubuntu.com/community/Grub2/Installing#ChRoot
  4. grub-install on the TOP level partition of the raid
  5. update-grub
  6. Reboot and enjoy linux
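The chroot part (steps 3-5 above) is where most installs go wrong, so here is a hedged sketch of the whole sequence as one script. The device name /dev/mapper/raid_root and the mount point are placeholders for illustration, and DRY_RUN defaults to 1 so the script only prints what it would do; set DRY_RUN=0 as root, with your real device, to execute:

```shell
#!/bin/sh
# Sketch: chroot into a system installed on a fake/soft raid and repair
# grub + dmraid support. ROOT_DEV and TARGET are placeholder names.
# DRY_RUN defaults to 1: commands are printed, not executed.
ROOT_DEV=${ROOT_DEV:-/dev/mapper/raid_root}
TARGET=${TARGET:-/mnt/target}

run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run mount "$ROOT_DEV" "$TARGET"
# Bind the virtual filesystems the chroot needs
for fs in /dev /dev/pts /proc /sys; do
    run mount --bind "$fs" "$TARGET$fs"
done
# Inside the chroot: install dmraid, refresh the initramfs, redo grub
run chroot "$TARGET" apt-get install -y dmraid
run chroot "$TARGET" update-initramfs -u
run chroot "$TARGET" dpkg-reconfigure grub-pc
```

With DRY_RUN left at 1 this is safe to run anywhere; it just lists the commands in order so you can review them before doing anything for real.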
======================= http://ubuntuforums.org/showthread.php?t=2089865&page=2
One possibility is a partition table irregularity, which confuses the installer. That is fairly easily fixed, but let's first check whether your drive was previously used in a RAID. Close the installer and open a terminal. Run this command and post the output:
Code:
sudo fdisk -lu
and then
Code:
sudo parted -l
I have suggested both as you may have GPT partitions and a UEFI bios. 
----------------------
This is an Intel SRT system. Those tend to be a large hard drive plus a small SSD to speed Windows up. Each vendor is slightly different, but the Intel part is the same. Some delete the SRT and install Linux to the SSD; others turn off SRT in Windows and have to use the RAID commands to remove the RAID settings. 
Then after install they seem to be able to re-implement the SRT, but I do not think Linux then sees Windows.
Intel Smart Response Technology: http://www.intel.com/p/en_US/support...ts/chpsts/imsm
Intel SRT - Dell XPS: http://ubuntuforums.org/showthread.php?t=2038121
http://ubuntuforums.org/showthread.php?t=2036204
http://ubuntuforums.org/showthread.php?t=2020155
Some info on re-instating: http://ubuntuforums.org/showthread.php?t=2038121
http://ubuntuforums.org/showthread.php?t=2070491
Disable the RAID; for me it was using the Intel rapid management thingy and telling it to disable the acceleration or the use of the SSD. If you have a different system, just disable the RAID system, then install Ubuntu. Once installed you can then re-enable it.
You will need to use the dmraid command prior to running the Ubuntu Installer so that it will be able to see the partitions on the drive because otherwise with the raid metadata in place it will see the drive as part of a raid set and ignore its partitions. 
Try first to remove the dmraid package from Linux.
Code:
sudo apt-get remove dmraid
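If removing the package is not enough and stale fakeRAID metadata on the drive still confuses the installer, dmraid itself can erase that metadata (`dmraid -rE`). Erasing is destructive to the raid set, so the sketch below only prints the commands rather than running them, and /dev/sda is a placeholder drive name:

```shell
# Inspect, then (destructively) erase fakeRAID metadata with dmraid.
# Printed instead of executed; drop the echo and run as root for real.
# /dev/sda is a placeholder for the drive carrying stale metadata.
for cmd in "dmraid -r" "dmraid -rE /dev/sda"; do
    echo "would run: sudo $cmd"
done
```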

Thursday, May 16, 2013

All about RAID

http://searchstorage.techtarget.com/definition/RAID
RAID (redundant array of independent disks; originally redundant array of inexpensive disks) is a way of storing the same data in different places (thus, redundantly) on multiple hard disks. By placing data on multiple disks, I/O (input/output) operations can overlap in a balanced way, improving performance. Although using multiple disks lowers the mean time between failures (MTBF) of the array as a whole, storing data redundantly increases fault tolerance.
A RAID appears to the operating system to be a single logical hard disk. RAID employs the technique of disk striping, which involves partitioning each drive's storage space into units ranging from a sector (512 bytes) up to several megabytes. The stripes of all the disks are interleaved and addressed in order.
In a single-user system where large records, such as medical or other scientific images, are stored, the stripes are typically set up to be small (perhaps 512 bytes) so that a single record spans all disks and can be accessed quickly by reading all disks at the same time.
In a multi-user system, better performance requires establishing a stripe wide enough to hold the typical or maximum size record. This allows overlapped disk I/O across drives.
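To make the stripe-width arithmetic concrete: the full stripe width is the stripe unit (chunk) size times the number of data disks. The snippet below uses a 64 KiB chunk (mdadm's default, mentioned earlier in this post) purely as an illustrative figure:

```shell
# Full stripe width = chunk (stripe unit) size * number of data disks.
# A record that fits within one chunk can be served by a single disk,
# leaving the other disks free for overlapped multi-user I/O.
chunk_kib=64
for disks in 2 3 4; do
    echo "stripe over $disks disks: full stripe = $(( chunk_kib * disks )) KiB"
done
```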
There are at least nine types of RAID plus a non-redundant array (RAID-0):
  • RAID-0: This technique has striping but no redundancy of data. It offers the best performance but no fault-tolerance.
  • RAID-1: This type is also known as disk mirroring and consists of at least two drives that duplicate the storage of data. There is no striping. Read performance is improved since either disk can be read at the same time. Write performance is the same as for single disk storage. RAID-1 provides the best performance and the best fault-tolerance in a multi-user system.
  • RAID-5: This type includes a rotating parity array, thus addressing the write limitation in RAID-4. Thus, all read and write operations can be overlapped. RAID-5 stores parity information but not redundant data (but parity information can be used to reconstruct data). RAID-5 requires at least three and usually five disks for the array. It's best for multi-user systems in which performance is not critical or which do few write operations.
  • RAID-10: Combining RAID-0 and RAID-1 is often referred to as RAID-10, which offers higher performance than RAID-1 but at much higher cost. There are two subtypes: In RAID-0+1, data is organized as stripes across multiple disks, and then the striped disk sets are mirrored. In RAID-1+0, the data is mirrored and the mirrors are striped.
  • RAID-50 (or RAID-5+0): This type consists of a series of RAID-5 groups striped together in RAID-0 fashion, to improve RAID-5 performance without reducing data protection.
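As a capacity sanity check on the levels above, usable space follows simple arithmetic: n identical disks of s GB each. The function below is a sketch that assumes n is even for RAID-10 and n >= 3 for RAID-5:

```shell
# Usable capacity in GB for n identical disks of s GB each.
usable() {
    level=$1; n=$2; s=$3
    case $level in
        raid0)  echo $(( n * s )) ;;        # striping only, no redundancy
        raid1)  echo "$s" ;;                # every disk holds the same data
        raid5)  echo $(( (n - 1) * s )) ;;  # one disk's worth of parity
        raid10) echo $(( n * s / 2 )) ;;    # mirrored pairs, then striped
    esac
}

usable raid0  2 500   # -> 1000
usable raid5  5 500   # -> 2000
usable raid10 4 500   # -> 1000
```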

  • Calculating available disk space in a RAID-5 set
  • SAN School Answer #7
  • Storage basics: RAID striping in detail
  • Explaining RAID levels and RAID data protection
  • RAID tutorial: Picking the right RAID level
  • Storage Networking expert Christopher Poelker answers your questions about RAID

    Tuesday, February 5, 2013

    Macrium Reflect and RAID

    When restoring the operating system to a drive, disconnect all the other drives.
    Another piece of information that may be of value to Macrium users is that I found the solution to the problem below:
    PROBLEM: When booting from the Macrium Reflect Rescue CD (WIN PE) at first all goes well and it says it is loading up Windows. However, it then halts and you just see a grey screen with a cursor on it and nothing more happens.
    The fix to the problem is to go into the BIOS and change the protocol for the CD/DVD drive from AHCI (or whatever it is) to IDE. Note that when I prepared the WIN PE Rescue CD I had installed all the drivers as specified in one of your articles, so installing the AHCI driver on the Rescue Disk does not overcome this problem. The CD/DVD drive must be set to IDE.

    ---------
    http://support.macrium.com/topic.asp?TOPIC_ID=3234
    The 32 bit and 64 bit rescue CDs are functionally identical, and the 32 bit rescue CD will recover any image to any PC. When you populate the drivers folder you must add drivers for the same architecture as Windows PE, in your case 32 bit.
    Restoring an image of Windows 7 from RAID enabled to non-RAID will be OK using either the Linux or Windows PE rescue CD.
    The restore wizard shows images not files. By this I mean, if an image is split into multiple files, as is the case with FAT32, then you will only see one entry per image by date, not multiple files.
    How many images have you created?
    Can you see your image files if you browse the backup folder using PE Explorer?
    I suggest you use DiskRestore to recover your partitions. DiskRestore can recover multiple partitions at once and is an easier option to use for Windows 7 systems using the MSR partition. For more info on Windows 7 image and restore considerations please see here:
    http://kb.macrium.com/KnowledgebaseArticle50049.aspx
    To run DiskRestore take the 'File' > 'DiskRestore' menu option in Windows PE.
    http://kb.macrium.com/KnowledgebaseArticle50004.aspx
    Thank you for your suggestion re rescue CD compatibility. When you create the Linux rescue CD there is a warning dialog that indicates the need to check for compatibility but perhaps we should make this clearer.

    http://kb.macrium.com/KnowledgebaseArticle50095.aspx?Keywords=RAID
    Questions relating to Macrium Reflect's support for RAID systems.
    1. What is RAID ?
    2. Does Macrium Reflect support RAID ?
    3. Do I need to configure my RAID system in any way ?
    4. What about support in rescue environments ?
    5. Related Links

    Saturday, December 10, 2011

    PC crashed RAID kaput!

    Source
    Native RAID and Dynamic Disk Support
     
    RAID arrays are gaining popularity among advanced computer users thanks to the built-in RAID support provided by modern motherboards. Assembling several disks into a RAID array allows for increased reliability or higher transfer speeds, or both.
    Native RAID Compatibility Issues: Intel, nVidia and VIA Chipsets
    What happens if you created a RAID on a system board with a VIA chip set, and then decide to upgrade to Intel or nVidia or vice versa? Or what if your motherboard just fries, leaving you no choice except buying a readily available replacement? Finally, what if your old board gets discontinued, and you are forced to replace a faulty one with a new version with a different RAID controller? In cases like those, plugging in your hard disk array into a new motherboard won’t make a working RAID array. At best, you’ll get several separate disks. At worst, the new RAID system will attempt ‘repairing’ the array, destroying your data to the point of complete loss.
    Windows Dynamic Disks: Solution or Problem?
    Windows 2000, XP, as well as 2003 and 2008 Server employ dynamic disks, a software-based technology that is similar to RAID. Often used on server operating systems, dynamic disks perform similarly to 'real' RAID arrays. Windows software enables low-level support of these disks instead of the chips on the computer's motherboard.
    Dynamic disks allow upgrading computer hardware at any time without trouble. Everything is fine until you switch the operating system. Dynamic disks created in Windows 2008 Server are not recognized in prior versions, and RAID 5 dynamic disks are not recognized in Windows XP. As such, dynamic disks are more of an issue rather than a solution to RAID incompatibilities.
    The Solution
    The Native RAID and Dynamic Disk Support technology developed by DiskInternals is the real solution to the problem of broken hardware and software-based RAID arrays. The technology recognizes and mounts RAID arrays created with all current Intel, nVidia or VIA chipsets, and fully supports Windows Dynamic Disks, adding backward compatibility between the different versions of Windows.
    The technology can mount the following types of native RAID arrays: JBod (RAID 0), Stripe (RAID 0), Mirror (RAID 1), RAID 5 (with complete recovery of all data if any one disk is missing or damaged), and RAID 0+1 (RAID 10). DiskInternals data recovery products employ this technology to repair broken RAID arrays and recover data if any disk is corrupted or damaged.
    Native RAID and Dynamic Disk Support supports all types of dynamic disks, including Simple, Span, Stripe, Mirror and RAID 5 arrays, allowing mounting a dynamic disk on any Windows version. You can even boot from a Boot CD and still have access to the RAID and dynamic disks!
    Native RAID and Dynamic Disk Support is part of all data recovery products manufactured by DiskInternals. A stand-alone implementation of this technology is featured in a free RAID reader product, Raid 2 Raid.

    Data Recovery Solutions for RAID Arrays
    DiskInternals Raid Recovery

    Sunday, October 30, 2011

    Recovering RAID


    Source
    Source
    Recover all types of corrupted RAID arrays
    Recover corrupted RAID arrays in a fully automatic mode. DiskInternals Raid Recovery is the first tool to automatically detect the type of the original RAID array while still allowing for fully manual operation. Raid Recovery is no doubt a highly valuable tool for users of all types of RAID arrays, whether hardware, native, or software. The drag-and-drop user interface allows for easy operation by anyone.
    Product features:
    • RAID JBOD, 0, 1, 1E, RAID 4, RAID 5, 0+1 and 1+0
    • RAID-enabled motherboard from NVidia, Intel, or VIA
    • Adaptec RAID Controllers and DDF compatible devices
    • Microsoft software raids (also called Dynamic Disks)
    • Linux software raids
    • All features from DiskInternals Partition Recovery
    • All features from DiskInternals Uneraser
    Due to the drag-and-drop interface you will be able to specify parts of the RAID array just by dragging and dropping the icons that represent the disks.
    Reconstruct all types of arrays just as easily as a single hard disk. Raid Recovery recognizes all imaginable configurations of various types of arrays, including RAID 0, 1, JBOD, RAID 5, and 0+1, no matter whether they are connected to a dedicated RAID controller or a RAID-enabled motherboard from NVidia, Intel, or VIA. Microsoft software raids (also called Dynamic Disks) are also supported, including JBOD (span), RAID 0, 1, and 5 configurations.
    Detecting the right type of an array is vital for correct recovery. “Raid Recovery” supports both manual and fully automatic detection of essential parameters such as type of array, type of RAID controller, stripe size, and disk order.
    Assemble RAID configurations manually via a simple drag-and-drop operation. Raid Recovery reconstructs an array from the available hard disks being simply dragged and dropped, and detects the right type and size of the array, as well as the order of the disks, automatically. Anyone can recover broken RAID arrays with Raid Recovery!
    Raid Recovery gives top priority to your data, allowing you to recover and back up all files from the corrupted array before attempting to fix it. You can store the files on another hard disk or partition, use a recordable CD or DVD, or even upload the files over FTP. Raid Recovery uses advanced search algorithms that allow recovering important files such as documents, pictures and multimedia even if there is a missing disk in the array, or if the file system is missing or damaged.
    Here are some key features of “DiskInternals Raid Recovery”:
    · Recovered files can be uploaded to FTP or NAS!
    · Recovered files can be burned to CD or DVD!
    · Preview recoverable files before purchasing the product.
    · Easy Recovery Wizard.
    · Supported file systems: FAT16, FAT32, EXT2, EXT3, NTFS, NTFS 4, NTFS 5.
    · Recovered files can be saved on any (including network) disks visible to the host operating system.
    · Creates recovery snapshot files for logical drives. Such files can be processed like regular disks.
    · Creates Virtual partitions. Such partitions can be processed like regular disks.

    Monday, September 26, 2011

    MediaShield-Raidtool installation guide:

    Source
    The easiest way to get the Raidtool installed is by running the SETUP.EXE of the associated nForce chipset driver package, but by doing this all the nForce IDE drivers of the package will be installed too (and may replace the better, currently working ones).
    In these cases you have to use another way to get full access to the MediaShield/RAID software (NVIDIA Control Panel) after having completed the Vista installation.
    The guide for the manual installation of the nForce Raidtool (on the basis of posts from nForcersHQ members TheMaxx32000 and Tweak_addict):
    • Run Vista.
    • Install the latest version of nTune.
    • Search for the RAIDTOOL folder of the actual Vista x86/x64 nForce chipset driver package.
    • Extract the content (all files) of the RAIDTOOL.CAB file into the C:\WINDOWS\SYSTEM32 folder.
    • Search for the file "RegRaidSedona.bat" (formerly named "RegRaid.bat") within C:\Windows\System32, right click on it, choose "Run as Administrator" and run the BAT file to get the Raidtool Services registered.
    • Search for the file "nvCplUI.exe" (formerly named "nvRaidman.exe") within the same folder and run it.
    That should bring up the Nvidia Control Panel and the "Storage" item should be listed on the left window task list.
    Suggestions:
    1. It is a good idea to create a shortcut to the NVCPLUI.EXE (formerly NVRAIDMAN.EXE) on the Desktop or in the Start menu. This way you will get easy access to the NVIDIA MediaShield Control Panel.
    2. Additionally you should put a shortcut to the NVRAIDSERVICE.EXE into the Startup folder if you want continuous monitoring of the RAID health.

    Saturday, September 24, 2011

    free RAID recovery

    ReclaiMe Free RAID Recovery works with
    • hard drives (internal and external),
    • disk image files,
    • hardware and software RAIDs.
    Note: ReclaiMe Free RAID Recovery needs Windows to run. It does not run on Apple Mac or Linux environment.
    Our free RAID recovery software detects and recovers the following RAID parameters:
    • Start offset and block size,
    • Number of member disks,
    • Member disks and data order,
    • Parity position and rotation,
    Once you have recovered the parameters using ReclaiMe Free RAID Recovery, you can:

    Friday, August 5, 2011

    SMART and RAID

    What is S.M.A.R.T. and how can we use it to avoid data disaster?

    Comparison of S.M.A.R.T. tools
    SMART errors are not always a SURE sign of hard disk failure. A hard disk can sometimes run for some time with a SMART error, but replacement is still VERY highly recommended.
    You can choose to turn SMART off in most BIOSes today, although it's not typically a good idea.
    All the big HDD makers have a utility to look at hard drive S.M.A.R.T. status. Many SMART errors qualify for warranty replacement; the drive doesn't actually have to stop working completely in order to get one, but check the warranty to make sure your SMART error is covered.
    Western Digital has its Data Lifeguard tools; Hitachi has the Drive Fitness Test:
    www.hitachigst.com/hdd/technolo/dft/dftnew.htm
    www.cpuid.com/
    ROG CPU-Z 1.57.1 (ASUS) at this address.
    http://www.bios-info.de/4p92x846/befs.htm
    Ashampoo HDD Control 2 
    SpeedFan
    SpeedFan, originally written by Alfredo Milani Comparetti in Delphi, C++ and C, is a freeware system hardware monitor for Windows 95 and later. Stable release: 4.44 (March 17, 2011). Website: almico.com/speedfan.php


    Version 4.38 added full support for AMCC/3ware SATA and RAID controllers.

    AMD RAIDexpert - Restoring a RAID 1

    I have 2x 500 GB drives running as RAID 1 in my PC.
    On Friday the cable of the second drive died and the drive dropped out.
    I noticed the problem on reboot: Raid critical

    After replacing the cable, the RAID is still critical.
    The second drive is shown as a single disk.

    The RAID controller tells me to consult the Promise manual.. which I don't have.

    The motherboard is an MSI KA780G, the drives 2x Maxtor 500GB SATA.

    I tested the second drive thoroughly with the Maxtor utility. It seems to be completely fine.

    In the RAID BIOS I have the following options:

    View, Define, Delete

    If I choose Delete, it asks whether I want to delete the data... which I don't.
    Under Define I can only create new RAIDs, not modify the existing one.

    How do I get the two drives back together?

    As a temporary backup I copied the running drive onto the second one and then disconnected the copy.

    The system runs if I connect either of the two drives.
    If I boot with the remaining RAID drive, the controller complains: Raid critical
    If I boot with the copy, everything runs normally (no RAID present)

    The backup just gets an hour older every hour, though. Somehow it must be possible to get the
    RAID 1 running again without data loss!?

    Regards, Martin


    Re: Raid1 - Restoring
    Hello VMartin, let me try to help with a few remarks. I'm no RAID professional, but I have gathered some experience with RAID 1 on Asus motherboards (unfortunately not Promise RAID, but SiI RAID).
    First guess: since you copied the data from drive 1 to drive 2 by hand after replacing the defective cable, the RAID is basically dead for good. Whenever a drive dropped out of my RAID 1, I only had to get the same drive running again on the same port, and the RAID then straightened itself out over several days of the computer running continuously. These RAID controllers are not the fastest, and with maybe 300 GB of data on a 500 GB disk, such a rebuild works in the background for several days.
    Second guess: look in your motherboard documentation or in the installed RAID software for the name of the RAID controller and fetch the manual from the Promise website. It is not normal for deleting a RAID to also delete the data; normally only the RAID set is dissolved. You have to read up on that!!! If only the RAID set is dissolved, then do it, put both drives into a newly created RAID, and you're done. But as I said, treat this second guess with the utmost caution, the maximum penalty being complete data loss. I have also read in computer magazines that merely switching to a different controller can mean that maximum penalty.
    Closing remarks:
    The safest solution to your problem: get a third (external) drive, dump all the data onto it, kill the RAID and recreate it, then copy the data back into the RAID from the third drive. I myself said goodbye to RAID 1 a few years ago, precisely because a dead RAID controller can also trigger a data meltdown. My work machine has a system drive C and a data drive, and I regularly synchronize that data drive to an external drive with an old LapLink program. That has several advantages: the data drive works faster (no RAID needed anymore; RAID 1 drags disk speed down), the machine uses a bit less power (the external backup drive is only switched on for synchronization), you can take the data on the external drive with you (if you need it on the road with a laptop), and the data is available on any computer via any USB or SATA port (depending on the ports of the enclosure or the machine).
    Well, enough rambling; maybe it got you a little further.


    Which Promise controller is it, anyway?


    I have already thought about a third 500 GB drive...
    The controller is called MSI SB700 but is supposedly based on Promise..
    Martin


    That is the southbridge of the board's AMD 780G chipset and has nothing directly to do with Promise at all:
    http://de.wikipedia.org/wiki/AMD-7er-Chipsatz-Serie#Southbridges
    AMD uses Promise patents, AFAIK.
    If you recreate the RAID 1 there, all data is of course gone.
    What follows from that should be clear and, AFAIK, cannot be avoided.
    Once a software RAID is dissolved, it really is dissolved and cannot be repaired.
    That is simply a "toy" RAID controller!
    Only this AHCI SBSETUP V2.5.1540.4 for DOS:
    forums.amd.com/forum/messageview.cfm?catid=12&threadid=96915
    offers a small chance there, and it must of course be run under pure DOS.
    It can take a very, very long time and success is not guaranteed.
    Otherwise only a reinstallation will help.
    You should always have a current image anyway; even RAID 1 is not very good data protection with such a "toy" RAID controller.
    Automatically creating an (incremental) image every day with e.g. True Image is definitely much better.

    Hello,
    I'm well aware that there is no such thing as absolute data security.
    I'm also aware that security can always be increased by throwing more
    hardware and money at it.
    But on Friday I was very glad about my PC's "toy raid", because my data is still there! Maybe not very good data protection, but it worked.
    Really important data I back up to CD-RW or DVD-RW anyway, and as a backup on another PC.
    In the meantime I have also found a solution:
    I had AMD RAIDXpert installed; until today I didn't know what to do with it, because on startup Mozilla opened and asked for a password. The website said "for remote administration", and I thought "nice, but I'm sitting right in front of it, I don't need remote administration".
    But if you enter "admin" as both user and password, you see the local machine.
    The message also appeared: Raid LD1 Critical.
    One click on Rebuild -- choose the replacement drive (in my case the one with the new cable) -- Finish, wait 2 hours, and the RAID is functional again
    And if a drive fails again, I have a live copy....
    That's the level of security that's enough for me...