MD is Linux software RAID, a consistent, full-fledged native solution. DM can provide some minimal support for the proprietary legacy fake RAIDs created by various firmwares.

> You can check that by running "cat /proc/mdstat".

OK, I'll do that as soon as I get to that box again. However, if md were active, some mdX partitions would appear in /proc/partitions, but they didn't. Also, the installer didn't say anything about md; it kept talking about dm (which is correct, AFAICS).
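As a quick sketch of the check described above (assuming a Linux box with procfs mounted; the variable names here are mine, not from the thread), both places can be inspected like this:

```shell
#!/bin/sh
# Check whether Linux software RAID (md) is active on this box.
# If the md driver is loaded, /proc/mdstat exists and lists its arrays.
md_status=$(cat /proc/mdstat 2>/dev/null || echo "md driver not loaded (no /proc/mdstat)")
echo "$md_status"

# Any active md array also shows up as an mdN device in /proc/partitions.
md_devs=$(grep -Ec 'md[0-9]+' /proc/partitions 2>/dev/null || true)
echo "mdX entries in /proc/partitions: ${md_devs:-0}"
```

If both checks come up empty, as the poster reports, the installer is indeed using dm (dmraid), not md.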
The issue I see with your setup is that if the controller fails, you will lose your RAID.
RAID 1 is mirrored and gives some small amount of extra uptime for servers and such, but it is not a real backup solution. You really need to back up if the data is important.
I have a RAID controller on my motherboard, but I only use a JBOD for my data drive, which is
backed up to a RAIDed NAS (which in turn is backed up).
Suse does not like it!
I have attempted to force RAID detection at the command prompt: as root, ran dmraid -ay. This returned the error "No raid disks". Is it perhaps that it is loading sata_nv instead of AHCI drivers for the sata disks? Perhaps I need to pass a kernel argument like dodmraid?
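For reference, a guarded sketch of that probe sequence (run as root on the actual box; the guard and messages are mine, so the script is a no-op on a machine without the tool):

```shell
#!/bin/sh
# Probe for firmware ("fake") RAID metadata with dmraid, guarding for
# systems where the tool is not installed.
if command -v dmraid >/dev/null 2>&1; then
    dmraid -r      # list block devices carrying supported RAID metadata
    dmraid -ay     # activate every array that was found
    result="dmraid probe attempted"
else
    result="dmraid not installed"
fi
echo "$result"
```

If `dmraid -r` itself reports "No raid disks", the metadata is not being seen at all, which points at the driver/BIOS layer rather than at dmraid's activation step.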
-------------
Unfortunately, I feared the above and didn't want to believe it but I reached the same conclusion after many hours spent trying diverse methods on my old ASUS A8V Deluxe, 1GB RAM, AMD ATHLON 64 X2 4400+. This MB has a Promise FastTrak 378 and the VIA VT8237. I used both in various RAID configurations, with several different openSUSE versions, starting from SUSE 10.0 up to openSUSE 11.0 (all of them x64 as well as all versions in my discussion below).
I am now trying to revive the PC and make it a server (internet proxy and content management for the kids, file and multimedia server for the home network, etc.). So, I got 2 x 2 TB WD HDDs (RAID 1) together with some old 2 x 250 GB WD HDDs (RAID 0) and a 60 GB Maxtor (single UDMA).
Since I hope to put this together once and not touch it for a couple of years, I wanted to get a head start with openSUSE 11.3 (milestone 7) so I don't run out of updates too soon. Also, I want to use FakeRAID so that, besides safety (RAID 1) or speed (RAID 0), the Acronis TI boot disk understands the partitions and can back up/restore them.
Well, the openSUSE 11.3 install fails miserably - the first time ever I can't install openSUSE at all! I tried both the DVD and the KDE LiveCD, and it can't even format any partition, in any combination using dmraid, regardless of whether it is RAID 0 or 1. I tried some workarounds for partitioning and formatting (see below), but 11.3 can't mount the partitions already prepared.
Going back to 11.2, there are problems with the partitioning during the DVD install. However, the KDE4 LiveCD has some success, even though it occasionally freezes during formatting.
On 11.2 & 11.3 I tried both EXT3 & EXT4 but it doesn't seem to make a difference.
Tried 11.1 and it only froze once out of quite a few partitioning/formatting attempts.
I also tried some other distributions with mixed results:
- Mandriva PWP 2010 - works fine 1st try, no problems.
- Ubuntu Server 10.04 - doesn't even access the partitions to format or mount (diverse errors).
- CentOS 5.5 - no problem whatsoever.
I am still determined to use openSUSE (as on my other 3 systems) and after multiple trials and errors, I came up with a rough plan that seems to work on 11.2, for the most part:
- Boot with a LiveCD
- Check dmraid status: #dmraid -r - this should list the correct array name and number of drives (I used an MHDD disk to clear the boot info and erase the first 100 MB when I had conflicting dmraid metadata caused by moving HDDs around; #dmraid -rE didn't work all the time).
- Activate dmraid #dmraid -ay
- Verify with #dmraid -s
- Partition disks #fdisk /dev/mapper/via_yourarrayhere (or pdc_yourarrayhere)
- Make sure to assign correct partition types (83 Linux; 82 Swap)
- Reboot for good measure (I had problems with the system "confused" about how many partitions there were)
- Open the YaST2 Partitioner and format all partitions. Try one at a time to prevent the partitioner from locking up.
- Once all partitions formatted, start the install script (from the desktop).
- During install, select expert partitioning but don't format or change type. Just assign mount points.
- Make sure the boot order is recognized correctly by GRUB (change the order in the BIOS if not) and select MBR for the location (I didn't try the default booting from partition; it might still work)
- Continue normal installation. The system should install correctly, even with multiple dmraid partitions.
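The command-line part of the steps above can be condensed into a sketch like this. Note my assumptions: the array name is a placeholder (via_* or pdc_*, as listed above), mkfs.ext3 stands in for the YaST formatting step, the partition node suffix (p1 vs. 1) depends on your udev/kpartx setup, and everything is guarded so the script does nothing on a box without dmraid arrays:

```shell
#!/bin/sh
# Condensed sketch of the dmraid install prep above. Run as root from the
# LiveCD; every step is guarded so this is a no-op without dmraid arrays.
ARRAY=/dev/mapper/via_yourarrayhere   # placeholder: your via_* or pdc_* array

if command -v dmraid >/dev/null 2>&1 && dmraid -s >/dev/null 2>&1; then
    dmraid -ay               # activate the firmware RAID array
    fdisk "$ARRAY"           # interactive: set types 83 (Linux), 82 (swap)
    mkfs.ext3 "${ARRAY}p1"   # format outside YaST to avoid the lockup
    status="prepared $ARRAY"
else
    status="no dmraid arrays found; nothing to do"
fi
echo "$status"
```

Then reboot and continue with the installer as described, assigning mount points only.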
I hope this helps someone.
I am very frustrated with this situation. Regardless of whether it is the fault of the kernel, dmraid, or the YaST partitioner module, this should not happen!
The main reason I switched to Linux is that Vista didn't support my relatively older hardware but Linux did.
One shouldn't have to trade system functionality for "new features". I don't want to keep buying new hardware just to keep up with the OS updates!
Quick update:
1 - Installed latest BIOS update from ASUS (even though "beta"). The revision description says nothing about storage but I wanted latest possible.
2 - Switched PNP OS in BIOS to "No".
After "1" and "2" I was able to install openSUSE 11.2 straightforwardly, twice - no workaround needed.
Recently, I got the final release DVD for 11.3 x86_64 and installed OK using the existing partitions (format only). This worked fine with both RAID 0 and RAID 1 partitions.
Since I didn't test all the possible combinations I can't say what the fix is, but I lean towards "2" above.
I guess enabling PNP OS in the BIOS causes the OS to misconfigure resources.