
Re: Workarounds for advanced RAID features in GRUB Legacy?

From: Leif W
Subject: Re: Workarounds for advanced RAID features in GRUB Legacy?
Date: Sat, 10 Sep 2005 19:12:20 -0400

From: "Tim Woodall" <address@hidden>
Sent: 2005 September 10 Saturday 17:02

On Sat, 10 Sep 2005, Leif W wrote:

Specific areas:

* RAID 1, RAID 10, RAID5
* Partitionable RAID devices
* Picking the most up-to-date drive

Right now the cleanest option seems to be to simply put a separate ext2 partition at the beginning of each disc, keep a kernel and initrd there to handle the RAID setup, install GRUB on each disc, and update everything manually. But I have SATA discs, which are treated like SCSI discs, and thus

What I've got is a raid1 /boot (md0) ext3 at the start of the disk and
then the rest of the disk is md1 which is then partitioned using LVM2.
(I'm just using raid1 on two IDE disks here)

Well, even here, let's take a closer look. An ext3 filesystem, is that including a separate partition for a journal device? And a swap partition? You don't want to mirror swap, and there's no need to make it RAID 0, as Linux swap is designed to balance disc utilization on its own. So we have journal, boot (RAID 1), swap, and data (root, RAID 1). That's 4 partitions already. I've got discs that are huge and can easily fit several OS installations, and I need to have several, all on the same discs. But the SATA discs have a 16-partition limit that I bump into if everything is a flat level of RAID 1. That's my dilemma.

WinXP requires the DOS partition table, is very picky about partition types for a clean install, and is further picky about partitions changing later. And it can't handle SW-RAID for a single partition, only the full disc, and only using the BIOS RAID.

And then my menu.lst looks something like this:
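(The quoted config didn't survive here; a representative sketch of what such a menu.lst might contain follows. The disc/partition numbers, kernel version, and root device name are my assumptions, not the actual file:)

```
timeout 5
default 0

# Boot from the first disc's member of the /boot mirror (md0)
title  Debian GNU/Linux, kernel 2.6.8 (hd0)
root   (hd0,0)
kernel /vmlinuz-2.6.8-2-386 root=/dev/mapper/vg0-root ro
initrd /initrd.img-2.6.8-2-386

# Same system via the second disc, in case the first is gone
title  Debian GNU/Linux, kernel 2.6.8 (hd1)
root   (hd1,0)
kernel /vmlinuz-2.6.8-2-386 root=/dev/mapper/vg0-root ro
initrd /initrd.img-2.6.8-2-386
```

Because each RAID 1 member holds a complete copy of /boot, GRUB Legacy can read either partition as an ordinary filesystem.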


You still need to install grub into the boot sector on both disks but
once that is done then any updates will update both drives (except for
updating the boot sector or partition table)

If either drive fails then the system will boot from the other one (you
might need to unplug the failed disk to make the bios happy)
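Installing GRUB into the boot sector of each disc is usually done from the grub shell; a sketch, assuming /boot is the first partition of /dev/hda and /dev/hdb:

```
grub> device (hd0) /dev/hda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/hdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
```

Remapping (hd0) to the second disc before the second setup makes GRUB embed a boot sector that treats that disc as the first BIOS drive, so it can still boot if the other disc is missing.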

This looks similar to some other configs that I've seen, and I'll try this technique as well. The other technique I saw suggested using the 'device' command to assign subsequent discs to hd0. The thing is, I don't want to have to pull out plugs. I mean, if a drive fails, granted, I eventually need to power down, pull out the plugs, and swap the drives. But maybe I don't have time at the moment, and just need to limp along until I have sufficient time to obtain a disc (save money, purchase, have it delivered, test it before using, and so on). Bills need to be paid, data needs to be accessed, and I may not want to fiddle with my drives until I have the replacement parts. Or, the other situation: maybe everything is fine, but I goof up a config file. I don't want to have to pull plugs to get back in, and I don't want to cause excess wear on them. :-D So, a solution I'm hoping to find will either just try the next device/disc/partition/raid in a list, or I'll settle for hitting a down arrow to go to the next menu item.
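For the "just try the next one" behaviour, GRUB Legacy does have a fallback directive; a sketch (the entry numbers here are assumptions):

```
default  0
# If entry 0 fails to boot, automatically try entry 1
# (e.g. the second disc's copy of the same system)
fallback 1
```

Note this only helps once GRUB itself has loaded but the chosen kernel or disc fails; if the BIOS can't read the first disc's boot sector at all, you're back to BIOS boot order or pulled plugs.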

All of this is done using stock debian (sarge).

I've considered playing with making the entire disk raid1, including the boot sector and partition table but I've never got around to it and I'm
not certain it can be made to work (easily).

Yeah, I've been looking at this. Boot on RAID 1 is fairly well described in the Software RAID HOWTO, but there are some gaps, as with anything, and the GRUB-specific configuration isn't too in-depth. For me personally, it just feels like ugly workarounds, not a very elegant solution. So now I'm considering the initrd-on-boot-partition method, which might need loopback support; CramFS support would be nice to have, but ext2 might suffice.
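For reference, the HOWTO-style boot-on-RAID1 setup boils down to creating the mirror for /boot before installing the bootloader; a sketch, with device names assumed:

```
# Create a two-disc RAID 1 for /boot. The old 0.90 superblock
# lives at the END of each member, so each half of the mirror is
# readable as a plain filesystem.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext2 /dev/md0
```

The end-of-partition superblock is the whole trick: GRUB Legacy never needs to understand RAID, it just reads one member of the mirror as an ordinary partition.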

Thanks again for all the ideas! I'm going to try as much as I can this weekend.

