
Restoring An Image of Linux OS To a Different Intel Firmware RAID Drive

Problem:

After using IFL (Image for Linux) or IFD (Image for DOS) to restore an image of a Linux OS from an Intel firmware RAID drive to an Intel firmware RAID drive on a different system, or to an Intel firmware RAID drive that has been deleted and recreated since the image was taken, the restored system fails to boot properly. In some cases, the boot ends in a kernel panic, with an error message about being unable to find or mount the root file system. In other cases, the boot stops at a command prompt, with the RAID not running.

Cause:

Because the RAID is on a different system, or has been deleted and recreated since the image was taken, the UUIDs of the firmware RAID drive are now different. There are typically two UUIDs associated with a firmware RAID drive: one for the "container", and one for the RAID drive itself. These UUIDs live in the RAID metadata maintained by the firmware, outside the imaged file systems, so restoring the image does not restore them. They are usually referenced in the Grub boot loader's configuration file so that the RAID array can be identified and started at boot time. If the UUIDs have changed, but the Grub configuration file has not been updated with the new UUIDs, the RAID array will not be started.

Note that the UUIDs referenced above are only the ones for the actual firmware RAID container and drive. Other UUIDs, such as for the Linux root partition, swap partition, and other Linux partitions, will not be changed from the original system that was imaged.

Solution:

The Grub configuration file needs to be updated with the new UUIDs to get the RAID to run, and the system to boot properly.

Updating the Grub configuration file (menu.lst, grub.conf, or grub.cfg): This file should be updated before attempting to boot into the restored Linux OS. By doing it from the IFL boot disk, you avoid writing to individual drives in the array while the RAID is not running. The update can be done most conveniently from the IFL (GUI) version of the boot disk, which makes copy/paste operations easier than with the IFL (CUI) version. The suggested steps are outlined below:

1. From the IFL boot disk, restore the image to the firmware RAID drive in the normal manner.

2. After the restore, reboot the system back into the IFL boot disk to ensure that the RAID drive and its partitions are recognized, and that the RAID is running. You can verify that the RAID is running by confirming that the RAID drive is detected by IFL (shows up in the list of drives).

3. Determine where the /boot directory resides. It can be either a dedicated /boot partition, or part of the root partition. If the Linux OS is using LVM on top of the firmware RAID (a typical install choice for Redhat, CentOS, Fedora, etc.), there will be a separate /boot partition. Otherwise, it could be set up either way.

4. Mount the partition containing the /boot directory. The IFL boot disk contains a script called 'dpmount' which can mount partitions from a menu interface. The script can be started by clicking on the "mnt" icon at the top of the screen, or by running 'dpmount' from a terminal window. The partition can also be mounted manually from the command line, in which case this KB article should help: Mounting Drives and Partitions on the IFL Boot Disk

5. Determine what the new UUID values are, by running the command 'mdadm -Es' from a terminal window. That will yield an output similar to the following example:

ARRAY metadata=imsm UUID=4aaf01e8:afb3a0b4:4211028f:dd0767bc
ARRAY /dev/md/RAID1 container=4aaf01e8:afb3a0b4:4211028f:dd0767bc member=0 UUID=3db99a7c:19b661fa:bad1686b:bd3ad62a

In the example above, the 2nd "ARRAY" line contains the 2 new UUIDs that you will need to copy/paste into the Grub configuration file to replace the old ones. The 1st one on the line is the container UUID, while the 2nd one is for the drive itself.
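If you prefer to extract the two values with a command rather than copying them by hand, a small shell sketch along these lines can parse the 'mdadm -Es' output (the UUIDs below are the sample values from the example above, not real ones):

```shell
# Sample 'mdadm -Es' output from step 5; on a live system you would
# instead capture it with: mdadm_out=$(mdadm -Es)
mdadm_out='ARRAY metadata=imsm UUID=4aaf01e8:afb3a0b4:4211028f:dd0767bc
ARRAY /dev/md/RAID1 container=4aaf01e8:afb3a0b4:4211028f:dd0767bc member=0 UUID=3db99a7c:19b661fa:bad1686b:bd3ad62a'

# Container UUID: taken from the UUID= field of the metadata=imsm line
container_uuid=$(printf '%s\n' "$mdadm_out" | awk '/metadata=imsm/ { sub(/^.*UUID=/, ""); print $1 }')

# Drive (member) UUID: taken from the UUID= field of the container= line
drive_uuid=$(printf '%s\n' "$mdadm_out" | awk '/container=/ { sub(/^.*UUID=/, ""); print $1 }')

echo "container UUID: $container_uuid"
echo "drive UUID:     $drive_uuid"
```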

6. Locate and identify the Grub configuration file. Depending on which Linux distribution/version is installed, the file can be at any of the following paths:

/boot/grub/menu.lst -  Grub config file for most Grub legacy installs

/boot/grub/grub.conf - Grub config file for Redhat-based Grub legacy installs

/boot/grub/grub.cfg - Grub2 config file for most Grub2 installs

/boot/grub2/grub.cfg - Grub2 config file for some Redhat-based Grub2 installs

7. For Grub legacy (menu.lst or grub.conf), the UUIDs to change will be on the "kernel" lines.  There will usually be 2 or more "kernel" lines in the file, and each instance of the line will contain the same UUIDs.

8. For Grub2 (grub.cfg), the UUIDs to change will be on the "linux" lines. There will usually be 2 or more "linux" lines in the file, and each instance of the line will contain the same UUIDs.

9. Important: The UUIDs that are specified as root=<UUID> or root=UUID=<UUID> should not be changed. These are for the root partition, for which the UUID did not change.

10. Important: The two UUIDs that need to be changed will be specified on each "linux" or "kernel" line with syntax similar to rd.md.uuid=<UUID>. One instance will be the container UUID, and the other will be the drive UUID. Usually the container UUID comes first, but the order of the two UUIDs has been found not to matter. The following is an example of a "linux" line segment from a typical grub.cfg file that shows the two UUIDs for the firmware RAID (these occur on each of 3 "linux" lines in the file):

rd.md.uuid=e9584522:f466c674:fe40d6a3:b6819d09 rd.md.uuid=7689b602:6f0fd560:d677d9ff:2bf005c9

11. Important: Copy/paste the 1st UUID on the 2nd line of output from the 'mdadm -Es' command executed in step 5 into the Grub configuration file, replacing the 1st UUID specified with rd.md.uuid=<UUID> on each "kernel" or "linux" line. This step will have to be repeated once for each "linux" or "kernel" line in the Grub configuration file.

12. Important: Copy/paste the 2nd UUID on the 2nd line of output from the 'mdadm -Es' command executed in step 5 into the Grub configuration file, replacing the 2nd UUID specified with rd.md.uuid=<UUID> on each "kernel" or "linux" line. This step will have to be repeated once for each "linux" or "kernel" line in the Grub configuration file.
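Steps 10 through 12 can also be done with 'sed' instead of manual copy/paste, since a global substitution updates every "linux" (or "kernel") line in one pass. The sketch below demonstrates this on a sample grub.cfg fragment; in practice you would point it at the mounted configuration file from step 6 (for example /mnt/boot/grub2/grub.cfg — the mount point is an assumption) and substitute your own old and new UUIDs. The values used here are the sample UUIDs from steps 5 and 10.

```shell
# Create a sample grub.cfg fragment to edit (stand-in for the real file)
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
linux /vmlinuz-3.10.0 root=UUID=aaaa-bbbb ro rd.md.uuid=e9584522:f466c674:fe40d6a3:b6819d09 rd.md.uuid=7689b602:6f0fd560:d677d9ff:2bf005c9
EOF

old_container='e9584522:f466c674:fe40d6a3:b6819d09'   # old UUIDs found in grub.cfg
old_drive='7689b602:6f0fd560:d677d9ff:2bf005c9'
new_container='4aaf01e8:afb3a0b4:4211028f:dd0767bc'   # new UUIDs from 'mdadm -Es' (step 5)
new_drive='3db99a7c:19b661fa:bad1686b:bd3ad62a'

cp "$cfg" "$cfg.bak"   # keep a backup before editing
# Replace only the rd.md.uuid values; root=UUID=... is left untouched (step 9)
sed -i -e "s/rd\.md\.uuid=$old_container/rd.md.uuid=$new_container/g" \
       -e "s/rd\.md\.uuid=$old_drive/rd.md.uuid=$new_drive/g" "$cfg"
```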

13. If the above changes were made correctly, the restored system should now boot normally from the RAID drive.

Additional Information:

1. An alternative method for managing this issue is to prepare the updated Grub configuration file ahead of time, and store it on the source system so that it is included in the image. For example, it could reside in the /boot/grub directory on the source system, and be named grub.conf-target, or something similar. Then, after the restore, the file containing the updated UUIDs can be copied over the restored system's Grub configuration file. The target system's UUIDs can be determined ahead of time, because the RAID drive must exist before the restore takes place: boot the target system with IFL and run 'mdadm -Es', as explained in step 5 above.
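As a rough sketch of this prepare-ahead approach, using a temporary directory to stand in for the source system's /boot/grub directory (all paths, file names, and UUIDs below are sample or assumed values):

```shell
# Temp dir stands in for /boot/grub on the source system
grubdir=$(mktemp -d)
printf 'kernel /vmlinuz ro rd.md.uuid=%s rd.md.uuid=%s\n' \
    'e9584522:f466c674:fe40d6a3:b6819d09' \
    '7689b602:6f0fd560:d677d9ff:2bf005c9' > "$grubdir/grub.conf"

# Target UUIDs, read ahead of time by booting the target system with IFL
# and running 'mdadm -Es' (sample values from step 5)
tgt_container='4aaf01e8:afb3a0b4:4211028f:dd0767bc'
tgt_drive='3db99a7c:19b661fa:bad1686b:bd3ad62a'

# Build grub.conf-target on the source system so it is included in the image
sed -e "s/rd\.md\.uuid=e9584522:f466c674:fe40d6a3:b6819d09/rd.md.uuid=$tgt_container/" \
    -e "s/rd\.md\.uuid=7689b602:6f0fd560:d677d9ff:2bf005c9/rd.md.uuid=$tgt_drive/" \
    "$grubdir/grub.conf" > "$grubdir/grub.conf-target"

# After the restore, overwrite the stale config with the prepared one
cp "$grubdir/grub.conf-target" "$grubdir/grub.conf"
```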

 

 


