Friday, May 15, 2009

Converting a Physical Linux system to a VMWare VirtualCenter VM

This is a daunting task. At least, it is until you figure out how to do it. Once you've got the steps, it's quite simple and relatively quick. We'll be importing a physical Linux (CentOS 5) server -- using IDE drives -- into a VMware VirtualCenter-managed host. So one thing needs to be made clear immediately: the tools we're using in this exercise aren't free. There are a variety of free tools out there, and I may get around to exploring those for this situation as well. In the meantime, however, we're talking about VMware Enterprise Converter 4 (v3 works, as well), which isn't available (yet, anyway) without cost. So enough of the fine print: let's get to it.

Downloading the Converter Boot ISO

This seems like it'd be obvious, but it wasn't to me: click on the downloads link on VMware's site, and instead of clicking on the download link for VMware Converter, choose VMware Infrastructure 3 instead. Now click on the download link for VMware vCenter Server 2.5. This is where you log in (see: I told you it wasn't free). Having done so, you'll see a link to download VMware vCenter Converter BootCD for vCenter Server. That's what you want. I know: it was obvious to you; I'm just slow. Burn that CD, and we'll move on to the next step.

Boot the Converter CD

There aren't a lot of options for using the converter CD, so I'll skip much of the detail unless someone requests more. There is one very important gotcha, however: network speed autonegotiation. The boot CD doesn't do it well. At least, it doesn't do it well with all network cards. Here's the problem: VMware simply took their Windows-only converter application and put it on a Windows PE boot disk, and the network drivers don't appear to be terribly robust, at least not for all network cards. So here's what you do: when the system boots, choose to edit the network settings manually (if you've already gotten past that point, you can edit them from the Network Settings menu in the Converter application), and change the speed from auto to whatever your network actually supports (in my case with this system, 100Mb full duplex). This will make things work much, much better. I'll note that if your import process is taking a *really* long time, and failing often along the way, this is most likely your problem.

Edit your new VM

Try a quick Boot

Once the import process is complete, go ahead and try powering the system on. It almost certainly will fail with a kernel panic, like the one below:
Kernel panic - not syncing: Attempted to kill init!
VFS: Cannot open root device "LABEL=/" or 00:00
Please append a correct "root=" boot option
Kernel panic: VFS: Unable to mount root fs on 00:00
Depending upon your configuration, it may instead say something like
VFS: Cannot open root device "VolGroup00/LogVol00" or unknown-block(0,0)
If it boots cleanly instead, you're done. You're probably not done, though; the problem we're witnessing here is that Linux is still looking for an IDE disk. That drive no longer exists: it's been converted to a SCSI disk, so we need to tell Linux how to read its new disk.

Change the VM SCSI controller type

Set your VM's SCSI controller to use LSI Logic instead of BusLogic. VMware says either will work, but I've had much better luck with LSI. Right-click on your VM and select Edit Settings, click on the SCSI Controller, and click the Change Type button if it's not already set to LSI Logic.
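For the curious: the Change Type button is just flipping the scsi0.virtualDev entry in the VM's .vmx file, so if you have shell access to the ESX host you could make the same change there while the VM is powered off. A minimal sketch, run here against a scratch file so it's safe to try anywhere (on a real host the file lives in the VM's datastore directory, a path that will differ on your system):

```shell
# Demo on a scratch file so nothing real is touched; on an ESX host this
# would be the VM's own .vmx file in its datastore directory.
vmx=$(mktemp)
printf 'scsi0.present = "TRUE"\nscsi0.virtualDev = "buslogic"\n' > "$vmx"

# Flip the controller type from BusLogic to LSI Logic:
sed -i 's/^scsi0\.virtualDev = .*/scsi0.virtualDev = "lsilogic"/' "$vmx"
cat "$vmx"
```

Only do this with the VM powered off, and only if you're comfortable poking at the host directly; the Edit Settings dialog is the supported route.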

Boot to the Linux Install CD

Power your VM on and mount the ISO (or actual CD) for Disk 1 of the Linux install set. If you installed from a DVD, just use that. When prompted for boot options, type
linux rescue
When the boot process is complete, it will ask whether you want to mount your file system. Don't mount it read-only; just click Continue. We'll change our root at this point to make things easier:
chroot /mnt/sysimage
/mnt/sysimage, by the way, is where the linux rescue system mounts your original file system. If nothing is mounted there, you have a problem; the best I can offer at this point is to power off the system and change the SCSI controller to whichever type it isn't set to right now. Having done that, there are a few files to edit, and then we'll re-create the boot image with the updated settings:

Edit the files

Edit the following three files, replacing all occurrences of /dev/hda with /dev/sda (if you're coming from IDE). If you're coming from physical SCSI devices, you'll find /dev/cciss/c0d0 in addition to /dev/hda; change these to /dev/sda as well. If you're unsure, make a backup of these files first.
vim /etc/fstab
vim /boot/grub/
vim /boot/grub/grub.conf
Now edit /etc/modprobe.conf:
vim /etc/modprobe.conf
While we're in here, VMware suggests making sure the ethernet adapter has been updated: for each ethX alias (X is a number) in modprobe.conf, set the module entry to pcnet32. Now, since we're using LSI Logic for our SCSI controller, we'll add the following (if these entries are already in there, just make sure their values are correct):
alias scsi_hostadapter mptbase
alias scsi_hostadapter1 mptscsih
If you're using BusLogic, the module is BusLogic instead of the mpt modules above.
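If you'd rather not make the /dev/hda replacements by hand in vim, sed can do the same job. A minimal sketch, demonstrated on a scratch copy of an fstab so it's safe to run anywhere; inside the rescue chroot you'd point it at the real /etc/fstab and grub files, and the -i.bak flag keeps the backups I recommended above:

```shell
# Build a scratch fstab containing the old IDE device names:
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/hda1   /       ext3    defaults        1 1
/dev/hda2   swap    swap    defaults        0 0
EOF

# Replace every /dev/hda reference with /dev/sda, keeping a .bak backup:
sed -i.bak 's|/dev/hda|/dev/sda|g' "$fstab"
cat "$fstab"
```

The same one-liner with /dev/cciss/c0d0 in place of /dev/hda covers the physical-SCSI case.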

Create the new Boot Image

Now we're (almost) ready to create our new boot image. Looking at the files in /boot, you'll likely see a whole bunch of different initrd*.img files. One of those is going to be replaced by what we're about to do. Look in /etc/grub.conf to see which one:
cat /etc/grub.conf
Note the initrd*.img file that is listed in the above file, as well as the kernel version (usually the same). This is what we'll be using.
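Rather than eyeballing grub.conf, you can pull the kernel version out with awk. A sketch, demonstrated here on a scratch copy of a typical CentOS 5 grub.conf so it runs anywhere (in the chroot you'd point it at /etc/grub.conf; the version string is just the example from this system):

```shell
# Scratch grub.conf mimicking a typical CentOS 5 entry:
conf=$(mktemp)
cat > "$conf" <<'EOF'
default=0
title CentOS (2.6.18-92.1.1.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-92.1.1.el5 ro root=LABEL=/
        initrd /initrd-2.6.18-92.1.1.el5.img
EOF

# The kernel version is the suffix of the vmlinuz- file on the kernel line:
kver=$(awk '$1 == "kernel" { sub(".*vmlinuz-", "", $2); print $2; exit }' "$conf")
echo "kernel version: $kver"
echo "initrd to rebuild: /boot/initrd-$kver.img"
```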

Fix a Red Hat bug

If you're running Red Hat Enterprise Linux (RHEL) or CentOS, you're almost certainly going to run into a mkinitrd bug. Let's nip that before it comes up:
echo "DMRAID=no" > /etc/sysconfig/mkinitrd/noraid
chmod 755 /etc/sysconfig/mkinitrd/noraid
In short, the bug makes this happen when you run mkinitrd:
No module dm-mem-cache found for kernel 2.6.18-92.1.1.el5, aborting.
If you want more information on it, it's described here.

Run mkinitrd

Simply run mkinitrd -v -f followed by the /boot/initrd-*.img filename and then the kernel version that you noted from /etc/grub.conf above. In my case, it looked like this:
mkinitrd -v -f /boot/initrd-2.6.18-92.1.1.el5.img 2.6.18-92.1.1.el5
When that runs to completion, you should be able to boot. Note that you'll want to make sure Grub boots that kernel version when you reboot your VM. To ensure that, when Grub's "hit any key to enter boot menu" message comes up, do so and select the appropriate kernel from the list. Make sure you update your system after it has booted; you may have ended up with an older boot kernel than you want.


  1. Thanks for the great info. I was able to follow the instructions and it worked great for me. One issue I did have though was with the network settings for the VMware Converter. It wasn't initially configured to use DHCP for DNS. This caused the conversion process to fail at 2% with a generic "Unknown" error until I finally found some info on VMware's site that indicated the problem was DNS. Once I changed DNS to DHCP it worked great. Thanks again!

  2. Darkman, thanks for the feedback; I'm glad it was helpful! Thanks, too, for the heads-up about the DHCP piece. It does appear that the XP PE base on which they built the converter CD could use some improvements. I do wish they'd come up with a Linux-based boot CD; I think that'd clear up a lot of the problems with it.

  3. I hope God will give eternal life so that you can help me anytime I'll need again!!! Thanks very much!

