
Ghosts in the Machine

May 11 09

Paul Weinstein

Over the past couple of years I have become quite a fan of virtualization. While full virtualization has been around for quite a while, it has only come into its own on the x86 architecture with the virtualization extensions Intel and AMD have added to their CPUs over the last four years or so. Since then, the options for running virtual machines have expanded and matured. Even Apple and its line of Macs have gotten into the mix since the migration of Apple’s computing products from PowerPC to Intel x86.

In fact, my first successful use of virtualization came on a Mac running Parallels. Back at Zoomshare, my main workstation was a Mac running OS X, which in and of itself was fine for me: I could do just about anything I needed development-wise, writing code with TextWrangler, managing code with Subversion, even doing quick prototyping by running Apache, Perl and PostgreSQL all on one system.

Great, except that well over half of the visitors to Zoomshare-hosted sites were using some version of Internet Explorer, and IE has quite a few well-known issues. Plus, part of my tenure at Zoomshare covered the release of IE 7, which required the ability to test against two different versions despite the fact that a single Windows installation could run only IE 6 or IE 7, not both.

Enter Parallels for the Mac and virtualization. With virtualization, I was able to run several instances of Windows XP, concurrently if needed, to test against both versions of IE. This despite the fact that I was using a Mac Mini with a meager Intel Core Solo CPU [1].

But wait, you say, Macs have been able to run versions of Windows for years. What about Virtual PC or even OrangePC? Both options provided the ability to run Windows, but in the non-Intel Mac world of PowerPC CPUs, and they took technically different approaches: Virtual PC emulated Intel-based hardware in software, while OrangePC provided actual Intel hardware on a daughterboard incorporated into the Mac environment.

Virtualization, on the other hand, presents the current underlying hardware to the guest. In the case of the “modern” Mac mini at Zoomshare, this meant exposing the underlying Intel hardware for use by Windows. In the case of my current laptop, a Lenovo ThinkPad R61i, it means a Linux (Fedora Core 9) “host” running VMware Workstation and a “guest” Windows XP, Mac OS X or just about any other OS built to run on the underlying Intel Core Duo processor.

VM Screen Shot

In my most recent, and current, incarnation as an independent computer consultant, I’ve taken on the task of updating a retail bridal shop’s online store. Step one was to set up a development environment. At previous places I’ve worked, in order to keep costs under control, the development setup was built from repurposed and scavenged production equipment. With VMware’s Server product I can create multiple development setups for testing on one midsize quad-core server with ample memory.

Not that this is without its own issues, some based on software limitations, some on hardware. For example, one issue I have with VMware’s Server software is the lack of cloning options. See, one advantage of virtual machines is the ability to replicate virtual “appliances”, software images containing a software stack designed to run inside a virtual machine. That is, I can create a virtual machine, install the necessary base components and then replicate it as many times as needed [2].

The problem with VMware Server is that while I can copy the files that represent the virtual machine on the host’s file system, I then need to modify configuration options in the .vmx and .vmxf files. Even then, I found I needed to remove and re-add the “hard drive” within VMware’s web access tool after notifying VMware that the cloned virtual machine is a “copied” virtual machine and needs a new unique identifier.
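For what it’s worth, the manual process boils down to something like the following shell sketch. The store path, VM names and the single displayName key here are placeholders for illustration; a real VM directory carries .vmxf files and .vmdk disk images as well, with more name-bearing options to rewrite:

```shell
# Rough sketch of a manual "clone" under VMware Server.
# Placeholder names throughout; a real VM store lives elsewhere on the host.
set -e
VMROOT=$(mktemp -d)                 # stand-in for the host's VM store

# A minimal stand-in for an existing VM's .vmx config file.
mkdir -p "$VMROOT/store-dev"
printf 'displayName = "store-dev"\n' > "$VMROOT/store-dev/store-dev.vmx"

# 1. Copy the whole directory that represents the virtual machine.
cp -r "$VMROOT/store-dev" "$VMROOT/store-dev2"

# 2. Rename the config file and rewrite the options that embed the old name.
mv "$VMROOT/store-dev2/store-dev.vmx" "$VMROOT/store-dev2/store-dev2.vmx"
sed -i 's/store-dev/store-dev2/g' "$VMROOT/store-dev2/store-dev2.vmx"
```

Even after this, VMware still has to be told the machine was “copied” rather than moved so it generates a new unique identifier, and in my case the virtual disk still had to be removed and re-added through the web access tool.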

The other catch to all of this is that each of these instances where I have grown to accept virtualization as a viable tool has been in a closed, controllable environment. Where virtualization is supposed to shine, from a cost-of-ownership perspective, is in the data center – in messy, live production environments. That to me seems risky, at least in terms of the production environments I tend to work in.

Full virtualization, the type discussed here, is good at isolating users and computing environments from each other; e.g. multiple differentiated development servers for testing various coding or deployment approaches. In a live production environment hosting a mission-critical business application, such as a retailer’s online store, I’m not quite sure how virtualization would be cost-effective. One would need some beefy iron to run a popular online store, and if the underlying hardware suffered any sort of failure, the whole business would come to a halt. On the other hand, for non-critical but business-necessary operations – development and testing environments, email, file sharing, print servers, digital telephony and even employee workstations – virtualization is definitely worth a look.


[1] OK, I did have to increase the amount of RAM in the machine in order to run XP responsively, but that’s a minor upgrade, even with the Mac Mini’s non-straightforward enclosure.

[2] Well, not quite. One issue here is software licensing. For example, Apple’s End User License Agreement limits even virtual installations of OS X Server to Apple-designed hardware. Other software packages have other sorts of limitations.