Virtualization: Now and The Future

posted by on 28th July 2009, at 9:20pm

What is Virtualization?

The most common use of the word virtual for most people is “virtual environment” or “virtual world.” In the realm of computers, however, “virtual” objects can be used in many more areas. Before we move any further it’s important to state that hardware will be defined as the physical technology used, and software as the applications that run on top of hardware, whether that hardware is virtual or not. The virtualization we will be talking about can be thought of as using software to create hardware (hardware that does not exist, except in memory) for other pieces of software, including operating systems, to run on. A clear example of basic virtualization is using an application like Sun’s VirtualBox, VMware, or Parallels to install another operating system. This is full virtualization because the application creates a full selection of hardware (CPU, RAM, hard drive, video card, sound card, etc.).

The Here and Now

There are currently two fronts of virtualization: platform virtualization and application virtualization. Most people have not heard of application virtualization, as there are no real mainstream uses yet. Right now everyone is primarily concerned with the advantages platform virtualization can bring.

Before we go any further it’s important to define platform virtualization and application virtualization. Platform virtualization is the act of using software to create a full set of hardware, as mentioned above. With it you can run an operating system inside one of the applications mentioned earlier. Application virtualization is slightly trickier. It can be thought of as using software to create a contained section within your existing operating system for other software to run in. The most common use of this is to prevent applications from damaging the existing operating system.

Platform virtualization is currently the main focus in the consumer space. In early 2006, when Apple moved to an Intel-based design, it became possible to run Windows on their computers. It wasn’t long until various virtualization solutions popped up; Parallels was the first. At the same time Linux was gaining steam and becoming more viable for mainstream use. This was the catalyst for VirtualBox, which in turn enabled Windows to be run inside Linux using open source software. While I mention Mac OS and Linux, it’s also fair to say that all of these solutions work on Windows.

If you’re on Windows and wondering why you would want to use a virtual machine, there are a few scenarios. The first is that it’s fun. It’s fun to experiment with different operating systems without the risk of corrupting your main system, and trying out a new operating system can be a good learning experience. Second, you may be a software developer who needs to test software on previous or newer versions of Windows. In the same category, you can also use virtualization when one of the applications you need does not work with your operating system.

As stated before, there is little emphasis on application virtualization at the present time. The biggest area for expansion in application virtualization is security. With application virtualization a “sandbox” can be created, allowing applications to run without the potential to damage your existing operating system. One such application is Sandboxie, whose name stems from the fact that it was first designed to sandbox Internet Explorer (IE). Sandboxie can currently sandbox any application you wish, which makes it a great utility for testing applications that you do not trust.
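
The write-redirection idea behind a sandbox can be sketched in a few lines of Python. This is only a toy model, not how Sandboxie works internally: the untrusted code is pointed at a throwaway directory, and anything it writes is inspected and then discarded rather than ever touching the real file system. The function names are mine, invented for illustration.

```python
import os
import shutil
import tempfile

def run_sandboxed(untrusted_task):
    """Run a callable inside a throwaway directory.

    The task receives the sandbox path and writes there; afterwards
    we list what it created, then wipe the whole directory so nothing
    leaks onto the real system.
    """
    sandbox = tempfile.mkdtemp(prefix="sandbox-")
    try:
        untrusted_task(sandbox)
        # Inspect what the task wrote before throwing it away.
        return sorted(os.listdir(sandbox))
    finally:
        shutil.rmtree(sandbox)  # the "damage" vanishes with the sandbox

def demo_task(root):
    # Pretends to drop a file on the user's disk.
    with open(os.path.join(root, "dropped.txt"), "w") as f:
        f.write("I think I'm writing to your disk!")

print(run_sandboxed(demo_task))  # -> ['dropped.txt']
```

A real sandbox does this transparently at the file-system and registry level, so the application never knows it has been redirected.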

Application virtualization can also be used for portable applications. Applications such as Firefox have a portable version (as does Opera). These are useful when you are away from your own computer and using a public machine whose security may be questionable. They can also be used when you desire an increased level of security, as the drive holding them can remain encrypted until an appropriate passcode is given.
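
What makes an application “portable” is mostly a question of where it keeps its state: instead of writing settings into the host machine’s user profile, it keeps everything next to its own executable on the removable drive. A minimal sketch of that decision (the file name is illustrative, not any real application’s layout):

```python
import os
import sys

def settings_path(portable=True):
    """Portable apps keep settings beside the program itself;
    installed apps put them in the user's profile on the host."""
    if portable:
        base = os.path.dirname(os.path.abspath(sys.argv[0]))
    else:
        base = os.path.expanduser("~")
    return os.path.join(base, "settings.ini")

print(settings_path(portable=True))   # travels with the USB stick
print(settings_path(portable=False))  # stays on the host machine
```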

Virtualizing hardware architectures also falls under application virtualization. It is needed when you want to run applications built for a different computer architecture. A hardware architecture can be thought of as a particular way of handling the interaction between hardware and software, from the perspective of the CPU. An example of this was the Mac OS transition to Intel: if you ran a PowerPC-based application on an Intel machine, an invisible layer of virtualization (Apple’s Rosetta translation layer) would kick in, allowing it to run seamlessly.
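
At the core of such a translation layer is an interpreter loop: fetch a foreign instruction, decode it, and perform the equivalent operation on the host CPU. A real solution like Rosetta translates whole blocks of machine code for speed, but the idea can be sketched with a toy instruction set (the opcodes and the single-register machine here are invented purely for illustration):

```python
def emulate(program):
    """Interpret a toy 'foreign' instruction set on the host CPU.

    Each instruction is an (opcode, operand) pair acting on a single
    accumulator register -- a made-up architecture used only to show
    the fetch/decode/execute loop.
    """
    acc = 0
    for opcode, operand in program:
        if opcode == "LOAD":
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "MUL":
            acc *= operand
        else:
            raise ValueError(f"unknown opcode: {opcode}")
    return acc

# (5 + 3) * 2 expressed in the toy instruction set
foreign_program = [("LOAD", 5), ("ADD", 3), ("MUL", 2)]
print(emulate(foreign_program))  # -> 16
```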

Currently virtualization tends to be an afterthought unless the user is aware that they require a virtual solution. These are stepping stones towards the future in which virtualization will become more relevant.

The Virtual Future

Platform virtualization is the tip of the iceberg. Today dual-core processors are commonplace. Most applications and operating systems can’t take full advantage of both cores, but that’s a topic for another day. Quad-core processors are becoming more common as well, and they suffer the same problem. It is clear that the direction of both Intel and AMD is to keep innovating on processor technology, increasing the amount of work that can be done per clock cycle, and to continue developing processors with more cores. Developing an operating system or application to utilize all of those cores isn’t an easy task. We are likely to see incremental development on the operating system front, with specific applications updated to take advantage of multi-core technology. This leaves us asking: what are we going to do with 4, 8, 16, or more processor cores?
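
Spreading work across cores is something an application has to do explicitly; it doesn’t happen for free. A minimal sketch using Python’s standard-library process pool, which hands independent chunks of work to however many cores the machine reports:

```python
import os
from multiprocessing import Pool

def heavy_work(n):
    # Stand-in for a CPU-bound task; each call runs independently,
    # so the pool can schedule them on separate cores.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    cores = os.cpu_count()
    print(f"spreading work across {cores} cores")
    with Pool(processes=cores) as pool:
        results = pool.map(heavy_work, [10_000] * 8)
    print(len(results))  # -> 8
```

The hard part in practice is that most real workloads aren’t this neatly divisible, which is exactly why extra cores so often sit idle.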

The answer is simple: virtualize. We are heading down a road to a point at which your operating system won’t be running on real hardware. As the technology advances and gets faster, this becomes all the more relevant. It will be accomplished by booting a minimal kernel that handles the interaction between the real hardware and the software on the machine. From there the operating system would be loaded virtually, and alongside it other operating systems could be loaded as well. Microsoft has an interesting technology to handle interaction with the host hardware and the loading of operating systems, called Hyper-V. Hyper-V is currently only available in the Server flavour of Windows. It is important to underscore that the root operating system would be loaded transparently; it would appear much like the current Windows loading process.

In a virtual operating system, applications could still be sandboxed, which would allow for another level of application security. As stated before, multi-core is coming on strong, and it’s entirely possible that each application could run sandboxed on its own processor core(s). At the time of writing, both of my systems are running fewer than 100 processes (a process being an application or task that the operating system is currently working on), which means a 128-core system would be able to handle virtualization of all my processes. Once again, this isn’t something we’ll be seeing in the next 3 years; more likely sometime in the next 10.
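
Counting your own processes is easy to try. On Linux, each running process appears as a numbered directory under /proc, so a rough count takes one line (this sketch is Linux-specific; on Windows you would consult Task Manager instead):

```python
import os

def count_processes(proc_root="/proc"):
    """Count running processes by listing numeric /proc entries (Linux)."""
    return sum(1 for name in os.listdir(proc_root) if name.isdigit())

n = count_processes()
print(f"{n} processes running")
if n <= 128:
    print("a 128-core machine could give each process its own core")
```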

Intel recently demonstrated an eight-core server CPU. The demonstration was further bolstered by the fact that it was running in a configuration with 8 of these CPUs, which of course means 64 cores. That number was doubled by Hyper-Threading (a technology which presents two logical cores in place of one physical core), so the demonstration, running Windows XP, showed 128 active cores. This would be of little benefit now, but with virtualization as described above, a whole new world of possibilities opens up. One day single CPUs with a massive number of cores, as in this demo, will be available, and we will reap the benefits of virtualization.
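
The logical-core arithmetic behind that demo is worth spelling out:

```python
cpus = 8              # sockets in the demo machine
cores_per_cpu = 8     # physical cores on each CPU
threads_per_core = 2  # Hyper-Threading doubles the logical count

logical_cores = cpus * cores_per_cpu * threads_per_core
print(logical_cores)  # -> 128
```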

I hope this has given you an expanded view of how important virtualization is to future development of operating systems and software. If you would like anything clarified or explained in more depth, don’t hesitate to ask. If you have any suggestions for future articles please send me a PM on the forums.

That’s all for today.


This article is filed under Tech.

2 Comments

  • tobylane Says:
    16th August 2009, at 4:00pm

It’ll be a server thing for a long time; unless you’re using Photoshop at the same time as Crysis, the Intel dual cores are enough.

Servers can use it very well; my college has something like 30 Linux servers VMing the ~80 Windows and ~20 Linux servers we use.

  • Russell Hohmann Says:
    9th April 2010, at 2:19am

    informative post, raises interesting points