Virtualization is a fast-growing area of computing and IT. It has existed in some form for many years; one of the most familiar examples is hard drive partitioning on a home PC. Virtualization essentially means creating a new "virtual" version of something rather than a "real" one. The partitioning example is a perfect way to understand this: you take one physical hard drive and split it in two, with the second partition acting as a "virtual" drive.
The two main competing platforms used for virtualization tasks are Microsoft Hyper-V and VMware. Both have their merits, and there is much debate over which is best.
There are three main types of virtualization, all of which carry out important tasks in businesses:
Storage virtualization is where the resources of many different network storage devices, such as hard drives, are pooled so that they appear as one large store of storage. This pool is managed by a central system, which makes everything look much simpler to the network administrators. It is also a great way to monitor resources in a business, as you can see exactly how much capacity remains at any given time, and it greatly simplifies tasks such as backups.
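The pooling idea can be sketched as a toy model. This is an illustrative sketch only, not a real storage product's API; all class and device names here are made up:

```python
# Toy model of storage virtualization: several physical devices
# presented to callers as one logical pool of capacity.

class StoragePool:
    def __init__(self):
        self.devices = {}  # device name -> [capacity_gb, used_gb]

    def add_device(self, name, capacity_gb):
        self.devices[name] = [capacity_gb, 0]

    def total_capacity(self):
        return sum(cap for cap, _ in self.devices.values())

    def free_space(self):
        return sum(cap - used for cap, used in self.devices.values())

    def allocate(self, size_gb):
        # Spread the allocation across devices with free space,
        # so callers never need to know which disk holds the data.
        remaining = size_gb
        for dev in self.devices.values():
            take = min(dev[0] - dev[1], remaining)
            dev[1] += take
            remaining -= take
            if remaining == 0:
                return True
        return False  # pool exhausted


pool = StoragePool()
pool.add_device("disk-a", 500)
pool.add_device("disk-b", 1000)
print(pool.total_capacity())  # 1500
pool.allocate(600)            # spans both disks transparently
print(pool.free_space())      # 900
```

The administrator sees a single 1500 GB pool and one free-space figure, even though the 600 GB allocation actually spans two physical disks.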
Network virtualization is when the separate resources of a network are combined, allowing the network administrator to share them out among the users of the network. This is done by splitting the available bandwidth into channels, which the administrator can assign as and when required. Each user can then access the network's resources, such as files and folders, printers, or hard drives, from their own computer. This streamlined approach makes the network administrator's life much easier and makes the system appear far simpler to the user than it really is.
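The channel-splitting idea above can be sketched in a few lines. This is a simplified illustration, not a real networking API; the class name, link speed, and channel size are all assumptions:

```python
# Toy model of network virtualization: one physical link's bandwidth
# divided into fixed-size channels that an administrator assigns to users.

class VirtualNetwork:
    def __init__(self, link_mbps, channel_mbps):
        self.channel_mbps = channel_mbps
        self.free_channels = link_mbps // channel_mbps
        self.assignments = {}  # user -> number of channels held

    def assign(self, user, channels):
        # The administrator hands out channels as and when required.
        if channels > self.free_channels:
            return False
        self.assignments[user] = self.assignments.get(user, 0) + channels
        self.free_channels -= channels
        return True

    def bandwidth_for(self, user):
        return self.assignments.get(user, 0) * self.channel_mbps


net = VirtualNetwork(link_mbps=1000, channel_mbps=100)  # 10 channels
net.assign("alice", 3)
net.assign("bob", 2)
print(net.bandwidth_for("alice"))  # 300
print(net.free_channels)           # 5
```

From each user's point of view there is simply "their" bandwidth; the shared physical link underneath is hidden, which is exactly the simplification the paragraph describes.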
Server virtualization is the main area of the field. A number of "virtual machines" are created on one physical server, so that multiple workloads can share a single machine, saving on processing power, cost and space. Each workload still appears to run in its own separate space, so any errors can be isolated, diagnosed and fixed quickly.
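The resource-sharing and isolation described above can be sketched as a toy model. This is not a real hypervisor API such as Hyper-V's or VMware's; the class, method and VM names are purely illustrative:

```python
# Toy model of server virtualization: one physical server hosting
# several virtual machines, each with its own slice of resources.

class Server:
    def __init__(self, cpu_cores, ram_gb):
        self.cpu_free = cpu_cores
        self.ram_free = ram_gb
        self.vms = {}

    def create_vm(self, name, cpu, ram):
        # Each VM gets a dedicated slice of the host's resources,
        # so a fault in one VM stays confined to that VM.
        if cpu > self.cpu_free or ram > self.ram_free:
            return False  # host cannot fit another VM this size
        self.cpu_free -= cpu
        self.ram_free -= ram
        self.vms[name] = {"cpu": cpu, "ram": ram, "status": "running"}
        return True


host = Server(cpu_cores=16, ram_gb=64)
host.create_vm("web", cpu=4, ram=8)
host.create_vm("db", cpu=8, ram=32)
print(len(host.vms))   # 2
print(host.cpu_free)   # 4
print(host.ram_free)   # 24
```

Two separate workloads run on one physical box, yet each sees only its own allocation, which is why a problem in the "web" VM can be diagnosed without touching the "db" VM.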