Moving on, it's now 2014: vSphere is at version 5.5 and Hyper-V has come on in leaps and bounds; it can even be found in Windows 8.1 rather than just in Microsoft's server family of products. But wait a minute, there's a new buzzword on the block. It begins with a 'c' and it is of course 'CLOUD'.
There's a lot of controversy regarding what the 'cloud' actually is. Some say it's simply a word that encompasses hosting services that have been around for ages. Others say it's just a new buzzword for the internet (meaning anything that's not on your computer or device is in the cloud... aka Dropbox or Spotify etc.).
Then there's what the cloud really means to us tech folk: a fully automated environment that abstracts a service from the infrastructure used to provide it. When a user needs a server, they click a button and get it. If they need more storage space, they click a button and get it. Even networking in a true cloud is defined by software: a user wants a load balancer or a firewall... boom, they've got one. The same goes for databases: the user clicks a button and has a database ready for use. All of this happens without any user interaction with the underlying networks, storage systems and servers, even though the platform might fill an entire rack, an entire data hall, or even multiple data centres across different parts of the world.
Obviously all of the above is possible thanks to virtualisation... but as with any system, when size increases, so does cost. This is why the world of open source technology, and in particular open source virtualisation, is at the absolute forefront of massively scalable cloud platform technology.
This post will focus on perhaps the biggest player in the Linux virtualisation world: KVM, the Linux Kernel-based Virtual Machine, along with a program called Proxmox VE, one of several management layers that can be credited with KVM's rapid adoption as a serious contender against the likes of VMware's vSphere and Microsoft's Hyper-V.
----------------------------------------
For a detailed explanation of what KVM is and how it works, a quick Google is your friend, but simply put it is a kernel module that turns Linux into a fully capable bare metal 'Type 1' hypervisor. (Type 1 meaning it interfaces directly with the hardware and does not require an underlying operating system to act as the middle man.) Hyper-V Server and VMware ESXi are also both Type 1 hypervisors. However, Oracle's VirtualBox, VMware Workstation and even Microsoft's Virtual PC are not, since they require the host system to already have an operating system installed that they can interact with.

Now, the biggest advantage products like VMware have, or 'had', is their excellent management suites. Take VMware's ESXi for example: the standalone management interface on each host allows you to do basically everything imaginable, but when you start using vCenter Server to manage multiple ESXi hosts, that's when you really see doors opening up as to how simple management of an entire server estate can be. Fault tolerance, HA, DRS, storage interfacing, distributed virtual switching... it's got it all, and I must say, it's very easy to use once you get the hang of it.
I'm going to throw it out there and say that VMware's vCenter is the industry benchmark for virtual environment management... So this is what the likes of KVM has to contend with, which leads me on to introduce Proxmox VE.
Proxmox is a fully featured, out-of-the-box solution that leverages the KVM hypervisor & OpenVZ containers. It supports clustering, HA and various storage options such as NFS & iSCSI.
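Everything can be driven from the web GUI, but to give a flavour of the tooling underneath, here's roughly how an NFS store might be attached from the shell using Proxmox's pvesm utility. The storage name, server address and export path below are made up for illustration, so treat this as a sketch and check `pvesm help` on your own install:

```
# Attach an NFS export as a storage pool for disk images and ISOs
# (storage name, server and export path are examples only)
pvesm add nfs vmstore --server 192.168.1.50 --export /srv/vmstore --content images,iso

# List the configured storage pools and their status
pvesm status
```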
Management of your virtualised environment is taken care of by the excellent web GUI pictured above, and I have to say, I think this is where Proxmox really starts to shine. It is bridging the gap between the highly polished world of commercial virtualisation solutions and the not-so-polished world of open virtualisation tech such as KVM. It's thanks to tools like these that people like me are looking away from what we have known and giving something else a try... After all, competition is what drives innovation, right?
Proxmox is installed by means of a prebuilt ISO that's downloadable from their website. It is based on Debian and has an extremely straightforward installation procedure. Literally: boot the CD, enter a hostname, network details and a password... click, click, done! Then point your browser at the machine via HTTPS on port 8006 and log in. That is literally it... You now have a fully functional VM host.
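Once it's up, you can sanity-check from the shell that the CPU's virtualisation extensions are present and the KVM modules are loaded. These are standard Linux commands, nothing Proxmox-specific:

```
# Count the CPU flags for Intel VT-x (vmx) or AMD-V (svm); more than 0 is good
egrep -c '(vmx|svm)' /proc/cpuinfo

# Confirm the KVM kernel modules are loaded
lsmod | grep kvm

# The device node that userspace talks to the hypervisor through
ls -l /dev/kvm
```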
Networking in Proxmox is fairly simple, however you do need to understand a bit about how Linux deals with network adapters, bonding and bridging. If you're doing anything beyond the default network configuration, you will need console access to the server (as in out-of-band management such as DRAC, iLO or another KVM-over-IP system), since a networking mistake can easily cut you off from the web GUI.
Allow me to explain my above example...
The server has two NICs: one connects to switch A, the other to switch B. I have bonded the two NICs to provide network redundancy for the host. This bond is labelled "bond0", with eth0 and eth1 as its members. To allow your VMs to connect to this physical network, we must create a network bridge (effectively a port group on a vSwitch in the world of VMware). By default there is already a bridge, "vmbr0", with bond0 as its bridge port, meaning VMs connected to that bridge can reach the physical network via bond0.
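In /etc/network/interfaces, that default setup looks something like the sketch below. The NIC names and addressing are from my lab and will differ on your hardware, and I've assumed active-backup bonding since it needs no special switch configuration; if your switches support LACP you'd pick a different bond_mode:

```
# /etc/network/interfaces -- plain bond + default bridge, no VLANs yet
# (addresses are examples only)
iface eth0 inet manual
iface eth1 inet manual

auto bond0
iface bond0 inet manual
    slaves eth0 eth1
    bond_miimon 100
    bond_mode active-backup

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
```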
However in my example, and let's face it, the real world, VLANs will probably be in use on your network, so naturally this host will be plugged into trunk ports on the switches... Therefore we need to employ VLAN tagging on our interfaces. In Debian Linux this is achieved by listing the interface name followed by the VLAN ID in the /etc/network/interfaces file. In my example, you can see that I have created two VLAN interfaces, bond0.20 and bond0.50.
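The VLAN interfaces themselves need nothing more than a stanza each, something like this with my example IDs (if they refuse to come up, check that the 8021q kernel module is loaded with modprobe 8021q):

```
# Tagged sub-interfaces on top of the bond (VLAN IDs from my example)
auto bond0.20
iface bond0.20 inet manual

auto bond0.50
iface bond0.50 inet manual
```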
The next part is simple... Any "vmbr" bridges we create must connect to a VLAN interface and not the underlying bond0 itself. So you will see that I changed the default "vmbr0" to bridge onto "bond0.50", which is my management VLAN. I also created a second bridge and connected it to "bond0.20", one of the front-end production networks that live traffic flows over.
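Putting it together, the bridge stanzas end up looking roughly like this; the addressing is again invented for illustration:

```
# Management bridge on VLAN 50 -- carries the host's own IP
auto vmbr0
iface vmbr0 inet static
    address 10.0.50.10
    netmask 255.255.255.0
    gateway 10.0.50.1
    bridge_ports bond0.50
    bridge_stp off
    bridge_fd 0

# Production bridge on VLAN 20 -- no host IP, it only carries VM traffic
auto vmbr1
iface vmbr1 inet manual
    bridge_ports bond0.20
    bridge_stp off
    bridge_fd 0
```

Apply the changes with a reboot or a networking restart, and do it from the out-of-band console rather than SSH, for the reasons mentioned earlier.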
I will write up some guides as I work more and more with Proxmox and document my experiences with it in new posts on here...
**At the time of writing, Proxmox VE 3.2 has just been released and brings with it a huge array of new features, including Open vSwitch support and the SPICE protocol. This makes it an even stronger contender against the likes of VMware. So I shall be testing the 3.2 release before I write any guides, to make sure I don't post inaccurate information.**
https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_3.2





