Monday, 10 March 2014

KVM/Proxmox.. My journey into Open Source Virtualisation

I have been a keen follower and end user of virtualisation technologies for quite a few years now. Back in my first role I quickly discovered VMware Workstation, which became my home turf when training for the various Microsoft certifications I have pursued since starting my career. From there I have gone on to become extremely familiar with products such as VMware ESXi and Microsoft Hyper-V... in fact I have used the VMware vSphere family of products since version 3.5 and I love it to bits. It's tried, it's tested and by God it's just good. That said, I have designed and built infrastructure with both Microsoft's Hyper-V and VMware vSphere, so I have experience with both.

Moving on, however, to 2014: vSphere is at version 5.5 and Hyper-V has come on in leaps and bounds; it can even be found in Windows 8.1 rather than just in Microsoft's server family of products. But wait a minute, there's a new buzzword on the block. It begins with a 'c' and it is, of course, 'CLOUD'.

There's a lot of controversy regarding what the 'cloud' actually is. Some say it's simply a word that encompasses hosting services that have been around for ages. Others say it's just a new buzzword for the internet (meaning anything that's not on your computer or device is in the cloud... aka Dropbox or Spotify etc...).

Then there's what the cloud really means to us tech folk: a fully automated environment that abstracts the service from the infrastructure used to provide it. When a user needs a server, they click a button and get it. If they need more storage space, they click a button and get it. Even networking in a true cloud is defined by software: a user wants a load balancer or firewall... boom, they've got one. The same goes for databases: the user clicks a button and has a database ready for use. All this happens without any user interaction with the underlying networks, storage systems and servers, even though the platform might fill an entire rack, an entire data hall, or even multiple data centres across different parts of the world.

Obviously the above is possible thanks to virtualisation... but as with any system, when size increases, so does cost. This is why the world of open source technology and in particular open source virtualisation is at the absolute forefront of massively scalable cloud platform technology.

This post will be focusing on perhaps the biggest player in the Linux virtualisation world: KVM, the Linux Kernel-based Virtual Machine, along with a product called Proxmox VE, one of many management layers that can be credited for KVM's rapid adoption as a serious contender against the likes of VMware's vSphere and Microsoft's Hyper-V.

----------------------------------------

For a detailed explanation of what KVM is and how it works, a quick Google is your friend, but simply put it is a kernel module that turns Linux into a fully capable bare metal 'Type 1' hypervisor. (Type 1 meaning it interfaces directly with the hardware and does not require an underlying operating system to act as the middle man for it.) Hyper-V Server and VMware ESXi are also both Type 1 hypervisors. However, Oracle's VirtualBox, VMware Workstation and even Microsoft's Virtual PC are not, since they require the host system to already have an operating system installed that they can interact with.
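If you want to confirm that a Linux box is actually ready for KVM, a couple of standard commands will tell you. This is just a generic sketch using common tools, nothing Proxmox specific:

```
# Does the CPU expose hardware virtualisation extensions? (a non-zero count means yes)
egrep -c '(vmx|svm)' /proc/cpuinfo

# Are the KVM kernel modules loaded?
lsmod | grep kvm
# Expect to see 'kvm' plus 'kvm_intel' or 'kvm_amd' in the output
```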

Now the biggest advantage products like VMware have, or 'had', is their excellent management suites. Take VMware's ESXi for example: the standalone management interface on each host allows you to do basically everything imaginable, but when you start using vCenter Server to manage multiple ESXi hosts, that's when you really see doors opening up as to how simple management of an entire server estate can be... Fault Tolerance, HA, DRS, storage interfacing, distributed virtual switching... It's got it all, and I must say, it's very easy to use once you get the hang of it.

I'm going to throw it out there and say that VMware's vCenter is the industry benchmark for virtual environment management... So this is what the likes of KVM has to contend with, which leads me on to introduce Proxmox VE.


Proxmox is a fully featured, out-of-the-box solution that leverages the KVM hypervisor and OpenVZ containers. It supports clustering, HA and various storage options such as NFS and iSCSI.

Management of your virtualised environment is taken care of using the excellent web GUI pictured above, and I have to say, I think this is where Proxmox really starts to shine. It is bridging the gap between the highly polished world of commercial virtualisation solutions and the not-so-polished world of open virtualisation tech such as KVM. It's thanks to products like these that people like me are looking away from what we have known and are giving something else a try... After all, competition is what drives innovation, right?

Proxmox is installed by means of a prebuilt ISO that's downloadable from their website. It is based on Debian and has an extremely straightforward installation procedure. Literally: boot the CD, enter a hostname, network details and a password... click, click, done! Then point your browser at the machine via HTTPS on port 8006 and log in. That is literally it... You now have a fully functional VM host.
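For example, assuming you gave the host an address of 192.168.1.50 during the install (that address is purely illustrative), you would reach the GUI like this, and you can also drive the host from its shell with the qm tool (the VM ID, name and sizes below are just examples):

```
# Web GUI: https://192.168.1.50:8006
# Accept the self-signed certificate and log in as root using the PAM realm.

# As a quick test, a basic VM can also be created and started from the shell:
qm create 100 -name testvm -memory 2048 -net0 virtio,bridge=vmbr0
qm start 100
```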


Networking in Proxmox is fairly simple, however you do need to understand a bit about how Linux deals with network adapters, bonding and bridging. If you are doing anything beyond the default network configuration, you will require console access to the server (as in out-of-band management such as DRAC, iLO or another KVM-over-IP system).

Allow me to explain my above example...

The server has two NICs: one connects to switch A, the other to switch B. I have bonded the two NICs to provide network redundancy to the host. This bond is labelled "bond0", and eth0 and eth1 are its members. To allow your VMs to connect to this physical network, we must create a network bridge (this is effectively a port group on a vSwitch in the world of VMware). By default there is already a bridge, "vmbr0", and bond0 will be the bridge port, meaning VMs connected to that bridge can access the physical network via bond0.

However, in my example, and let's face it... the real world, VLANs will probably be in use on your network, so naturally this host will be plugged into trunk ports on the switches... Therefore we need to employ VLAN tagging on our interfaces. In Debian Linux this is achieved by listing the interface name, followed by a dot and the VLAN ID, in the /etc/network/interfaces file. In my example, you can see that I have created two VLAN interfaces, bond0.20 and bond0.50.

The next part is simple... Any "vmbr" bridges we create must connect to a VLAN interface and not the underlying bond0 itself... So you will see that I changed the default "vmbr0" to bridge onto "bond0.50", which is my management VLAN. I also created a second bridge and connected it to "bond0.20", which is one of the front end production networks that live traffic flows over. A rough sketch of the resulting configuration follows below.
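To make that a bit more concrete, here is roughly what the relevant part of /etc/network/interfaces looks like for this kind of layout. Treat it as a sketch only: the addresses and the bond mode are made-up examples, and only the VLAN IDs 20 and 50 come from my setup.

```
# /etc/network/interfaces (sketch - addresses and bond mode are illustrative)

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup

# VLAN sub-interfaces hang off the bond
auto bond0.50
iface bond0.50 inet manual

auto bond0.20
iface bond0.20 inet manual

# Management bridge on VLAN 50 - this carries the host's own IP address
auto vmbr0
iface vmbr0 inet static
        address 10.0.50.10
        netmask 255.255.255.0
        gateway 10.0.50.1
        bridge_ports bond0.50
        bridge_stp off
        bridge_fd 0

# Production bridge on VLAN 20 - VMs attach here, the host needs no IP on it
auto vmbr1
iface vmbr1 inet manual
        bridge_ports bond0.20
        bridge_stp off
        bridge_fd 0
```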

I will write up some guides as I work more and more with Proxmox and document my experiences with it in new posts on here...

**At the time of writing, Proxmox VE 3.2 has just been released and brings with it a huge array of new features, including Open vSwitch support and the SPICE protocol. This makes it an even stronger force against the likes of VMware. So I shall be testing the 3.2 release before I write any guides, to make sure I don't post inaccurate information.**

https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_3.2


Saturday, 1 March 2014

Xubuntu Dell Vostro 3560 fan noise SOLVED!

Since I switched to Xubuntu 13.10 from Windows 8.1 a couple of weeks ago, one thing hasn't been quite right: the fan control on my Dell Vostro 3560 laptop. After a short period of use the fan wound up to 100% and stayed there, and it's a pretty damn annoying noise to have in your ear all day long.

Now I know Windows had the fan under better control, but even then the fan did come on more often than not. In Xubuntu, though, it was maxed out nearly all the time, so I put this down to a driver problem or something where the OS is unable to read all the required sensors. I came across the i8k fan control application, but it just didn't work by itself no matter what I tried.
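For reference, this is the sort of thing I was poking at with the i8kutils tools before giving up on a software-only fix. Treat it as an illustrative sketch rather than a working recipe, since it never tamed the fan on this machine:

```
# Load the Dell i8k kernel module (force=1 is often needed on unlisted models)
sudo modprobe i8k force=1

# Read the CPU temperature and the current fan states via the SMM BIOS
i8kctl temp
i8kctl fan

# Try forcing both fans to their low-speed setting (0 = off, 1 = low, 2 = high)
sudo i8kctl fan 1 1
```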

Then I found this:

http://en.community.dell.com/support-forums/laptop/f/3518/t/19511930.aspx

So it turns out Dell's excellent design team thought it would be a good idea to put the PCH directly under the keyboard without any form of heatsink whatsoever. The system also happens to monitor the temperature of this chip, and as such it influences the fan speed. I see someone on the Dell forums made their own heatsink with some copper plate and it worked perfectly.

So I got some pre-cut 0.9mm copper sheet and some thermal paste, and here are the results. (I have to say this is a bit of a throwback to my PC modding and overclocking days a number of years back, so I'm no stranger to opening up PCs and putting random things in them.)

Remove the keyboard by releasing the four catches in the notches along the top of the keyboard with a flat-head screwdriver, move the keyboard out of the way and you will see the culprit. Put a blob of thermal paste on the copper, press it down nice and firm, making sure it's not shorting anything out, and hold it in place with something (I used insulating tape). The pressure of the keyboard seems to do the rest when it is clipped back in, if you use 0.9mm copper.

Anyway, this did the job instantly. My fan hardly comes on at all now, and that copper becomes too hot to touch, which means it's doing its job.

Job Done!

Tom

Wednesday, 19 February 2014

pfSense in production with HA

If you work with or have an interest in IT networking and haven’t already heard of pfSense then you must go here and be amazed:

http://www.pfsense.org/

Basically (well, it is by no means your 'basic' BT Home Hub), pfSense is an open source routing platform, stateful firewall, IDS/IPS, proxy, VPN gateway... In fact, it can probably do everything!

I have been using pfSense on the office network and on the production network in the data centre for nearly a year, so I have had the chance to play with it a fair bit.

Its basic setup simply asks you which interfaces you want to use for what (LAN, WAN, and more), and from there you can point your browser at the web GUI and start building your proper configuration.

The following is an example of one of my production HA configurations. I will walk you through it and try to give an overview of what's required to achieve this type of setup using pfSense 2.1.

You should be able to make sense of the diagram pretty easily. The data centre where this setup resides provides two WAN feeds into the rack. These come from their own Cisco HSRP setup and a pair of 6509s (big modular Cisco switches, for those that don't know). Each of the colocated customers in this data centre has their own VLAN that pipes their routable subnet to their rack.

Those two feeds come into the WAN interfaces of my pfSense boxes, and that's where the DC's network ends and mine begins.

Understanding High Availability

Our aim with this setup is to remove the single point of failure that is the LAN's default gateway. For the sake of explanation, take your home network: your router's IP address may be 192.168.0.1, and this will most likely be the default gateway your PC is configured to use in order to communicate outside of the local subnet. Obviously, if a rogue piece of space junk comes crashing through your roof and lands on your router, there is no more default gateway for your PCs.
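You can see that single dependency in the routing table of any machine on the LAN. This is just an illustrative example from a Linux host using the 192.168.0.1 gateway mentioned above (the host address is made up):

```
# One default route, one gateway - lose it and nothing leaves the local subnet
$ ip route show
default via 192.168.0.1 dev eth0
192.168.0.0/24 dev eth0  proto kernel  scope link  src 192.168.0.10
```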

The same applies when you scale up to the enterprise IT world, hosted environments or building networks... If the default gateway dies, things stop talking beyond their local subnet.

"So to solve this problem, we need to add a second default gateway right?"

Hey presto! Yes, we need to double up on our default gateway, in this case pfSense. However, the basic fundamentals of networking and routing dictate that we can't give a host two different default gateways, and obviously we can't give our second pfSense box the same IP as the first... That doesn't work! Doh...

Common Address Redundancy Protocol

Or 'CARP', as you may have heard it called, is BSD's implementation of a gateway redundancy protocol. Read up here.

It works in a very similar fashion to Cisco's HSRP or VRRP from the IETF. It uses virtual addressing and an active/passive cluster setup in order to present a single IP to the network. The passive cluster member can then assume the role of hosting this virtual address in the event that the master node fails.

The configuration of the master node is synchronised in real time to the cluster slave. Looking at my diagram again, this means that if I felt like hitting pfSense A very hard with a 10lb sledgehammer, pfSense B would instantly assume ownership of the VIP (Virtual IP) that I am using as the LAN's default gateway address. The servers on the LAN are none the wiser, since they are still talking to the same default gateway address as before.

The same applies to incoming WAN traffic. Virtual 'CARP' IPs are also used on the WAN interface, and the second node takes ownership of them as well, so everything continues as normal.


The above picture is one of the two Dell PowerEdge R610s that I use in a CARP configuration in a production environment. One interface is for the WAN, one is for a dedicated PFSYNC interface, and the remaining two are configured as a bond for the LAN. The bond is split between two switches on the LAN side for further redundancy.

PFSYNC is used to synchronise the firewall states and other aspects of the pfSense master configuration to the slave. Obviously this is vital to seamless failover between the master and slave; inconsistent states or configuration will result in a disastrous failover.
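To make the moving parts a little clearer, here is roughly how the addressing hangs together on a pair like this. Every address below is made up purely for illustration; the real production values are obviously different:

```
# Illustrative addressing only
#
# LAN interface, pfSense A:   192.168.10.2/24
# LAN interface, pfSense B:   192.168.10.3/24
# LAN CARP VIP (shared):      192.168.10.1/24   <- default gateway for the servers
#
# Dedicated PFSYNC link (isolated VLAN or crossover between the two nodes):
# pfSense A:                  172.16.1.1/30
# pfSense B:                  172.16.1.2/30
#
# The WAN side follows the same pattern using the routable subnet from the DC.
# NAT and firewall rules should reference the CARP VIPs rather than the
# per-node interface addresses, so they remain valid after a failover.
```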

Support
Even though pfSense doesn't have the heavyweight support of big players like Cisco and Juniper, I find that its community is excellent, and as such I have had no problems finding answers to my questions on their forums and in the numerous guides and walkthroughs online.

But to be honest, if you can get even a basic grasp of what needs to happen in order to achieve high availability at the network perimeter, then a simple guide such as the following should point you in the right direction.


Until next time....

Tom

Friday, 14 February 2014

Moving to Linux.... Full time.

So today marks somewhat of an occasion for me, and it's nothing to do with Valentine's Day! Today marks the end of the first week since I made the switch to Linux as my full-time OS on my work/personal laptop.

My interest in the Linux platform was sparked around 18 months ago when a previous role had me take ownership of the cPanel web hosting platform, which obviously runs on Linux. This came about due to my personal experience with WordPress sites and, in particular, migrating them from one hosting provider to another.

At the time I was part of a 'Wintel' server support team looking after a massive Exchange environment, a fairly big AD with a multi-domain forest, plenty of trust relationships, IIS servers, ISA servers etc.... So I was still very much a Windows man, even though I had already been trying to specialise in VMware ESX and Citrix in previous roles... Windows was still what I'd call 'home'.

Anyway, I took on the cPanel platform and before long needed to go beyond the excellent web based GUI.

"Ah... This isn't a Windows command prompt?!?!"

So out comes VMware Workstation to the rescue, on goes CentOS 6.2 and the fun begins. This is where I first experienced Linux in a professional capacity, and over the next 18 months, with the excellent guidance of a Unix consultant I worked with (on the end of the phone and over Lync for many, many hours), I started my journey into the Linux world.

Fast forward to the present and I have since run a number of LAMP web servers and experimented with Linux to run applications bespoke to the industry I currently apply my knowledge in. This experience also opened my mind to various open source platforms such as pfSense, OpenStack, CloudStack and the KVM hypervisor... Funnily enough, I am putting a failover pfSense CARP setup into the data centre production environment tomorrow.

That's how much I have come to trust and adopt open source technology, and especially the community that surrounds it.

In at the deep end... Bye bye, Windows! And I have to say, considering this is the laptop I use for work, I am very impressed. I am not regretting a thing, nor am I finding myself needing to spin up a Windows VM. Result!

Also, the UI... and it needs to be said: the stupid Metro touch crap in Windows 8 has done this for me... I don't want my laptop to look like a phone and I definitely don't want to use it like one!

This is my nice, slick, fast and tidy Xubuntu 13.10 desktop:


It does everything I need. I can manage my production ESXi environments, get on all the DRACs of servers, connect to Exchange with Thunderbird, open Office documents and connect to my VPN gateways... Even RDP works fine! So does TeamViewer... Proxmox VE displays perfectly and I have Oracle VirtualBox for local virtualisation.

That was the biggest point for me... Being able to manage Windows environments from Linux... and so far I have to say.... 10 out of 10 from me!

So after 14 years of using Windows, I have made the move away from it on my only computer that I use for everything. I suppose this is the best way to learn...

Until Next time!

Tom

Thursday, 13 February 2014

Let me introduce myself!

Well, I have to be honest and say that in the 14 years I have been using computers & the internet, this is my very first blog post!

So here goes "One small step for man.... One giant leap for mankind"...

I am Tom, and at the time of writing I am a 24-year-old Infrastructure Architect working in the IT hosting field (keeping in with the 'cloud' buzzword of the moment, some might say...). I am one of those tech types that doesn't switch off when they leave the office at 4 or 5 in the afternoon; my mind is constantly spinning with ideas and possibilities that IT brings to the table. What can I achieve with this other hypervisor? What can I do to get this packet from here to there? How can I implement open source IDS/IPS? Etc., etc...

You get the picture: technology, and more so the challenges it presents, gives me a real buzz. IT isn't just a job to me, it's a way of life and my way of thinking.

I have worked my way through a variety of roles within the industry very quickly and I think thanks to my drive for learning and hunger for information, I have been able to progress technically with every move.

Without mentioning any names, I started out as a 1st line IT help desk support analyst for one of the world's largest IT companies. It was a big Windows environment where I got my first exposure to things such as Microsoft Active Directory, Citrix, ticketing systems, file shares and so on...

It didn't take me long to get my teeth into that, and soon I was creating Windows 2003 domains in VMware Workstation at home, playing with DHCP, DNS and roaming profiles and, most importantly, pestering my technical superiors at work for information about how things worked or... why they didn't work.

After a while I moved on to a more technical role at another of the world's biggest IT companies: 2nd line desktop support... The same thing as the first, just more in-depth and with more admin rights on the network... Every IT guy likes more rights... right?

This continued and I gained more and more responsibility there, including the chance to play with some MS file and print servers. Ooooohhh Servers!

This is the point where my career kicked into overdrive and my head literally went into information overload.... But most importantly, this is when I really started getting down and dirty with the stuff that makes it all work. Servers, storage and networks.

This is when I joined the ranks of 3rd line support. Yes, the 'god-like' team of support engineers that work mostly unseen or unheard and only make an appearance when the s**t really hits the fan, 2nd line are running round like a bunch of headless chickens, and concerned management are waving the proverbial stick of authority while desperately trying to explain to the customer why 180,000 of their users are unable to log in to Windows.

I was employed as a member of the 3rd line infrastructure services team, based in a data centre, looking after the production environment for a public-facing portal system and its accompanying development environments. I never counted, but we were talking around 900-1000 servers, a few large SANs and plenty of networking and encryption kit.

This was again a Windows environment with plenty of MS SQL clusters, IIS and BizTalk; Server 2008 and 2003 were the operating systems of choice. I learnt a hell of a lot during this time. Our team covered everything: Cisco routers, switches, EMC SANs and all the Dell servers themselves...

This was also where I got my first look at disaster recovery solutions, as well as one other technology that I would become highly familiar with: VMware ESXi.

Fast forward a few years and, having worked with numerous different technologies at '3rd line' level, headed large P2V projects and taken a massive interest in virtualisation... I am now where I have always wanted to be, which is at the infrastructure design level.

So... that's a very brief background on my career so far. I will be posting my technical endeavours to this blog as frequently as possible, in the hope that my experience might help others with similar interests.

Seeing as I have used the internet as my primary source of information to do what I do.... I thought I should give something back!

Until next time.....

Tom