XenServer 6.5 & Containers – “Hello World” Edition

In between calls on Friday while skimming my Twitter feed, I stumbled across an interesting announcement, Preview of XenServer support for Docker and Container Management, which led me to How to Get Started with Container Monitoring on CoreOS. Curious, having seen a brief demo in January, I decided to jump in and see if I could get it running in my lab. Containers are quick and easy, right?

Prerequisites

First things first, head over to the XenServer Pre-Release Downloads and under the Docker Integration section you’ll need to grab both the supplemental pack and updated XenCenter. Once downloaded, use an SFTP client to copy the ISO onto your XenServer(s) – I’m fond of ForkLift on Mac OS X. You’ll also need to run the XenCenter installer, but it's your basic Next, Next, Next ordeal.
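
For the copy step, plain old scp from a terminal works just as well as an SFTP client; something along these lines should do it (the hostname here is a stand-in for your XenServer's management address):

scp xscontainer-6.5.0-100205c.iso root@your-xenserver:/root/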

With the ISO copied onto your XenServer and your XenCenter updated, we’ll start having some fun.

The Basics

Installing the supplemental pack is simple. Run the following command from the directory where you copied the ISO file:

xe-install-supplemental-pack xscontainer-6.5.0-100205c.iso

Since we’ve already installed the preview version of XenCenter, we’re essentially done. Your XenServer now includes a new guest template for CoreOS and the updated XenCenter includes:

  • New VM Wizard – CoreOS cloud-config/config-drive support
  • VM Properties – Container Management / Cloud-Config Parameters
  • Probably more, but those are the visible differences I noted
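
If you want to double-check the template from dom0, a quick query should turn it up (the exact name-label may differ from what I'm guessing at here):

xe template-list params=name-label | grep -i coreos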

Of course, we're not stopping there; let's dig deeper and see how the new features look.

Your First CoreOS VM

If, like me, you're new to CoreOS you'll want to head to their download page and grab an ISO. While not strictly required, as I've learned through additional research, it's how I got up and running and it works for this "Hello World" example. Don't forget to copy the ISO to your ISO Storage Repository so we can do some installing.

Head over to XenCenter so we can create a new VM.

My first pass was pretty much Next, Next, Next (and serves as a learning opportunity). I gave it a name, took the default CPU/Memory/Disk (1 vCPU, 1GB RAM and 5GB disk) and left the Cloud-Config Parameters unchanged. This is important, but I’ll explain why in a second. Make sure to select the new ISO for the installation source and boot the VM when the wizard completes. After a quick provisioning, your new VM should be up and running and will automatically boot into the console.

Run the following command to install CoreOS to the disk:

sudo coreos-install -d /dev/xvda -o xen -C stable

Once the installation is finished, eject the ISO (using XenCenter) and reboot your VM (from its console):

sudo reboot
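
If you'd rather handle the eject step from dom0 instead of XenCenter, xe has an equivalent command; something like this should work (substitute your VM's actual name-label):

xe vm-cd-eject vm=coreos-vm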

You now have a purpose-built OS runtime for containers, but what's next?

XenCenter Enhancements

In order for XenCenter to monitor and control our containers, we need to enable Container Management. Right-click on the new VM and open its Properties.

There’s also another set of options called Cloud-Config Parameters. As I mentioned previously, the preview release supports both cloud-config and config-drive, for automated configuration of CoreOS VMs.

Accessing a CoreOS VM

At some point we’ll want to connect to the VM and actually use it. Assuming you went the Next, Next, Next route and didn’t add your SSH public keys, there’s not much you can do until you rectify the situation (as I learned). We need to revisit the Cloud-Config Parameters and add at least one SSH public key of our own.

If you're not familiar with using public/private keys for SSH authentication, Ubuntu has some good documentation. I ran the following command on my Mac to set up a new keypair, accepting the default file location and creating a passphrase:

ssh-keygen -t rsa -C "for CoreOS testing"

To insert the key into your cloud-config, we need to use good ole copy/paste. Print the public key you just created, so we can highlight and copy (everything between “ssh-rsa” and your comment, “for CoreOS testing” in my case):

cat ~/.ssh/id_rsa.pub

Now, make sure the VM is shut down and open the Cloud-Config Parameters again. Paste your public key at the end of the "- ssh-rsa" line, just under the "ssh_authorized_keys:" line. Pay close attention to the comments regarding the %XSCONTAINERRSAPUB% line; it's what enables a new daemon in dom0 to monitor and control containers.
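
For reference, here's roughly what that portion of my cloud-config looked like when I was done. The key below is a made-up, truncated example; keep the %XSCONTAINERRSAPUB% line and the indentation from the template XenCenter provides:

ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAA...rest-of-your-public-key... for CoreOS testing
  - ssh-rsa %XSCONTAINERRSAPUB%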

With those changes complete, you should be able to SSH from your local machine to the CoreOS VM. I elected not to save my passphrase in my Mac’s keychain, hence the error “Saving password to keychain failed”.
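
The connection itself is just standard SSH; CoreOS's default user is core, so substitute your VM's IP address and you're in:

ssh core@your-coreos-vm-ip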

Launching a Few Containers

Once connected, you can create a couple containers, like so:

docker run --name hello -d busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"

docker run --name goodbye -d busybox /bin/sh -c "while true; do echo Goodbye World; sleep 1; done"
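
Before flipping back to XenCenter, it's worth confirming the containers are actually alive from inside the VM:

docker ps
docker logs hello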

Flipping back to XenCenter, you should now see the containers grouped under your CoreOS VM.

The Payoff

All told, I was able to go from start to finish in about 30 minutes. The XenServer installation was dead simple and getting started with CoreOS/Docker was pretty intuitive. There's a lot more under the hood to investigate, that's for sure, so expect more updates in the future (how to view and interact with each container, using fleet for container management, and so on).

A stroll down memory lane, XenServer style.

With the much-anticipated release of XenServer 6.0 on Monday, I got the urge to take a stroll down memory lane. For those who don't know, I joined Citrix back in February 2008 and my sole responsibility was spreading the word about XenServer: educating partners, customers and those within Citrix who were used to virtualizing applications and desktops, not servers. I'm not an original XenSource guy, but I was picked for the team because I brought strong Linux/Storage experience. All of us on the team had our own unique talents and, most importantly, we brought a perspective to Citrix that was different from your traditional MetaFrame/Presentation Server/XenApp administrator.

My experiences that first year, from growing our install base to building the technical prowess of my local partners and fighting an uphill battle against the 800 lb. gorilla in the datacenter, built character, so to speak. We were the scrappy underdog, striving to make server virtualization ubiquitous and affordable for all. We were little known, in the enterprise at least, and working at a feverish pace trying to get features into the product. We built scripts and utilities to help people unfamiliar with this thing we called dom0 and the xe CLI. I was the Linux shell scripter; another guy on the team kept going on about these Windows GUIs he could get built. Funny enough, both of us are still around and banging the XenServer drum on a daily basis.

Many of the challenges we faced back in 2008 are water under the bridge. We fought hard, built up our customer base and changed the server virtualization landscape for everyone involved. The features we lacked back in the early days of XenServer 4.0 and 4.1 are distant memories. In early 2009 we did the unthinkable and made a full-featured, free version of XenServer available for everyone to download. This set the stage for a serious uplift in awareness for XenServer. Downloads of the free edition went viral and a shift in marketing took the “10 to Xen” message to a new level.

Not wanting to rest on our laurels, the engineering team continued to innovate, launching XenServer 5.5 and becoming only the second hypervisor to be deemed Enterprise-Production Ready. A year later came XenServer 5.6 and with it a slew of new features, from enhancements to Workload Balancing to the inclusion of Role Based Access Control and a new Site Recovery feature. At the same time, our visibility in the cloud computing arena was picking up. Service Providers were quick to jump on board, seeing an opportunity to build their clouds on XenServer Free, and thanks to our simple per-server pricing we could position XenServer Advanced or Enterprise and still be compelling against the competition. Along with the announcement of XenServer 5.6 came news that Rackspace Cloud Servers would be standardizing their offering on XenServer.

And that brings us full circle to the release of XenServer 6.0, which includes tons of new features across all of Citrix’s core competencies.

  • Cloud – The full integration of Open vSwitch enables customers to design next-generation networks based on the OpenFlow standard. The number of articles I’ve found on Software Defined Networks and OpenFlow in public/private clouds is staggering, but this, this and this should get you started.
  • Virtualization – Doubling down on the value-proposition argument, XenServer 6.0 now includes the VM Protection and Recovery feature in all paid editions, i.e. Advanced and above. Its core engine, Xen, has been upgraded to 4.1, and the supportability limits for both hosts and VMs have been increased over previous editions.
  • Desktop – Fully supported to run the latest release of XenDesktop 5.5, XenServer 6.0 differentiates itself as an HDX-optimized platform thanks to the new GPU pass-through feature, which enables a physical GPU to be assigned to a VM providing high-end graphics capabilities, and the continued support and development of the IntelliCache feature.

For the full laundry list of features, check out the New Features section of the XenServer 6.0 Release Notes.
-LF

Creating a Local ISO Storage Repository on XenServer

Trying to find a quick and painless way to add an ISO storage repository to your XenServer without using CIFS or NFS? I was searching for just that after attaching an external USB drive with all my ISOs to my XenServer (running version 5.6 FP1, aka "Cowley", in my case). Since there's no direct method for this, I used a little trial and error and came up with the following steps, to be run at the CLI. First, you need to identify and prepare the disk, so after plugging it into your XenServer, run the following commands (replacing /dev/sdb with the device node returned by the first command).

ls -l /dev/disk/by-id | grep usb

fdisk /dev/sdb
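
If it helps, the interactive fdisk session for a single Linux partition spanning the whole disk boils down to roughly these keystrokes (your fdisk version may prompt slightly differently):

n    <- new partition
p    <- primary
1    <- partition number
     <- press Enter twice to accept the default first and last cylinders
t    <- change the partition type
83   <- 83 is the Linux type
w    <- write the table and exit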

If you're unfamiliar with fdisk on Linux, use the following documentation for guidance: http://tldp.org/HOWTO/Partition/fdisk_partitioning.html. The partition needs the Linux partition type (ID 83). The final step in preparing the disk is to create an ext3 filesystem on your desired partition. In my case I created a single partition, so it looks like this.

mkfs -t ext3 /dev/sdb1

With everything prepped and ready to go you need to create a mount point and then mount the drive.

mkdir -p /mnt/wd500gb/ISO

mount /dev/sdb1 /mnt/wd500gb/ISO
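
To make that mount survive a reboot, an /etc/fstab entry along these lines should do the trick (using my device and mount point; adjust both for your environment):

/dev/sdb1    /mnt/wd500gb/ISO    ext3    defaults    0    0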

This will be enough to run through the rest of the tutorial, but you'll need to add the USB drive to /etc/fstab (as shown above) so it's mounted every time the XenServer reboots and attempts to reconnect the Storage Repository. The following commands create the actual Storage Repository (use the UUID returned by the pbd-create command as input for the pbd-plug command; it will differ from the one below). You can change the name-label for the SR to something appropriate for your environment, and you'll need to change the name-label in the host-list command to match your XenServer, as well as the mount point you used:

UUID=$(uuidgen)

xe sr-introduce name-label="ldf-xs-01:/mnt/wd500gb/ISO" content-type=iso shared=false type=iso uuid=${UUID}

xe sr-param-set other-config:auto-scan=true uuid=${UUID}

xe pbd-create host-uuid=`xe host-list name-label=ldf-xs-01 --minimal` sr-uuid=${UUID} device-config:location="/mnt/wd500gb/ISO" device-config:options="-o bind"

xe pbd-plug uuid=8f5d37c8-775e-aff1-0943-e470576372d2
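
If everything went to plan, the ISOs on the drive should now show up as VDIs in the new SR; a quick check, reusing the UUID variable from above, might look like this:

xe vdi-list sr-uuid=${UUID} params=name-label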

Keep in mind that this was tested in a single server resource pool, isn’t intended for production usage and you’ll only be able to access the ISOs from the XenServer where you attached the USB drive.

-LF

Internal Routing with XenServer and Vyatta

A couple weekends ago I spent some time getting Vyatta Core 6.1, vyatta-xenserver_VC6.1-2010.10.16_i386 to be precise, deployed in my lab. I found a few tutorials for configuring Vyatta with VMware products, but didn’t really see anything for XenServer. Citrix highlighted the possibilities fairly soon after the XenSource acquisition in a blog post, but that was a couple years ago. Since then Vyatta and Citrix have announced a closer partnership and Vyatta was even part of the C3 Cloud Bridge blueprint. All positive signs that it should be fairly painless, so off we go.

First and foremost you need to download the latest version of their XenServer virtual appliance. If you’re a newbie to Vyatta, like I was, you’ll probably want to grab some documentation as well. The great thing about the appliance is you don’t need to muck about with a custom installation. Once you’ve imported the virtual appliance you should have a new template ready for VM deployments (in my case it was called: vyatta-xenserver_VC6.1-2010.10.16_i386).

XenServer Networks

You can see in the screenshot above that I'm using 3 physical NICs in my XenServer. The on-board NIC is a dedicated management interface (highlighted in the screenshot) and I've bonded a pair of Intel NICs for VM traffic. The third network, without a physical NIC associated with it, is an internal-only network and the primary reason I want the Vyatta router.

With my networks outlined, I’ll walk through the process of configuring the Vyatta router so that the internal-only network (remember: 192.168.48.x/24) can access the lab network (remember: 192.168.24.x/24).

New VM

Using the imported template, create a new VM, adjusting the settings as appropriate until you get to the networking configuration. Ensure a virtual interface is added for both the lab and internal-only network(s). Treat the router like you would any other VM; it doesn’t need an interface on the management network, unless you’re using it for VM traffic as well.

Networking Config

Complete the VM setup and have it boot up automatically. To log in, use the default credentials: vyatta/vyatta.

Vyatta Console

Initially, we’re going to configure each interface with an IP address and enable SSH access. You’re not required to use SSH to complete the configuration, but it will allow you to access the CLI without XenCenter in the future.

configure
set interfaces ethernet eth0 address 192.168.24.254/24
set interfaces ethernet eth1 address 192.168.48.254/24
set service ssh
commit

You should now have a rudimentary configuration up and running. The two networks won’t be able to communicate with each other yet, but you should definitely be able to ping each interface from another device on the same network segment. In order to get the internal-only network talking to the lab network we’ll configure a NAT rule to pass traffic back and forth.

set service nat rule 1 source address 192.168.48.0/24
set service nat rule 1 outbound-interface eth0
set service nat rule 1 type masquerade
commit

If you were to stop here, any VM on the internal-only network using 192.168.48.254 as its default gateway would have access to the lab network, BUT it won’t be able to access the Internet. This may not be a big deal in your environment, but I still wanted to access OS updates, software installs, etc. without jumping through too many hoops. To achieve that we need to configure the router to use the default gateway on the lab network.

set system gateway-address 192.168.24.1
commit

Test your setup. From a VM on the internal-only network you should be able to ping hosts on the lab network and the Internet. To save the configuration so it persists after a reboot, run this final command in the CLI.

save

The last thing I did in my environment was configure a static route on my wireless router so that devices on the lab network can access the internal-only network without any client-side modifications. I’m running Tomato v1.28 on a Linksys WRT54G, so adding the static route is under Advanced -> Routing. The exact method for performing this step depends on your wireless router, but the gist of it is that you want all traffic destined for the internal network (192.168.48.x in my case) to use the Vyatta router’s interface as its gateway (192.168.24.254 if you’re following my example).
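
If your wireless router can't do static routes, the same effect can be had per client; on a Linux machine, for example, the equivalent one-off route (run as root, using my addressing) would be:

ip route add 192.168.48.0/24 via 192.168.24.254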

And with that, you should now have an internal-only network that’s completely accessible from the lab network, but allows you to retain greater control of the isolation. Need a DHCP server for PVS? Want to test the Branch Repeater VPX on separate networks? Start exploring and leave me your feedback in the comments.

-LF

Hello XenCenter Consoles+

First, I'd like to say thanks to everyone who checked out Better XenCenter. The response to my first release has been extremely positive and for that I'm thankful. As a way to show my appreciation, I've got a new version of the plugin for everyone. New features include:

  • Revamped the tab pages. Instead of assuming the URL and always going there, you now have to configure a custom field called xcpURL and provide the URL you would like loaded (a CLI sketch for this follows the list). This enables support for HTTPS & FQDNs and by extension can reduce the security warning dialogs.
  • Added an About page which includes detailed usage instructions. It’s a “xencenter-only” tab, so it’ll be on the parent XenCenter node.
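
Since XenCenter stores custom fields in a VM's other-config map, you should also be able to set xcpURL from the CLI once the field exists; a rough sketch, with a hypothetical VM name and URL, would be:

xe vm-param-set uuid=`xe vm-list name-label=my-netscaler-vpx --minimal` other-config:XenCenter.CustomFields.xcpURL=https://192.168.24.50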

The other major change is that I've renamed the plugin to XenCenter Consoles+. I think it's a more fitting name and removes the implied deficiency in XenCenter itself.

Announcing the Better XenCenter plugin

UPDATE (01/23/2011): The Better XenCenter plugin has been renamed XenCenter Consoles+. The old landing page will redirect you to the new one automatically. See my announcement for details on the name change.

A lot of people may not be aware of this, but XenCenter, the management console for Citrix XenServer, supports 3rd-party plugins and even has a community site for people interested in developing their own. I’m proud to say that I’m now one of those people. I’ve spent the last couple weeks working on a new XenCenter plugin that I call Better XenCenter.

Inspired by the Access Gateway, NetScaler and Branch Repeater VPX plugins already available on the community site I decided to replicate their functionality, but also expand upon it to include other management consoles. Right now Better XenCenter supports 10 different management consoles and I intend to keep it updated as I test new virtual appliances or receive requests from the XenServer community. Find out more about the plugin on its dedicated page: Better XenCenter.

You should expect to see a few updates in the coming weeks, as I plan to enhance the plugin’s functionality with a PowerShell configuration script and support for custom URLs. I’m also planning to include some PowerShell scripts that automate routine CLI tasks. I’ve got a couple brewing and am always open to requests/feedback from others.

Leave a comment below or on the project page with your questions or requests.