XenServer 6.5 & Containers – “Hello World” Edition

In between calls on Friday while skimming my Twitter feed, I stumbled across an interesting announcement, Preview of XenServer support for Docker and Container Management, which led me to, How to Get Started with Container Monitoring on CoreOS. Curious, having seen a brief demo in January, I decided to jump in and see if I could get it running in my lab. Containers are quick and easy, right?


First things first, head over to the XenServer Pre-Release Downloads and, under the Docker Integration section, grab both the supplemental pack and the updated XenCenter. Once downloaded, use an SFTP client to copy the ISO onto your XenServer(s) – I’m fond of ForkLift on Mac OS X. You’ll also need to run the XenCenter installer, but it’s your basic Next, Next, Next ordeal.

With the ISO copied onto your XenServer and your XenCenter updated, we’ll start having some fun.

The Basics

Installing the supplemental pack is simple. Run the following command from the directory where you copied the ISO file:

xe-install-supplemental-pack xscontainer-6.5.0-100205c.iso


Since we’ve already installed the preview version of XenCenter, we’re essentially done. Your XenServer now includes a new guest template for CoreOS and the updated XenCenter includes:

  • New VM Wizard – CoreOS cloud-config/config-drive support
  • VM Properties – Container Management / Cloud-Config Parameters
  • Probably more, but those are the visible differences I noted

Of course, we’re not stopping there, let’s dig deeper and see how the new features look.

Your First CoreOS VM

If, like me, you’re new to CoreOS you’ll want to head to their download page and grab an ISO. It’s not strictly required, as I’ve since learned through additional research, but it’s how I got up and running and it works for this “Hello World” example. Don’t forget to copy the ISO to your ISO Storage Repository so we can do some installing.

Head over to XenCenter so we can create a new VM.


My first pass was pretty much Next, Next, Next (and serves as a learning opportunity). I gave it a name, took the default CPU/Memory/Disk (1 vCPU, 1GB RAM and 5GB disk) and left the Cloud-Config Parameters unchanged. This is important, but I’ll explain why in a second. Make sure to select the new ISO for the installation source and boot the VM when the wizard completes. After a quick provisioning, your new VM should be up and running and will automatically boot into the console.

Run the following command to install CoreOS to the disk:

sudo coreos-install -d /dev/xvda -o xen -C stable


Once the installation is finished, eject the ISO (using XenCenter) and reboot your VM (from its console):

sudo reboot

You now have a purpose built OS runtime for containers, but what’s next?

XenCenter Enhancements

In order for XenCenter to monitor and control our containers, we need to enable Container Management. Right-click on the new VM and open its Properties.


There’s also another set of options called Cloud-Config Parameters. As I mentioned previously, the preview release supports both cloud-config and config-drive, for automated configuration of CoreOS VMs.


Accessing a CoreOS VM

At some point we’ll want to connect to the VM and actually use it. Assuming you went the Next, Next, Next route and didn’t add your SSH public keys, there’s not much you can do until you rectify the situation (as I learned). We need to revisit the Cloud-Config Parameters and add at least one SSH public key of our own.

If you’re not familiar with using public/private keys for SSH authentication, Ubuntu has some good documentation. I ran the following command on my Mac to set up a new keypair, accepting the default file location and creating a passphrase:

ssh-keygen -t rsa -C "for CoreOS testing"
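If you’re scripting this, ssh-keygen can also run non-interactively. A sketch writing to a hypothetical path with an empty passphrase – fine for lab testing, but use a real passphrase otherwise:

```shell
# Clean up any previous run, then generate a keypair without prompts;
# -f sets the output path, -N the (empty) passphrase
rm -f /tmp/coreos_test_key /tmp/coreos_test_key.pub
ssh-keygen -q -t rsa -b 2048 -C "for CoreOS testing" -f /tmp/coreos_test_key -N ""

# The public half is what ends up in the cloud-config
cat /tmp/coreos_test_key.pub
```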


To insert the key into your cloud-config, we need to use good ole copy/paste. Print the public key you just created, so we can highlight and copy (everything between “ssh-rsa” and your comment, “for CoreOS testing” in my case):

cat ~/.ssh/id_rsa.pub
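If you’d rather not eyeball the middle field, awk can pull it out for you. A quick sketch – the sample key line below is a stand-in for the contents of your real ~/.ssh/id_rsa.pub:

```shell
# Stand-in for the contents of ~/.ssh/id_rsa.pub (truncated, hypothetical key)
pub='ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB for CoreOS testing'

# Field 2 is the base64 key material between "ssh-rsa" and the comment
echo "$pub" | awk '{print $2}'
```

Against the real file it’s just: awk '{print $2}' ~/.ssh/id_rsa.pub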


Now, make sure the VM is shut down and open the Cloud-Config Parameters again. Paste your public key at the end of the “- ssh-rsa” line, just under the “ssh_authorized_keys:” line. Pay close attention to the comments regarding the %XSCONTAINERRSAPUB% line – it’s what enables a new daemon in dom0 to monitor/control containers.
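For reference, the relevant slice of my cloud-config ended up looking something like this – a minimal sketch, with a truncated hypothetical key; the exact wording of the default template may differ:

```yaml
#cloud-config

ssh_authorized_keys:
  # Leave the %XSCONTAINERRSAPUB% line intact -- XenServer substitutes its own
  # public key here, which is what lets the dom0 daemon monitor/control containers
  - ssh-rsa %XSCONTAINERRSAPUB%
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB... for CoreOS testing
```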


With those changes complete, you should be able to SSH from your local machine to the CoreOS VM. I elected not to save my passphrase in my Mac’s keychain, hence the error “Saving password to keychain failed”.

Launching a Few Containers

Once connected, you can create a couple containers, like so:

docker run --name hello -d busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"

docker run --name goodbye -d busybox /bin/sh -c "while true; do echo Goodbye World; sleep 1; done"
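Those containers are just running a shell loop, so if you want to sanity-check the command itself without Docker in the way, a bounded version of the same loop runs anywhere (capped at three iterations here so it terminates; the real containers loop forever):

```shell
# Same loop the "hello" container runs, bounded so it exits on its own
sh -c 'i=0; while [ $i -lt 3 ]; do echo Hello World; i=$((i+1)); done'
```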


Flipping back to XenCenter, you should now see the containers grouped under your CoreOS VM.


The Payoff

All told, I was able to go from start to finish in about 30 minutes. The XenServer installation was dead simple and getting started with CoreOS/Docker was pretty intuitive. There’s a lot more under the hood to investigate, that’s for sure, so expect more updates in the future (like how to view/interact with each container, using fleet for container management, etc.).

This just in, X1 welcomed as overlord

I, for one, welcome our new X1 overlords.

Earlier today, Richard Hayton and Manu Chauhan announced the tech preview release of X1 Storefront and Receiver X1 for Web and I’ve been excited since I saw Richard demo it earlier this year.

Homage to Kent Brockman aside, I think this update is deserving of attention. I’ve spent the better part of 6 years listening to enterprises (from the smallest of small, to the largest of large) describe their wants, desires and challenges delivering simple, secure and seamless access to critical business applications, desktops and data. Many have looked to Citrix for Disaster Recovery and Business Continuity solutions, others leverage it for line of business applications and there are those that use Citrix as their primary desktop and application solution.

Over all those years, through all of the conversations, conference calls, GoToMeetings, WebExes, LiveMeetings and Proofs of Concept, there are four building blocks, if you will, that I’ve been discussing, demonstrating and architecting:

  • Client
  • Gateway (optional)
  • Resource Aggregator
  • Infrastructure

Ignoring the “Infrastructure” for a moment, 2 of the 3 (and in some cases the remaining one is optional) just got a major overhaul. We’re not talking “Citrix added a new theme, yay” or “Citrix changed the look again..still need those WI features” – no, this is the first hint at an evolved Client/Resource Aggregator and even better integration should you look at NetScaler as your Gateway of choice. With an extensible Client able to connect securely to an extensible Resource Aggregator, the underlying infrastructure becomes amorphous – a commodity resource (physical/virtual, on-premise/off-premise, owned/leased) and to a certain extent irrelevant. The goal has always been to seamlessly connect an employee with their workspace, so they can be as productive as possible wherever they are. Yes, there are countless design decisions related to the infrastructure, but without the right client/gateway/aggregator (user experience) none of it matters.

Receiver X1 is about uniting the Citrix experience, across both client and server components – Receiver X1 for Web and X1 Storefront “today”, X1 XenMobile Server “tomorrow” – and I’m excited to dig in and see what the future holds.


CEREBRO: NetScaler Gateway analysis for XenMobile deployments

The release of XenMobile 9.0 a couple weeks ago is a pretty big accomplishment, in itself, but along with shipping a solid product our internal support teams have been toiling away on some updated troubleshooting tools to coincide with the release.

One of those tools, CEREBRO, has been available internally for testing the last few months and is now available for partners and customers to download. You can grab it from the XenMobile 9.0 Enterprise Edition download page, after logging into your MyCitrix account.

  • Log in to MyCitrix: https://www.citrix.com/account
  • Select Downloads, XenMobile, Product Software
  • Select XenMobile 9.0 Enterprise Edition
  • Expand Tools and click Download next to Cerebro


That’s great, but what does it do, you ask?

From the FAQ, available in the zip file:

CEREBRO is a diagnostic tool developed to help analyze and debug XenMobile deployments.

CEREBRO is a Windows executable that can be fed a NetScaler Gateway configuration (ns.conf), after which the tool performs its analysis and points out likely issues. It also provides recommendations on fixing the issues found.

CEREBRO can also perform online connectivity checks with the back end servers that are configured with the NetScaler Gateway server. To achieve this, CEREBRO needs to be run on a (Windows) machine from which the NetScaler Gateway server is reachable. You can also run a command line tool on the NetScaler Gateway server to get the back end server connectivity status.

Analysis of NetScaler configuration validates the configuration for XenMobile setup and can point to missing policies, syntactical issues and consistency issues. Based on the issues identified by configuration analysis, CEREBRO gives clear and concise recommendations on fixing the same.

The FAQ also includes step-by-step usage instructions for both ONLINE and OFFLINE analysis options.

Happy Troubleshooting!

ShareFile finds a home at Citrix

Last week the enterprise focused file-sharing company ShareFile was acquired by Citrix. The press release bills the acquisition as an acceleration of Citrix’s Cloud Data Strategy and I, for one, welcome our new Cloud Data overlords. All joking aside, I’m looking forward to Synergy Barcelona even more now that all of the Dropbox/Box.net rumors have been put to rest and we finally have another piece of the “Follow Me Data” story.

The vision for “Follow Me Data”, as originally presented, goes like this:

  • Centralized and secure storage and access to enterprise data from any device, anywhere.
  • Tight integration with Citrix Receiver enables enterprises to deliver desktops, applications and data in a simple, cross-platform manner.
  • Open, SAML-based architecture will ensure support for various cloud storage providers.

The first question that likely comes to mind for those keeping tabs on the “Follow Me Data” story is whether Citrix intends to stand firm in their “openness” mantra and continue to support the cloud storage providers outside of ShareFile. Brian Madden alludes to this in his post on the acquisition. I have no direct knowledge of product plans, hence my anticipation for Synergy Barcelona, but I’d be incredibly shocked if anything changes from the original vision. While Citrix has their own solutions that are supported by XenDesktop/CloudStack, the company continues to support and endorse technologies from other vendors (App-V, Hyper-V, vSphere, KVM, Xen). The same thing should go for cloud storage providers that compete with ShareFile. Citrix would be doing their customers a real disservice by ignoring the other vendors and I just don’t see it happening.

To be honest, I was a little surprised by the announcement; not because of the acquisition itself, but because of the company acquired. Just like the rumors flying around the industry-at-large, there were internal rumblings and ShareFile never came up. I think it would have been cool to end up with a company like Dropbox/Box.net/SugarSync, mainly because I’m familiar with them, but after getting a little hands-on time with the new team’s software I can see why the executives made the decision they did. There’s a heavy focus on the enterprise and being able to share data with both internal and external teams.

I’ll leave the product deep-dive for another post, but it seems clear this acquisition is more about the core technology, existing customer base and the team joining Citrix than the need to grab one of the larger vendors in this space. The integration with Citrix Receiver should be impressive and I’m excited to see what else Citrix has in store.

What say you?


Link Smorgasbord: XenDesktop 5.5 & XenApp 6.5

While preparing for a customer briefing on XenDesktop 5.5 and XenApp 6.5 this morning I decided to search for recent blog posts to review. Turns out there were tons of individual postings, from release announcements to deep-dives on specific new features. I also found a bunch of Citrix partners blogging about this major release — the first synchronized XenDesktop and XenApp release ever! What follows is a smorgasbord of links that highlight the new features and benefits. I’m sure I missed some, but this is more than enough for one day.

Citrix Blogs

Partner Blogs

Troubleshooting NetScaler Configuration Utility launch failures


If you’re having problems launching the NetScaler Configuration Utility, here are a few troubleshooting steps which should come in handy. I was recently working with a customer that had a pair of boxes in one datacenter that worked just fine and a pair of boxes in another datacenter that failed to load.

  • Network Connectivity – Verify that you can ping and/or SSH into the NetScaler appliance(s).
  • Java Ports – Verify that you can connect to the ports used by Java. If you’re accessing the NetScaler GUI over HTTP then the Java port is TCP/3010. If you’re accessing the GUI over HTTPS then the Java port is TCP/3008. A simple telnet to the correct port should verify no firewalls are blocking communication.
  • Proxy Servers – Verify your browser and Java proxy settings. Don’t overlook the Java proxy settings and assume they match/defer to your browser’s – this was ultimately my customer’s problem. The easiest way to test for errors is connecting to the Java port from within your browser, e.g. http://<NSIP>:3010 or https://<NSIP>:3008. We had to change the Java proxy settings to “Use Direct Connection” to resolve the launch issue.
  • Client Workstation – If possible, do a quick test from another workstation to ensure it’s not a connectivity problem specific to one machine.
  • Restart the HTTPD process on the NetScaler: http://support.citrix.com/article/CTX120034.
  • Last, but not least, break out Wireshark and do some tracing to pin-point where the communication flow is breaking down.
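The port checks in the first few bullets are easy to script. Here’s a small sketch using bash’s /dev/tcp redirection – it assumes bash and the coreutils timeout command are available on your workstation, and the NSIP value is a placeholder:

```shell
NSIP=192.0.2.10   # placeholder -- replace with your NetScaler NSIP

check_port() {
  # Exit 0 only if a TCP connection to host $1, port $2 opens within 2 seconds
  timeout 2 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

for port in 3010 3008; do
  if check_port "$NSIP" "$port"; then
    echo "TCP/$port reachable"
  else
    echo "TCP/$port blocked or closed"
  fi
done
```

A refused or filtered connection makes check_port return non-zero, which points you at a firewall or proxy problem rather than at the utility itself.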
If you’ve got any other quick tips for troubleshooting this type of issue that I missed, post ‘em in the comments.

A stroll down memory lane, XenServer style.

With the much anticipated release of XenServer 6.0 on Monday, I got the urge to take a stroll down memory lane. For those that don’t know, I joined Citrix back in February 2008 and my sole responsibility was spreading the word about XenServer: educating partners, customers and those within Citrix that were used to virtualizing applications and desktops, not servers. I’m not an original XenSource guy, but I was picked for the team because I brought strong Linux/Storage experience. All of us on the team had our own unique talents and most importantly we brought a perspective to Citrix that was different from your traditional MetaFrame/Presentation Server/XenApp administrator.

My experiences that first year, from growing our install base to building the technical prowess of my local partners and fighting an uphill battle against the 800 lbs. gorilla in the datacenter built character, so to speak. We were the scrappy underdog, striving to make server virtualization ubiquitous and affordable, for all. We were little known, in the enterprise at least, and working at a feverish pace trying to get features into the product. We built scripts and utilities to help people unfamiliar with this thing we called dom0 and the xe CLI. I was the Linux shell scripter; another guy on the team kept going on about these Windows GUIs he could get built. Funny enough, both of us are still around and banging the XenServer drum on a daily basis.

Many of the challenges we faced back in 2008 are water under the bridge. We fought hard, built up our customer base and changed the server virtualization landscape for everyone involved. The features we lacked back in the early days of XenServer 4.0 and 4.1 are distant memories. In early 2009 we did the unthinkable and made a full-featured, free version of XenServer available for everyone to download. This set the stage for a serious uplift in awareness for XenServer. Downloads of the free edition went viral and a shift in marketing took the “10 to Xen” message to a new level.

Not wanting to rest on our laurels, the engineering team continued to innovate, launching XenServer 5.5 and becoming only the second hypervisor to be deemed Enterprise-Production Ready. A year later came XenServer 5.6 and with it a slew of new features, from enhancements to Workload Balancing to the inclusion of Role Based Access Control and a new Site Recovery feature. At the same time, our visibility in the cloud computing arena was picking up. Service Providers saw an opportunity to build their clouds on XenServer Free — and thanks to our simple per-server pricing, we could position XenServer Advanced or Enterprise and still be compelling against our competition — and they were quick to jump on board. Along with the announcement of XenServer 5.6 came news that Rackspace Cloud Servers would be standardizing their offering on XenServer.

And that brings us full circle to the release of XenServer 6.0, which includes tons of new features across all of Citrix’s core competencies.

  • Cloud – The full integration of Open vSwitch enables customers to design next-generation networks based on the OpenFlow standard. The number of articles I’ve found on Software Defined Networks and OpenFlow in public/private clouds is staggering, but this, this and this should get you started.
  • Virtualization – Doubling down on the value-proposition argument, XenServer 6.0 now includes the VM Protection and Recovery feature in all paid editions, i.e. Advanced and above. It’s also had its core engine, Xen, upgraded to 4.1, and the supportability limits for both hosts and VMs are increased over previous editions.
  • Desktop – Fully supported to run the latest release of XenDesktop 5.5, XenServer 6.0 differentiates itself as an HDX-optimized platform thanks to the new GPU pass-through feature, which enables a physical GPU to be assigned to a VM providing high-end graphics capabilities, and the continued support and development of the IntelliCache feature.
For the full laundry-list of features, check out the New Features section of the XenServer 6.0 Release Notes.

Virtual Stepping Stones

Simple Question:

Is desktop virtualization a key stepping stone for cloud computing in the enterprise?

Sure, as someone that spends most of their day (and quite a few evenings) working on desktop virtualization and cloud computing projects — okay, mostly desktop virtualization, but cloud discussions are increasingly common and I enjoy the blog/twitter debates — it might seem natural for me to consider desktop virtualization a primer for enterprise adoption of cloud computing.

How can I not, though? Setting aside licensing and business concerns for a moment, the idea of taking a relatively static workload, like a physical desktop, and migrating it into an elastic, virtualized, self-service platform makes a lot of sense. It’s no wonder VMware was so quick to move from server virtualization to desktop virtualization. It’s also why Citrix was ready, willing and able to move in that direction, building on their success with TS/RDS-based solutions. It might not be the end-all-be-all for all users and it may only serve specific use-cases at the moment (or long-term), but I would argue that cloud architects and desktop virtualization architects would learn a lot from each other if they made time for collaboration.

Some people might argue that being able to deliver desktops as a service, the ultimate goal of most VDI or RDS-based projects, doesn’t automatically make it cloud computing; which may be true, but doesn’t really matter and misses the crux of my argument. Desktop virtualization architects can learn a lot from the “pie-in-the-sky” cloud architects that are championing DevOps, pushing the envelope with elasticity, metering, chargeback and the reality of consumption-based pricing for IT resources. How long have enterprises struggled with the device-centric days of managing physical PCs? How can these “new” concepts be applied to one of IT’s oldest recurring costs, the physical PC?

If you still think I’m off-base, consider the excellent blog post by @reillyusa, titled 1-2-3 easy as VPC. In it, he outlines a theoretical evolution of the AWS and Citrix capabilities which would allow an enterprise to securely leverage public cloud resources to deliver Windows “crapplications” to both internal and external users. His thought experiment – and at this point in time I would consider it just that, an experiment – goes to show that enterprises thinking about cloud computing come at it from many angles. It’s yet another use case for hybrid cloud, this time targeting a subset of the broader desktop virtualization technologies, namely application virtualization.

  • ACME Corp. has several client / server LoB applications that it would like to use the flexible nature of AWS to “serve” them from. The applications are not accessed outside of North America and are used between 8am-8pm Eastern Time.
  • ACME Corp. uses Citrix XenApp to provide access to the applications.
  • The user base is internal and external, but all have valid Active Directory accounts for ACME Corp. environment.

Now, for more of a private on-premise feel, consider the large VDI environments enterprises are deploying. They are having desktop sizing and scalability discussions. They are digging into the various performance concerns that VDI introduces — App Streaming, Profile/Personalization and OS Boot Storms, to name just a few — and ultimately they need to be able to show financial and/or operational gains to the check signers. Many of these projects are being spurred by Windows 7 migrations, the increasing proliferation of mobile devices and the opportunistic problem-solving that us humans are so damn good at. In and of itself, not cloud computing, but a veritable smorgasbord of lessons learned that can be applied across an organization.

It’s like a virtual stepping stone. Tackling desktop workloads in the enterprise is one area where you can test your mettle for what it takes to build a private cloud, and you can do it today, for good reasons. At the same time, you can learn from the technical, financial and political challenges that go hand-in-hand with the development of a private cloud or the adoption of public off-premise cloud computing. I find the two painfully intertwined, even if the end-state is to have anywhere access to an elastic application that cares not for its VM master or the silly desktop you might access it from.


Creating a Local ISO Storage Repository on XenServer

Trying to find a quick and painless way to add an ISO storage repository to your XenServer without using CIFS or NFS? I was searching for just that after attaching an external USB drive with all my ISOs to my XenServer (running version 5.6 FP1, aka “Cowley”, in my case). Since there’s no direct method for this, I used a little trial-and-error and came up with the following steps, to be run at the CLI. First, you need to identify and prepare the disk, so after plugging it into your XenServer, run the following commands (replacing /dev/sdb with the /dev node returned by the first command).

ls -l /dev/disk/by-id | grep usb

fdisk /dev/sdb

If you’re unfamiliar with fdisk on Linux, use the following documentation for guidance: http://tldp.org/HOWTO/Partition/fdisk_partitioning.html. You’ll need to create a partition with the Linux partition type (ID = 83). The final step in preparing the disk is to create an EXT3 filesystem on your desired partition. In my case I created a single partition, so it looks like this.

mkfs -t ext3 /dev/sdb1

With everything prepped and ready to go you need to create a mount point and then mount the drive.

mkdir /mnt/wd500gb/ISO

mount /dev/sdb1 /mnt/wd500gb/ISO

This will be enough to run through the rest of the tutorial, but you’ll need to add the USB drive to /etc/fstab in order for it to be mounted every time the XenServer reboots and attempts to reconnect the Storage Repository. The following commands will create the actual Storage Repository (use the UUID returned by the pbd-create command as input for the pbd-plug command — it will differ from the one below). You can change the name-label for the SR to something appropriate for your environment, and you’ll need to change the name-label in the host-list command to match your XenServer, as well as the mount point you used:


UUID=$(uuidgen)

xe sr-introduce name-label="ldf-xs-01:/mnt/wd500gb/ISO" content-type=iso shared=false type=iso uuid=${UUID}

xe sr-param-set other-config:auto-scan=true uuid=${UUID}

xe pbd-create host-uuid=`xe host-list name-label=ldf-xs-01 --minimal` sr-uuid=${UUID} device-config:location="/mnt/wd500gb/ISO" device-config:options="-o bind"

xe pbd-plug uuid=8f5d37c8-775e-aff1-0943-e470576372d2

Keep in mind that this was tested in a single server resource pool, isn’t intended for production usage and you’ll only be able to access the ISOs from the XenServer where you attached the USB drive.
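As mentioned above, for the mount to survive a reboot you’d also want an /etc/fstab entry, something along these lines (device and mount point from the example above – a /dev/disk/by-id path would be more robust, since /dev/sdb can shift between reboots):

```
/dev/sdb1  /mnt/wd500gb/ISO  ext3  defaults  0  0
```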


Internal Routing with XenServer and Vyatta

A couple weekends ago I spent some time getting Vyatta Core 6.1, vyatta-xenserver_VC6.1-2010.10.16_i386 to be precise, deployed in my lab. I found a few tutorials for configuring Vyatta with VMware products, but didn’t really see anything for XenServer. Citrix highlighted the possibilities fairly soon after the XenSource acquisition in a blog post, but that was a couple years ago. Since then Vyatta and Citrix have announced a closer partnership and Vyatta was even part of the C3 Cloud Bridge blueprint. All positive signs that it should be fairly painless, so off we go.

First and foremost you need to download the latest version of their XenServer virtual appliance. If you’re a newbie to Vyatta, like I was, you’ll probably want to grab some documentation as well. The great thing about the appliance is you don’t need to muck about with a custom installation. Once you’ve imported the virtual appliance you should have a new template ready for VM deployments (in my case it was called: vyatta-xenserver_VC6.1-2010.10.16_i386).

XenServer Networks

You can see in the screenshot above that I’m using 3 physical NICs in my XenServer. The on-board NIC is a dedicated management interface (highlighted in the screenshot) and I’ve bonded a pair of Intel NICs for VM traffic. The third network, without a physical NIC associated to it, is an internal-only network and the primary reason I want the Vyatta router.

With my networks outlined, I’ll walk through the process of configuring the Vyatta router so that the internal-only network (remember: 192.168.48.x/24) can access the lab network (remember: 192.168.24.x/24).

New VM

Using the imported template, create a new VM, adjusting the settings as appropriate until you get to the networking configuration. Ensure a virtual interface is added for both the lab and internal-only network(s). Treat the router like you would any other VM; it doesn’t need an interface on the management network, unless you’re using it for VM traffic as well.

Networking Config

Complete the VM setup and have it boot up automatically. To login, use the default credentials: vyatta/vyatta.

Vyatta Console

Initially, we’re going to configure each interface with an IP address and enable SSH access. You’re not required to use SSH to complete the configuration, but it will allow you to access the CLI without XenCenter in the future.

set interfaces ethernet eth0 address <lab-network-IP>/24
set interfaces ethernet eth1 address <internal-network-IP>/24
set service ssh

You should now have a rudimentary configuration up and running. The two networks won’t be able to communicate with each other yet, but you should definitely be able to ping each interface from another device on the same network segment. In order to get the internal-only network talking to the lab network we’ll configure a NAT rule to pass traffic back and forth.

set service nat rule 1 source address 192.168.48.0/24
set service nat rule 1 outbound-interface eth0
set service nat rule 1 type masquerade
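For reference, here’s the whole interface/NAT configuration with concrete addresses filled in – a sketch only, since the .254 host addresses are assumptions for illustration (substitute whatever fits your lab), and the commands need to be committed from configuration mode before they take effect:

```
configure
set interfaces ethernet eth0 address 192.168.24.254/24
set interfaces ethernet eth1 address 192.168.48.254/24
set service ssh
set service nat rule 1 source address 192.168.48.0/24
set service nat rule 1 outbound-interface eth0
set service nat rule 1 type masquerade
commit
```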

If you were to stop here, any VM on the internal-only network using the Vyatta router’s internal interface (eth1) as its default gateway would have access to the lab network, BUT it won’t be able to access the Internet. This may not be a big deal in your environment, but I still wanted to access OS updates, software installs, etc. without jumping through too many hoops. To achieve that we need to configure the router to use the default gateway on the lab network.

set system gateway-address <lab-network-default-gateway>

Test your setup. From a VM on the internal-only network you should be able to ping hosts on the lab network and the Internet. To save the configuration so it persists after a reboot, run this final command in the CLI.

save
The last thing I did in my environment was configure a static route on my wireless router so that devices on the lab network can access the internal-only network without any client-side modifications. I’m running Tomato v1.28 on a Linksys WRT54G, so adding the static route is under Advanced -> Routing. The exact method for performing this step depends on your wireless router, but the gist of it is that you want all traffic destined for the internal network (192.168.48.x in my case) to use the Vyatta router’s lab-network interface (eth0) as its gateway.

And with that, you should now have an internal-only network that’s completely accessible from the lab network, but allows you to retain greater control of the isolation. Need a DHCP server for PVS? Want to test the Branch Repeater VPX on separate networks? Start exploring and leave me your feedback in the comments.