Virtual Stepping Stones

Simple Question:

Is desktop virtualization a key stepping stone for cloud computing in the enterprise?

Sure, as someone who spends most of their day (and quite a few evenings) working on desktop virtualization and cloud computing projects — okay, mostly desktop virtualization, but cloud discussions are increasingly common and I enjoy the blog/Twitter debates — it might seem natural for me to consider desktop virtualization a primer for enterprise adoption of cloud computing.

How can I not, though? Setting aside licensing and business concerns for a moment, the idea of taking a relatively static workload, like a physical desktop, and migrating it onto an elastic, virtualized, self-service platform makes a lot of sense. It’s no wonder VMware was so quick to move from server virtualization to desktop virtualization. It’s also why Citrix was ready, willing and able to move in that direction, building on their success with TS/RDS-based solutions. It might not be the be-all and end-all for every user, and it may only serve specific use cases at the moment (or long-term), but I would argue that cloud architects and desktop virtualization architects could learn a lot from each other if they made time for collaboration.

Some people might argue that being able to deliver desktops as a service, the ultimate goal of most VDI or RDS-based projects, doesn’t automatically make it cloud computing. That may be true, but it doesn’t really matter and misses the crux of my argument. Desktop virtualization architects can learn a lot from the “pie-in-the-sky” cloud architects who are championing DevOps and pushing the envelope with elasticity, metering, chargeback and the reality of consumption-based pricing for IT resources. How long have enterprises struggled with the device-centric days of managing physical PCs? How can these “new” concepts be applied to one of IT’s oldest recurring costs, the physical PC?

If you still think I’m off-base, consider the excellent blog post by @reillyusa, titled 1-2-3 easy as VPC. In it, he outlines a theoretical evolution of AWS and Citrix capabilities that would allow an enterprise to securely leverage public cloud resources to deliver Windows “crapplications” to both internal and external users. His thought experiment (and at this point in time I would consider it an experiment) goes to show that enterprises thinking about cloud computing come at it from many angles. It’s yet another use case for hybrid cloud, this time targeting a subset of the broader desktop virtualization technologies, namely application virtualization.

  • ACME Corp. has several client/server LoB applications that it would like to “serve” from AWS, taking advantage of its flexible nature. The applications are not accessed outside of North America and are used between 8am and 8pm Eastern Time.
  • ACME Corp. uses Citrix XenApp to provide access to the applications.
  • The user base is internal and external, but all users have valid Active Directory accounts for the ACME Corp. environment.

Now, for more of a private, on-premises feel, consider the large VDI environments enterprises are deploying. They are having desktop sizing and scalability discussions. They are digging into the various performance concerns that VDI introduces — App Streaming, Profile/Personalization and OS Boot Storms, to name just a few — and ultimately they need to be able to show financial and/or operational gains to the check signers. Many of these projects are being spurred by Windows 7 migrations, the increasing proliferation of mobile devices and the opportunistic problem-solving that us humans are so damn good at. In and of itself, not cloud computing, but a veritable smorgasbord of lessons learned that can be applied across an organization.

It’s like a virtual stepping stone. Tackling desktop workloads in the enterprise is one area where you can test your mettle for what it takes to build a private cloud, and you can do it today, for good reasons. At the same time, you can learn from the technical, financial and political challenges that go hand-in-hand with the development of a private cloud or the adoption of public, off-premises cloud computing. I find the two painfully intertwined, even if the end-state is to have anywhere access to an elastic application that cares not for its VM master or the silly desktop you might access it from.


Creating a Local ISO Storage Repository on XenServer

Trying to find a quick and painless way to add an ISO storage repository to your XenServer without using CIFS or NFS? I was searching for just that after attaching an external USB drive with all my ISOs to my XenServer (running version 5.6 FP1, aka “Cowley”, in my case). Since there’s no direct method for this, I used a little trial-and-error and came up with the following steps, to be run at the CLI. First, you need to identify and prepare the disk, so after plugging it into your XenServer, run the following commands (replacing /dev/sdb with the /dev node returned by the first command).

ls -l /dev/disk/by-id | grep usb

fdisk /dev/sdb

If you’re unfamiliar with fdisk on Linux, consult its documentation for guidance. You’ll need to create a partition and set its type to Linux (partition ID 83). The final step in preparing the disk is to create an ext3 filesystem on your desired partition. In my case I created a single partition, so it looks like this.

mkfs -t ext3 /dev/sdb1

With everything prepped and ready to go you need to create a mount point and then mount the drive.

mkdir -p /mnt/wd500gb/ISO

mount /dev/sdb1 /mnt/wd500gb/ISO

This will be enough to run through the rest of the tutorial, but you’ll need to add the USB drive to /etc/fstab in order for it to be mounted every time the XenServer reboots and attempts to reconnect the Storage Repository. The following commands will create the actual Storage Repository. Before running them, set the ${UUID} variable to a freshly generated UUID (uuidgen works for this), and use the UUID returned by the pbd-create command as input for the pbd-plug command (it will differ from the one shown below). You can change the name-label for the SR to something appropriate for your environment, and you’ll need to change the name-label in the host-list command to match your XenServer, as well as the mount point you used:


xe sr-introduce name-label="ldf-xs-01:/mnt/wd500gb/ISO" content-type=iso shared=false type=iso uuid=${UUID}

xe sr-param-set other-config:auto-scan=true uuid=${UUID}

xe pbd-create host-uuid=`xe host-list name-label=ldf-xs-01 --minimal` sr-uuid=${UUID} device-config:location="/mnt/wd500gb/ISO" device-config:options="-o bind"

xe pbd-plug uuid=8f5d37c8-775e-aff1-0943-e470576372d2

Keep in mind that this was tested in a single-server resource pool, isn’t intended for production use, and you’ll only be able to access the ISOs from the XenServer where you attached the USB drive.
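As noted above, the mount won’t survive a reboot without an /etc/fstab entry. A minimal line, assuming the same device and mount point used in this example, might look like this:

```
/dev/sdb1    /mnt/wd500gb/ISO    ext3    defaults    0    0
```

Since /dev/sdX names for USB drives can change between boots, referencing the partition via its /dev/disk/by-id path (the same mechanism used to identify the drive earlier) is more robust.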