Photo by net_efekt
I spend most of my time working on virtual machines. Mostly VMWare but also occasionally Parallels and Hyper-V. I've learned a few things about getting the best performance out of these machines and here are my top recommendations.
Fast Hard Disks
Fast disks are critical to a well functioning virtual machine, particularly if you have several machines on the same computer.
Speedy Disk Drives
Coming in a close second is the need for speedy disk drives, especially if one computer is hosting several virtual machines.
Performant Persistent Storage
Following closely on is the need for high performance storage for the virtual machines, most critical when there is a plenitude of artificial computers operating within the confines of an individual physical device.
In all seriousness, I have found nothing that gives a greater boost to the performance of VMs than fast disk drives. It's also important to keep the VMs running on a separate physical disk from the operating system.
It's also very important to turn off as much of the UI pizzazz as possible. Windows 7 Aero themes just suck up the processor. VMs seem to lag furthest behind hardware in the graphics adapter department.
Other than that it's just a matter of tweaking the right balance of virtual processors, RAM, and other software that may be running on the same system. A few extra percent of performance can be wrung out with some trial and error.
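When I'm sizing up a disk for VM duty, the first thing I check is raw sequential write throughput. Here's a rough sketch in Python of the kind of sanity check I mean (the function name and sizes are my own invention; a dedicated benchmark tool will give you far better numbers, but this is enough to compare two drives):

```python
import os
import tempfile
import time

def sequential_write_mbps(path, size_mb=64, block_kb=1024):
    """Sequentially write size_mb of zeroes to path and return MB/s."""
    block = b"\0" * (block_kb * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(size_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the disk
    return size_mb / (time.monotonic() - start)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        # Point the path at the disk you're considering for VM storage.
        print(f"{sequential_write_mbps(os.path.join(tmp, 'bench.bin')):.1f} MB/s")
```

Run it once against the OS disk and once against the disk you plan to park the VMs on; if the second number isn't comfortably higher, that's your bottleneck.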
Any other tips you might have to share? Please post them in the comments.
Photo by Striatic
"Developers should consider all data input by a user as harmful until proven otherwise." ~ Rocky Heckman
I liked this statement so much I had to use it for this article. Though somewhat unrelated, the sentiment is precisely what I was looking for.
ZDNet Australia ran a story by Josh Taylor about a TechNet 2010 talk by Rocky Heckman on hackers inadvertently sending their malicious code to Microsoft during the development phase of their soon-to-be viruses. Heckman went on to say that the same practices have been occurring for over six years, meaning that developers aren't listening to their security counterparts.
Thanks, Mr. Heckman. Now let's expand that statement beyond developers who might have to deal with an outsider's malicious code to our jobs as Windows administrators, specifically as it relates to remotely installing applications on our users' desktops.
Just like a realtor cares about location, location, location, sys admins should care about testing, testing, testing.
From time to time we receive support requests from users who have pushed the latest bleeding-edge package (or patch) to all of their systems. This is careless and can cause some serious problems.
First off, we define serious problems as anything that:
- prevents or hinders a user's ability to complete their tasks
- compromises system security
Testing is a fairly straightforward proposition. Get the application or patch into a controlled environment and test the installation. Once you get a successful installation (i.e. the patch or application is working as advertised), review the event logs and test any other critical applications. (Testing critical apps, even though they may be unrelated to the installed app or patch, is very important when it comes to installing anything on servers.)
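The "review the logs, test the critical apps" routine is worth making repeatable so every test install gets the same scrutiny. As a sketch (the check names below are hypothetical placeholders; swap in whatever matters in your environment), a tiny harness can run the same smoke checks after each installation and flag anything that fails:

```python
def run_smoke_tests(checks):
    """Run each named check; return a list of (name, passed, detail)."""
    results = []
    for name, check in checks:
        try:
            check()
            results.append((name, True, "ok"))
        except Exception as exc:  # one failed check shouldn't stop the rest
            results.append((name, False, str(exc)))
    return results

# Hypothetical checks; fill the bodies in with whatever matters in your lab,
# e.g. launching a critical app or scanning the event log for new errors.
def critical_app_launches():
    pass

def no_new_error_events():
    pass

results = run_smoke_tests([
    ("critical app launches", critical_app_launches),
    ("no new error events", no_new_error_events),
])
failures = [name for name, ok, _ in results if not ok]
```

The point isn't the code, it's that the same checklist runs every time, so "testing, testing, testing" doesn't quietly degrade into "eyeballing it once."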
We are big proponents of VMWare, especially the ESXi flavor. We have several in our lab and we use them extensively for testing and support.
Read the documentation of the app or patch. Does it change your security posture in any way? Does it open new ports? Does it necessitate the creation of any service level accounts? Does it make any changes to system files? All of these answers (and more) are included in the documentation that we all just love to read.
I know, I know. Documentation is a bore. But it's part of our jobs as sys admins. Reading docs is like shoveling dirt. It's boring and tedious, but once in a while you'll find a nugget, and that will make the work worthwhile.
I'm a big believer that we'll not, as a society, achieve true utopian peace until every entity (that is, person and business) backs up their data and verifies that those backups actually work. (That's another way of saying I think that utopian thinking is nonsense.) The same goes for testing apps prior to major deployments.
Be that as it may, just because there are groups of admins who cowboy up and deploy with reckless abandon doesn't mean that you have to. Just like the guys being chased by the bear realize that they don't have to outrun the bear, they just need to outrun their slowest group member.
Remember, ESXi is free and is only a download away.
Follow me on Twitter @ShawnAnderson
Deploy software for free with PDQ Deploy.
Photo by teclasong
While reading my daily blog roll I ran across a posting at the always informative Train Signal Training blog about VHDs, or Virtual Hard Disks. This really caught my eye as I hadn't heard of this functionality before. Virtual disks have been a part of OS X since the beginning (I believe they go back to the NeXT days) and I find them to be very useful. It's great to see this capability now in Windows 7 and Server 2008 R2. The steps to create and use a VHD are a bit more complicated than creating a DMG on the Mac, but that's a small price to pay for the capability.
As usual, I'm interested in the command line options and here Microsoft doesn't disappoint. The DiskPart.exe utility provides all the necessary functionality to create, partition, format, and use a virtual disk. Here's a session that creates a 32 GB disk and assigns it a drive letter.
PS C:\> diskpart
Microsoft DiskPart version 6.1.7600
Copyright (C) 1999-2008 Microsoft Corporation
On computer: AADEV
DISKPART> create vdisk file="c:\test.vhd" maximum=32000 type=expandable
100 percent completed
DiskPart successfully created the virtual disk file.
DISKPART> select vdisk file="c:\test.vhd"
DiskPart successfully selected the virtual disk file.
DISKPART> attach vdisk
100 percent completed
DiskPart successfully attached the virtual disk file.
DISKPART> create partition primary
DiskPart succeeded in creating the specified partition.
DISKPART> list partition
Partition ### Type Size Offset
------------- ---------------- ------- -------
* Partition 1 Primary 31 GB 1024 KB
DISKPART> select partition=1
Partition 1 is now the selected partition.
DISKPART> format quick fs=ntfs
100 percent completed
DiskPart successfully formatted the volume
DISKPART> list volume
Volume ### Ltr Label Fs Type Size Status Info
---------- --- ----------- ---- ----------- ------- --------- ------
Volume 0 D CD-ROM 0 B No Media
Volume 1 System Rese NTFS Partition 100 MB Healthy System
Volume 2 C NTFS Partition 127 GB Healthy Boot
Volume 3 NTFS Partition 31 GB Healthy
DISKPART> select volume=3
Volume 3 is the selected volume.
DISKPART> assign letter=V
DiskPart successfully assigned the drive letter or mount point.
PS C:\> copy license.xml v:\
PS C:\> dir v:\
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 9/08/2010 2:41 PM 418 license.xml
As you can see, it's pretty straightforward to create and use a VHD. You can even install Windows on a VHD and boot to it, which can be very useful for troubleshooting. I love finding new features that I didn't know about and can explore.
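The interactive session above can also be replayed non-interactively with DiskPart's /s switch, which is handy once you're creating VHDs regularly. Here's a sketch in Python that writes the same commands out to a script file (the file name is my own choice; whether you generate the script or just save it by hand is a matter of taste):

```python
import os
import tempfile

# The same commands as the interactive session above, minus the "list" and
# volume-selection steps: after "format", the freshly created volume still
# has focus, so "assign" applies to it directly.
DISKPART_SCRIPT = """\
create vdisk file="c:\\test.vhd" maximum=32000 type=expandable
select vdisk file="c:\\test.vhd"
attach vdisk
create partition primary
select partition=1
format quick fs=ntfs
assign letter=V
"""

script_path = os.path.join(tempfile.gettempdir(), "make_vhd.txt")
with open(script_path, "w") as f:
    f.write(DISKPART_SCRIPT)

# On Windows you would then run it from an elevated prompt with:
#   diskpart /s <path to make_vhd.txt>
```

One script file, one command, and a fresh VHD every time.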
Looking for unattended installation software? Download a free copy of PDQ Deploy.
Photo by Richard-G
Infoworld has a great article entitled 10 Tips for Boosting Network Performance that I found fascinating. Two tips in particular struck me, numbers 7 and 8. These address a problem we ran into recently when setting up a virtual lab: disk speed. It was a real surprise to me just how much disk speed (RPM and bus throughput) affects the performance of virtual machines. It makes sense once you understand the reasons behind it, but it was still quite shocking to have it bite me in the butt. It's tempting to try to save money with cheaper SATA drives, and while there are times that's appropriate, a virtual server isn't one of them.
The whole article, though, is full of some great ideas. Some of which you may be aware of, and some probably not as much. There should be something there for everyone to learn from.
While you're reading about network performance you may want to learn how to use software deployment to Unplug the Sneakernet.
Photo by joebeone
Okay, now you've got some servers consolidated using virtualization (you are consolidating servers, right?) But what do you do with the old boxes? Sure, you could donate them to a charity or recycle them in an eco-friendly way. But where's the fun in that? Here are some other ideas.
- Cubicle space heater.
- Fill with sand for large zen garden.
- Add realistic wind noise to flight simulator game.
- Glue front panels on fridge to hide beer in server room rack.
- Boat anchor for CEO's new yacht.
- Add some tubing, Mentos and Diet Coke to win contract for new fountain sculpture outside courthouse.
- Landscaping for new miniature golf course in old server room.
- Build vacation shack and show up that beer can house guy.
- Pizza warmer.
- Advanced new "laptop" for that guy in sales who's always making fun of your conference t-shirts.
Follow me on Twitter @AdamRuth
Photo by geowomble
Steven Warren over at Train Signal Training has a good tutorial on using the Virtual PC optional download for Windows 7. I have always been a fan of virtualization and I really like what Microsoft is doing here. This gives them a choice for dealing with backward compatibility that they didn't have in the past: simply ignore it.
Backwards compatibility is a double-edged sword. It keeps people from abandoning your platform when things break, but it makes it more difficult to innovate because it requires that you maintain the bad code as well as the good. It's a good thing to retire old APIs and tools when better options exist, but if you can't get rid of that second copy of notepad, it puts a lot of pressure on future versions to maintain all of the cruft.
Apple broke ground in this area, first running OS 9 on top of OS X and then using Rosetta to run PowerPC apps on Intel chips. Apple certainly leans a different direction than Microsoft in the backwards compatibility arena. Which model is better in the long run is up for debate, but it's nice to see Microsoft taking the cue and accepting the compromise of virtualization. It will be good to see some of the deprecated interfaces in Windows finally disappear.
Want to deploy Virtual PC remotely? Get a 30-day free trial of Admin Arsenal today.
My oh-so-cool dual keyboard set up
Author: Adam Ruth
(Note: last week I said that this week I was going to write up a tutorial for running Perforce, but it was getting so long I decided it was best to expand it into a whitepaper. We'll all be better for it.)
What follows is a story of hope. That even when things seem their bleakest a kludge can come along and get things limping along.
For the last couple of years I've been dreaming of getting a second keyboard, something that will allow me to take the myriad keyboard shortcuts I have now and give me one-key access to most of them. While developing in Microsoft Visual Studio, there are a lot of tasks that I perform regularly that require two hands and a bit of contortion to operate correctly. There's only so much I can do with changing the keyboard mappings, since I have more common tasks than buttons. An example is the default keyboard shortcut to debug all of a project's unit tests: Ctrl+T, Ctrl+A. This is easy enough to type, but it requires two hands and 4 keystrokes. Even if I wanted to simplify it, there just aren't many free buttons left to use.
This is exacerbated by the fact that I run my development inside of a VMWare Fusion
virtual machine on an iMac. OS X takes some of my precious keys away for its own use, and I don't want to give them all up. If you've ever looked at an Apple keyboard, you see that the function keys double as system keys for such things as volume control, Dashboard, and Exposé. Now I've got one more modifier key to worry about, the Fn key.
I thought the solution would be a second keyboard with a bunch of keys that I could map to different functions at will, but I wasn't sure how to do it. I didn't want to spend money on a specialized second keyboard; they can get expensive. I was looking for a way to plug in a normal keyboard and use its keys as all brand new. I couldn't figure out how to do this because a second normal keyboard just duplicates the existing keyboard. Until I figured that out, I installed a program called Keyboard Maestro, which at least let me use all of the extra keys on an Apple keyboard with Visual Studio (F13-F19, in particular). But I was still running out of keys.
Then a product came to my attention called QuicKeys. It has the ability to map separate devices to different actions. I tried plugging in another Apple keyboard, just like the one I already had, and it seemed to work. But unfortunately, because it was the exact same type of device, QuicKeys kept losing the new mappings and it was a chore to get it working again. I had an older Bluetooth Apple keyboard in a box, which I dusted off and tried. It worked great; now I have a whole 78 keys that I can use for single-key access instead of the carpal tunnel-inducing keystrokes I was using.
Only one problem. QuicKeys, for some reason, doesn't work with VMWare Fusion. When it sends keystrokes to Fusion the modifier keys (Shift, Ctrl, and Alt) get stripped off. Damn! I had just spent a few hours getting my new keyboard up and running and figuring out the QuicKeys mapping interface. I e-mailed support only to be told that the problem was VMWare, not QuicKeys (how many times have you heard that excuse?) That didn't sound right, because Keyboard Maestro worked just fine.
Aha! Wait, if Keyboard Maestro works, maybe I could use it alongside QuicKeys. An idea so crazy that it just might work. I set it up so that QuicKeys maps a keystroke from my Bluetooth keyboard to a keystroke that Keyboard Maestro then listens for and translates for VMWare. An example through the Kludge-o-train:
The normal keystroke to build a project in Visual Studio is Ctrl+Shift+B. Not the simplest thing in the world to type, especially when you do it 5,000 times a day. So I used QuicKeys to map the second keyboard's B key to Shift+Ctrl+Cmd+B, and then Keyboard Maestro maps Shift+Ctrl+Cmd+B to Shift+Ctrl+B and voilà I can build my projects with a single hand easily. I use the Command key modifier for all of my QuicKeys -> Keyboard Maestro keystrokes because the Command key is rarely used in Windows with other modifier keys (it maps to the Windows key by default.)
My kludge is limping along just fine and I've even started to use my extra keyboard for other programs like iMovie and iPhoto. I'm in geek heaven.
Need administrator tools that aren't kludges? Try a 30-day trial of Admin Arsenal
Follow me on twitter @AdamRuth
Photo by iLoveButter
Author: Adam Ruth
I've always been a fan of virtualization. I've been running computers in a virtual testing lab for several years now. But, as good as it was, it was never quite fast enough for me to run as my primary development workstation. That is, until now.
For the last month I've been running VMWare Fusion 3 on one of the new 27" iMacs (with the i7 processor and 8 GB RAM). Windows 7 running in a virtual machine on this computer is quite a bit faster than my previous workstation, which was starting to get long in the tooth. I'm certain that for the same amount of money I could have gotten a computer that would blow them both away. But what I have now is fast enough and includes some benefits that a dedicated box doesn't.
- Being able to rapidly shift between VMs. I have several VMs, each dedicated to a different project or purpose. These were all shoehorned into a single computer before, now all I need to do is put one to sleep and start up another. I can also, in a pinch, run two side by side. I try not to do this very often, because even with 8 GB of RAM, it sometimes pushes me to swapping.
- Sharing files between VMs is easy using MacFUSE and shared folders. This removes much of the need to run two VMs concurrently.
- Sharing files and clipboard between the VMs and the host. I do my e-mail, browsing, and most of the administrative work on my Mac. Because of this, it was always a bit of a pain when I needed to share something between the Mac and my development workstation. Now, it's a breeze with shared folders, drag and drop, and the shared clipboard.
- Travelling is now much easier. I don't travel much, but when I do it's always been quite a bit of work to take my development workstation with me. It would usually entail creating a VM (only for emergencies, I try not to do much development on my five year-old laptop.) Now, though, it's a breeze. I just need to copy the VMs to an external disk and away I go. Also, I can just start the VMs on a fast machine on the other end.
- Snapshots. Being able to take a snapshot and roll back is a great help. If I'm installing some software to experiment with, I can just take a snapshot before I install and roll back after I'm done with it. No need to worry about anything left behind (particularly spyware, which sometimes gets bundled). I have my VMs set to create an auto-protect snapshot every day. This came in handy when I wanted to retrieve a file that I had deleted accidentally (this was on a "scratch" VM that doesn't have proper backup or source control, and it saved me the trouble of recreating the file).
I think that this pretty well sums it up. Virtualization on the desktop has finally come of age for me. I'm sure it's still not fast enough for really graphics-intensive applications, such as video editing or gaming, but it has finally crossed the bar for my software development. It's a day I've been waiting for (with every new release of VMWare Fusion or Parallels I spent a couple of days trying to fit in my development, only to be disappointed) and I'm glad it's here now.
Photo by * w a a *
Author: Adam Ruth
Today, cloud computing, virtualization, and tablet PCs are vying for the hype crown. At this point it's impossible to tell which claims will bear fruit, and which will fall to the earth and rot.
Two of those items, cloud computing and virtualization, are high on my list of "game changers" for the future. But all of the failed technologies on the list would probably have been equally high on my list in the past. I would like to think that they're on my list because I'm so prescient, but the reality may be that I'm just caught up in the hype storm but don't know it (does anyone know when they're in the hype storm?)
Only the future will tell what will end up changing the IT landscape, and we do need to try to anticipate it if we want to stay relevant, but it's helpful to look at our past to see how easy it is to be very wrong.
A couple months ago I wrote about my use of Microsoft Hyper-V, which is Microsoft's answer to VMWare. I have been quite happy with it for working with a development lab. Since I wrote that blog post I've learned a couple new things.
Photo by fdecomite
First, Hyper-V in Windows Server 2008 R2 is really slick. Windows 7 and Server 2008 R2 include the bits and pieces to run within a virtual machine, whereas with other operating systems you need to install the Hyper-V tools in the guest in order to get full functionality. That saves a step when installing a new virtual machine, especially if you run the Hyper-V manager through remote desktop (without the tools, the mouse can't be captured, making installation a real pain). 2008 R2 is going to give VMWare a run for its money in small organizations.
Second, VMWare still has more power overall than Hyper-V. I started using VMWare Server, which is now free (in response to the "freeness" of Hyper-V, I assume). It lacks some of the power of Hyper-V, but it does seem more mature. The user interface is quite a bit better, in my opinion, and it seems to support more options for hardware and configuration. It doesn't support multiple snapshots like Hyper-V does, which can be limiting, but you can always pay for that feature if it's needed.
What I really like about VMWare, though, is its cross platform nature. I run a number of Apple Macintoshes and it's nice to be able to shuttle virtual machines back and forth between VMWare Fusion and VMWare Server. If I ever need to do some running on Linux, I would be supported there as well. Also, as for the virtual machines themselves VMWare is agnostic, while Hyper-V is focusing on Windows. I haven't tested anything but Windows just yet, but based on what I've seen so far it's probably going to be smoother with VMWare.
In the end, I think I'm going to stick mainly with VMWare so I won't have virtual machines trapped on Windows. But I won't abandon Hyper-V completely; there will be times it makes sense to use, particularly for testing. At some point I can see my development lab getting very large, and if that happens I will look to VMWare's higher-end products, which look to have a lot more capabilities than Microsoft's high-end offerings, at least for the time being.