It's difficult to blame RAM or CPU when two identical hosts are provided for testing. We
spent three weeks in the test lab and, based on the results we had to present to an
engineering review board, threw out the load-balanced eight-virtual-server architecture
in favor of a single physical server with a virtual failover host. Eventually we threw out
the failover host as well, because it was unreliable with the Fibre Channel drivers we needed.
By definition, virtualization has a cost associated with it (I call it the "virtualization tax").
One of the prime benefits of virtualization is that it lets you steal CPU cycles from idle
systems in order to reduce the physical host footprint. But when the systems are NOT idle,
and especially when the load is I/O-bound, the tax goes up. GIS applications easily have
the highest I/O demand in the business marketplace, so virtualizing production GIS
servers means paying the highest tax. You won't notice it while the systems are lightly
loaded, but under heavy load you can't help but notice it.
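Nothing above puts a number on that I/O tax, but it's easy to measure for yourself: run the same crude sequential-read timing on a guest and on comparable bare metal and compare the rates. This is my own sketch, not anything from the original discussion; the file size and chunk size are arbitrary, and OS caching means a single pass is only a rough indicator.

```python
# Crude sequential-read benchmark: run identically on a virtual host and
# on bare metal to estimate the I/O side of the "virtualization tax".
# Constants are arbitrary illustration values, not tuned recommendations.
import os
import tempfile
import time

CHUNK = 1024 * 1024   # read/write 1 MiB at a time
TOTAL_MB = 64         # small enough to run anywhere

def sequential_read_mb_per_s(path, total_mb=TOTAL_MB):
    """Time one sequential pass over the file and return MB/s."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass
    return total_mb / (time.perf_counter() - start)

if __name__ == "__main__":
    # Write a scratch file of random bytes, time reading it back, clean up.
    with tempfile.NamedTemporaryFile(delete=False) as f:
        for _ in range(TOTAL_MB):
            f.write(os.urandom(CHUNK))
        path = f.name
    try:
        print(f"sequential read: {sequential_read_mb_per_s(path):.1f} MB/s")
    finally:
        os.remove(path)
```

A big gap between the two numbers under otherwise identical conditions is the kind of overhead described above; for a real comparison you'd also want to defeat the page cache and average several runs.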
I had VM administrators pleading with me to speed up the physical deployment of an ArcSDE
server because the four virtual servers it was replacing were driving up the data center's
resource-demand numbers to the point of endangering their contract performance metrics.
(Once deployed, the ArcSDE connect time, with scores of concurrent users and thousands
of feature classes, dropped from 40-120 minutes to 5 minutes.) There was nothing wrong
with the RAM or CPUs involved, and only two of the servers were really busy. That same
physical server now serves 3-4 times the client load of the four virtual hosts, and still has
a similar connection time (with 30% more tables).
- V