Apr 28

 

Less is not always more. We can agree that virtualization and server admins need less data to process and analyze. In a recent post, I mentioned some brilliant advances serving that end, including better data compression, faster data crunching and more sophisticated algorithms. We can also agree that a virtualization management solution can and should minimize the need for root cause analysis. But the steps between a) the information about your virtual infrastructure and b) the data you must process and act on to effectively manage the environment are important and varied.

Processing and storing the vast amounts of data about both the physical and virtual network and all areas of interoperability is hard—really hard. That’s why some vendors want to convince you it’s unnecessary. Their thought process goes something like this: You don’t really need all that data. It’s just for troubleshooting. Wouldn’t you rather have a solution that just solves your problems outright? And I have to admit, it sounds enticing, dare I say, too good to be true.

In truth, a virtualization management solution that strips out useful data required for effective problem-solving and performance management is doing you a disservice. That doesn't mean the right solution leaves you buried under a mountain of data. Rather, what matters is which data you're presented with, in the context of your unique challenges.

When a virtualization management tool skimps on data and analysis, you can end up wasting time, effort and money, in direct contrast to the promised benefits. Many of these tools base their results on only the most recent hour of data, and probably can't correlate that data with historical maximum resource usage. As a consequence, the results and recommendations don't tell the whole story about your virtual environment. You could end up moving workloads where they don't belong. And if the tool makes fresh recommendations from hour to hour, you'll end up thrashing your virtual environment around for no real performance benefit. The most effective virtualization performance management solutions are aware of historical resource usage, and may go further by placing that usage into context. Following this example, the ideal solution remembers that a particular VM spikes every Tuesday at 2 p.m. without negative consequences, and shapes its recommendations accordingly. That knowledge requires more comprehensive data collection, aggregation and analysis, but it means less work and wasted effort for you.
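The Tuesday-at-2-p.m. idea can be sketched in a few lines. This is a minimal illustration, not any vendor's actual algorithm: it assumes you keep an hour-of-week peak baseline from historical samples, and only flags current usage that exceeds the historical peak for that same slot by some margin. The function names, sample data and margin value are all hypothetical.

```python
from datetime import datetime

def build_baseline(samples):
    """Map (weekday, hour) -> peak CPU % seen historically for that slot.

    `samples` is an iterable of (datetime, cpu_percent) pairs.
    """
    baseline = {}
    for ts, cpu in samples:
        key = (ts.weekday(), ts.hour)
        baseline[key] = max(baseline.get(key, 0.0), cpu)
    return baseline

def needs_action(ts, cpu, baseline, margin=10.0):
    """Recommend action only if current usage beats the historical peak
    for this hour-of-week slot by more than `margin` percentage points.
    A recurring, previously-seen spike therefore raises no alarm."""
    peak = baseline.get((ts.weekday(), ts.hour), 0.0)
    return cpu > peak + margin

# Hypothetical history: this VM spikes to ~90% every Tuesday at 2 p.m.
history = [
    (datetime(2009, 7, 7, 14), 90.0),   # a Tuesday
    (datetime(2009, 7, 14, 14), 88.0),  # the next Tuesday
]
baseline = build_baseline(history)

# Another Tuesday 2 p.m. spike: expected behavior, no recommendation.
tuesday_spike = needs_action(datetime(2009, 7, 21, 14), 92.0, baseline)
# The same load at 3 a.m. Wednesday: unprecedented for that slot.
odd_hour_spike = needs_action(datetime(2009, 7, 22, 3), 92.0, baseline)
```

A tool that only looks at the last hour of data effectively runs `needs_action` with an empty baseline, so both spikes look identical to it.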

The problem isn’t too much data. The real problem is wrapping context around the data. We have invested a lot of intellectual property in solving this challenge, in order to put performance, capacity, right-sizing, workload, configuration management, monitoring and security data into the proper perspective for our customers. It’s context, not simply less data, that enables more efficient and effective virtualization management.

Aaron Bawcom is the Chief Technology Officer for Reflex Systems, a provider of end-to-end virtualization management solutions based out of Atlanta, GA. Contact him at abawcom@reflexsystems.com.

written by Aaron Bawcom

Jul 23

ClusterMerge

“I’ve been working on the railroad…”
One of the most useful features of virtualization is the concept of workload balancing. Workload balancing can turn a server farm into a sweaty-toothed engine of computational power. As a virtualization administrator, your constant challenge is squeezing as much performance out of your virtual infrastructure as possible. The more physical hosts you can have in a cluster, the better operational efficiency you will get across your virtual infrastructure. So how can VMsafe, a security technology, possibly help with this?

There’s no place like home
One of the key requirements for workload balancing is that all of the virtual machines operating within a shared cluster must have network connectivity. Which makes sense: the virtual infrastructure has to know that if it moves a virtual machine to a new host, the machine will still have the same network connectivity it had before, and the applications on it will continue to operate normally.

Off the grass!
Since the virtual infrastructure requires network connectivity for all virtual machines in a cluster, application owners end up wanting some segmentation between their applications and everyone else's. Some administrators use VLANs to solve that problem, but to keep workload balancing viable, every VLAN generally has to be available on every host in the cluster, which quickly becomes cumbersome. There are several ways to deal with this problem, but what happens more often than not is that separate clusters are created for different departments or applications.
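A rough back-of-the-envelope count shows why VLAN-based segmentation gets cumbersome (the numbers below are illustrative, not from any particular deployment): because any VM must be able to land on any host, each application VLAN has to be configured on every host, so the configuration work grows multiplicatively.

```python
def vlan_config_entries(hosts, app_vlans):
    """Count host-side configuration entries for VLAN segmentation.

    Every VLAN-backed port group must exist on every host in the
    cluster so any VM can migrate to any host; physical switch trunks
    must carry every VLAN as well (not counted here)."""
    return hosts * app_vlans

# 30 hosts x 10 application VLANs = 300 port-group definitions to
# create and keep consistent, before touching the physical switches.
entries = vlan_config_entries(30, 10)
```

Missing even one of those entries on one host silently shrinks the set of hosts a VM can balance onto.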

Slackin’
So let’s say you have 30 ESX hosts and you segment the 10 different applications hosted by the virtual infrastructure into separate clusters. That means you would be running 10 clusters with an average of 3 hosts per cluster. vSphere 4.0 allows up to 32 hosts in a single cluster, so you would not be getting the greatest operational efficiency you could squeeze out of your environment.
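The efficiency loss can be made concrete with a small sketch. It assumes each cluster holds back roughly one host's worth of headroom for failover and balancing; that reserve figure is an illustrative assumption, not something from the post.

```python
def usable_fraction(hosts_per_cluster, clusters, reserve_per_cluster=1):
    """Fraction of total host capacity left for workloads, assuming
    each cluster reserves `reserve_per_cluster` hosts of headroom."""
    total = hosts_per_cluster * clusters
    usable = total - reserve_per_cluster * clusters
    return usable / total

# 10 clusters of 3 hosts: 10 of 30 hosts sit in reserve (~67% usable).
small_clusters = usable_fraction(3, 10)
# 1 cluster of 30 hosts: only 1 host in reserve (~97% usable).
merged_cluster = usable_fraction(30, 1)
```

Under this assumption, merging the clusters frees roughly 9 hosts' worth of capacity without buying any hardware.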

A brave new world
Using the new Reflex vTrust technology, you can easily segment different applications without VLANs, so that they all exist on the same network, meeting the requirements for large-scale workload balancing while still providing application owners isolation between their applications. You could merge those existing 10 clusters into a single super cluster, achieving higher service levels and ultimately requiring less hardware to operate an efficient virtual environment.


written by Aaron Bawcom

Mar 06

Do you ever see an alarm in the VIC and, after a few minutes of frustration, finally figure out what caused it? The latest release of Reflex Virtualization Management Center, 1.9, adds new and improved functionality for managing alarms within a virtualized datacenter environment.

written by Aaron Bawcom