Dec 21

We are often asked by customers and prospects how Reflex is different from the competition. The normal response to the “competitor differentiation” question is for vendors to engage in a feature-function battle, oftentimes comparing apples to oranges just to make a point. This can be very confusing to customers because many of the vendors in this space have a similar marketing message and use many of the same “buzz” words and phrases to describe their features, even though the functionality is dramatically different. A better way for IT professionals to understand how they can benefit from a solution, above and beyond the feature war, is to take a step back, look at the vendor’s general philosophy, approach, and technology architecture, and understand how those will support the long-term strategy of managing the data center.

Therefore, I’m going to take a different approach and talk about the Reflex philosophy and how it led to the technical architecture of the product. One of the biggest differentiators for Reflex is something that isn’t always obvious: how we went about designing and architecting our software. Most of us at Reflex have been around large IT environments for our entire careers. We have used and watched the enterprise systems management space evolve over that time. There is a common pattern in which companies shift from developing components to acquiring them. That shift leads to the most common type of solution we see today: products built from loose integrations of various acquisitions. In some cases the integration is fairly robust, but it’s more common to see a mere user interface façade with the original silo applications living inside. We set out to build something different that leveraged all the new information and access provided by virtualization and its rich set of APIs.

It’s All About the Data

In 2008, Reflex started out with the goal of leveraging the virtual intelligence available in vCenter to create a new kind of firewall. Having lots of DNA from the IT security side of the house, we could see how adding that context to firewall configuration and operation could be very interesting. Obviously many others agreed with us, as we walked away from VMworld 2008 with not only a best-in-category award for security but an overall best-in-show award. The funny thing that happened was that during that show, we spoke with lots of people who wanted to see more of the configuration and topology data we were gathering for non-security use cases. Not surprisingly, people wanted the data, and depending on their role, they wanted to consume it in different ways. From that came one of our primary development tenets: we would build based on a Software Product Line methodology.

In short, we took the underpinnings of the original product and made them a set of core components upon which multiple “products” could be built. With this method we quickly came out with vWatch (topology discovery, monitoring, and change tracking), vTrust (virtual firewall), and vProfile (configuration management). Those three products are actually just distinct feature sets running above a single core platform with a single data model. In contrast, that kind of functionality from another vendor would likely require multiple servers with unique databases, management, and user interfaces for each function. With Reflex, the data and collection capabilities are always there; users just enable features via license key, with nothing new to install.

 

A Peek Under the Hood

Reflex builds a platform, and we then build our applications on it. That platform has a number of discrete components that have been specifically designed to handle the types of data presented by enterprise virtualization. Though we are not a big data company, we leverage many of the same concepts and technologies required to deal with the volume, velocity, and variation of data available from hypervisors, element managers, and the infrastructure that runs those environments. Within the platform there are two major components that manage data collection, indexing, and analysis: the Virtualization Management Center (VMC) for configuration and topology data, and the Performance Data Collector (PDC) for time-series data. The VMC handles the collection of the slower-moving data, like configuration and topology, while the PDC is responsible for the high-velocity data, which is predominantly streaming performance metrics and resource allocation: time-series data points at very high sample rates. A couple of technologies tie these units together and provide the ability to render analytics in real time and scale a Reflex implementation to the largest of infrastructures.

First, there is a domain-specific language, the Virtualization Query Language (VQL) (our CTO, Aaron Bawcom, has written about it multiple times [here] and [here]), that provides distributed query across the VMC and PDC nodes. A simple analog to VQL is the SQL language used for retrieving data from relational databases, but there are some major technological differences. VQL is an abstraction layer that allows Reflex to collect and store disparate types of data in multiple locations without requiring a normalization step. This allows data to be stored in a native, or as close to native as possible, format while still providing a normalized way to access it. Another difference is that VQL is distributed, which means that for a single query, some data (e.g. configuration/topology) may come from one set of Reflex nodes, while the other portion of the query (e.g. performance metrics) may come from another set of nodes. In fact, the language does not require that the data reside in a database at all. Each VQL object can either be associated with a persistent data store or be directly associated with an API. This flexibility allows data to be gathered dynamically from external systems in conjunction with collection from Reflex-managed stores.
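To make the federation idea concrete, here is a minimal, made-up sketch in plain Python (not VQL, and not Reflex’s actual interfaces): one logical lookup answered from two different stores, slower-moving configuration in one place and high-velocity samples in another, joined on the VM name.

    # Hypothetical illustration only: a single logical query served from two stores.
    config_store = {"foo": {"host": "esx-01", "vcpus": 2}}   # slow-moving config/topology
    metrics_store = {"foo": [41.0, 38.5, 45.2]}              # high-velocity CPU samples

    def query(vm_name):
        cfg = dict(config_store.get(vm_name, {}))
        samples = metrics_store.get(vm_name, [])
        cfg["avg_cpu_pct"] = sum(samples) / len(samples) if samples else None
        return cfg

    print(query("foo"))   # {'host': 'esx-01', 'vcpus': 2, 'avg_cpu_pct': 41.56...}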

In addition to VQL, Reflex uses an implementation of a technology concept called Complex Event Processing, or CEP. The Reflex incarnation has been built specifically to work with VQL and is thus called the Virtualization Event Processor, or VEP. VEP nodes can exist in multiple places within the product. Depending on the node location, VEP performs analysis on streaming configuration changes, creates rollups of performance metrics, or does heuristics and trending calculations on the streams of performance information flowing into Reflex. Not surprisingly, VQL is also used by VEP. Some queries are perfectly fine when used in a batch fashion. In those cases, like monthly reporting, queries can be issued against a persistent data store, the system can churn on the response, and an answer can be provided some time later. But there are other use cases where answers are required immediately. Questions like workload placement or security policy decisions must be answered in real time. For those questions, the VQL query is loaded into VEP, and from that point forward VEP is always calculating and will provide “the answer” when needed. VEP is also leveraged as part of data federation and can be used hierarchically. For example, there may be many geographic locations where analytics is taking place. At each of those locations the VEP nodes summarize and provide results specific to that location. Those VEP instances can also roll up summarized data to a higher level for aggregate calculations. The result is a drill-down structure that provides greater detail as the hierarchy is traversed.
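As a rough illustration of what a streaming rollup looks like, the sketch below (plain Python with invented names, not the actual VEP implementation) folds per-second samples into per-minute averages as they arrive, so the current answer is always available rather than computed after the fact.

    from collections import defaultdict

    # Toy streaming rollup: (timestamp, value) samples are folded into
    # per-minute averages as they arrive, so "the answer" is always current.
    buckets = defaultdict(lambda: [0, 0.0])   # minute -> [sample_count, running_sum]

    def ingest(timestamp_s, value):
        minute = timestamp_s // 60
        count, total = buckets[minute]
        buckets[minute] = [count + 1, total + value]
        return buckets[minute][1] / buckets[minute][0]   # current average for that minute

    for ts, cpu in [(0, 10.0), (15, 30.0), (61, 50.0)]:
        print(ingest(ts, cpu))   # 10.0, then 20.0 (same minute), then 50.0 (new minute)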

As the infographic on the right simplifies, various data sources are harvested by Reflex nodes, and that data is then indexed, stored, and analyzed. Today we have many applications of that data and analysis surfaced through our software modules. In addition, we make access to that data simple for users and third parties via extensible web APIs. By making the decision to build our product in this way, Reflex has the opportunity to quickly add new and interesting data sets to be harvested and thus provide net new functionality without building an entirely new software platform. Among the sources being planned are additional hypervisors, converged infrastructure platforms, hybrid cloud management, and even software-defined networking (SDN). We believe that SDN will provide some very interesting data to track and analyze and is going to become a key component of how virtualization and cloud are deployed in the near future.

Why You Should Care

We, and others like Bernd Harzog at The Virtualization Practice, strongly believe that in order to be successful at consuming and analyzing the amounts of data provided by virtualization today and moving forward, software products must be designed with big data in mind. Products that were developed even a few years ago without that vision will not be able to keep up with the flood of data. They may function after a fashion, but they will not be taking advantage of the opportunity the data provides. As these are not simple technologies, there is little chance those vendors will succeed in retrofitting older software to a model built on these principles.

Examining the messaging of the vendors in this space shows that they understand what their customers want: well-integrated solutions that can provide value in multiple management domains. Even VMware has shifted to bundling their many separate management products into “suites of functionality,” primarily integrated via web views into those separate products. When making buying decisions it’s even more important to look under the hood at how a product is architected. If the “integration” is merely a façade and the tools were never designed to work together, much less scale and address the data volumes created by virtualization, beware. To keep the message simple, Reflex talks about our modules being integrated. Having seen how the product is architected, you can now see that we didn’t really have to “integrate” anything. The Reflex VMC is a single system of technologies, designed from the ground up to solve the complex virtualization challenges of today and the new challenges presented by the move to private, public, and hybrid clouds.

Mike Wronski, VP of Product Management
Twitter: @Reflex_Mike

written by Mike Wronski

Aug 06

In Mike Vizard’s recent article, The Rise of the Programmable Data Center, he does a great job of outlining many of the challenges and methods being explored to automate, streamline, and simplify data centers. As we all know, the goal of IT is to provide applications that solve real-world problems. But applications need a lot of services in order to operate. In the early 1970s, applications needed the ability to query, store, and manipulate data in a structured way. During the 1970s, several researchers and vendors provided those capabilities using completely different access methods (think APIs). Then some real wizards at IBM came up with the idea “Hey, instead of having a specific programming API to use relational data, let’s create a language that makes dealing with relational data a lot easier,” and thus SEQUEL was born; well, the language was born, and after a legal complaint about the name it was simply renamed SQL. It was such a good idea that other great databases of the time adopted the language for their own products as well. Now, theoretically, an application developer could write their application using SQL to access relational data, and it should work with different databases as long as each database supported SQL (or a specific version, like SQL-92).

Language beats API

Because history always repeats itself, it turns out we are in a very similar position today with private/public cloud computing. The proliferation of the x86 hypervisor has made infrastructure such as CPU, memory, networking, disks, and entire operating system instances consumable services to an application, just like a database. So of course we have created as many REST, ATOM, SOAP (RAS), and other proprietary APIs, with thousands of methods, to work with these new services. The challenge for application developers is the same lock-in problem that’s been around for years: if I build my app to work with cloud API X, then making it work with API Y is my problem. Cloud broker technologies are trying to solve this problem, but these approaches introduce Yet-Another-Layer-Of-Complexity© between application and service, since they often involve run-time applications with state and still usually have their own RAS-based API.

So in an effort not to repeat the mistakes of the past, we at Reflex have been trying to create the most Insanely-Simple™ language to interface applications and services. For example, if an application needed to find out something like “what host is the VM named ‘foo’ running on?” you would use the query:

vm.name = 'foo' project host

Or let’s say you don’t know the name of the VM; you just know one of its IP addresses:

vnic.ip_addresses contains '192.168.3.4' project host

And let’s say you want to turn on all VMs that are in a particular application:

vm.app = 'App 1' | set vm.status = running

One of the first things you will notice is the key-value syntax that has become very popular for accessing data. But what is more important is that the vm and host object types, and the properties on those types, might as well be table and column names in a database. They are not specific to the language at all. Whoever implements the query processor decides what objects and properties to expose; the language just provides a uniform way to access the data. Thus far we have been providing the compilers for the language to anyone who has asked, under a standard open source license. More information about VQL can be found on the Reflex Systems website.
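To show what “the processor decides what objects and properties to expose” means in practice, here is a toy, in-memory sketch in Python. It is not Reflex’s implementation; it handles only the filter-then-project shape of the first two queries above, against a made-up inventory.

    import re

    # Toy query processor: the "language" only knows object.property filters and a
    # project step; what the objects and properties mean is up to this processor.
    INVENTORY = [
        {"name": "foo", "host": "esx-01", "ip_addresses": ["192.168.3.4"]},
        {"name": "bar", "host": "esx-02", "ip_addresses": ["192.168.3.9"]},
    ]

    PATTERN = re.compile(
        r"\w+\.(?P<prop>\w+)\s+(?P<op>=|contains)\s+'(?P<val>[^']+)'\s+project\s+(?P<out>\w+)"
    )

    def run(query):
        m = PATTERN.match(query)
        if not m:
            raise ValueError("query shape not supported by this toy")
        prop, op, val, out = m.group("prop", "op", "val", "out")
        # (the object name before the dot is ignored here for brevity)
        return [item[out] for item in INVENTORY
                if (item.get(prop) == val if op == "=" else val in item.get(prop, []))]

    print(run("vm.name = 'foo' project host"))                           # ['esx-01']
    print(run("vnic.ip_addresses contains '192.168.3.4' project host"))  # ['esx-01']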

Plumbing is Nice

The advantage of using a language like the one described above to interface applications with the datacenter is that it makes infrastructure services act and feel pretty much the same way data is accessed in a SQL (or NoSQL) database. This approach lets application developers specify what they want, while the processor that implements the language decides how those instructions are carried out. Dependency resolution, orchestration, parallelization, and element-level APIs (load balancer, operating systems, firewall, hypervisor) essentially become plumbing that the application developer no longer has to deal with. There is a lot of hype in the market right now around Software Defined Networking (SDN), but SDN is only a subset of the Declarative Datacenter (Software Defined Datacenter, or Programmable Data Center). The ability to easily change the complex operation of a datacenter, including not only networking but also storage, computation, and memory resources, based on the changing needs of applications is what the Declarative Datacenter is all about. Yes, it is possible to code some scripts that take actions on the data center, but what those scripts probably don’t have is an omniscient real-time model of the entire datacenter. A Declarative Datacenter Controller is essentially the brain of a data center, taking inputs in real time from everything going on and instantly calling the aforementioned actions when sufficiently complex conditions are met. That is a big part of the value of the Declarative Datacenter.
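For a sense of what a controller’s core loop boils down to, here is a deliberately tiny Python sketch. Every name in it is invented, and a real controller would be event-driven against a live model of the datacenter rather than evaluating a hard-coded dictionary on demand.

    # Invented example: a declarative rule (condition -> action) evaluated
    # against a model of the datacenter, instead of a one-off script.
    model = {"app1_latency_ms": 250, "app1_web_vms": 2}

    def scale_out(app_name):
        print("provisioning one more web VM for", app_name)
        model["app1_web_vms"] += 1

    rules = [
        {   # "if app1 latency is over 200 ms and we have headroom, add a web VM"
            "condition": lambda m: m["app1_latency_ms"] > 200 and m["app1_web_vms"] < 5,
            "action": lambda: scale_out("app1"),
        },
    ]

    def evaluate(model):
        for rule in rules:
            if rule["condition"](model):
                rule["action"]()

    evaluate(model)   # in a real controller this fires as telemetry changes, not on demand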

Aaron Bawcom is the Chief Technology Officer for Reflex Systems, a provider of end-to-end virtualization management solutions based out of Atlanta, GA. Contact him at abawcom@reflexsystems.com.

written by Aaron Bawcom

Jun 28

A few months ago at a user group event in Italy, VMware’s CTO, Steve Herrod, made some very interesting (and revealing) statements about the lack of integration in its VMware Cloud Infrastructure Suite. Mr. Herrod’s comments have stirred up debate about the best approach for virtualization management, and the challenges of doing it right.

Never one to shy away from a lively debate, I can’t help adding my perspective to the conversation…much of it focused on the role of integration in virtualization management solutions.

At Reflex, our “integration philosophy” differs from that of a lot of other solution providers, both the proverbial “big guys” and some of the niche players. Some companies claim their solutions integrate data from different VM management functions, when in reality the only integration is a single dashboard view of disparate information. Other companies like to minimize the importance of integrated data, based on the idea that you are already overwhelmed with too much information about your environment.

“Dashboard-level” integration is usually the result of acquisition-led development. If your virtualization management solution evolved through a series of tech acquisitions, integration was probably never top of mind when the original software packages were created. What you end up with is a set of disparate tools loosely cobbled together as a single management solution, incapable of providing the integrated data required for effective virtualization management.

More recently, we’ve been hearing about the “less is more” approach when it comes to virtualization management data. Some companies argue that you only need a few metrics to effectively manage your environment, rendering all the back-end integration of various data superfluous. We think this is a major cop-out coming from companies that have not invested enough time and energy to deliver what’s really required for virtualization management—accurate, faithful data about the environment that produces actionable intelligence to guide your decisions.

I would suggest a third approach, one in which integration delivers intelligence about the virtual environment.

Virtualization is breaking down traditional siloes of IT management, so effective management solutions must enable process and workflow across different disciplines to glean real intelligence from virtual data. But bringing different sources of data and capabilities together in a functional manner can be hard, especially in the virtual environment. That’s where data-level integration comes in. It’s the foundation for any management solution that proposes to give you actionable intelligence for managing your virtual infrastructure.

Based on how virtualization management solutions have evolved, real integration is often overlooked—either because tools cobbled together via acquisition can’t really do it, or because new solutions are glossing over it in favor of aggregated, overly abstracted viewpoints. A lot of newer, purpose-built virtualization management technology simply can’t scale to accommodate the breadth of information and metrics produced by enterprise virtualization today.  These vendors are forced to preach the “too many metrics, too much data” story to hide the fact that their solutions can’t comprehend the volumes or velocity of the data. Neither approach provides the real intelligence you need to manage your virtual environment effectively.

Integration is easy when you limit the scope. But at Reflex, we consider ourselves gluttons for punishment, and take a strange delight in solving the really ugly, complex and convoluted problems at the core of virtualization management. Maybe that’s why we’re so hung up on integration…because we’ve figured out how to do it properly, and we’ve seen how much it benefits the management function.

To manage a complex virtualization infrastructure, your solution should simplify information extraction and remediation and provide context around real-time and past events. It should rely on one data structure and one system for all analysis. This will give you context and intelligence across management disciplines, enabling you to exert greater control over virtual infrastructure and maintain greater data fidelity over time.

The ability to collect, store, integrate, analyze, and represent data about the virtual environment cannot be overemphasized. It’s critical to many of the virtualization management functions you need, including:

  • Tightly integrating all components and virtualized objects in the data center.
  • Providing a large content- and context-aware data abstraction layer on top of a comprehensive, centralized data store.
  • Enabling quick access and search for current and past virtual infrastructure data.
  • Collecting more data in shorter periods of time to eliminate the over-averaging and normalization that can mask real issues.
  • Providing customized metrics and more historical data for accurate trending, analysis, and projections.

When virtualization management solutions are integrated at the data level, all management functionality and reporting draw from a single source of truth. That means less manual integration work for you, fewer alerts and less data to sort through, and more context, awareness, and actionable intelligence about your environment. No matter which philosophy you subscribe to, it’s hard to argue with that.

Aaron Bawcom is the Chief Technology Officer for Reflex Systems, a provider of end-to-end virtualization management solutions based out of Atlanta, GA. Contact him at abawcom@reflexsystems.com.

written by Aaron Bawcom

Apr 28

 

Less is not always more. We can agree that virtualization and server admins need less data to process and analyze. In a recent post, I mentioned some brilliant advances that are serving that end, including data compression technology, data crunching speeds and algorithm sophistication. We also agree that a virtualization management solution can and should minimize the need for root cause analysis. But the steps between a) the information about your virtual infrastructure and b) the data you must process and deal with to effectively manage the environment are important and varied.

Processing and storing the vast amounts of data about both the physical and virtual network and all areas of interoperability is hard—really hard. That’s why some vendors want to convince you it’s unnecessary. Their thought process goes something like this: You don’t really need all that data. It’s just for troubleshooting. Wouldn’t you rather have a solution that just solves your problems outright? And I have to admit, it sounds enticing, dare I say, too good to be true.

In truth, a virtualization management solution that strips out useful data required for effective problem-solving and performance management is doing you a disservice. That doesn’t mean the right solution leaves you overwhelmed under a mountain of data. Rather, it’s about what data you’re presented with, in the context of your unique challenges.

When a virtualization management tool skimps on data and analysis, you could end up wasting time, effort and money—in direct contrast to the promised benefits. Many of these tools base results on the most recent hour of data only—and probably can’t correlate that data with historical maximum resource usage. In consequence, the results and recommendations don’t tell the whole story about your virtual environment. You could end up moving resource workloads into some unnecessary places. And if the tool constantly makes recommendations from hour to hour, you’ll end up thrashing your virtual environment around for no real performance benefit. The most effective virtualization performance management solutions are aware of historical resource usage, and may go even further to place that usage into context. Following this example, the ideal solution remembers that a particular VM spikes every Tuesday at 2 p.m. without negative consequences, and shapes its recommendations accordingly. The knowledge is based on more comprehensive data collection, aggregation and analysis, but it means less work and wasted effort for you.
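A small Python sketch of the difference (invented numbers and logic, not Reflex’s actual analysis): a recent-hour-only check flags the spike and recommends a move, while a history-aware check notices that the same spike recurs every week at that hour and holds off.

    HOURS_PER_WEEK = 168
    SPIKE = 38   # hypothetical hour-of-week slot standing in for "Tuesday 2 p.m."

    # Four weeks of hourly CPU averages: idles at 20%, spikes to 95% every week at SPIKE.
    history = [95.0 if h % HOURS_PER_WEEK == SPIKE else 20.0 for h in range(4 * HOURS_PER_WEEK)]

    def naive_recommendation(latest_hour_pct):
        # recent-hour-only logic: anything over 90% looks like a problem
        return "move workload" if latest_hour_pct > 90 else "leave alone"

    def history_aware_recommendation(latest_hour_pct, hour_of_week, history):
        same_slot = history[hour_of_week::HOURS_PER_WEEK]            # this hour in past weeks
        recurring = sum(1 for v in same_slot if v > 90) >= len(same_slot) // 2
        if latest_hour_pct > 90 and recurring:
            return "leave alone (known weekly spike)"
        return naive_recommendation(latest_hour_pct)

    print(naive_recommendation(95.0))                          # "move workload"
    print(history_aware_recommendation(95.0, SPIKE, history))  # "leave alone (known weekly spike)"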

The problem isn’t too much data. The real problem is wrapping context around the data. We have devoted a lot of intellectual property to solving this challenge in order to put performance, capacity, right sizing, workload, configuration management, monitoring and security data into the proper perspective for our customers. It’s context, not simply less data, that enables more efficient and effective virtualization management.

Aaron Bawcom is the Chief Technology Officer for Reflex Systems, a provider of end-to-end virtualization management solutions based out of Atlanta, GA. Contact him at abawcom@reflexsystems.com.

written by Aaron Bawcom

Apr 10

Recently, Gartner Group announced what they consider the top five server virtualization trends for 2012. In the brief analysis we are sharing below, they emphasize that while server virtualization is maturing, it is still pretty dynamic and ever-changing, so much so that it is actively impacting their own decisions and guidance to their clients. As price and selection have become more varied, it is important for Reflex to be diligent in providing the information and education our current and future customers need to make the best possible decisions on virtualization solutions for their environment. We thought it was an informative and encouraging piece to share with our own followers, along with our thoughts on the 2012 trends that Gartner has identified:

1)     Competitive Choices Mature: VMware’s competition has greatly improved in the past few years, and price is becoming a big differentiator. Enterprises that have not yet started to virtualize (and they exist, but tend to be small) have real choices today.

We believe this growth in competition is great for customers and vendors. Not only are prices coming down, but customers now have more choice than ever, and are not beholden to VMware’s architecture or pricing model. We see customers making much more informed decisions, selecting solutions that deliver the breadth of functionality they need not just today, but for their future plans to grow and scale their infrastructure and develop private and hybrid cloud solutions at a reasonable price.

2)     Second Sourcing Grows: Existing VMware users may not be migrating away from VMware, but they’re concerned with costs and potential lock-in. A growing number of enterprises are pursuing a strategy of “second sourcing” – deploying a different virtualization technology in a separate part of the organization. Heterogeneous virtualization management is mostly aspirational, although there is interest.

We agree that many users are currently looking for additional solutions for virtualization technology and the lack of management is a current obstacle for these end users.  Specifically, management solutions from the individual hypervisor vendors can be problematic for the growth of second sourcing.  Cross-hypervisor management will become essential within the next 12-18 months as we see more diversification of basic virtualization technology. The integrated management platform strategy vs. a multi-point solution strategy becomes key when expanding consistent management capabilities across multiple hypervisors like VMware, Red Hat KVM, Microsoft Hyper-V, etc. to enable holistic management of the virtual and cloud infrastructure. We believe that the management layer will be the great equalizer as this market progresses.  Red Hat and Microsoft are beginning to embrace the ecosystem in a way that makes holistic management a real possibility.  We expect to be able to provide significant parity for VMware, Red Hat, and Microsoft as we enter 2013.  These advancements in the management market will help accelerate the adoption of other hypervisors and bring true flexibility to the market place because the management technology will provide the features that may be lacking in some of the hypervisors.

3)     Pricing Models in Flux: From expensive hypervisors to free hypervisors to core-based pricing and now memory-based entitlements – virtualization pricing has always been in flux, and trends toward private and hybrid cloud will ensure that virtualization pricing will continue to morph and challenge existing enterprise IT funding models.

We believe this is also a positive for customers, who now have alternatives to VMware’s pricing. Customers do not like to be negatively impacted by a pricing model that penalizes them for gaining the efficiency and benefits of virtualization. Customers will be very mindful of pricing scenarios and choice of vendor as more continue to build out private and hybrid clouds. These pricing practices serve to artificially stunt the growth of virtualization as customers pause to understand the financial impact. As the management technologies become a larger portion of the budget allocation for virtualization, it is very important to make sure that the pricing model allows customers to achieve scale in their infrastructure as well as benefit from its efficiency and agility. Consumption-based pricing models are difficult for most enterprises to plan and execute today. This may change over time, but the key is to find a way to allow customers to grow their usage of the solution as it grows in value to their enterprise, without financial penalty.

4)     Penetration and Saturation: Virtualization hitting 50% penetration. Competition and new, small customers driving down prices. The market is growing, but not like it used to, and vendor behavior will change significantly because of it. And don’t forget the impact on server vendors – the next few years will prove to be a challenge until virtualization slows down.

We actually see the market for virtualization management growing just as fast, if not faster, than it has in the past. While straight server virtualization purchases may be slowing, customers are wising up to the fact that they need to manage these environments as well as, if not better than, they have managed their physical environments in the past if they want to really get the benefits virtualization promises. We believe the management market has tremendous growth ahead and will provide most of the value-added features that deliver on the promise of agile and elastic datacenters.

5)     Cloud Service Providers Are Placing Bets: IaaS vendors can’t ignore the virtualization that is taking place in enterprises. Creating an on-ramp to their offerings is critical, which means placing bets – should they create their own standards (perhaps limiting their appeal), buy into the virtualization software used by enterprises (perhaps commoditizing themselves), or build/buy software that improves interoperability (which may or may not work well)? Not an easy choice, and the winners and losers are still being determined.

Many of our customers, who are cloud service providers themselves, realize that they must provide solutions that 1) customers are familiar with, and 2) can be integrated into a broader data center vision that includes both private and public cloud, leveraged for different needs of the business. Developing a platform that enables these two things is key to their success.  Service Providers have struggled to get enterprises to buy into the promise of a more efficient cost model using the public cloud.  This is primarily because they do not want to let go of the true business critical applications.  The private cloud is growing in popularity, and that is being driven by the technologies being delivered by a new generation of software companies that spend every day trying to solve these problems.  The service providers are going to have to embrace these technologies/vendors and work with them in a meaningful way in order to get true enterprise buy in for use of cloud services.

As virtualization continues to mature and shape how IT functions, organizations should become educated on the virtualization options available and look for a strong management solution that offers flexibility, scalability and comprehensive capabilities that evolve with the dynamic nature of the virtualized data center.

Preston Futrell is President & CEO of Reflex Systems.

written by Preston Futrell

Apr 02

Now, a few words on looking for things. When you go looking for something specific, your chances of finding it are very bad. Because of all the things in the world, you’re only looking for one of them. When you go looking for anything at all, your chances of finding it are very good. Because of all the things in the world, you’re sure to find some of them. – Daryl Zero

This sage advice from the greatest private detective in the world isn’t just applicable to figuring out who is blackmailing you; it’s also useful for general problem solving. We’ve had some great discussions lately with some people who have some really tough data problems. Now mind you, these problems are all over the map: some are completely different and others overlap. One challenge in problem solving is imposing arbitrary restrictions on how a problem is solved. People sometimes latch onto a particular way of thinking, and as my Uncle Olaf used to say, “When all you have is a hammer, everything looks like a nail…”. Take the following data for example. The first image below, with the single green line, looks pretty simple. It seems to tell you all you need to know without a lot of complexity. The problem is that the next image, with the blue line, shows the actual data from which the first image was created. After you look at both, it is clear that the view of the single straight line could be misleading, as the trend is actually going down at the end of the detailed graph.

Simple

Detail

What is far more useful, offers simplicity, and provides detail for validation is the following image that includes both types of data in relation to each other.
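The point is easy to reproduce with a few lines of Python on made-up numbers: a least-squares fit over the whole series slopes upward (the single “simple” line), while a fit over just the last few samples slopes downward (what the detail reveals).

    def slope(ys):
        # least-squares slope of ys against index 0..n-1
        n = len(ys)
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        den = sum((x - mean_x) ** 2 for x in xs)
        return num / den

    # Made-up series that rises overall but falls over the last few samples
    data = [10, 12, 15, 14, 18, 22, 25, 24, 21, 19]

    print(slope(data))       # positive: the "simple" single-line view says "going up"
    print(slope(data[-4:]))  # negative: the recent detail says "going down"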


Aaron Bawcom is the Chief Technology Officer for Reflex Systems, a provider of end-to-end virtualization management solutions based out of Atlanta, GA. Contact him at abawcom@reflexsystems.com.

written by Aaron Bawcom

Mar 06

Lots of buzz around big data and cloud these days. How the technologies of big data, virtualization, and cloud intersect is also being trumpeted by many of the big IT vendors.

Reflex has always seen virtualization as an opportunity to do things differently.  The scale of large enterprise virtualization implementations and the trajectory of virtualization, not to mention cloud, creates an interesting big data question.

“Can the tools of yesterday’s data center be adapted to operate efficiently in this new environment?”

We think the answer is no. At best, they will “function,” but what they won’t be able to do is take advantage of the wealth of information available to extend the value of virtualization and, eventually, cloud.

I recently wrote an article for Enterprise Systems Journal that discusses the general concept.  If you’re familiar with Reflex it will be obvious that we are serious about the intersection and have developed some really cool technologies to leverage the big data of virtualization in our VMC product. Much of what I discuss in the article is materializing in our products.

The two primary technologies are the evolution of the VQL language and our more recent introduction of Complex Event Processing technology with a VQL/Virtualization specific implementation.  It is through this focus that we have proven our common platform for virtualization management at enterprise scale.

written by Mike Wronski

Jul 21

The first Virtualization Query Language was officially born around February 2008 and was first released in the summer of 2009. VQL provides data awareness of the IT environment by easily surfacing information from different data sources, such as the VI Java library produced by Steve Jin. Since VQL was first developed it has gained lots of new objects in the library, but the grammar has not changed in any major way…until now. Today we are announcing some pretty major changes to VQL that we are very excited about. The new capabilities include functions, new objects, and real-time query processing.

Functions!

As an object pipeline, VQL was great as a classifier, but we realized that objects can be somewhat difficult to work with when you want to perform analytics on data, so we introduced a generic object type that makes the production of analytical data possible (think mapping objects to spreadsheets). Some of the new built-in VQL functions are the usual suspects, like the aggregates (sum, count, avg, min, max), but others are also nice, like top(), math(), density(), and select(), which allows the VQL query engine to produce partial objects and makes transferring VQL objects over a network rather zippy.
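The “partial objects” benefit is easy to picture with a generic sketch (plain Python and a made-up object, not Reflex’s wire format): projecting only the fields a query needs shrinks what travels over the network.

    import json

    full_vm = {
        "name": "foo", "host": "esx-01", "cpu_mhz": 2400, "mem_mb": 4096,
        "events": ["change record"] * 200,   # imagine a lot of attached history
    }

    def project(obj, fields):
        # analogous in spirit to select(): return only the requested properties
        return {k: obj[k] for k in fields}

    print(len(json.dumps(full_vm)))                              # size of the full object
    print(len(json.dumps(project(full_vm, ["name", "host"]))))   # the partial object is tiny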

New Objects

Some of the new objects joining the team include performance metrics, datastore mounts, vCPUs, and VQL queries themselves, which are now VQL objects. Why make VQL queries objects? One of the new functions introduced is called QueryOutput(), which takes the name of a VQL query as a parameter. This capability allows VQL queries to be chained together in a stream by reference, so changing the output of one query does not affect the definition of another query. The payoff here is performance: the ability to chain trees of queries together at the application level eliminates complexity and adds a lot of performance gains.

 

 

Real-Time Query Processing

One of the most disruptive new capabilities in VQL 2.0 is the addition of soft real-time processing of VQL queries. Processing VQL data in real time means you can instantaneously see extremely sophisticated information about your virtual environment. One of the most straightforward applications of this technology is reporting. Everyone is used to running a report and waiting some time before viewing it. Usually, the more useful the report, the longer it takes to run. And if any of the data that makes up that report changes, the entire report has to be computed again. What the Reflex real-time processing engine offers is the ability to recompute only the portion of the report that may have changed based on the new data. This type of data computation provides a double whammy of utility: the segmented/streamed processing can produce sophisticated data instantly, and the computation of that data actually takes fewer overall CPU cycles than computing it using standard database techniques.
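The difference between “recompute the whole report” and “recompute only what changed” is the same one you see in any incremental aggregate; here is a minimal Python sketch of the idea (not the Reflex engine itself):

    samples = [12.0, 35.5, 18.2, 44.1]

    # Batch style: every refresh re-reads every sample that ever existed.
    def batch_average(all_samples):
        return sum(all_samples) / len(all_samples)

    # Streaming style: keep running state, so each new sample is O(1) to absorb.
    class RunningAverage:
        def __init__(self):
            self.count, self.total = 0, 0.0
        def update(self, value):
            self.count += 1
            self.total += value
            return self.total / self.count   # the "report" is always current

    avg = RunningAverage()
    for s in samples:
        current = avg.update(s)
    print(current, batch_average(samples))   # same answer, very different amounts of work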

This type of technology can be applied not only to reports but to any type of complex data computation. Some examples of how real-time data computation can be used:

  • Instantly understanding complex forecasting of resource usage and supply
  • Instantly reacting to new environmental data and instituting a modified security policy based on the new information
  • Alerting when the performance of an application might be suffering due to resource constraints or new load placed on the application
  • Instantly adapting the load balancing of resource demand across resource surplus that may exist for a forecasted amount of time

Real-Time Intelligence

The previous examples provide some insight into the possibilities of real-time data processing, but another great example is using real-time processing as a component itself to produce new intelligence that higher-level decisions can then be based on. Today, if you wanted to find out which virtual machines in your environment are oversized or undersized, you would buy a product that produces that type of report, or you could write some scripting logic to produce that information yourself. The time needed to compute that type of information can be anywhere from a few seconds to a few minutes, depending on the size of your environment. Now imagine being able to constantly compute that data within milliseconds, no matter how large your environment is. You could then produce new metrics that record that data, so you can know at any point in the past how undersized or oversized a VM was, and even graph those trends over time. That said, you could envision the software you would need to produce that type of intelligence. The real innovation of VQL 2.0 is that it makes that type of incredibly complex processing possible with a single VQL query specification.

Another way to understand the new real-time processing capabilities of VQL is to think of constantly computing the output of a PowerShell script, so that whenever any of the data the script queries for changes, the script’s output instantly changes.

You might be asking how this technology is different from other software that exists out in the world. A lot of existing real-time processing engines can only process data in a very specific form and can only produce data for a very specific output. Since VQL is a graph-based language, the new Reflex real-time processing engine is one of the first graph-based complex event processing systems that can analyze any data, which means that if you have key/value-based data with relationships, it can probably be packaged into a VQL object, historically persisted, and analyzed in real time.

For more information please visit the VQL section of the Reflex Website.

Aaron Bawcom is the Chief Technology Officer for Reflex Systems, a provider of end-to-end virtualization management solutions based out of Atlanta, GA. Contact him at abawcom@reflexsystems.com.

written by Aaron Bawcom

Apr 19

 

On April 12th, McAfee and Reflex announced a new product integration. Since then I have received many requests for clarification on what the relationship means and how it differs from other offerings and previous Reflex partnerships.

Let me first start with a little about Reflex’s philosophy on integration and partnerships. Our goal is to be the go-to company for integrated virtualization management and security. But that does not mean we believe all the technology that goes into the solution will be homegrown. In some cases there are subject matter experts who are far more knowledgeable than Reflex and thus better equipped to solve specific problems. Intrusion detection and prevention (IDP) is one of these areas. IDP is more than just building software to inspect network packets; it also needs to be backed by a team of security researchers who provide the content, or signatures, that make the scanning software effective. It is for this reason that Reflex prefers to integrate with the top vendors in the IDP space, hence the McAfee relationship.

Mike Wronski, VP of Product Management
Twitter: @Reflex_Mike

written by Mike Wronski

Apr 10

If your virtualization environment has snapshots growing like weeds in the yard, you are not alone. The more snapshots that exist, and the longer they exist, the more they degrade the performance of the virtual machine they belong to. To further illustrate the flexibility of the automation engine in the Reflex VMC, we will walk through a real-world example of addressing this problem. This example dives deeper into the concept of an Action. An Action is, at minimum, a script and can also include a VQL query. An Action can run its script either on a periodic basis or whenever the output of a VQL query changes. We will illustrate an Action that uses the output of a VQL query in a subsequent post. For now, we will take a look at Actions that run on a periodic basis. First, let’s describe what we want to accomplish with this policy (a rough sketch of the first rule follows the list):

  1. Send an e-mail to the owner of a VM and the IT Admin when they have reached X snapshots and the image is not marked as an exception
  2. Send an e-mail to the owner, IT Admin and the Group Admin when they have reached X snapshots, the image is not marked as an exception and the X snapshot condition has lasted more than Y days
  3. To make sure virtual machines have the proper data on them, query for all machines that have no owner set and tell the Virtualization Architect the name of the machine and what functional group the VM is in, if any
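As a generic illustration of item 1 only, a periodic check might look roughly like the Python below. The helper names and data are invented for the example; in the actual Action the inventory would come from the Reflex VMC (for instance via a VQL query) rather than a hard-coded list, and the notification would go through a real mail system.

    SNAPSHOT_LIMIT = 3   # the "X" from the policy above

    # Hypothetical inventory; a real Action would pull this from the VMC.
    vms = [
        {"name": "web01", "owner": "alice@example.com", "snapshots": 5, "exception": False},
        {"name": "db01",  "owner": "bob@example.com",   "snapshots": 7, "exception": True},
    ]

    def send_email(recipients, subject):
        print("EMAIL to", recipients, "-", subject)   # stand-in for a real mail call

    def check_snapshots(vms, it_admin="it-admin@example.com"):
        for vm in vms:
            if vm["exception"]:
                continue                               # explicitly excluded from the policy
            if vm["snapshots"] >= SNAPSHOT_LIMIT:
                send_email([vm["owner"], it_admin],
                           f"{vm['name']} has {vm['snapshots']} snapshots (limit {SNAPSHOT_LIMIT})")

    check_snapshots(vms)   # run on whatever periodic schedule the Action defines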


Aaron Bawcom is the Chief Technology Officer for Reflex Systems, a provider of end-to-end virtualization management solutions based out of Atlanta, GA. Contact him at abawcom@reflexsystems.com.

written by Aaron Bawcom