Dec 21

We are often asked by customers and prospects how Reflex is different from the competition. The typical vendor response to the “competitor differentiation” question is to engage in a feature-function battle, oftentimes comparing apples to oranges just to make a point. This can be very confusing to customers, as many of the vendors in this space have a similar marketing message and use many of the same “buzz” words and phrases when describing their features, even though the functionality is dramatically different. A better way for IT professionals to truly understand how they can benefit from a solution, above and beyond the feature war, is to take a step back, look at the vendor’s general philosophy, approach, and technology architecture, and understand how those will support the long-term strategy of managing the data center.

Therefore, I’m going to take a different approach and talk about the Reflex philosophy and how it led to the technical architecture of the product. One of the biggest differentiators for Reflex is something that isn’t always obvious: how we went about designing and architecting our software. Most of us at Reflex have been around large IT environments for our entire careers, and we have used and watched the enterprise systems management space evolve over that time. There is a common pattern where companies shift from developing components to acquiring them. That shift leads to the most common type of solution we see today: loose integrations of various acquisitions. In some cases the integration is fairly robust, but more often it is a mere user interface façade with the original silo applications living inside. We set out to build something different that leveraged all the new information and access provided by virtualization and its rich set of APIs.

It’s All About the Data

In 2008, Reflex started out with the goal of leveraging the virtual intelligence available in vCenter to create a new kind of firewall. Having lots of DNA from the IT security side of the house, we could see how adding that context to firewall configuration and operation could be very interesting. Obviously many others agreed with us, as we walked away from VMworld 2008 with not only a best-in-category award for security, but an overall best in show award. The funny thing that happened during that show was that we spoke with lots of people who wanted to see more of the configuration and topology data we were gathering, for non-security use cases. Not surprisingly, people wanted the data, and depending on their role, they wanted to consume it in different ways. From that came one of our primary development tenets: we would build based on a Software Product Line methodology.

In short, we took the underpinnings of the original product and made them a set of core components upon which to build multiple “products.” With this method we quickly came out with vWatch (topology discovery, monitoring, and change tracking), vTrust (virtual firewall), and vProfile (configuration management). Those three products are actually just distinct feature sets running above a single core platform with a single data model. In contrast, that kind of functionality from another vendor would likely require multiple servers with unique databases, management, and user interfaces for each function. With Reflex, the data and collection capabilities are always there; users just enable features via license key, with nothing new to install.

 

A Peek Under the Hood

Reflex builds a platform, on top of which we build our applications. That platform has a number of discrete components that have been specifically designed to handle the types of data presented by enterprise virtualization. Though we are not a big data company, we leverage many of the same concepts and technologies required to deal with the volume, velocity, and variation of data that is available from hypervisors, element managers, and the infrastructure that runs those environments. Within the platform there are two major components that manage data collection, indexing, and analysis: the Virtualization Management Center (VMC) for configuration and topology data, and the Performance Data Collector (PDC) for time-series data. The VMC handles the collection of slower-moving data, like configuration and topology, while the PDC is responsible for high-velocity data, predominantly streaming performance metrics and resource allocation: time-series data points at very high sample rates. A couple of technologies tie these units together and provide the ability to render analytics in real time and scale a Reflex implementation to the largest of infrastructures.

First, there is a domain-specific language, Virtualization Query Language (VQL) (our CTO, Aaron Bawcom, has written about it multiple times [here] and [here]), that provides distributed query across the VMC and PDC nodes. A simple analog to VQL is the SQL language used for retrieving data from relational databases, but there are some major technological differences. VQL is an abstraction layer that allows Reflex to collect and store disparate types of data in multiple locations without requiring a normalization step. This allows data to be stored in as close to its native format as possible while still providing a normalized way to access it. Another difference is that VQL is distributed, which means that for a single query, some data (e.g. configuration/topology) may come from one set of Reflex nodes, while the other portion of the query (e.g. performance metrics) may come from another set of nodes. In fact, the language does not require that the data reside in a database at all. Each VQL object can either be associated with a persistent data store or be directly associated with an API. This flexibility allows for the dynamic gathering of data from external systems in conjunction with collection from Reflex-managed stores.
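
To make the distributed-query idea concrete, here is what such a query might look like, written in the VQL style Aaron shows in the “Language beats API” post below (the property names here are illustrative assumptions, not the product’s actual schema). The topology half of the question resolves against VMC nodes, while the metric half resolves against PDC nodes:

vm.name = 'foo' project host, host.cpu_usage

The caller issues one statement and never needs to know that two different stores, on two different sets of nodes, answered the two halves of the question.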

In addition to VQL, Reflex uses an implementation of a technology concept called Complex Event Processing, or CEP. The Reflex incarnation has been built specifically to work with VQL and is thus called the Virtualization Event Processor, or VEP. VEP nodes can exist in multiple places within the product. Depending on the node location, VEP may be performing analysis on streaming configuration changes, creating rollups of performance metrics, or doing heuristic and trending calculations on the streams of performance information flowing into Reflex. Not surprisingly, the VQL language is also used by VEP. Some queries are perfectly fine when used in a batch methodology. In those cases, like monthly reporting, a query can be issued against a persistent data store, churn on the response, and provide an answer at some later time. But there are other use cases where answers are required immediately. Questions like workload placement or security policy decisions must be answered in real time. For those questions, the VQL query is loaded into VEP, and from that point forward VEP is always calculating and will provide “the answer” when needed. VEP is also leveraged as part of the federation of data and can be used hierarchically. For example, there may be many geographic locations where analytics is taking place. For each of those locations, the VEP nodes summarize and provide results specific to that location. Those VEP instances can also roll summarized data up to a higher level for aggregate calculations. The result is a drill-down structure that provides greater detail as the nodes are traversed.
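
The batch-versus-streaming distinction is easiest to see with a query in hand. A standing rule like the hypothetical one below (the property names and the “and” operator are assumptions layered on the VQL examples shown later on this page) could be issued once against the persistent store for a monthly report, or loaded into VEP, where it is re-evaluated continuously as metric events stream in:

vm.app = 'App 1' and vm.cpu_usage > 90 | set vm.alert = raised

The query text is the same either way; what changes is whether it runs once against data at rest or stands resident against data in motion.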

As the infographic on the right illustrates in simplified form, various data sources are harvested by Reflex nodes, and that data is then indexed, stored, and analyzed. Today we surface many applications of that data and analysis through our software modules. In addition, we make access to that data simple for users and third parties via extensible web APIs. By deciding to build our product this way, Reflex can quickly add new and interesting data sets to be harvested, and thus provide net-new functionality without building an entirely new software platform. Among the sources being planned are additional hypervisors, converged infrastructure platforms, hybrid cloud management, and even software-defined networking (SDN). We believe that SDN will provide some very interesting data to track and analyze and will become a key component of how virtualization and cloud are deployed in the near future.

Why You Should Care

We, and others like Bernd Harzog at The Virtualization Practice, strongly believe that to successfully consume and analyze the amounts of data provided by virtualization, today and going forward, software products must be designed with big data in mind. Products that were developed even a few years ago without that vision will not be able to keep up with the flood of data. They may function after a fashion, but they will not be taking advantage of the opportunity the data provides. As these are not simple technologies, there is little chance those vendors will succeed in retrofitting older software to a model built on these principles.

Examining the messaging of the vendors in this space shows that they understand what their customers want: well-integrated solutions that can provide value in multiple management domains. Even VMware has shifted to bundling its many separate management products into “suites of functionality,” primarily integrated via web views into those separate products. When making buying decisions it is even more important to look under the hood at how a product is architected. If the “integration” is merely a façade and the tools were never designed to work together, much less scale to address the data volumes created by virtualization, beware. To keep the message simple, Reflex talks about our modules being integrated. Having seen how the product is architected, you can now see that we didn’t really have to “integrate” anything. The Reflex VMC is a single system of technologies, designed from the ground up to solve the complex virtualization challenges of today and the new challenges presented by the move to private, public, and hybrid clouds.

Mike Wronski, VP of Product Management
Twitter: @Reflex_Mike

written by Mike Wronski

Aug 06

In Mike Vizard’s recent article, The Rise of the Programmable Data Center, he does a great job of outlining many of the challenges and methods being explored to automate, streamline, and simplify data centers. As we all know, the goal of IT is to provide applications that solve real-world problems. But applications need a lot of services in order to operate. In the early 1970s, applications needed the ability to query, store, and manipulate data in a structured way, and several researchers and vendors provided those capabilities using completely different access methods (think APIs). Then some real wizards at IBM came up with the idea: “Hey, instead of having a specific programming API to use relational data, let’s create a language that makes dealing with relational data a lot easier,” and thus SEQUEL was born. Well, the language was born; someone made a legal complaint about the name, so they changed it to SQL. It was such a good idea that the other great databases of the time adopted the language as well. Now, theoretically, an application developer could write their application using SQL to access relational data, and it would work with different databases as long as each database supported SQL (or a specific version like SQL-92).

Language beats API

Because history always repeats itself, it turns out we are in a very similar position today with private/public cloud computing. The proliferation of the x86 hypervisor has made infrastructure such as CPU, memory, networking, disks, and entire operating system instances consumable services to an application, just like a database. So of course we have created just as many REST, ATOM, SOAP (RAS), and other proprietary APIs, with thousands of methods, to work with these new services. The challenge for application developers is the same lock-in problem that’s been around for years: if I build my app to work with cloud API X and then want to make it work with API Y, that is my problem. Cloud broker technologies are trying to solve this, but these approaches introduce Yet-Another-Layer-Of-Complexity© between application and service, since they often involve run-time applications with state and still usually have their own RAS-based APIs.

So, in an effort not to repeat the mistakes of the past, we at Reflex have been trying to create the most Insanely-Simple™ language for interfacing applications and services. For example, if an application needed to find out “what host is the VM named ‘foo’ running on?”, you would use the query

vm.name = 'foo' project host

Or let’s say you don’t know the name of the VM; you just know one of its IP addresses:

vnic.ip_addresses contains '192.168.3.4' project host

And let’s say you want to power on all VMs that are in a particular application:

vm.app = 'App 1' | set vm.status = running

One of the first things you will notice is the key-value syntax that has become very popular for accessing data. What is more important is that the vm and host object types, and the properties on those types, might as well be table and column names in a database. They are not specific to the language at all. Whoever implements the query processor decides which objects and properties to expose; the language just provides a uniform way to access the data. Thus far we have been providing the compilers for the language to anyone who has asked, under a standard open source license. More information about VQL can be found on the Reflex Systems website.
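
Because the vocabulary belongs to the processor rather than to the language, a processor fronting, say, a load balancer could expose its own types using the very same syntax. The object and property names below are hypothetical, invented purely for illustration:

lb.pool = 'web-tier' project members

An application written against the language does not change; only the catalog of objects and properties it can query does.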

Plumbing is Nice

The advantage of using a language like the one described above to interface applications with the data center is that it makes infrastructure services act and feel much the same way data access does with a SQL (or NoSQL) database. This approach lets application developers specify what they want, while the processor that implements the language decides how those instructions are carried out. Dependency resolution, orchestration, parallelization, and element-level APIs (load balancer, operating systems, firewall, hypervisor) essentially become plumbing that the application developer no longer has to deal with. There is a lot of hype in the market right now around Software Defined Networking (SDN), but SDN is only a subset of the Declarative Datacenter (also called the Software Defined Datacenter or Programmable Data Center). The ability to easily change the complex operation of a data center, including not only networking but also storage, computation, and memory resources, based on the changing needs of applications, is what the Declarative Datacenter is all about. Yes, it is possible to code some scripts that take actions on the data center, but what those scripts probably don’t have is an omniscient real-time model of the entire data center. A Declarative Datacenter Controller is essentially the brain of a data center, taking inputs in real time from everything going on and instantly invoking the aforementioned actions when sufficiently complex conditions are met. That is a big part of the value of the Declarative Datacenter.
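
To sketch what a controller-level rule might look like in this language (illustrative only, building on the examples above; the “and” operator and status values are assumptions), a standing rule could keep every VM in an application powered on, firing the moment one drops out of that state:

vm.app = 'App 1' and vm.status = stopped | set vm.status = running

Loaded into a controller with a real-time model behind it, one declarative statement like this replaces a pile of imperative scripts: the developer states the desired outcome, and the processor works out the ordering, the element-level APIs to call, and when to fire.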

Aaron Bawcom is the Chief Technology Officer for Reflex Systems, a provider of end-to-end virtualization management solutions based out of Atlanta, GA. Contact him at abawcom@reflexsystems.com.

written by Aaron Bawcom

Apr 10

Recently, Gartner Group announced what they consider the top five server virtualization trends for 2012. In the brief analysis we share below, they emphasize that while server virtualization is maturing, it is still quite dynamic and ever-changing, so much so that it is actively impacting their own decisions and guidance to their clients. As price and selection have become more varied, it is important for Reflex to be diligent in providing the information and education our current and future customers need to make the best possible decisions on virtualization solutions for their environments. We thought it was an informative and encouraging piece to share with our own followers, along with our thoughts on the 2012 trends Gartner has identified:

1)     Competitive Choices Mature: VMware’s competition has greatly improved in the past few years, and price is becoming a big differentiator. Enterprises that have not yet started to virtualize (and they exist, but tend to be small) have real choices today.

We believe this growth in competition is great for customers and vendors. Not only are prices coming down, but customers now have more choice than ever, and are not beholden to VMware’s architecture or pricing model. We see customers making much more informed decisions, selecting solutions that deliver the breadth of functionality they need not just today, but for their future plans to grow and scale their infrastructure and develop private and hybrid cloud solutions at a reasonable price.

2)     Second Sourcing Grows: Existing VMware users may not be migrating away from VMware, but they’re concerned with costs and potential lock-in. A growing number of enterprises are pursuing a strategy of “second sourcing” – deploying a different virtualization technology in a separate part of the organization. Heterogeneous virtualization management is mostly aspirational, although there is interest.

We agree that many users are looking at additional virtualization technologies, and that the lack of cross-platform management is a current obstacle for these end users. Specifically, relying on management solutions from the individual hypervisor vendors can be problematic for the growth of second sourcing. Cross-hypervisor management will become essential within the next 12-18 months as we see more diversification of basic virtualization technology. The integrated management platform strategy vs. a multi-point solution strategy becomes key when expanding consistent management capabilities across multiple hypervisors like VMware, Red Hat KVM, and Microsoft Hyper-V to enable holistic management of the virtual and cloud infrastructure. We believe that the management layer will be the great equalizer as this market progresses. Red Hat and Microsoft are beginning to embrace the ecosystem in a way that makes holistic management a real possibility, and we expect to provide significant parity across VMware, Red Hat, and Microsoft as we enter 2013. These advancements in the management market will help accelerate the adoption of other hypervisors and bring true flexibility to the marketplace, because the management technology will provide features that may be lacking in some of the hypervisors.

3)     Pricing Models in Flux: From expensive hypervisors to free hypervisors to core-based pricing and now memory-based entitlements – virtualization pricing has always been in flux, and trends toward private and hybrid cloud will ensure that virtualization pricing will continue to morph and challenge existing enterprise IT funding models.

We believe this is also a positive for customers, who now have alternatives to VMware’s pricing. Customers do not like pricing models that penalize them for gaining efficiency and benefits from virtualization, and they will be very mindful of pricing scenarios and choice of vendor as more of them build out private and hybrid clouds. Punitive pricing practices serve to artificially stunt the growth of virtualization as customers pause to understand the financial impact. As management technologies become a larger portion of the budget allocation for virtualization, it is very important that the pricing model allows the customer to achieve scale in the infrastructure as well as benefit from its efficiency and agility. Consumption-based pricing models are difficult for most enterprises to plan and execute today. This may change over time, but the key is to find a way to let customers grow their usage of the solution as it grows in value for their enterprise, without financial penalty.

4)     Penetration and Saturation: Virtualization hitting 50% penetration. Competition and new, small customers driving down prices. The market is growing, but not like it used to, and vendor behavior will change significantly because of it. And don’t forget the impact on server vendors – the next few years will prove to be a challenge until virtualization slows down.

We actually see the market for virtualization management growing just as fast as, if not faster than, it has in the past. While straight server virtualization purchases may be slowing, customers are wising up to the fact that they need to manage these environments as well as, if not better than, they have managed their physical environments, if they want to realize the benefits virtualization promises. We believe the management market has tremendous growth ahead and will provide most of the value-added features that deliver on the promise of agile and elastic datacenters.

5)     Cloud Service Providers Are Placing Bets: IaaS vendors can’t ignore the virtualization that is taking place in enterprises. Creating an on-ramp to their offerings is critical, which means placing bets – should they create their own standards (perhaps limiting their appeal), buy into the virtualization software used by enterprises (perhaps commoditizing themselves), or build/buy software that improves interoperability (which may or may not work well)? Not an easy choice, and winners and losers are being determined.

Many of our customers, who are cloud service providers themselves, realize that they must provide solutions that 1) customers are familiar with, and 2) can be integrated into a broader data center vision that includes both private and public cloud, leveraged for the different needs of the business. Developing a platform that enables these two things is key to their success. Service providers have struggled to get enterprises to buy into the promise of a more efficient cost model using the public cloud, primarily because enterprises do not want to let go of their truly business-critical applications. The private cloud is growing in popularity, driven by the technologies being delivered by a new generation of software companies that spend every day trying to solve these problems. The service providers are going to have to embrace these technologies and vendors and work with them in a meaningful way in order to get true enterprise buy-in for the use of cloud services.

As virtualization continues to mature and shape how IT functions, organizations should become educated on the virtualization options available and look for a strong management solution that offers flexibility, scalability and comprehensive capabilities that evolve with the dynamic nature of the virtualized data center.

Preston Futrell is President & CEO of Reflex Systems.

written by Preston Futrell

Mar 06

There is lots of buzz around big data and cloud these days. How the technologies of big data, virtualization, and cloud intersect is also being trumpeted by many of the big IT vendors.

Reflex has always seen virtualization as an opportunity to do things differently. The scale of large enterprise virtualization implementations and the trajectory of virtualization, not to mention cloud, create an interesting big data question.

“Can the tools of yesterday’s data center be adapted to operate efficiently in this new environment?”

We think the answer is no. At best, they will “function,” but what they won’t be able to do is take advantage of the wealth of information available to extend the value of virtualization and, eventually, cloud.

I recently wrote an article for Enterprise Systems Journal that discusses the general concept. If you’re familiar with Reflex, it will be obvious that we are serious about this intersection and have developed some really cool technologies to leverage the big data of virtualization in our VMC product. Much of what I discuss in the article is materializing in our products.

The two primary technologies are the evolution of the VQL language and our more recent introduction of Complex Event Processing technology with a VQL/Virtualization specific implementation.  It is through this focus that we have proven our common platform for virtualization management at enterprise scale.

written by Mike Wronski

Mar 24


In mid-2008, our first major venture into virtualization management was a set of features around discovery, visualization, and monitoring of the virtual infrastructure. That set of features has since been bundled into a product called vWatch. In mid-2009, we introduced a set of features around securing the virtual infrastructure, called vTrust. Since then we’ve been busy building the next major component of the end-to-end Reflex VMC management platform, which was announced today and is called vProfile.

vProfile provides a set of user interface and system components on top of a single, tightly integrated virtualization CMDB framework that significantly improves the ability to visualize, manage, and control a virtualization infrastructure. The vProfile product page provides some high-level bullets on the features, so here I’m instead going to spend time on the in-depth functionality, design, and architecture.

Aaron Bawcom is the Chief Technology Officer for Reflex Systems, a provider of end-to-end virtualization management solutions based out of Atlanta, GA. Contact him at abawcom@reflexsystems.com.

written by Aaron Bawcom

Feb 25

Extending Security Policies into the Cloud with Dynamic Policy Enforcement

Enterprise organizations are looking to the cloud as a way to improve operational efficiency and reduce fixed infrastructure costs. However, most enterprises are reluctant to leverage cloud infrastructure in any meaningful way due to the inherent security risks. Hezi Moore, founder and CTO of Reflex Systems, along with Ken Owens, Technical VP of Servers and Security for SAVVIS, will look at how organizations can leverage virtualization management technologies to seamlessly and securely move VMs that run business-critical applications, along with their operational policies, between private and public cloud environments.

WHEN: Wednesday, March 3rd at 10:40AM PDT

WHERE: RSA Conference 2010
Moscone Center, San Francisco
Orange Room 309

WHO: Hezi Moore, Founder and CTO, Reflex Systems
Ken Owens, Technical VP of Servers & Security, SAVVIS

written by Laura Armistead

Sep 22


What would a deployment of over 6 million virtual machines look like? The picture to the right shows 10,000 nodes, so multiply that by 600. Today we announced that Savvis (NASDAQ:SVVS) has selected Reflex to provide virtual security infrastructure for their new Project Spirit offering. The Reflex vTrust software will be deployed in the Savvis cloud in tandem with the Cisco Nexus 1000V, creating a highly scalable cloud infrastructure. Savvis has over 1.4 million square feet of raised floor, so if you did some back-of-the-envelope extrapolation, made some assumptions on hardware, and considered that every square foot could be used for virtualization hosting (which would never happen, but is fun to think about), Savvis could theoretically run over 6 million virtual machines across their existing 28 data centers; that works out to an average of roughly 214,000 VMs per data center. Managing security in an environment that large presents some pretty tough challenges.

As Phil Koen, CEO of Savvis, has stated, “Without a doubt, security is the single largest customer concern around cloud.” In addition, John Chambers has stated that cloud computing “is a security nightmare and it can’t be handled in traditional ways.” John is right; solving the cloud security problem has been quite challenging. We definitely did not take a traditional approach, and instead came up with a very dynamic policy model that can scale to large environments. The Savvis selection of Reflex is a testament to the intellectual property built into the Reflex VMC solution using vTrust technology, made possible by the new VMsafe technology built into vSphere by VMware.

One of the exciting new possibilities with Savvis and other service providers running the Reflex VMC solution internally is the ability to dynamically move virtual machines between environments. With the addition of the new VMware vCloud API, VMware has laid a great foundation for moving virtual machines between your enterprise and the cloud. What Reflex adds to the vCloud initiative is automating the transfer of sophisticated internal security policy to and from the cloud. Automated security policy transfer means that no matter where the virtual machines that make up an application may live, their security policy travels with them. This type of policy movement makes no assumptions about the applications themselves; it assumes only a raw IaaS type of service offering.

To hear more about how VMware, Reflex, and Savvis are working together to advance the state of cloud computing, feel free to register for our joint webinar on October 6th: https://www2.gotomeeting.com/register/679833250

Aaron Bawcom is the Chief Technology Officer for Reflex Systems, a provider of end-to-end virtualization management solutions based out of Atlanta, GA. Contact him at abawcom@reflexsystems.com.

written by Aaron Bawcom

Aug 17

Smooth Move
Cloud computing offers lots of opportunities for small startups, medium-sized businesses, and large enterprise organizations to operate their IT more efficiently. In fact, there has been a lot of discussion around VMware’s recent announcement of its acquisition of SpringSource. This is a great move by VMware: it could enrich their upcoming vCloud offering, give them a Platform as a Service (PaaS) product that integrates tightly into their existing IaaS offering, and in general give them deeper insight into applications as well as into the minds of developers. I’m not going to go into the benefits of cloud computing (cost, flexibility), but will instead spend some time on some new capabilities of an Infrastructure as a Service (IaaS) cloud offering.

Your turn, PaaS
One of the challenges with PaaS is that if you have an existing application that is not currently compatible with the platform, it may be difficult or even impossible to reap the benefits of cloud computing. Even though the cloud may offer huge advantages, you have to figure out how to get your app into the cloud. At minimum, this may require a slight refactoring of the application; at worst, it may mean re-writing the majority of it. Worse yet, what happens if you need to move the application from an external cloud back into your local cloud, or even to a different external cloud? The application’s compatibility is extremely dependent on the platform support offered by the cloud, which can make its mobility less viable. PaaS offerings are rich with many integrated features that much of the industry is moving towards, but they do suffer from compatibility problems.

Where did I put those keys again?
IaaS, however, can be completely generic, offering application mobility with little to no application changes. The problem is that there is currently a lack of rich capabilities in the IaaS offerings in the marketplace: a shortage of broad, enterprise-class infrastructure services compared to entire platforms that offer several large buckets of services. And as Cisco CEO John Chambers has pointed out, one of the most difficult cloud computing problems is securing the cloud.

vTrust Inside™
One of the new technologies from VMware that helps address this problem is called VMsafe. VMsafe is an infrastructure technology that allows security services to be built directly into the cloud infrastructure itself. Reflex has spent the last year building a new technology called vTrust that provides this infrastructure-level security service within the cloud plumbing. This means that if you are running an enterprise VMware cloud, the Reflex vTrust technology could connect your internal private cloud to an external VMware-based cloud and secure your virtual machines the same way no matter where they are running. In fact, you could run some portions of an application in an external cloud and other parts internally, based on a single application security policy.

Not your daddy’s cloud
At this point you might be wondering, “How is this different from running a firewall in the cloud?” The difference is in the policy management mechanism. The Reflex vTrust technology allows you to set the policy for your applications once, within your enterprise cloud, and no matter where your virtual machines move, the security policy automatically moves with them. There is no need to manually re-create the firewall policy in the cloud once a VM moves there; that is the advantage of having the cloud plumbing handle the security infrastructure.

Your own little slice of cloud
Once you have more flexibility in deciding where your IT assets run, your IT organization, as well as your business, can operate with more agility. You have the option to:

  • Move an existing application to the cloud
  • Burst the number of virtual machines dedicated to an application into the cloud
  • Adjust application resources seasonally
  • Make applications more accessible to worldwide teams
  • Deploy portions of an application to a cloud

All of these capabilities would be possible without the requirement to restructure your application code.

written by Aaron Bawcom