Friday, September 26, 2014

It’s the Applications, Stupid (Part 3 of 3)!

If you missed the first 2 parts of this series, you can catch them here and here. The short version: there are Enterprise customers actively seeking to automate the production deployment of their workloads, and in doing so they discover that capturing business policy as part of the process is critical. Which brings us to this point: once policy can be encapsulated in the process of application workload orchestration, you need infrastructure that understands how to enact and enforce that policy. This is largely a networking discussion, and to date networking has largely been about any-to-any, all-equal connectivity (at least in Data Centers), which in many ways means no policy at all. This post looks at how networking infrastructure can be envisioned differently in the face of applications that can express their own policy.


[As an aside, Rich Fichera over at Forrester Research wrote a great piece on this topic (which unfortunately is behind a pretty hefty paywall unless you're a Forrester client, but I'll provide a link anyway). Rich coins the term "Systems of Engagement" to describe new models for Enterprise applications that depart from the legacy "Systems of Record." If you have access to this document or can afford the license, I would highly recommend reading it.]


It is becoming fairly standard for an Enterprise to stand up a commodity compute and storage infrastructure that can easily run highly distributed applications such as the ones in the Hadoop family (MapReduce, Spark, etc.). As these types of applications become the heart of the Enterprise decision support system, how these distributed components get connected to each other in real time becomes a very important problem to solve.


It turns out there are actually 3 broad problems to solve: (1) performance at reasonable cost, (2) security / data privacy and governance, and (3) operational tooling. The first 2 are largely driven by policy, as discussed earlier; the last is hopefully the result of a highly integrated application-to-infrastructure approach. For the noble sake of brevity, we'll focus on the first 2 here.


1. Performance at reasonable cost.


As applications achieve more scale and become more dispersed to move the processing of data closer to the sources of the data, it stands to reason that connectivity models built on centralized aggregation become cost-prohibitive and hard to manage [by centralized aggregation, I mean networks that in general try to funnel data from the edge (compute/storage nodes) to a central place to be re-distributed back to the edge again, sort of like a hub-and-spoke airline system]. In a traditional Enterprise Data Center architecture where all customer transaction data is stored in a centralized place and all applications process that data, it is easy to see how aggregated connectivity is helpful.


However, consider a new model where all Enterprise analytics data is stored in a wide-reaching HDFS (HDFS is the Hadoop Distributed File System) spanning the Enterprise data center and every retail site. There is no center to which to aggregate. Or how about an Enterprise that leverages colocation facilities around the world that have direct connects to other "cloud" sources of data? In both of these cases, the ideal is to directly connect the edges of the network. The current models of connectivity are driven from a centralized application/data model, and in this new world they would require extending full connectivity of all application/data components throughout the infrastructure whether or not those components need that connectivity, or, even worse, highly compartmentalizing specific application/data sets with thin connectivity barriers between the compartments.


An alternative model leverages a dynamic, policy-driven approach to focus connectivity resources on the components that need to be connected at any given time, according to their actual performance SLAs. Since (as covered in the previous posts) we're building the policy into the application orchestration (in this case, performance SLA policy), it stands to reason that we should also leverage that policy to minimize the cost of connectivity while meeting the SLA. This type of approach assumes that all application components are horizontally connected (i.e., peer-to-peer, rather than aggregated through a central point) and that SLA-driven connectivity groupings can be dynamically managed to avoid the expense, operational overhead, and latency of any-to-any connectivity.
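To make the idea concrete, here is a minimal sketch of what SLA-driven connectivity grouping could look like. Everything here is invented for illustration (the component names, the "talks_to" declaration format, the SLA classes); the point is simply that if each component declares who it actually talks to and at what SLA, the network only has to provision those links instead of a full mesh.

```python
# Hypothetical model: each application component declares the peers it
# actually communicates with and the SLA class of that traffic. The
# orchestrator derives the required links; anything undeclared gets no path.

from itertools import combinations

components = {
    "ingest":       {"talks_to": {"hdfs": "bulk"}},
    "hdfs":         {"talks_to": {}},
    "spark-worker": {"talks_to": {"hdfs": "low-latency"}},
    "dashboard":    {"talks_to": {"spark-worker": "best-effort"}},
}

def required_links(components):
    """Return the set of (src, dst, sla) links the declared policy needs."""
    links = set()
    for name, spec in components.items():
        for peer, sla in spec["talks_to"].items():
            links.add((name, peer, sla))
    return links

def any_to_any_links(components):
    """For comparison: the full mesh a traditional fabric would imply."""
    return {(a, b) for a, b in combinations(sorted(components), 2)}

policy = required_links(components)
mesh = any_to_any_links(components)
print(f"policy-driven links: {len(policy)}, full mesh: {len(mesh)}")
# policy-driven links: 3, full mesh: 6
```

Even in this toy four-component example, policy cuts the provisioned paths in half; at the scale of a distributed analytics cluster, the gap between "links the application declared" and "links a full mesh implies" is what makes the cost argument.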


2. Security / Data Privacy and Governance.


Ultimately, how data is stored, how (and by whom) it is accessed, and how it travels from place to place are extremely important considerations for infrastructure. Today's any-to-any networking architectures leverage primitive packet-based filters (access control lists) and closed user groups (VLANs) to scale back the inherent any-to-any-ness of the approach. In these new application architectures, however, each application component and individual piece of data can be tagged, classified, and matched to a business policy. This policy can be easily interpreted by a capable network infrastructure not only to ensure consistency of packet-based filters and closed user groups, but also to steer specific traffic through more specialized security analysis gear.
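A sketch of that tag-to-enforcement idea, with everything hypothetical (the tag vocabulary, the action names): the key property is that the mapping is driven by the data's classification, not by where a packet happens to be in the topology, and that anything without a policy is denied by default.

```python
# Hypothetical mapping from data-classification tags to network actions.
# Tag names and actions are invented for illustration only.

POLICY = {
    "public":     "forward",               # ordinary traffic, no extra handling
    "pii":        "steer-to-inspection",   # hairpin through IDS/DLP gear
    "restricted": "drop",                  # never crosses the shared fabric
}

def enforcement(tag):
    """Default-deny: untagged or unknown traffic has no policy, so drop it."""
    return POLICY.get(tag, "drop")

print(enforcement("pii"))        # steer-to-inspection
print(enforcement("mystery"))    # drop
```

Contrast this with ACLs and VLANs, where the rules are written in terms of addresses and segments and have to be reverse-engineered back to business intent; here the business intent (the tag) is the match criterion itself.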


More importantly, we could even imagine an infrastructure of multiple overlapping Layer 1 connectivity domains over a shared infrastructure for highly sensitive applications and data. The point is that networks basically do 2 things: keep things together, or keep them apart. If we start with the assumption that everything is connected to everything, keeping things apart becomes more and more challenging as we require more granular permutations. However, if we start with the notion that policy will drive connectivity, the infrastructure can provide a much richer set of tools to ensure the goals are met.


Enterprise applications as we know them are changing. They are being rewritten to scale horizontally over commodity compute and storage infrastructure, and they are being deployed by advanced orchestration systems that encode business-level policy for the application to express to its infrastructure. So these new applications will express how their components expect to be connected to their required compute and storage resources, and they'll expect the network to make that happen at the lowest possible cost. If you're listening, network: it's the applications, stupid.
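What might "the application expresses its policy" look like at deploy time? One plausible shape (purely illustrative; this manifest format, its field names, and the validator are all invented here, not any real orchestrator's API) is a declarative manifest the orchestrator hands to the network, where every declared flow must carry both a performance SLA and a data classification, so the infrastructure never has to guess:

```python
# Hypothetical deploy-time manifest: the application declares its flows,
# each with a performance SLA and a data-classification tag. A simple
# completeness check rejects any flow that leaves the network guessing.

MANIFEST = {
    "app": "retail-analytics",
    "flows": [
        {"from": "pos-ingest", "to": "hdfs",  "sla": "bulk",        "data": "pci"},
        {"from": "spark",      "to": "hdfs",  "sla": "low-latency", "data": "pii"},
    ],
}

def validate(manifest):
    """Return a list of problems; an empty list means the policy is complete."""
    problems = []
    for i, flow in enumerate(manifest["flows"]):
        for field in ("from", "to", "sla", "data"):
            if field not in flow:
                problems.append(f"flow {i} missing '{field}'")
    return problems

print(validate(MANIFEST))  # [] -- every flow carries both policies
```

The validation step is the small but important part: it is where "policy is captured in orchestration" (parts 1 and 2 of this series) meets "infrastructure enacts the policy" (this post), because an incomplete manifest is caught before anything is provisioned.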


[Today's Fun Fact: It is illegal for a portrait of a living person to appear on U.S. postage stamps. Ahh... that's the holdup!]






