Load Balancing: Back to the Future of Workload Management

Datetime: 2016-08-23 01:36:31 | Topics: Load Balancing, Nginx

We often talk about workload orchestration as though it’s the hottest topic of discussion in information technology. But what gets read and what gets discussed are often two separate things. If we’re being honest, we should acknowledge that a great many CIOs still don’t actually know what “workload orchestration” is, besides a task that the DevOps guys are probably doing, someplace further down the task list.

In an interview for an upcoming edition of The New Stack Context podcast, Mesosphere founder Ben Hindman told us that some of his enterprise customers investigating Mesosphere’s Data Center Operating System (DCOS) for their data centers do have an understanding of microservices architecture, at least to the extent that it may offer them some benefits. “But there’s a lot that don’t. In fact, there’s a lot that might not even mention it again,” he said.

“The things that they care first and foremost about are service discovery and load balancing. So there’s some that won’t even mention it; once you mention it, they’ll say, ‘Oh yes, that sounds important too! I want that as well.’ At an organization like Twitter — where I worked for quite some time — the really important ones were service discovery and load balancing. Even there, in a large organization, [there were] many different services running simultaneously.”
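To make those two terms concrete: below is a minimal Python sketch of the pattern Hindman describes, in which a client asks a service registry for the live instances of a service and spreads its requests across them round-robin. The registry contents, addresses, and service name are invented for illustration; in a real deployment the registry would be fed by a discovery system such as ZooKeeper or Mesos-DNS.

```python
import itertools

# Toy service registry: service name -> live instances.
# In production these entries would be maintained by a discovery
# system (e.g., ZooKeeper, Mesos-DNS); they are invented here.
REGISTRY = {
    "user-service": ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"],
}

class RoundRobinBalancer:
    """Cycles through the instances the registry reports for a service."""

    def __init__(self, registry, service):
        self._cycle = itertools.cycle(registry[service])

    def next_instance(self):
        return next(self._cycle)

balancer = RoundRobinBalancer(REGISTRY, "user-service")
for _ in range(4):
    print(balancer.next_instance())  # .11, .12, .13, then back to .11
```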

Load balancing is the task that CIOs and IT managers tend to understand. It’s been part of the overall job of making workloads run efficiently on networks since the client/server era and the ugly, dark days of knowledge management.

“What we’re seeing,” relates Peter Guagenti, the chief marketing officer of web server software provider NGINX Inc., “is CIOs saying, ‘I’ve got a data center, and I’ve got legacy applications that are running super-effectively in that data center. I want to figure out how to cut some costs. I want to shift to software-defined networking, software load balancers, and application delivery platforms.’”

Do That Thing You Do

Guagenti says the objective that CIOs frame for their data centers is to take a handful of critical applications already running there, and transform them (using whatever magic IT people use to do this) into service-model delivery applications. If that involves public cloud, perhaps that’s fine, but let’s make certain the cloud service providers are offering fair pricing.

Secondarily, in what Guagenti described as an “oh, yeah!” moment, CIOs may realize they have this internal infrastructure that runs some of these newer classes of applications, and they may need to hybridize that infrastructure for better performance and cost benefits. “That’s the reality,” he said.

[left to right] Peter Guagenti, CMO; Faisal Memon, technical product marketer; Gus Robertson, CEO, NGINX

“The customer has already decided hybrid cloud is the future: ‘Show me how to make sure I’ve got flexibility and interchangeability, and I can move things interchangeably between those environments as needed, and I can pick the best tool for the job at that moment.’”

Granted, this is also how NGINX would like for us to perceive its customers and the problems they’re wrestling with, in the context of their own workplaces. But it’s an accurate picture in that these CIOs, IT managers, and operations managers do not have the same perception of “the new stack” as hyperscale application developers. Our perspective is more focused on tools, methodologies, and abstract concepts; theirs, on customer experience (CX), total cost of ownership (TCO), and bottom-line revenue.

Yet NGINX’s Guagenti illustrates how a bridge can be built between these two planes of reason, if you will, using concepts and terminology that are less foreign to managers and CIOs — and, perhaps as a result, more likely to lead to adoption. As a concept, containerization does not appear on the surface to address the issue of load balancing. But an orchestration scenario that orbits around the load balancer as its center of gravity may get people’s attention.

The Other Way to Build a Sandbox

Nate Baechtold is a lead IT architect at Boston-based EBSCO Publishing, which publishes research databases online for universities and researchers. We met Baechtold at the last OpenStack Summit. EBSCO is a customer of Avi Networks, which is NGINX’s biggest rival in the modern load balancing space.

He told us about a project he led to engineer what he described as a “sandbox” for developers to test applications that utilize its databases, without bringing down servers in the process. The project’s goal, as articulated by upper management, was “to push infrastructure-as-a-service into the hands of our development teams,” as he described it. While your mind may come up with any number of possible solutions to this goal (many from the posts of this site), consider how Avi Networks approached this customer with the promise of “load-balancing-as-a-service.”

“We see a strong movement away from service-oriented architecture to a microservice-based architecture for application development,” — NGINX CEO Gus Robertson.

“Without giving them the access to load balancers,” said Baechtold, “they can’t provision highly available environments. And their dev environments are going to look different in development than they will in live environments, because the live environments are behind the load balancer, and the dev environments are not.”

This is the quandary that EBSCO, and a multitude of enterprises in vastly different industries, face with respect to staging development environments: The very fact that they are sandboxes, with all their extra safety factors and their isolation from production assets, changes their performance profiles, so that devs don’t get a precise picture of how their code will behave in production.
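To picture what “load-balancing-as-a-service” could look like from a developer’s chair, here is a hedged sketch: a self-service call to a hypothetical LBaaS control plane that provisions the same virtual-service spec for a sandbox as for production, so the sandbox sits behind the same kind of load balancer its production twin does. The endpoint, payload shape, and field names are all invented for illustration; this does not depict the actual Avi Networks or NGINX API.

```python
import json
import urllib.request

# Hypothetical LBaaS control-plane endpoint (invented for illustration).
LBAAS_API = "https://lbaas.example.internal/v1/virtualservices"

def provision_load_balancer(env, backends):
    """Request a virtual service from the LBaaS control plane.

    The spec is identical for dev and prod except for its name and
    backends, which is the point: the sandbox inherits the production
    traffic profile instead of bypassing the load balancer entirely.
    """
    spec = {
        "name": f"search-api-{env}",
        "algorithm": "round_robin",
        "health_check": {"path": "/healthz", "interval_s": 5},
        "backends": backends,
    }
    req = urllib.request.Request(
        LBAAS_API,
        data=json.dumps(spec).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# A developer self-provisions a sandbox mirroring production:
# provision_load_balancer("dev", ["10.1.0.5:8080", "10.1.0.6:8080"])
```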

Such situations are resolved in data centers every day — this is not an example of an impenetrable fortress of unresolved issues (we know what those look like). But the truth is, for each customer, the resolution is different. Both Avi Networks and NGINX are using this to their advantage. Rather than presenting their respective load balancing services as cohesive platforms where the solutions have already been automated, they treat each customer’s situation as an individual puzzle, and sell the process of working it out.

Ashish Shah, director of product management, Avi Networks

“When we built the architecture, we had both models in mind,” explained Avi Networks’ director of product management, Ashish Shah, referring to traditional IT and the DevOps model. “We knew that today we are here, and tomorrow we’re going to DevOps. So we designed and architected it in such a way that we can appeal to both sets of customers.”

Shah says he’s encountered organizations that actually are trying the “bimodal IT” model, even going so far as to hire “traditional IT” teams alongside DevOps teams. But these attempts at cultural transformation have failed to address customers’ basic needs, such as self-service provisioning, which they can articulate quite well without resorting to forking their teams. Shah does believe that more responsibility for the strategic direction of IT is indeed being shifted to developers. But at the same time, it’s the developers who are being tasked with bringing their organizations’ IT resources across this bridge, from the “traditional” world (which still looks a lot like client/server) to this nirvana of scalability and self-service.

Away from SOA

“We see a strong movement away from service-oriented architecture to a microservice-based architecture for application development,” remarked NGINX CEO Gus Robertson. He’s thought this out, and no, he did not misspeak. He has seen first-hand where SOA made dangerous, perhaps catastrophic, turns in its history, and where microservices may repair the damage done.

“Where there’s heavyweight middleware tools of the past, there’s now movement toward lightweight, high-performant, easy-to-deploy, developer-friendly tools that don’t arrive in 10,000 DVDs in the back of a semi-trailer,” Robertson continued. “They’re less than three megabytes. And they’re tools like NGINX, Node.js, MongoDB, Docker.”

To that list, I added CI/CD platforms such as Jenkins, for whose deployments DevOps teams set up safe development environments. He wasn’t so sure about that one.

“What we see in our world,” the CEO responded, “is that applications need to be nimble. You can’t hamstring developers and operations guys. They’re trying to get features and functions out in hours, not days or weeks. And giving them control and independence over their own application, without being locked into the underlying infrastructure, is critical. The more you lock those things together, the slower both teams are.”

Robertson’s point is that an end-to-end platform constituting a single, cohesive stack may limit organizations’ options, binding them into regimented pipelines and pre-determined configurations. Granted, we may have seen evidence to the contrary. However, one aspect of Robertson’s argument cannot be overlooked: Orchestration platforms don’t deal with performance metrics and traffic analysis at a granular level. Certainly, application performance management (APM) tools from the likes of New Relic and Dynatrace have evolved almost orthogonally to orchestration platforms. One does not preclude the other, but neither acts like a gentleman and opens the door for the other, either.

In recent days, NGINX has attacked this problem by extending the existing performance monitoring capabilities of NGINX Plus with a new feature called Amplify, which was released into public beta in June. As NGINX co-founder Andrew Alexeev told us, since NGINX itself utilizes third-party APM tools, Amplify was designed to be — once again — lightweight, supplementary, and not really a platform.

During this public beta period, he added, NGINX pledges to work with customers testing Amplify, evidently to learn more about the variety of scenarios in which customers find themselves today, and determine how NGINX tools can address each one individually. It’s classic enterprise customers, said Alexeev, whose systems have been sending the most diverse types of feedback.

“A traditional enterprise, with limited exposure to performance optimization, wants to use its APM tools,” said NGINX CEO Robertson during our OpenStack Summit interview. “So aggregating our data with the APM tools gives them the visibility and control they need to improve the performance of their applications. The really advanced users actually build their own dashboards, and that’s where things start to get really interesting. So with our raw JSON… if you’re an advanced user, you can look and say, ‘This is what healthy looks like, in my infrastructure. If it deviates from this, alert me.’”
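Robertson’s “alert me when it deviates” scenario is simple to sketch. The loop below polls the JSON status endpoint of an NGINX Plus instance and complains when active connections stray too far from a baseline. The /status path, the connections.active field, and the baseline figures are assumptions to be adjusted for a real deployment, since the shape of the status JSON varies by version.

```python
import json
import time
import urllib.request

# Assumed endpoint and thresholds -- adjust for a real deployment.
STATUS_URL = "http://127.0.0.1:8080/status"  # NGINX Plus JSON status
BASELINE_ACTIVE = 200   # "what healthy looks like" for this instance
TOLERANCE = 0.5         # alert on a 50% deviation from the baseline

def poll_active_connections():
    """Fetch the status JSON and return the active-connections count."""
    with urllib.request.urlopen(STATUS_URL) as resp:
        status = json.load(resp)
    return status["connections"]["active"]  # field name may vary

while True:
    active = poll_active_connections()
    if abs(active - BASELINE_ACTIVE) > BASELINE_ACTIVE * TOLERANCE:
        print(f"ALERT: {active} active connections deviates from baseline")
    time.sleep(10)
```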

“When you have a system that is built on an easy-to-consume model,” advised EBSCO’s Baechtold, “it means the barrier to entry to even using that system is so much lower. You can go to a developer and say, ‘Hey, build me a load balancer,’ and present them with the user interface, and they’ll figure it out.”

Baechtold’s explanation encapsulates the current state of the relationship between the executives and managers group, and the developers and operations group, quite effectively. The former group gives an order in its own language — in the context of what they understand. The latter group doesn’t correct them, or set them straight, or bring up the front page of The New Stack for them to read. Rather, they keep calm and figure it out.

Mesosphere and  New Relic are sponsors of The New Stack.

Title image of a US Navy-built World War II pontoon bridge over the Rhine River, licensed through Wikimedia Commons.




