An Interview with TeamQuest DevOps Expert Luis Colon
In preparation for TeamQuest’s presentation at the upcoming Velocity conference, we asked DevOps expert Luis Colon for his thoughts on the industry.
In a few short weeks, TeamQuest will present at the June 22-23 Velocity Conference, where notable leaders from within the DevOps, cloud, and IT development fields will share insights concerning the most pressing trends in IT today.
As a primer, we sat down with our resident DevOps expert, Luis Colon, to hear his thoughts about what the biggest takeaways from the conference are likely to be, and how IT experts should approach the rest of 2016.
Let’s start with a big one, Luis: has the cloud lived up to its expectations? In other words, if there were to be a “State of the Cloud” address at Velocity, what would you imagine it to sound like?
The cloud has pretty much become a baseline expectation for the IT industry; it’s not a matter of if, but when and how. The “How?” of that equation has received a lot of attention recently, because IT experts have realized that traditional applications and architectures don’t always adapt well to elastic cloud environments, where servers are sometimes added rapidly.
That raises the question: how should we transition apps to the cloud? Many people have gravitated toward putting applications in containers, and for that reason 2015 could almost be called the Year of Docker; everyone from Google to Amazon to Microsoft has agreed as much.
From that perspective, 2016 is quickly becoming the Year of Orchestration. We've all agreed that we need to be in the cloud, but it's clear that we need to go deeper into the stack: re-architecting our apps, managing our containers efficiently, and orchestrating our systems so that they don't triple the workload of DevOps teams. Then the cloud will live up to expectations.
We've written about this online, but it's become clear that it doesn't matter how many machines you throw at a performance problem, in the cloud or anywhere else, if you blow away your budget. So what needs to change?
We need to move away from pure performance testing, for one. I think shift-left testing will become increasingly popular, because it's critical that apps not only scale up well but also scale down well. Before you find out whether your app can handle 10,000 users, you need to make sure your response times don't degrade with just two or three.
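A minimal sketch of that shift-left check, assuming an illustrative in-process stand-in (`fake_endpoint`) where a real test would call an actual service:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def measure_response_times(handler, n_users, requests_per_user=20):
    """Call `handler` from n_users concurrent workers; collect per-request latencies."""
    latencies = []
    def one_user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            handler()
            latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        for _ in range(n_users):
            pool.submit(one_user)
    return latencies  # shutdown(wait=True) ensures all workers finished

# Hypothetical stand-in for a real HTTP call.
def fake_endpoint():
    time.sleep(0.001)

# Shift-left check: median latency with 3 users should stay close to 1 user.
baseline = statistics.median(measure_response_times(fake_endpoint, 1))
low_load = statistics.median(measure_response_times(fake_endpoint, 3))
assert low_load < baseline * 5, "response time degrades at trivial load"
```

The point is to run this early in the pipeline, so degradation at two or three users is caught long before the 10,000-user load test.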
That’s where DevOps and capacity management come in; you need to make changes earlier and faster in the development pipeline because things don’t always go as planned — and stress testing can be a little too optimistic. We’ve seen this shift with, for example, Netflix’s Chaos Monkey and Simian Army, and we’ll see it more as 2016 continues.
At Velocity, we'll be talking about how to build modeling into your pre-launch test process, manage the complexity of heterogeneous, multi-cloud environments, and avoid cost traps in the cloud.
Our argument is that models bring predictability: performing predictive analytics on your historical data. By the time your tactical testing and measurements inform you about declining app response rates, it's often already too late. Modeling lets you see further into the future and respond faster.
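A toy sketch of that kind of predictive model, using a simple least-squares trend over invented weekly latency figures and an assumed 200 ms service-level target (all numbers illustrative, not from any real system):

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical weekly median response times in ms.
weeks = list(range(8))
latency_ms = [110, 114, 119, 125, 131, 138, 146, 155]

a, b = linear_fit(weeks, latency_ms)
SLA_MS = 200  # assumed service-level target

# How many weeks until the trend crosses the threshold.
weeks_to_breach = (SLA_MS - a) / b
print(f"Trend: {b:.1f} ms/week; projected SLA breach near week {weeks_to_breach:.1f}")
```

Even a model this crude illustrates the argument: the historical trend flags a future breach weeks before any tactical measurement would show response times actually failing the target.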
At the same time, as IT teams develop cloud-based apps, they need to ensure they aren't tethered to a single cloud provider; you need to pit clouds against each other to accurately compare their value. Both of these areas contribute to cost traps: having too many or too few virtual servers at your disposal can hide underlying performance problems. You need to get ahead of the curve early and prepare for diagonal scaling.
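One way to pit clouds against each other is to quote each provider for the same target capacity. A minimal sketch, with made-up provider names, prices, and per-instance throughput (real figures would come from each provider's price list and your own benchmarks):

```python
import math

# Hypothetical per-hour pricing and per-instance capacity (requests/sec).
providers = {
    "cloud_a": {"price_per_hour": 0.096, "capacity_rps": 400},
    "cloud_b": {"price_per_hour": 0.083, "capacity_rps": 320},
}

def cheapest_for(target_rps):
    """Return the provider with the lowest hourly cost to serve target_rps."""
    quotes = {}
    for name, spec in providers.items():
        count = math.ceil(target_rps / spec["capacity_rps"])
        quotes[name] = (count, count * spec["price_per_hour"])
    return min(quotes.items(), key=lambda kv: kv[1][1])

name, (count, cost) = cheapest_for(1500)
print(f"{name}: {count} instances at ${cost:.3f}/hour")
```

Note that the nominally cheaper per-instance provider isn't always the cheaper fleet: rounding up to whole instances at a lower per-instance capacity can cost more overall, which is exactly the kind of hidden cost trap the comparison is meant to surface.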
It will be interesting to see how other Velocity presenters respond to these same challenges, but I think the majority of DevOps leaders will be in agreement.