DevOps Can Do Wonders for Capacity Management

    April 29, 2016

    By Luis Colon

    If companies automate the process of application testing and development, organizations themselves can become higher-performing and more resource-efficient.

    The leaders of today’s IT world are able to withstand continual disruption by keeping a constant eye on efficiency, making quick corrections to ensure the best service for the least amount of money. Soon, this kind of sustained optimization will become mandatory for every business — so why not institute change as a core driver of business? It’s easy to manage your capacity effectively when process improvement itself is the ultimate goal.

    That’s the thinking behind DevOps: that organizations should continuously develop and release applications through across-the-board automation of the programming and development processes. This represents a departure from the common practice of releasing applications two or three times per year in the name of agility, flexibility, and effective capacity management. Indeed, constant optimization ensures that your applications are both high-performing and resource-efficient.

    This does wonders not only for your resource budget, but your customers as well, who are continually resupplied with up-to-date, bug-free applications.

    Traditional Models Transfer Too Much Risk to the Company

    Whether IT organizations like it or not, applications can be a significant drain on resources, and a long lead time between new releases guarantees that optimization efforts will fall behind the curve. Traditional organizations release twice a year; if they want to deploy a corrective update to an application released in June, they must either wait until the next release in December or perform a quick emergency release.

    Application development and delivery are highly manual processes. But when an emergency release demands a quick turnaround, there isn’t enough time to perform manual regression testing, making coding errors all but unavoidable. As IT teams fix one inefficiency, they risk introducing others, which won’t come to light until customers report them. On the other hand, if they spend six months working out these issues, they will have spent that same stretch of time pouring resources into an inefficient application, a tough situation to explain to executives.

    This traditional model burdens organizations (and their long-term brand reputations) with a hefty degree of risk. Moreover, these companies bear the cost of any inefficient application processes directly in their resource budgets. While long lead times used to be the ideal procedure, leaving time to hone products to perfection, today’s market demands agility and speed. And that’s where automation comes into play.

    If you follow developments by leading DevOps practitioners, many of these ideas will sound familiar. But did you know that, in the same way automation can prevent regression breakages, it can also proactively address changes in capacity requirements?

    Building on an automation-first approach, performance benchmarking can quickly identify whether recent changes to an application make it more CPU-intensive or memory-intensive. This performance regression testing can be a critical part of a shift-left testing strategy, raising your DevOps practice to a higher maturity level.
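    To make the idea concrete, here is a minimal sketch of what such a performance regression gate might look like in a CI pipeline. The workload, baseline figures, and 25% tolerance are illustrative assumptions, not part of any specific product:

    ```python
    # Sketch of a performance regression gate: fail the build when a change
    # makes a function noticeably more CPU- or memory-intensive.
    # The workload, baselines, and tolerance below are illustrative only.
    import time
    import tracemalloc

    def workload():
        # Stand-in for the application code path under test.
        return sum(i * i for i in range(100_000))

    def measure(func):
        """Return (cpu_seconds, peak_bytes) for one run of func."""
        tracemalloc.start()
        start = time.perf_counter()
        func()
        elapsed = time.perf_counter() - start
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        return elapsed, peak

    def regression_check(func, baseline_cpu, baseline_mem, tolerance=1.25):
        """Pass only if CPU time and peak memory stay within
        the baseline plus the allowed tolerance (25% here)."""
        cpu, mem = measure(func)
        return cpu <= baseline_cpu * tolerance and mem <= baseline_mem * tolerance
    ```

    Wired into an automated build, a failed check like this surfaces a capacity regression at commit time, long before it shows up as unexplained resource consumption in production.
    
    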

    Instituting Efficiency as a Process

    With a DevOps mentality, IT departments can root out these persistent capacity issues as a matter of course, turning agile development into a competitive advantage. By automating application performance testing and delivery, they can dramatically reduce lead times, limit the risk of programming bugs, and make deployment an exact science. The result is that many of the challenges of capacity management are exposed proactively, and thus can be resolved sooner.

    For instance, if a company reduces its lead time from six months to two, it can meaningfully respond to resource-usage problems in the short term, even as it reorients toward more optimal long-term solutions with the next release. Meanwhile, sophisticated monitoring tools automatically test each iteration of the application, greatly diminishing the chance that bugs or coding inefficiencies ever reach the customer.

    In fact, a true DevOps approach entails testing and retesting the development process itself, eliminating any unseen or unnecessary steps that would never have come to light with a manual strategy.

    Because DevOps condenses many processes into one automated routine, it requires not just one tool, but a range of interoperable software products. In this space, TeamQuest’s comprehensive Vityl suite has quickly become an indispensable asset to DevOps strategies market-wide.

    When companies learn to automate the disruption and optimization process, they find two things: first, that their current capacity levels take them much further than they would have thought; and second, that the capital they save is immediately available for strategic reinvestment.

    To learn more, read the TeamQuest White Paper, DevOps Development: Keeping the Lights on Activities.

    (Main image credit: Matt Moor/flickr)

    Tags: teamquest, vityl