ITSO Blog

    Mainframe

    Capacity Management Is a Mainframe Discipline. But Is It Relevant to Virtualized Systems?

    October 27, 2015

    By John Miecielica

    Even as virtualization aims to replace older systems, traditional strategies remain the preferred choice of most IT departments. Given the necessity of networked storage, capacity management as a mainframe discipline is just as relevant to consolidated virtualized systems.

    In the world of cloud computing, it’s rare that a single, six-foot-tall computer (a mainframe) hosts the entire digital operations of a company. If a system failure occurred within that one machine, the whole company would be forced offline and essential data could be lost forever.

    As companies continue to move away from physical mainframes in favor of virtualized systems, their strategies for managing capacity have shifted as well. But should they? Capacity management, the ability of a company to manage its IT resources effectively, is often dismissed as an outdated mainframe discipline. Yet even as they increasingly adopt networked storage, businesses must recognize how much they can draw on these older but still highly relevant capacity management concepts.

    Mainframes and Virtualized Systems

    TechTarget defines a mainframe as a high-performance computer that has historically been “associated with centralized rather than distributed computing.” Supporting hundreds of users at a time, mainframe computers have long required capacity management to allocate resources effectively so as not to clog the system and create downtime.

    Careful capacity budgeting is only part of the expense: these computers also require specialized heating, cooling, and ventilation, as well as a dedicated power supply, a setup whose price tag can run into the millions of dollars, according to Study.com.

    The current alternative is the virtualized system. Largely hosted on cloud platforms, virtualized systems offload the actual data processing (and the physical hardware behind it) to remote server hosts. This not only spreads the risk of data loss and system failure, but it is also generally cheaper. Moreover, if you misjudge your capacity needs, the cost of quickly scaling up your processing capability is relatively low.

    But in spite of their apparent differences, these two processing methods make use of the same essential technologies. In this way, mainframe concepts remain extremely relevant to virtualized systems.

    Mainframe Disciplines

    Where the two approaches find common ground is in the consolidation of servers. So-called networked storage (file sharing, data sharing, collaboration around information) works because servers have the processing room to handle those demands, and the strategies for freeing up that processing power are remarkably similar for both system types. There are also practical limits to the extra storage capacity companies typically throw at IT problems.
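
    To make that concrete, here is a minimal Python sketch of the kind of headroom check a capacity planner might run before consolidating workloads onto a shared host. The workload names, peak figures, and the 20% headroom target are hypothetical, invented purely for illustration; they do not come from this article or from any particular product.

        # Illustrative only: hypothetical workloads and thresholds.
        from dataclasses import dataclass

        @dataclass
        class Workload:
            name: str
            peak_cpu_cores: float   # peak CPU demand, in cores
            peak_memory_gb: float   # peak memory demand, in GB

        def fits_on_host(workloads, host_cores, host_memory_gb, headroom=0.20):
            """Check whether the consolidated workloads fit on one host while
            keeping a fixed share of capacity (the headroom) free for spikes."""
            usable_cores = host_cores * (1 - headroom)
            usable_memory = host_memory_gb * (1 - headroom)
            total_cpu = sum(w.peak_cpu_cores for w in workloads)
            total_mem = sum(w.peak_memory_gb for w in workloads)
            return total_cpu <= usable_cores and total_mem <= usable_memory

        if __name__ == "__main__":
            candidates = [
                Workload("billing", 6.0, 48.0),
                Workload("web-frontend", 4.0, 24.0),
                Workload("reporting", 3.5, 40.0),
            ]
            ok = fits_on_host(candidates, host_cores=16, host_memory_gb=128)
            print("Consolidation fits with 20% headroom:", ok)

    The arithmetic is the same whether the shared host is a mainframe partition or a hypervisor running virtual machines, which is precisely the point about shared discipline.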

    Here, mainframe lessons can be applied. According to Mike Kahn, Clipper Group chairman, mainframes require structure, discipline, and “well-laid-out methodologies for adding users, handling backup, recovery, security and other [functions].”

    As companies come under pressure to use their current resources more effectively and efficiently, they’ll need to apply rigorous methodologies. This holds especially true when you consider that, according to storage analyst Dianne McAdam, mainframes can achieve storage utilization more than 30% higher than their open system counterparts.

    Intelligent Capacity Management

    Though the systems are changing, the core issues are not. Brad Stamas points out that these challenges are “more about analysis and assessment” than about hardware limitations. According to TechTarget, “There need to be experts broken along functional lines: people who can tune storage for multiple applications, or who know about backup and recovery and can cope with different operating systems.”

    Today’s systems may not be as simple as mainframes, but the consolidation of multiple servers and systems creates the need for flexible, predictive capacity management. Consider utilizing TeamQuest’s sophisticated capacity planning tools, which can anticipate demand and complications on mainframe and virtual systems alike.
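
    As a rough illustration of what “predictive” can mean, the following Python sketch fits a straight-line trend to a series of weekly CPU utilization readings and estimates how many weeks remain before an assumed 85% alert threshold is crossed. The utilization history and threshold are fabricated for the example, and this is not TeamQuest’s method; it is simply the most basic version of trend-based forecasting.

        # Illustrative sketch: a naive linear-trend forecast with made-up data.
        def linear_fit(xs, ys):
            """Ordinary least-squares fit of y = slope * x + intercept."""
            n = len(xs)
            mean_x = sum(xs) / n
            mean_y = sum(ys) / n
            slope_num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            slope_den = sum((x - mean_x) ** 2 for x in xs)
            slope = slope_num / slope_den
            return slope, mean_y - slope * mean_x

        def weeks_until_threshold(history, threshold=85.0):
            """Project weekly utilization forward and report how many weeks
            remain before the threshold is crossed (None if the trend is flat
            or falling)."""
            weeks = list(range(len(history)))
            slope, intercept = linear_fit(weeks, history)
            if slope <= 0:
                return None
            crossing = (threshold - intercept) / slope
            return max(0.0, crossing - weeks[-1])

        if __name__ == "__main__":
            cpu_history = [52, 54, 57, 58, 61, 63, 66, 68]  # weekly average CPU %, fabricated
            remaining = weeks_until_threshold(cpu_history)
            if remaining is None:
                print("Utilization is flat or falling; no exhaustion projected.")
            else:
                print(f"Roughly {remaining:.1f} weeks until the 85% CPU threshold at the current trend.")

    Real capacity planning tools account for seasonality, workload mix, and service-level targets, but even this toy version shows why continuous measurement and forecasting sit at the heart of the discipline, on mainframes and virtualized systems alike.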

    (Main image credit: Pargon/flickr)