From Mainframe to the Cloud: How Capacity Management Has Remained Relevant As IT Transforms
In today’s IT environment, it’s exceedingly rare to find a company that hosts its entire digital operations on a single, six-foot-tall computer.
Yet capacity management, the same discipline once used to prevent those huge mainframes from crashing under excessive demand, still guides the way we allocate cloud resources to serve online customers today.
Of course, as companies continue to move away from physical mainframes in favor of virtualized systems, some have come to see capacity management as obsolete. The misperception is that the cloud’s flexibility effectively protects enterprises from the risk of service outages.
Not only is this view inaccurate; it also ignores the fact that capacity management enables companies to monitor resource efficiency across their infrastructure and avoid the kind of unsustainable costs that can result from irresponsible use of the cloud.
TechTarget defines a mainframe as a high-performance computer that has historically been “associated with centralized rather than distributed computing.” Mainframe computers support hundreds of users at a time—and require technicians skilled in capacity management to allocate resources effectively so as not to overload the system and cause a crash.
Beyond a carefully budgeted processing schedule, these computers require specialized heating, cooling, and ventilation, as well as a dedicated power supply, all of which comes with a price tag that can run into the millions of dollars, according to Study.com.
The current alternative is the virtualized system. Largely based on cloud platforms, virtualized systems offload the actual data processing (and the physical hardware) to a remote server host. This spreads out the risk of data loss and system failures and, when used efficiently, is generally cheaper. If you underestimate your capacity needs, the cost of immediately scaling up your processing capabilities is relatively low.
But in spite of their apparent differences, these two processing methods make use of the same essential technologies. In this way, mainframe concepts remain extremely relevant to virtualized systems.
Capacity management is relevant to both computing frameworks because both depend on server consolidation.
So-called networked storage—file sharing, data sharing, collaboration around information—works because servers have the processing room to handle those demands. Moreover, the strategies for freeing up processing power are remarkably similar in cloud and mainframe systems. And there are practical limits to the extra storage capacity companies typically throw at IT problems.
Here, mainframe lessons can be applied. According to Mike Kahn, Clipper Group chairman, mainframes require structure, discipline, and “well-laid-out methodologies for adding users, handling backup, recovery, security and other [functions].”
As companies come under pressure to use their existing resources more effectively and efficiently, they’ll need to apply similarly rigorous methodologies. This is especially true when you consider that, according to storage analyst Dianne McAdam, mainframes can achieve over 30% higher storage utilization than their open-systems counterparts.
Though the systems may be changing, the core issues have not. Brad Stamas points out that these challenges are “more about analysis and assessment” than hardware limitations.
According to TechTarget, “There need to be experts broken along functional lines—people who can tune storage for multiple applications, or who know about backup and recovery and can cope with different operating systems.”
Today’s IT environments may be more complicated than the mainframe setups of old, but the consolidation of multiple servers and systems creates the need for flexible, predictive capacity management.
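To make “predictive” concrete, here is a minimal sketch of the kind of trend analysis capacity planners perform: fitting a straight line to past utilization samples and estimating when an alert threshold would be crossed. The weekly data, the 80% threshold, and the function name are all hypothetical, chosen purely for illustration; real capacity planning tools use far more sophisticated workload models.

```python
# A toy predictive-capacity sketch: least-squares linear trend over
# (time, utilization) samples, solved for the time at which the trend
# line reaches a given threshold. All numbers below are invented.

def forecast_threshold_crossing(samples, threshold):
    """Fit y = intercept + slope * t to the samples and return the t
    at which the fitted line reaches `threshold`, or None if
    utilization is flat or declining (threshold never reached)."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in samples)
    var = sum((t - mean_t) ** 2 for t, _ in samples)
    slope = cov / var
    intercept = mean_y - slope * mean_t
    if slope <= 0:
        return None
    return (threshold - intercept) / slope

# Hypothetical weekly CPU utilization (week number, fraction of capacity)
history = [(1, 0.50), (2, 0.54), (3, 0.58), (4, 0.62), (5, 0.66)]
week = forecast_threshold_crossing(history, 0.80)
print(f"Utilization projected to hit 80% around week {week:.1f}")
# → Utilization projected to hit 80% around week 8.5
```

The point of even a crude model like this is that it turns monitoring data into lead time: knowing a threshold will be crossed in week 8 or 9 lets a team provision capacity on a schedule rather than in a crisis.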
If you need help in this area, consider TeamQuest’s sophisticated capacity planning tools, which can anticipate demand and complications on mainframe and virtual systems alike.