Analyze IT performance and capacity in cloud, virtual, and physical environments with TeamQuest Surveyor. Use built-in performance analytic intelligence to free staff from mundane tasks so they can focus on projects that impact the business. Analyze data integrated across business units and technology silos, translating IT metrics into business-relevant terms, such as cost per business application and transaction.
TeamQuest Surveyor can analyze hundreds or thousands of systems and surface just the ones you care about. In capacity planning, the underutilized servers are often just as important as the over-utilized ones: servers with excess capacity are prime targets for server consolidation efforts and also make excellent virtualization candidates. After analyzing thousands of servers, this report shows the ones that are candidates for consolidation. It can also show public cloud instances that should be shut down because they are currently unused.
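The consolidation report described above can be sketched as a simple filter over utilization data. This is a hypothetical illustration, not Surveyor's actual algorithm; the server names, metrics, and threshold are all assumptions.

```python
# Hypothetical sketch: flag servers whose peak CPU utilization never exceeds
# a consolidation threshold. Names, samples, and the 20% cutoff are illustrative.
THRESHOLD_PCT = 20  # peak CPU % below which a server is a consolidation candidate

# Daily peak CPU utilization (%) per server over the analysis window
peak_cpu = {
    "web-01": [12, 9, 15, 11],
    "db-01": [78, 85, 90, 74],
    "app-07": [5, 4, 6, 3],
}

# A server qualifies only if even its busiest day stayed under the threshold
candidates = sorted(
    name for name, peaks in peak_cpu.items() if max(peaks) < THRESHOLD_PCT
)
print(candidates)  # → ['app-07', 'web-01']
```

Filtering on the peak rather than the average avoids recommending consolidation for servers that are quiet most days but occasionally busy.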
Many organizations have alerts that fire when file system utilization reaches a pre-defined threshold. Whether the threshold is defined in gigabytes or as a percentage, the alert does not fire until that threshold is breached. With this view, TeamQuest Surveyor shows which file systems will cause a problem in the future because of high growth, allowing you to address problems proactively before they occur.
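One common way to turn threshold alerting into a forward-looking view is to fit a linear trend to recent usage and project when it crosses capacity. This is a minimal sketch under that assumption; the numbers are illustrative and this is not Surveyor's actual forecasting method.

```python
# Hypothetical sketch: fit a least-squares linear trend to daily file-system
# usage and estimate the days remaining until a capacity threshold is breached.

def days_until_full(daily_usage_gb, capacity_gb):
    """Return estimated days until usage reaches capacity, or None if the
    file system is flat or shrinking (no breach predicted by the trend)."""
    n = len(daily_usage_gb)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(daily_usage_gb) / n
    # Least-squares slope in GB/day
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in zip(xs, daily_usage_gb))
    slope /= sum((x - x_mean) ** 2 for x in xs)
    if slope <= 0:
        return None  # no growth: a plain threshold alert is sufficient
    intercept = y_mean - slope * x_mean
    current = slope * (n - 1) + intercept  # trend value for the latest day
    return (capacity_gb - current) / slope

usage = [400, 410, 421, 429, 440]  # GB used, one sample per day (illustrative)
print(round(days_until_full(usage, 500)))  # → 6
```

A trend-based estimate like this lets the view rank file systems by time-to-breach, so the fastest-growing ones surface first even if none has hit its threshold yet.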
When troubleshooting a new issue from yesterday, for example, one of the first questions a performance analyst asks is "was yesterday a normal processing day?" The historical comparison view in Surveyor answers this question by plotting resource utilization on the server in question against historical norms. This view shows you that yesterday was not normal, focusing your attention on where to investigate: what made yesterday a busier-than-normal day?
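A historical comparison of this kind can be sketched by checking each interval of yesterday's data against a band derived from prior samples, for example mean plus or minus two standard deviations. The hours, samples, and band width below are illustrative assumptions, not Surveyor's actual statistics.

```python
# Hypothetical sketch: flag the hours where yesterday's CPU utilization fell
# outside the historical norm (mean +/- 2 standard deviations for that hour).
from statistics import mean, stdev

# CPU % for each hour across the previous four comparable days (illustrative)
history = {9: [30, 34, 28, 32], 10: [45, 50, 47, 44], 11: [40, 42, 38, 41]}
yesterday = {9: 31, 10: 72, 11: 39}

abnormal = []
for hour, samples in history.items():
    mu, sigma = mean(samples), stdev(samples)
    if abs(yesterday[hour] - mu) > 2 * sigma:
        abnormal.append(hour)

print(abnormal)  # → [10]  (the only hour outside its historical band)
```

Only the hours that break out of the band need investigation, which is exactly the narrowing of focus the view provides.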
Many organizations have alerts in place to determine when CPU usage reaches a certain threshold. Some organizations even have a level of prediction or forecasting in place to detect when a problem will occur. Most forecasting algorithms either ignore anomalies and outliers or average them in. Ignoring anomalies is valid when looking at long-term trends, but what is causing these spikes in processing, and how often do they occur? The attached view is a good place to start: it analyzes all systems in an environment and detects which ones have volatile processing.
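One simple way to detect volatile processing is to rank systems by the coefficient of variation (standard deviation divided by mean) of their utilization samples. This is a hedged sketch with illustrative data and an assumed cutoff, not Surveyor's actual volatility metric.

```python
# Hypothetical sketch: rank systems by processing volatility using the
# coefficient of variation of their CPU samples. Data and the 0.5 cutoff
# are illustrative assumptions.
from statistics import mean, stdev

cpu_samples = {
    "batch-01": [10, 80, 12, 75, 9],   # spiky workload
    "web-02": [40, 42, 41, 39, 43],    # steady workload
}

# Coefficient of variation normalizes spread by the mean, so a busy-but-steady
# system does not look more volatile than a quiet-but-spiky one.
volatility = {
    name: stdev(samples) / mean(samples)
    for name, samples in cpu_samples.items()
}
volatile = sorted(
    (name for name, cv in volatility.items() if cv > 0.5),
    key=lambda n: -volatility[n],
)
print(volatile)  # → ['batch-01']
```

Systems flagged this way are the ones whose spikes a trend-only forecast would smooth over, making them the natural starting point for the investigation the view supports.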