Application Availability

Application availability is the extent to which an application is operational, functional and usable for fulfilling a user’s or business’s requirements. The measure is used to analyze an application’s overall performance and to quantify how reliably it performs as required.

Application availability is usually tracked by application monitoring and management software, which application administrators use to assess an application’s ability to deliver the required functionality. Typically, application availability is measured through application-specific key performance indicators (KPIs). These can include overall or timed application uptime and downtime, the number of transactions completed, application responsiveness, error counts and other availability-related metrics. Application scalability, reliability, recoverability and fault tolerance may also be considered when weighing application availability.
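
As an illustration of how such a KPI can be derived, the sketch below computes an availability percentage for a reporting window from recorded outage intervals. The interval shape and the measureAvailability helper are hypothetical, not taken from any particular monitoring product.

```typescript
// A minimal sketch of an availability KPI calculation.
interface OutageInterval {
  start: Date; // when the application became unavailable
  end: Date;   // when service was restored
}

/** Availability as a percentage of a reporting window, e.g. a month. */
function measureAvailability(
  windowStart: Date,
  windowEnd: Date,
  outages: OutageInterval[],
): number {
  const windowMs = windowEnd.getTime() - windowStart.getTime();
  const downtimeMs = outages.reduce(
    (total, o) => total + (o.end.getTime() - o.start.getTime()),
    0,
  );
  return ((windowMs - downtimeMs) / windowMs) * 100;
}

// Example: two short outages in a 30-day window.
const start = new Date("2024-01-01T00:00:00Z");
const end = new Date("2024-01-31T00:00:00Z");
const outages: OutageInterval[] = [
  { start: new Date("2024-01-10T02:00:00Z"), end: new Date("2024-01-10T02:45:00Z") },
  { start: new Date("2024-01-22T14:00:00Z"), end: new Date("2024-01-22T14:20:00Z") },
];
console.log(`Availability: ${measureAvailability(start, end, outages).toFixed(3)}%`);
```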

Application performance management

In the fields of information technology and systems management, application performance management (APM) is the monitoring and management of the performance and availability of software applications. APM strives to detect and diagnose complex application performance problems in order to maintain an expected level of service. APM is “the translation of IT metrics into business meaning.”

The APM conceptual framework

Applications themselves are becoming increasingly difficult to manage as they move toward highly distributed, multi-tier, multi-element constructs that in many cases rely on application development frameworks such as .NET or Java. The APM Conceptual Framework was designed to help prioritize what to focus on first for a quick implementation and an overall understanding of the five-dimensional APM model. The framework outlines three areas of focus for each dimension and describes their potential benefits. The higher-priority dimensions are referenced as “primary” below, with the lower-priority dimensions referenced as “secondary.”

End user experience (primary)

Measuring the transit of traffic from user request to data and back again is part of capturing the end-user experience (EUE). The outcome of this measurement is referred to as real-time application monitoring (also known as top-down monitoring), which has two components: passive and active. Passive monitoring is usually an agentless appliance implemented using network port mirroring. A key feature to consider in such a solution is the ability to support multiple-protocol analytics (e.g., XML, SQL, PHP), since most companies have more than just web-based applications to support. Active monitoring, on the other hand, consists of synthetic probes and web robots predefined to report system availability and business transactions. Active monitoring is a good complement to passive monitoring; together, the two components help provide visibility into application health during off-peak hours, when transaction volume is low.
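
To make the active side concrete, here is a minimal sketch of a synthetic probe, assuming a Node.js runtime (version 18 or later) with the global fetch API; the health-check URL and the result shape are hypothetical.

```typescript
// A minimal sketch of an active (synthetic) availability probe.
interface ProbeResult {
  url: string;
  available: boolean;
  latencyMs: number;
  status?: number;
  checkedAt: Date;
}

async function probe(url: string, timeoutMs = 5000): Promise<ProbeResult> {
  const started = Date.now();
  try {
    const response = await fetch(url, {
      signal: AbortSignal.timeout(timeoutMs), // abort slow requests
    });
    return {
      url,
      available: response.ok,
      latencyMs: Date.now() - started,
      status: response.status,
      checkedAt: new Date(),
    };
  } catch {
    // A network error or timeout counts as unavailable.
    return { url, available: false, latencyMs: Date.now() - started, checkedAt: new Date() };
  }
}

// Run the probe on a fixed schedule, so availability is still
// measured during off-peak hours when real traffic is low.
setInterval(async () => {
  const result = await probe("https://example.com/health");
  console.log(result);
}, 60_000);
```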

User experience management (UEM) is a subcategory that emerged from the EUE dimension to monitor the behavioral context of the user. UEM, as practiced today, goes beyond availability to capture the latencies and inconsistencies human beings encounter as they interact with applications and other services. UEM is usually agent-based and may include JavaScript injection to monitor at the end-user device. UEM is considered another facet of real-time application monitoring.
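
The sketch below shows the kind of measurement an injected monitoring snippet might take in the browser, using the standard PerformanceNavigationTiming API; the /uem/collect beacon endpoint is hypothetical.

```typescript
// A minimal sketch of an injected UEM snippet running in the browser.
window.addEventListener("load", () => {
  const [nav] = performance.getEntriesByType(
    "navigation",
  ) as PerformanceNavigationTiming[];
  if (!nav) return;

  const sample = {
    page: location.pathname,
    // Time to first byte: back-end plus network latency.
    ttfbMs: nav.responseStart - nav.requestStart,
    // Full page load as experienced by the user.
    loadMs: nav.loadEventEnd - nav.startTime,
    domInteractiveMs: nav.domInteractive - nav.startTime,
  };

  // sendBeacon survives page unloads better than fetch for telemetry.
  navigator.sendBeacon("/uem/collect", JSON.stringify(sample));
});
```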

Runtime application architecture (secondary)

Application Discovery and Dependency Mapping (ADDM) solutions exist to automate the process of mapping transactions and applications to the underlying infrastructure components. When preparing to implement a runtime application architecture, it is necessary to ensure that up/down monitoring is in place for all nodes and servers within the environment (also known as bottom-up monitoring). This lays the foundation for event correlation and provides the basis for a general understanding of how network topologies interact with application architectures.
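
A minimal sketch of such bottom-up up/down checks over a hand-written dependency map follows, assuming a Node.js runtime. In practice an ADDM tool would discover the topology automatically; the application name, hosts, ports and checkNode helper here are hypothetical.

```typescript
import { connect } from "node:net";

// Application -> infrastructure nodes it depends on (host:port pairs).
const dependencyMap: Record<string, string[]> = {
  "order-service": ["db01:5432", "cache01:6379", "mq01:5672"],
};

/** TCP-level up/down check: can we open a connection to the node? */
function checkNode(host: string, port: number, timeoutMs = 3000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = connect({ host, port, timeout: timeoutMs });
    socket.once("connect", () => { socket.destroy(); resolve(true); });
    socket.once("error", () => resolve(false));
    socket.once("timeout", () => { socket.destroy(); resolve(false); });
  });
}

// Check every node an application depends on; a down node here is the
// raw event that later correlation ties to user-visible symptoms.
async function checkApplication(app: string): Promise<void> {
  for (const node of dependencyMap[app] ?? []) {
    const [host, portText] = node.split(":");
    const up = await checkNode(host, Number(portText));
    console.log(`${app} -> ${node}: ${up ? "UP" : "DOWN"}`);
  }
}

checkApplication("order-service");
```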

Business transaction (primary)

Focus on user-defined transactions or the URL page definitions that have meaning to the business community. For example, if there are 200 to 300 unique page definitions for a given application, group them into 8 to 12 high-level categories. This allows for meaningful SLA reports and provides trending information on application performance from a business perspective: start with broad categories and refine them over time. For a deeper understanding, see Business transaction management.
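
One simple way to implement that grouping is to match URL paths against a short list of business-level patterns, as in the sketch below; the category names and regular expressions are hypothetical.

```typescript
// A minimal sketch of grouping raw URL page definitions into a small
// set of business-level transaction categories.
const categories: Array<{ name: string; pattern: RegExp }> = [
  { name: "Checkout",       pattern: /^\/cart|^\/checkout/ },
  { name: "Search",         pattern: /^\/search/ },
  { name: "Account",        pattern: /^\/account|^\/login|^\/register/ },
  { name: "Product browse", pattern: /^\/product/ },
];

function categorize(urlPath: string): string {
  const match = categories.find((c) => c.pattern.test(urlPath));
  return match ? match.name : "Other"; // refine "Other" over time
}

// SLA reporting can then aggregate response times per category
// rather than per unique URL.
console.log(categorize("/checkout/payment")); // "Checkout"
console.log(categorize("/product/12345"));    // "Product browse"
```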

Deep dive component monitoring (secondary)

Deep dive component monitoring (DDCM) requires an agent installation and is generally targeted at middleware, focusing on web, application and messaging servers. It should provide a real-time view of the J2EE and .NET stacks, tying them back to the user-defined business transactions. A robust solution shows a clear path from code execution (e.g., Spring and Struts) to the URL rendered, and finally to the user request. Since DDCM is closely related to the second dimension in the APM model, most products in this field also provide application discovery and dependency mapping (ADDM) as part of their offering.
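
The sketch below illustrates the underlying idea of tying a timed component call back to a business transaction name. Real DDCM agents instrument frameworks automatically (for example, through bytecode weaving or profiling APIs); the manual timed wrapper and its console reporting are purely illustrative.

```typescript
// A minimal sketch of agent-style component timing tagged with a
// business transaction name.
function timed<T>(transaction: string, component: string, fn: () => T): T {
  const started = performance.now();
  try {
    return fn();
  } finally {
    const elapsedMs = performance.now() - started;
    // A real agent would stream this to the APM back end.
    console.log(`[${transaction}] ${component}: ${elapsedMs.toFixed(1)} ms`);
  }
}

// Example: a controller method measured under the "Checkout" transaction.
const total = timed("Checkout", "OrderController.calculateTotal", () => {
  return [19.99, 4.5].reduce((sum, price) => sum + price, 0);
});
console.log(`Order total: ${total}`);
```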

Analytics/reporting (primary)

It is important to arrive at a common set of metrics to collect and report on for each application, and then to standardize on a common view of how to present the application performance data. Collecting raw data from the other tool sets across the APM model provides flexibility in application reporting and allows a wide variety of performance questions to be answered as they arise, despite the different platforms each application may be running on. Too much information is overwhelming, so keep reports simple or they will not be used.
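
As an illustration, the sketch below normalizes raw samples from different tool sets into one common metric record and rolls them up into a small, report-ready summary; the field names and the summarize helper are hypothetical.

```typescript
// A minimal sketch of a common metric record that samples from every
// tool set in the APM model could be normalized into.
interface MetricSample {
  application: string;   // which application produced the sample
  metric: "uptime_pct" | "response_ms" | "error_count" | "transactions";
  value: number;
  collectedAt: Date;
  source: string;        // which tool set supplied the raw data
}

/** Roll raw samples up into the small summary a simple report needs. */
function summarize(samples: MetricSample[], metric: MetricSample["metric"]) {
  const values = samples.filter((s) => s.metric === metric).map((s) => s.value);
  if (values.length === 0) return undefined;
  const sum = values.reduce((a, b) => a + b, 0);
  return { count: values.length, avg: sum / values.length, max: Math.max(...values) };
}

const samples: MetricSample[] = [
  { application: "orders", metric: "response_ms", value: 180, collectedAt: new Date(), source: "EUE" },
  { application: "orders", metric: "response_ms", value: 240, collectedAt: new Date(), source: "DDCM" },
];
console.log(summarize(samples, "response_ms")); // { count: 2, avg: 210, max: 240 }
```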