The Now And Future Of Digital Performance Management
By James Urquhart
When I joined SOASTA, my key goal was to learn as much as I could about the flow of systems monitoring data (especially application monitoring data) and operations data among the agents that define, build, test, operate and consume distributed applications, especially in an enterprise setting. Today, we call that digital performance management.
It’s been three weeks (well, five, but I had two weeks of previously scheduled vacation in there) and I’ve already learned a lot about the landscape in which I’ll be working. Certainly there are huge trends in application and digital performance management to be explored, related to new correlations and corroborations between fairly disparate data sources. However, there are also huge implications for how businesses will make decisions about applying software to the way customers and suppliers interact with them.
Let’s Start With The Now
Traditionally, systems monitoring has been the purview of infrastructure management, the teams that dealt directly with computing, networking, and storage hardware and the software that directly controlled it. As such, there was little in systems data that shed light on much more than application availability and performance.
Was a server accessible? Was CPU utilization high? Was there enough memory for the JVM (Java virtual machine)? Those were the types of questions easily answered by the early SNMP and agent-based monitoring tools. They excelled at telling you whether your infrastructure systems were configured correctly, or whether your load was distributed well across those systems. They sucked at giving you any insight into how well your application was designed, built and tested.
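At their core, those early availability questions reduce to simple reachability probes. A minimal sketch in Python of the "was a server accessible?" check (the host and port here are purely illustrative, not any particular tool's API):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    This is the crudest form of availability monitoring: it says nothing
    about how the application behaves, only that something is listening.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A monitoring agent would run a probe like this on a schedule and raise an alert on repeated failures; richer checks (CPU, JVM heap) came from SNMP OIDs or in-process agents rather than sockets.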
Thus, slowly but surely, monitoring began to climb up the execution stack. First we saw much more detailed execution information reflected in the Java virtual machine (which evolved into JMX), then increased use of call stacks in a variety of languages to break down the application itself (which evolved into Application Performance Management tools).
Today, with the addition of metering technologies like Boomerang in the browser and on mobile apps, there are very few aspects of systems performance that we can’t measure, store, report and analyze. Of course there will be new technologies, and even new architectures (e.g. microservices) that challenge the status quo, but evolving today’s monitoring to meet new needs is relatively “doable”.
Not that monitoring is perfect today, by any means, but for those of us who lived through client-server in the ‘90s, it’s pretty amazing what smart operators are doing today.
Systems Data As Business Data
With web and application performance monitoring quickly becoming a standard product category (nexus anyone?), a wide variety of vendors in this and related spaces are seeing a new and very exciting opportunity, one that opens up whole new classes of customers: companies are beginning to find valuable business insights in systems performance data.
Page load times, component render times, heat maps, segmentation, and a handful of other statistics already captured by modern browsers and application platforms can be collected and analyzed to determine how an application’s user experience (including performance) affects things like shopping cart abandonment and conversion (or the percentage of site visitors that actually complete a desired transaction, like purchasing something).
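The simplest version of this analysis is to bucket sessions by page load time and compare conversion rates across buckets. A hypothetical sketch (the session format is an assumption for illustration, not any product's schema):

```python
from collections import defaultdict

def conversion_by_load_bucket(sessions, bucket_ms=1000):
    """Compute conversion rate per load-time bucket.

    sessions: iterable of (load_time_ms, converted) pairs,
              where converted is a bool (did the visitor complete
              the desired transaction?).
    Returns {bucket_start_ms: conversion_rate}.
    """
    totals = defaultdict(lambda: [0, 0])  # bucket -> [conversions, sessions]
    for load_ms, converted in sessions:
        bucket = (load_ms // bucket_ms) * bucket_ms
        totals[bucket][1] += 1
        if converted:
            totals[bucket][0] += 1
    return {b: conv / n for b, (conv, n) in totals.items()}
```

Plotting the output typically shows conversion falling as load time climbs, which is exactly the kind of business insight hiding in data the browser already captures.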
Add to this data provided directly from the application itself, through mechanisms added by developers, and you can start to get a detailed picture of the business aspects of an application’s performance. Did one product drive customers to close a sale faster than others? Are there landing pages on a media site that result in longer visits and more page views than others?
Furthermore, analytics are rapidly being added to these products to give visibility into where an application team should focus its attention when addressing performance. For example, we can show customers which pages’ performance has the greatest effect on conversion rate, and thus should get the most attention. Sometimes, customers are surprised that their slowest pages aren’t the ones that should get that attention.
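One simple way to rank pages this way is to compare, per page, the conversion rate of its faster-than-median sessions against its slower-than-median sessions: a large gap means that page's conversion is sensitive to performance, even if the page isn't slow in absolute terms. A hedged sketch of that idea (the data shape and function name are assumptions, not SOASTA's actual method):

```python
from statistics import median

def performance_sensitivity(sessions_by_page):
    """Rank pages by how much conversion drops when they load slowly.

    sessions_by_page: {page: [(load_time_ms, converted), ...]}
    For each page, split sessions at that page's median load time and
    take the difference in conversion rate (fast minus slow).
    Returns [(page, conversion_gap), ...] sorted largest gap first.
    """
    results = []
    for page, sessions in sessions_by_page.items():
        cut = median(t for t, _ in sessions)
        fast = [c for t, c in sessions if t <= cut]
        slow = [c for t, c in sessions if t > cut]
        if not fast or not slow:
            continue  # not enough spread to compare
        gap = sum(fast) / len(fast) - sum(slow) / len(slow)
        results.append((page, gap))
    return sorted(results, key=lambda x: x[1], reverse=True)
```

Note that the top of this ranking need not be the slowest page; a fast page whose conversion collapses under even modest slowdowns can outrank it, which matches the surprise customers sometimes feel.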
The Future: Interdisciplinary Data
If you extend the trend of finding value in systems data for those that depend the most on those systems, then the question quickly becomes how systems data can be combined with financial, brand/marketing and operational data to create entirely new insights. What happens when we correlate events across CRM, finance and commerce systems? Can we evaluate performance of a customer interaction as a whole if it includes both systems and human elements? Are there untapped uses of systems data that correlate to inventory decisions, arbitrage choices or brand value?
James Urquhart is Senior Vice President, Performance Analytics at SOASTA, Inc. Named one of the ten most influential people in cloud computing by both the MIT Technology Review and the Huffington Post, and a former contributing author to GigaOm and CNET, James brings a deep understanding of disruptive technologies and the business opportunities they afford. James is a seasoned technologist with more than 20 years of experience in distributed systems development and deployment, focusing on service-oriented architectures, cloud computing, and automation. Prior to joining SOASTA, he held leadership roles at Dell, Enstratius, Cisco, Cassatt, Sun and Forte Software.