As we look ahead to the 2020 presidential campaign, I find myself looking back to previous campaigns to help make sense of these unusual political times; for some reason the 1992 presidential campaign springs to mind (go ahead and analyze why if you want).

Who can forget the famous phrase coined during that campaign: It’s the Economy, Stupid! While crude, its point was to focus attention on what matters most to the electorate – in this case, the economy, which affects us all in the most direct of ways.

It’s the USER, Stupid!

According to the recent Gartner Digital Experience Monitoring Market Guide, improving the end-user experience is a strategic part of digital transformation. The end user (customer and/or employee) is at the heart of every digital business. User experience directly impacts the bottom line, and unfortunately, most of the time, the impact is negative: it takes 12 positive user experiences to make up for one unresolved negative experience, while IT outages cost the U.S. $700B a year, largely due to lost employee productivity.

Managing user experience is therefore critical for any business. But can businesses today truly control user experience? What tools do they have at their disposal to understand what their customers are really experiencing? Can they ensure that services are reachable, available around the clock, and performing reliably in all regions? Can they verify that employees have reliable access to all the systems and apps their roles require?

I am afraid that, for the most part, the answer is no, and there are two main reasons for this. The first is a loss of control. Although Cloud, SaaS, and the Internet have been incredible digital business enablers, they have also created major visibility gaps for the modern enterprise. Gartner states that “improving the end-user experience is a strategic part of digital transformation, yet I&O (Infrastructure & Operations teams) is losing direct control of infrastructure and applications.”

The second is that the majority of monitoring tools are incompatible with the new digital world.

I’ve been in the enterprise monitoring space long enough to have seen the evolution of this market and how it deals with the ever-changing nature of the enterprise industry. Allow me to analyze what has changed and then look back at some not-so-distant history.

The three catalysts of change: Cloud, Mobile, and Analytics

On the Enterprise side, three significant catalysts continue to enable companies of all sizes and verticals to innovate, build and grow revenue in ways never seen before. Cloud, Mobile, and Analytics are allowing businesses to invent and transform the way they attract and serve customers while motivating and enabling their employees. New digital businesses are created, and existing businesses are transformed and disrupted. It’s truly fascinating to see.

What does this mean in practical terms? It means that any digital business can reach customers on a global basis. It means that an employer can access a global talent pool and hire people virtually anywhere in the world. It means that services are offered around the clock, every day of the year, thanks to the public cloud and SaaS. To summarize:

  • Customers are globally distributed
  • Employees are increasingly remote and mobile
  • Applications and services are running in multiple clouds and offered globally

Still stuck in the 90s

Unfortunately, most monitoring technology is still stuck in the 90s, a time when two parallel worlds existed side by side. We had the enterprise world, characterized by office-based employees, centralized data center-based apps and systems, and a clean security perimeter. Everything was under the direct control of the IT department.

In parallel, we had the Internet world and the emerging but still-in-its-infancy ecommerce world. Internet-based services were not associated with anything that was considered business-critical. If a website was down, company performance was not irreversibly affected since not a lot of business was transacted over the Internet. Therefore, digital customer experience and digital experience monitoring were not things that companies needed to care a lot about. Additionally, if an employee had a problem accessing a service, an IT person would be right there in the office helping them; the notion of remote employees was uncommon.

These two worlds coexisted into the early 2000s, and monitoring vendors kept pumping out “powerful” monitoring tools that checked the health of the infrastructure, the Local Area Network, the various databases, the routers and gateways, and the VMs running all the apps in the datacenter. As a consequence, IT had tools coming out its ears and was spending hundreds of millions every year.

Heavy, appliance-based packet analyzers were (and still are!) deployed in every key network segment and every branch office. Each appliance would store multiple TBs of data for historical analysis and troubleshooting, constantly running out of capacity and requiring regular storage upgrades. Each department had its favorite monitoring tool that helped it prove its innocence when all the IT department heads were suddenly pulled into the war room to deal with an outage. When employees complained about things running slow, each tool would nevertheless show that everything was running well, leaving the IT leaders scratching their heads. Justifying millions of dollars’ worth of IT spend on monitoring tech that was essentially useless in preventing outages became a commonplace farce.

Changing Times

Then…things got even more interesting. Applications started rapidly moving to the cloud, common business apps were pulled out of the data center and into the provider’s SaaS environment, and companies began to have a much more distributed workforce in need of “local access-like” performance. Last but not least, ecommerce and online business was going gangbusters.

Where are we today?

So where are we today? The two worlds (Internet and enterprise) have essentially merged. When I am at home shopping online on Amazon, I use a browser to access a service in the cloud via the Internet. When I am at home working, I access my email in the O365 cloud via the Internet. There is effectively no difference. There are exceptions of course for certain apps where MPLS networks are used (e.g. VoIP) as enterprises need the SLA protection, but for the most part, our consumer and employee worlds have come together.

However, enterprise monitoring technology has NOT evolved to deal with this. The majority of IT Infrastructure Monitoring (ITIM), Application Performance Monitoring (APM), and Network Performance Monitoring and Diagnostics (NPMD) tools still rely on heavy hardware-based instrumentation, complex agent telemetry, and on-prem management. As such, they require huge IT investments to deploy and maintain. Is this huge investment even worth it? These legacy tools are useless at monitoring SaaS applications, complex service architectures hosted in multiple public clouds, and third-party providers (such as ISPs, wireless providers, CDNs, and DNS providers) – services the modern enterprise relies on heavily to deliver its services to the customer and employee. The worst thing is that the majority of tools ignore what matters the most: the USER!

Let’s sum up:

  • Enterprises are investing heavily in digital technology to grow their customer base and improve employee productivity and satisfaction, thereby dramatically elevating the importance and value of user experience.
  • The vast majority of apps and services are Cloud and SaaS based and are delivered to customers and employees via the Internet, relying on third-party services and providers. This move has created many blind spots for IT as they have lost control over apps and infrastructure.
  • Legacy monitoring tools have not evolved with the times; they are typically incompatible with the modern digital world – and worst of all, they are not focused on the user.

We have a problem…

So what solutions are out there?

Gartner uses the umbrella term Digital Experience Monitoring to describe technologies that focus on the user. The recent Gartner Digital Experience Monitoring Market Guide highlights some of the visibility gaps and challenges confronted by modern digital architectures. These include the loss of control over apps and infrastructure and the inability of traditional monitoring tools to monitor the most important asset of the digital business: the user.

The report discusses the best technologies and monitoring approaches for managing user experience. However, although all the technologies under discussion include some level of user experience monitoring, many are simply repackaged legacy monitoring tools. As an example, most of the APM-centric approaches to end-user experience rely on complex, agent-based telemetry to collect the data and can only monitor the user experience of the one application that is instrumented on the server side. Moreover, in today’s multi-tier, multi-cloud, cloud-native world, instrumenting applications is extremely complex, error-prone, and massively expensive.

Going forward, with increasing numbers of app developers choosing to leverage ‘serverless’ options, the traditional telemetry approach of APM will become obsolete as Digital Experience Monitoring increases its footprint.

Additionally, APM solutions are reactive by design. If an enterprise has spent millions instrumenting a ‘critical’ application, they will monitor user experience using Real User Monitoring. When something goes wrong with a user transaction, they will have lots of data to analyze and they will eventually be able to identify the root cause of the issue. By then, however, it will be too late as the user experience has already been impacted; isn’t prevention the main objective of monitoring? My point is obvious but it’s easy to miss.

So given today’s enterprise reality, customers need a monitoring solution that:

  • Is user-centric and has the necessary breadth and depth to accurately represent today’s customer and employee diversity
  • Can provide proactive insights
  • Does not require a monumental upfront investment by the client to configure and deploy
  • Covers all the visibility gaps created by the loss of control of the app hosting environment (cloud, SaaS)
  • Does not rely on complex, agent-based and hardware-based probe telemetry for collecting data
  • Can monitor the complex modern digital delivery chain (Internet, ISPs, Wireless, Last Mile, DNS, CDN)
  • Offers world-class support services, staffed by people who know what they are doing when it comes to global web-based services
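To make the “agentless, proactive” idea in the list above concrete, here is a minimal sketch of the kind of lightweight synthetic check a user-centric monitoring solution schedules from distributed vantage points. This is purely illustrative Python – the function name and target URL are placeholders, not any vendor’s actual implementation:

```python
import time
import urllib.request


def synthetic_check(url: str, timeout: float = 10.0) -> dict:
    """Run one synthetic availability/latency probe against a URL.

    Returns a small result record; a real platform would run this on a
    schedule from many geographic locations and alert on the aggregate.
    """
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
            body = resp.read()
        elapsed_ms = (time.perf_counter() - start) * 1000
        return {
            "available": 200 <= status < 400,
            "status": status,
            "response_ms": round(elapsed_ms, 1),
            "bytes": len(body),
        }
    except Exception as exc:
        # Unreachable service, DNS failure, timeout, etc.
        return {"available": False, "error": str(exc)}
```

The point of the sketch is the shape of the approach: no agent on the server, no appliance in the branch office – just repeated, user-like probes whose results are compared across regions over time.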

Interestingly, the only technology that meets all the above requirements was conceived and developed to serve the web and Internet business over ten years ago. Its suitability for the modern digital business should not be a surprise: it was built to handle large-scale environments, to monitor third-party services and providers, and, most significantly, to be user-centric by design, making customer experience its focus.

The Most Complete Digital Experience Monitoring Solution: Catchpoint

Looking at the main DEM technologies listed in the Gartner Digital Experience Monitoring Market Guide, there is one vendor with the most complete digital experience monitoring solution: Catchpoint.

More importantly, Catchpoint offers the most proactive user-centric digital experience monitoring solution on the market. Catchpoint clients can start to monitor user experience within minutes without the need to deploy agents, probes, or expensive appliances.

That is because we offer the largest, most diverse, and most distributed monitoring infrastructure in the world. Nothing comes close to the scale of the Catchpoint network with 825 locations all over the globe, allowing customers to proactively monitor the quality of user experience and the reachability, availability, performance, and reliability of their services from anywhere in the world.

Catchpoint was founded by a team of executives who know a thing or two about web services and performance monitoring. Dealing with ineffective monitoring tools during their long, hands-on tenures at DoubleClick and later Google drove them to build a powerful, scalable, user-centric digital experience monitoring platform that now helps some of the world’s most recognizable brands deliver reliable, high-performance services to their customers and employees.

When it comes to truly managing the digital experience of the customer and employee, proactiveness is key. APM, NPMD, and RUM-only approaches are good for troubleshooting after the user experience has already been compromised or the digital service has encountered an issue, but the Catchpoint platform approach, which combines L3 and L7 Synthetic Monitoring, Network Monitoring, Real User Monitoring, and Endpoint Monitoring, is the most effective in managing customer experience, employee experience, and digital services.
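For intuition on the difference between the L3 and L7 synthetic checks mentioned above, here is an illustrative Python sketch (hostnames and ports are placeholders, and this is a conceptual example rather than any product’s code): a network-layer probe times only the TCP connection, while an application-layer probe times a full HTTP transaction, so comparing the two helps separate network problems from application problems.

```python
import socket
import time
import urllib.request


def tcp_connect_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Network-layer (L3/L4) check: time to establish a TCP connection."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000


def http_fetch_ms(url: str, timeout: float = 10.0) -> float:
    """Application-layer (L7) check: time to complete a full HTTP GET."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # include time to download the body
    return (time.perf_counter() - start) * 1000
```

If the TCP connect time is healthy but the full HTTP fetch is slow, the bottleneck is likely the application or its backends; if both are slow, the network path itself is the suspect.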

A final word of caution: not every synthetic monitoring solution out there is the same. Most synthetic monitoring solutions fail to deal with the complexities of modern digital service delivery as outlined in this article, and do an awful job of representing the diversity of customers and employees. Many vendors have settled for a very narrow “synthetics” solution, limited to monitoring nodes in a few public cloud locations, purely out of convenience.

If you want to know more about why Catchpoint is different and how its digital experience monitoring solution can help you manage the digital experience of your customers and employees, and ensure that you get the best performance from your services, get in touch or sign up for a free trial.