Earlier this month, I had the chance to spend a couple of days at the Gartner IT Operations, Strategies & Solutions conference in National Harbor, Md. The conference was a mix of pep talks, how-to sessions, roadmap ideas and networking opportunities for IT operations professionals.
We heard a lot of the usual encouragement for IT to embrace change and innovation and to break down silos, both within the organization and between IT and business units. There was even a whole session on business unit IT, which Gartner says already accounts for 25% of all IT spending today and is expected to grow to 50% by 2020.
But alongside the overarching themes of IT needing to be nimble, adaptable to ever-changing business realities, and ready for a brave new world where business runs 24/7 and the data center is pretty much everywhere (on-premises, cloud, SaaS, colocation, etc.), came some ideas that we talk about at Catchpoint all the time.
For example, Ed Holub, Gartner's IT operations research VP, talked in his Day 1 opening keynote about growing complexity in the IT operational environment, driven by cloud and other new technologies as well as changing business requirements. Capacity requirements driven by business demand, he said, are harder to plan for than ever, and this unpredictability is especially acute in customer-facing systems.
Sounds like a good argument for synthetic monitoring, which can help you get in front of potential problems with your customer-facing systems, regardless of the back-end complexity behind them, preventing service disruptions and outages. You can monitor not just web availability and response times but also dig down into the complex back-end infrastructure that supports modern web applications, including network, server, database, DNS, CDN, and API layers.
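To make the idea concrete, here is a rough, hypothetical sketch of what a single synthetic probe measures, written with only Python's standard library. It is not Catchpoint's product or API, just an illustration of the principle: time the DNS lookup and the full HTTP response separately, then flag any sample that fails or blows its response-time budget. The URL, host, and threshold are placeholder assumptions.

```python
import socket
import time
import urllib.request

def probe(url: str, host: str, port: int = 443) -> dict:
    """One synthetic probe: time DNS resolution, then a full HTTP GET."""
    t0 = time.perf_counter()
    socket.getaddrinfo(host, port)           # DNS lookup timed on its own
    dns_ms = (time.perf_counter() - t0) * 1000.0

    t1 = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        status = resp.status
        resp.read()                          # include the full download
    total_ms = (time.perf_counter() - t1) * 1000.0

    return {"status": status, "dns_ms": dns_ms, "total_ms": total_ms}

def breaches_slo(sample: dict, max_ms: float = 2000.0) -> bool:
    """Flag a probe that failed outright or exceeded the time budget."""
    return sample["status"] != 200 or sample["total_ms"] > max_ms
```

Run `probe("https://www.example.com/", "www.example.com")` from a scheduler at a fixed interval and alert whenever `breaches_slo` comes back `True`; a real synthetic monitoring service does the same thing from many geographic vantage points, with far richer breakdowns (TCP connect, TLS, first byte, CDN and third-party timings).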
Fast-forward a few hours to a presentation on IT monitoring scenarios by Holub's fellow Gartner IT operations research VP Cameron Haight. It turns out synthetic monitoring, once thought to be in decline, has seen a resurgence in recent years, which Haight credited to the rise of digital business. New online business models keep emerging (think Uber, Airbnb, DraftKings), traditional businesses are focusing more on online operations, and the digital business boom is in full swing.
Synthetic monitoring is how these digital businesses stay ahead of the performance problems that can kill their businesses and how they keep an eye on the multiple layers of Internet infrastructure that support delivery of their applications, which in turn are their businesses.
Even for a digital business, synthetic monitoring is just part of an overall monitoring strategy. Haight cautioned attendees that one size doesn’t fit all when it comes to monitoring so they should plan for a multivendor, multimodal architecture. He recommended using microservices or building APIs with Swagger to make integration of monitoring tools less challenging.
In his Day 2 presentation on “Rethinking APM in a Digital Business Era”, Haight reminded attendees that when it came to their APM strategy, they needed to be enablers, not impediments, to digital business. He later spoke of the reality of modern application environments, where changes to applications are continuously integrated and those applications are continuously delivered. IT organizations need to know when those changes are put into production and what effect they have on performance.
This theme came up again in Gartner IT operations research director Hank Marquis' presentation on predicting service outages. Marquis pointed out that 80% of outages are the direct result of configuration changes made to an application or system, yet only 20% of changes cause outages, a new spin on the famous 80-20 rule.
With synthetic monitoring, a digital business can constantly test web applications to make sure that changes made are not causing outages or other performance issues. We would argue that this is the best way to test changes as digital businesses can see the impact on the end user experience before end users are actually impacted.
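One simple way to operationalize that idea is to compare synthetic response-time samples gathered before and after a change. The sketch below is a hypothetical illustration, not anything from Catchpoint or Gartner; the sample values and the 25% tolerance are made-up assumptions.

```python
from statistics import mean

def regression_detected(baseline_ms, post_deploy_ms, tolerance=1.25):
    """Return True if mean post-deploy response time exceeds the
    pre-deploy baseline by more than the allowed tolerance (default +25%)."""
    return mean(post_deploy_ms) > mean(baseline_ms) * tolerance

# Hypothetical synthetic samples (ms) collected before and after a change.
baseline = [210, 195, 205, 220]
post_deploy = [480, 510, 495, 505]
print(regression_detected(baseline, post_deploy))  # prints True
```

Wire a check like this into the deployment pipeline and a bad configuration change shows up in the synthetic data minutes after it ships, before most end users ever feel it.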
As if on cue, Norm Morrison, senior director of performance management and principal architect at omni-channel ecommerce service provider Radial (formerly eBay Enterprise), presented a case study on “Shifting from Firefighting to Fire Prevention with Performance Analytics.” Radial uses Catchpoint Synthetic as a key part of its performance monitoring and analytics strategy for e-commerce clients like Ace Hardware, Dick’s Sporting Goods, GNC, Designer Shoe Warehouse, Major League Baseball, Walgreens and Zales.
Morrison pointed out all the performance errors Radial is able to find with Catchpoint Synthetic, including web layer instability, application instance hopping, firewall and load balancer configuration issues, application concurrency issues, database query performance, intermittent transaction failures, and third-party tag failures. Some oohs and aahs went up from the crowd after Morrison demonstrated how Catchpoint Synthetic pinpointed the causes of slow response times for customers in China, a notoriously difficult place for performance management.
Are you taking advantage of the full capabilities of synthetic monitoring and real user measurement to improve the customer experience of your digital business? If not, sign up for a free trial at http://pages.catchpoint.com/freetrial.html.
Don’t forget to download a copy of our new report “Closing the End-User Experience Gap in APM”, published in collaboration with Gartner and released at the Gartner ITOSS conference.