Monthly Archives: November 2010

The Ultimate Differentiator: Reliability

Every company that monitors its website or application performance focuses on two key metrics: Availability and Speed. However, there is a third metric, Reliability, that is often misunderstood or, in some cases, ignored. Reliability measures the availability, accuracy, and delivery of a service within a time threshold.

Reliability is difficult to define and measure because it is different for each company and service. To simplify, you can think of Reliability as how consistently you deliver the “service”. The question it tries to answer is: can the “service” deliver the same experience, every time, all the time, to all users? The service could be a site, an application, a process, or a series of processes.
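One way to make this definition concrete is as a ratio: of all the checks run against a service, how many were available, accurate, and delivered within the time threshold? The sketch below is illustrative only; the field names and the 2-second threshold are assumptions, not Catchpoint's implementation.

```python
# Reliability as the fraction of checks that pass on ALL three criteria:
# available, accurate (correct content), and delivered within a threshold.
# Field names and thresholds are hypothetical, for illustration only.

def reliability(checks, max_seconds=2.0):
    """checks: list of dicts with 'available', 'accurate', 'seconds' keys."""
    if not checks:
        return 0.0
    ok = sum(
        1 for c in checks
        if c["available"] and c["accurate"] and c["seconds"] <= max_seconds
    )
    return ok / len(checks)

samples = [
    {"available": True,  "accurate": True,  "seconds": 1.2},  # good
    {"available": True,  "accurate": True,  "seconds": 3.5},  # too slow
    {"available": True,  "accurate": False, "seconds": 0.9},  # wrong content
    {"available": False, "accurate": False, "seconds": 0.0},  # down
]
print(reliability(samples))  # -> 0.25: only 1 of 4 checks met every criterion
```

Note how a service can be 75% available and still only 25% reliable: up-but-slow and up-but-wrong both count against it.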
Continue reading

Fast Web Performance Starts with DNS…

You wake up, make coffee, sit down at the computer, and start reading your favorite websites. You fire up your favorite browser, type www.site.com in the address bar, hit enter, and continue sipping that coffee. You wait for the page to load, sipping some more coffee – a few seconds later you get the Google search results for “www.site.com”. You scratch your head, sip some more coffee, and start wondering if you made a typo, but no, it is correct – Google is not correcting your spelling. Obviously you are online; you got the Google search page. By now the coffee is gone, you are frustrated, and you wonder what in the world just happened to your favorite site.

We have all had the above experience, or have helped parents and family who had it and struggled to understand what happened to their favorite site!
As most of you know, computers do not understand “www.site.com”; they rely on DNS resolution to translate the name into an IP address the computer can connect to. DNS is like a giant phone book that maps memorable names to hard-to-remember IP numbers.
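The lookup the browser performs before it can send a single HTTP request can be reproduced in a few lines of Python's standard library. Here `www.example.com` stands in for the hypothetical `www.site.com`; the `except` branch is the failure mode described above, where the name cannot be resolved and the browser falls back to a search.

```python
# The DNS step the browser performs before any HTTP request is sent.
# "www.example.com" is a stand-in for the story's "www.site.com".
import socket

host = "www.example.com"
try:
    ip = socket.gethostbyname(host)
    print(f"{host} resolved to {ip}")
except socket.gaierror as err:
    # Resolution failed: this is when the browser gives up on the name
    # and hands your typed "URL" to a search engine instead.
    print(f"DNS resolution failed for {host}: {err}")
```

If this step is slow or broken, nothing else about the page's performance matters: the connection never starts.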
Continue reading

Monitoring 101: Peeling The Onion.

I am often asked by customers and prospects: “What should we be monitoring?” This is a billion-dollar question, and it seems like everyone has their own answer. I have seen different approaches, some better than others.

In this blog post I want to share with you what I think is a “good” methodology for monitoring. To illustrate it I will use a simple webpage, but you can easily apply it to web-based applications.

Before we talk about what to monitor, let’s quickly cover why you should monitor in the first place:

  • Availability – Is your site or application up and running?
  • Speed – Is the site or application operating at the desired speed? (This is not about optimization; it is about whether it is running as fast as it is supposed to.)
  • Reliability or Integrity – Great; we accessed it, and it was fast. Now, is it delivering what is intended and working as it is supposed to?
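The three questions above can be folded into a single synthetic check. The sketch below is a toy, not a product feature: the 2-second threshold and the “Welcome” content marker are assumptions you would replace with your own.

```python
# One synthetic check answering the three monitoring questions above.
# The threshold and expected_text are illustrative assumptions.

def evaluate(status_code, elapsed_seconds, body,
             max_seconds=2.0, expected_text="Welcome"):
    return {
        "available": status_code == 200,         # is it up and running?
        "fast": elapsed_seconds <= max_seconds,  # is it fast enough?
        "reliable": expected_text in body,       # is it serving what it should?
    }

result = evaluate(200, 0.8, "<html><body>Welcome!</body></html>")
print(result)  # {'available': True, 'fast': True, 'reliable': True}
```

Feeding it the measured status code, response time, and body of each probe gives a pass/fail verdict on all three axes at once.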

In a webpage there are multiple components that determine availability, speed and integrity. If we were to dissect a webpage we would see the following:

  • Primary URL of the page – the URL the user has to type/click to access the webpage.
  • HTML response from the server – this is essentially what the browser will render.
  • External JavaScript and CSS files – these will build the display and the functionality of the page.
  • External Objects – images, ads, beacons and widgets. All different web technologies, but each could impact your webpage.

All of the above rely on HTTP requests to one or more hosts, and the browser executing the HTTP responses properly. If we were to analyze the loading of the webpage we would see all these requests being issued, answered, and executed – and some of them will have a major impact while others might be very limited. So how do we go about monitoring this rich ecosystem?
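The dissection above can be automated: parse the HTML the server returns and list every external request it will trigger, each one a candidate for monitoring. This is a minimal sketch using Python's standard `html.parser`; real pages (dynamically injected scripts, CSS `@import`s, and so on) need more care.

```python
# List the external requests a page's HTML will trigger: scripts,
# stylesheets, and images. A sketch; real pages need a fuller parser.
from html.parser import HTMLParser

class ResourceLister(HTMLParser):
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and "src" in attrs:
            self.resources.append(("javascript", attrs["src"]))
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            self.resources.append(("css", attrs.get("href")))
        elif tag == "img" and "src" in attrs:
            self.resources.append(("image", attrs["src"]))

page = """<html><head>
<link rel="stylesheet" href="/main.css">
<script src="/app.js"></script>
</head><body><img src="/logo.png"></body></html>"""

parser = ResourceLister()
parser.feed(page)
for kind, url in parser.resources:
    print(kind, url)  # css /main.css, javascript /app.js, image /logo.png
```

Every URL this turns up is an HTTP request your users' browsers will make, and any one of them can drag down availability, speed, or integrity.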
Continue reading

Web Performance Optimization Applied, A Customer Success Story.

At Catchpoint, we work very closely with our clients and love hearing how they use the system and how it helps them. Recently we heard from Kelsey at WhippleHill Communications.

Some background:

We started working with WhippleHill back in early October. Besides providing Web Performance Monitoring, we also provided one-on-one Web Performance Optimization consultations, where we shared our experience in Operations, Monitoring, and Web Performance Optimization.

So we rolled up our sleeves, monitored, analyzed and made some recommendations.

Here is what Kelsey had to say:

“Catchpoint is really well put together and easy to use. It has taken the guesswork out of performance tuning. Since we’ve started using Catchpoint it’s been very easy to measure the impact of changes made to our application code and hosting infrastructure. The ability to run tests and collect metrics from many different ISPs and locations gives us real-world insight that would otherwise be impossible to get. The way the user interface is set up makes it real easy to spot a problem and drill in deeper to get the details. Of equal importance to the product itself is the willingness of the Catchpoint team to share their expertise. In spending time reviewing performance waterfalls with Catchpoint, Mehdi and his team were able to point out problems and help us focus in on the changes that would make the most impact.

Here are my two favorite Catchpoint stories:

1. Performance increase on one of our home pages after hooking up Catchpoint and making recommended changes.

Impact of Website Performance Optimization

2. Impact of an issue with our CDN… without Catchpoint we would not have been aware of this, and our users would have suffered poor performance for a longer period of time.

Thank you Catchpoint”

Mehdi – Catchpoint