Category: Web performance concepts

Effective Web Performance: Positively Managing Performance Issues

The moment a Web site goes live, its publishers lose control of its performance.

When I say lose control of the performance, I mean that despite everything that has been done to ensure scalability and capacity, the Web is an infrastructure that is inherently beyond anyone’s direct ability to manage.

This is something that needs to be accepted. While the datacenter is the only part of the application/infrastructure/network that a Web site’s owners can manage directly, a company has to accept that the real datacenter is the Internet. Not a datacenter that is on the Internet; the Internet as the datacenter.

Now that your head is spinning, let’s step back and consider this idea for a minute. The whole concept of the Internet being the datacenter makes operations and IT folks very uncomfortable. Why? There is no way for one company to manage the Internet. As a result, the general perspective is that the Internet can’t be trusted, and all that can be done is manage what can be managed directly.

Ignoring the Internet this way allows many organizations to leave it out of their application and performance planning entirely. They will measure and monitor, and they may even employ third parties to help improve performance. But when the shiny exterior is peeled back, it’s pretty clear that these organizations have built their entire performance culture on the assumption that if a problem exists on the Internet, there is nothing they can do to fix it.

This may be effectively true. But it is not a positive way to ensure effective Web performance.

Having a what-if, emergency response plan in place is never a bad idea. If a problem appears on the Internet, and it affects your Web site, what are you going to do about it? Whine and moan and point fingers? Or take actions that effectively and clearly communicate to customers the steps you are taking to make things right?
Wait. Managing the Internet through customer communication?

I argue that besides working feverishly behind the scenes to resolve the problem, customer communication is the next most critical component of any Web performance issue management plan.

Web performance issue management plan. You have one, don’t you?
Well, when you get around to it, here are some concepts that should be built into the plan.

Effectively monitor your site

How can measurement and monitoring be part of issue management? Well, isn’t it always good policy to detect and begin investigating problems before your customers do?

Key to the measurement plan is monitoring the parts of your application that customers actually use. A homepage test will not give you vital information on issues with your authentication process; relying on it alone is like saying the car starts while ignoring the four flat tires.

If you aren’t effectively monitoring your site, your business is at risk.
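To make this concrete, here is a minimal sketch of what transaction-level synthetic monitoring might look like. The step names and the two-second threshold are illustrative assumptions; in practice each step callable would wrap a real HTTP fetch against your own site.

```python
import time

def run_transaction(steps, slow_threshold=2.0):
    """Run each step of a monitored business process in order.

    `steps` is a list of (name, callable) pairs; each callable performs
    one step (e.g. load login page, submit credentials) and raises on
    failure. Returns per-step results, so a failure in the
    authentication flow is caught even when the homepage looks healthy.
    """
    results = []
    for name, action in steps:
        start = time.perf_counter()
        try:
            action()
            ok = True
        except Exception:
            ok = False
        elapsed = time.perf_counter() - start
        results.append({
            "step": name,
            "ok": ok,
            "seconds": round(elapsed, 3),
            "slow": elapsed > slow_threshold,
        })
        if not ok:
            break  # later steps depend on this one; stop the walk
    return results
```

Run on a schedule from several locations, a harness like this flags the broken login flow that a simple homepage ping would never see.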

Measure where the customers are

If your organization is focused on what it can control, then it will want to measure from locations that are controlled, and can provide stable, consistent, repeatable data.

Hate to break this to you, Sparky, but my Internet connection isn’t an OC-48 provisioned through a large carrier with a written SLA. Real people have provider networks that are congested, under-built, and deliver bandwidth using the old best-effort approach.

Some customers may have given up on wires altogether, and access the site through wireless broadband or mobile devices.

Understand how your customers use your site. Then plan your response to managing the Internet from the outside-in.

Test with what your customers use

The greatest cop-out any Web site can make is “Our site is best viewed using…”
I’m sorry. This isn’t good enough.

Customers demand that your site work the way they want it to, not the other way around. If a customer wants to use Safari on a Mac, or Chromium on Linux, then understanding how the site performs and responds with these browsers is critical.
The one-browser/one-platform world no longer exists. If a large number of customers with one particular configuration indicate that they are having a problem with the new site, what is the proper reaction?

And why did this happen in the first place?

Monitor and respond to social media

No, this isn’t just here for buzzwords and SEO. In the last year, Twitter and Facebook have become the de facto soapboxes for people who want to announce that their favorite site isn’t working. It wouldn’t hurt to monitor these channels for issues that might not be detected by traditional performance monitoring.

This approach means that you have to be willing to accept responsibility when something affects your site performance or availability, even if it isn’t your fault. No need to tell folks exactly what the problem is, but acknowledging that there is a legitimate issue that you recognize will go a long way toward making visitors/customers more understanding of the situation.

Get your message out effectively

Communicating about a performance issue means that the Marketing and PR teams will have to be brought in.

What? Marketing and Operations/IT working together? Yes. In a situation where there is a major outage or issue, Marketing will DEMAND to be involved. Wouldn’t it be easier if these two parts of the organization already knew each other and had a plan for responding to critical performance issues?

If Marketing understands the scope of the problem, what it will take to fix it, and what is being done about it, they can craft a message that handles any question that might come in, while acknowledging that there is an issue.

A corollary to this: if there is an issue, don’t deny it exists. Denying a problem when it is clear to anyone using the site that there is one is worse than saying nothing at all.

Takeaway

Practicing effective Web performance means a company understands that directly managing the Internet is impossible, but having a process to respond to Internet performance issues is critical. A Web performance incident plan shows that you understand that stuff happens on the Internet and you’re working on it.

Effective Web Performance: Choosing a CDN

Content Delivery Networks (CDNs) are a key component to any Web performance strategy. If you examine the content from any large online business or media provider, it won’t take long to find the objects that these organizations have entrusted to CDNs to ensure faster delivery and a better user experience.

When working with CDNs, it is critical to understand some terms and concepts that you will be presented with. Each CDN will present them in its own unique way, using its own unique terminology. With an understanding of the underlying concepts, you will be able to have discussions with CDNs that are more meaningful and targeted on your needs.

The Massively Distributed Model

CDNs fall into one of two categories, the first being the massively distributed model. CDNs that use this method will demonstrate how they have hardware and caching content servers in almost every city and town of any size in the world. As well, they have their systems located on every major consumer network in order to ensure that they are as close to the end-user as possible.

The CDN-everywhere model, while far-reaching and seemingly extremely effective, does have its disadvantages. First, the CDN infrastructure relies on having extremely accurate maps of the Internet in order to direct visitors to the most proximate CDN server location. However, these maps are only truly effective when visitors use DNS servers that are on the same network they are. Services such as OpenDNS and DNS Advantage can seriously affect the proximity algorithms of the distributed CDN by removing the key piece of localization information needed to ensure that the best cache location is selected.

Also, as with any proxy caching methodology, this model relies on use. More popular items stay in the cache longer, while less popular items may be pushed aside or stored further upstream at parent caches for retrieval, adding a few extra milliseconds to the initial request. In addition, new content has to be pushed out to the edge, and may take a few hours to be completely propagated.
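The popularity-driven behavior described above is essentially least-recently-used (LRU) caching. A toy sketch of the idea (the class name and capacity are illustrative, not any vendor’s implementation):

```python
from collections import OrderedDict

class EdgeCache:
    """Toy LRU cache showing why popular objects stay at the edge.

    Every hit moves an object to the "most recently used" end; when the
    cache is full, the least recently requested object is evicted and
    must be re-fetched upstream (a parent cache or the origin).
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()
        self.misses = 0

    def get(self, key, fetch_upstream):
        if key in self._items:
            self._items.move_to_end(key)      # refresh popularity
            return self._items[key]
        self.misses += 1                      # extra upstream round trip
        value = fetch_upstream(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)   # evict least popular
        return value
```

A frequently requested object keeps getting refreshed and never leaves the edge; a rarely requested one pays the upstream round trip again and again.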

The Massively Concentrated Model

CDNs that use this model rely on a smaller number of locations than the massively distributed model. However, these locations tend to be massive and incredibly well connected, relying on the concept that even if they are a few more hops away, their content is always there and ready for requests.

These sites have massive amounts of storage and rely on private networks to ensure that new content is immediately pushed out to the super-nodes as soon as it is added. And while they may be those extra few hops away, the performance difference may not be enough for the average site visitor to notice.

The obvious disadvantage of the massively concentrated model is the flip side of its strength: it is great for serving places where there is a lot of traffic, but in regions with less traffic, or less developed infrastructures, the fewer boots on the ground may begin to have an effect on performance.

Other CDN Concepts

Application Proxy

CDNs offer many institutions the ability to use their network for all incoming requests, even if they are for dynamic content that will require processing in the client datacenter. In these instances, the CDN acts as an application proxy, using its superior knowledge of routing and traffic patterns to move requests from the edge of the Internet back to the datacenter more effectively.

Remember: even though the CDN is providing fast routing and delivery to the visitor, your application can still be the bottleneck. Poor app design or slow queries will affect the application in exactly the same way they would if the call were coming straight to your datacenter.

Traffic Acceleration

In certain circumstances, security and regulatory concerns completely eliminate the ability of a business to use the standard CDN model. Banks, government agencies, and health-care providers cannot store data in an environment whose security they cannot vouch for, no matter how many safeguards are put in place.
These organizations still need to be able to deliver a good customer experience, so there has to be a way to help accelerate their content without taking control of it. Traffic acceleration serves this purpose by using proprietary network protocol adaptations that remove some of the overhead associated with standard network protocols.

Content is intercepted at the datacenter and routed across private networks, using the streamlined network protocols, to a network location that is as close to the visitor as possible. Once it has reached the appropriate location, it is converted back to standard TCP and passed to the visitor.

The method above describes how a standard Web request works, but this can also be extended to true point-to-point VPNs with endpoints separated by great network and/or physical distances.

Validating the Claims

A key component of choosing or using a CDN is quantifying the effectiveness of the solution. The standard for many years has been the bake-off method of comparison: the prospect’s origin site is measured against the same site delivered by one or more CDNs. The CDN vendor with the fastest performance and the best price usually wins.

Before walking into a bake-off, come prepared. Turn your CDN bake-off into an episode of Iron Chef. Come to the table with the ingredients, and make the CDNs prepare a solution that meets your needs.

Measure Transactions

The standard base measurement that CDNs will use in a bake-off is a single-object or single-page measurement. Your visitors do not just visit a single page, so ensure that the CDN has an effective solution that produces noticeable performance improvements across all the key functions of your site, including the secure components, where the money is made.
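A bake-off comparison along these lines reduces to simple arithmetic. This sketch assumes you have already collected timing samples per page from both your origin and the CDN under test; the page names are hypothetical.

```python
from statistics import median

def bakeoff_report(origin_times, cdn_times):
    """Compare median response times per page: origin vs. CDN.

    Both arguments map page name -> list of timing samples in seconds.
    Returns page -> percent improvement, so a CDN that only speeds up
    the homepage while leaving checkout untouched is easy to spot.
    """
    report = {}
    for page, samples in origin_times.items():
        base = median(samples)
        trial = median(cdn_times[page])
        report[page] = round(100.0 * (base - trial) / base, 1)
    return report
```

Running this per key transaction, secure pages included, keeps the vendor from winning the bake-off on the strength of the homepage alone.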

Measure from the Edge

Backbone measurements are great for baselining and detecting operational issues that require a consistent and stable dataset. Your customers, however, do not have direct connections to high-priced datacenters with fat pipes.

The two CDN models will react differently under certain circumstances, and this will appear in edge measurements. Measuring on the ground, from the ISPs that your customers use, will give you a clear sense of how much improvement a CDN will provide compared to the performance of your origin datacenter.

The edge is messy, chaotic, and what your customers deal with every day.

Understand the SLAs/SLOs

CDNs will always provide a service level agreement (SLA) with service level objectives (SLOs) stated in it. This topic is at once instantly recognizable and about as well understood as 11 Dimensional Theoretical Physics.

I have written briefly about SLAs and SLOs before [here and here]. Do your research before you wade into this polite version of white-collar trench warfare.
Make sure you understand what the goal of the SLA is. Make sure that the SLOs are clear, measurable, valid, and enforceable. Then ensure that the method used to measure the SLOs is one that your organization can understand and can accept as valid.

Finally, ensure that the SLOs are reviewed monthly.
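As an illustration of what "clear, measurable, valid" looks like, a latency SLO of the form "95% of measurements complete within N seconds" can be checked in a few lines. The threshold and target here are placeholder values, not anything a real contract specifies.

```python
def slo_compliance(samples, threshold, target_pct):
    """Check a latency SLO of the form
    "target_pct % of measurements complete within threshold seconds".

    Returns (measured_pct, met) so that a monthly review works from a
    number everyone can recompute, not just from a vendor's summary.
    """
    within = sum(1 for s in samples if s <= threshold)
    measured_pct = 100.0 * within / len(samples)
    return round(measured_pct, 2), measured_pct >= target_pct
```

If your organization cannot reproduce the SLO calculation from its own measurement data, the SLO is not one you can enforce.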

Takeaways

Understanding the foundational technology that underlies the CDNs you use or are considering using will help you make better decisions.

Effective Web Performance

Slap up some measurements. Look at some graphs. Make a few calls. Your site is faster. You’re a hero.
Right.

Effective Web performance is something that requires planning, preparation, execution, and the willingness to try more than once to get things right. I have discussed this problem before, but I wanted to expand my thoughts into some steps that I have seen work in organizations that have established Web performance improvement strategies that actually deliver.

This process, in its simplest form, consists of five steps. Each step seems simple, but skipping any one of them will likely leave your Web performance process only half-baked, unable to help your team effectively improve the site.

1. Identification – What do we want/need to measure?

We want to measure everything. From everywhere.

This is an ineffective approach to Web performance measurement. This approach leads to a mass of data flowing towards you, causing your team to turn and flee, finding any way possible to hide from the coming onslaught.

Work with your team to carefully choose your Web performance targets. Identify two or three things about your site’s performance that you want to explore. Make these items discrete and clearly understood by everyone on your team. Clearly state their importance to improving Web performance. Get everyone to sign off on this.

Now, what was just said above will not be easy. There will be disagreements among people, among different parts of the organization, about which items are the most crucial to measure. This is a good thing.

Perhaps the greatest single hindrance to Web performance improvement is a lack of communication. An active debate is better than quiet acceptance and a grudging belief that you are going the wrong way. Corporate silos and a culture of assurance will not allow your company to make the decisions you need for an effective Web performance strategy.

2. Selection – What data will we need to collect?

In order to identify a Web performance issue (which is far more important than trying to solve it), the data that will be examined will need to be decided on. This sounds easy – response time and success rate. We’re done.
Right.

Now, if your team wants to be effective, they have to understand the complexity of what they are measuring. Then an assessment can be made of what useful data can be extracted to isolate the specific performance issue under study.
Choose your metrics carefully, as the wrong data is worse than no data.

3. Execution – How will we collect the data?

Once what is to be measured has been decided, the mechanics of collecting the data can be worked out. In today’s Web performance measurement environment, there are solutions to meet every preferred approach.

  • Active Synthetic Monitoring. This is the old man of the methods, having been around the longest. A URL or business process is selected, scripted, and then pushed out to an existing measurement network that is managed/controlled. These measurements have the advantage of providing stable, consistent metrics that can be used as baselines for long-term trending. However, they are locked to a single process, and do not respond to or indicate where your customers are going now.
  • Passive User Monitoring – Browser-Side. A relative newcomer to the measurement field, this process allows companies to tag pages and follow the customer’s performance experience as they move through a site. This methodology can also be used to discretely measure the browser-side performance of page components that may be invisible to other measurement collection methods. Its weakness is that it is sometimes a hard sell within an organization, because of its perceived similarity to Web analytics approaches and the need to develop an effective tagging strategy.
  • Passive User Monitoring – Server-Side. This method follows customers as they move through a site, but collects data from a user’s interaction with the site, rather than with the browser. It is great for providing details of how customers moved through a site and how long it took to move from page to page. It is weak in providing data on how long it took for content to be delivered to the customer, and how long it took their browser to process and render it.
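As a sketch of the server-side passive approach, per-page timings can be pulled from access logs. The log format here, with a trailing request-time field, is a hypothetical example (some servers can be configured to emit something like it); it is not a standard.

```python
import re
from collections import defaultdict

# Hypothetical access-log format: the final field on each line is the
# server-side processing time in seconds.
LOG_LINE = re.compile(
    r'"(?:GET|POST) (?P<path>\S+) HTTP/\S+" \d{3} \d+ (?P<secs>[\d.]+)$'
)

def server_side_times(log_lines):
    """Aggregate average server-side timings per path from log lines.

    This captures how long the server took to produce each page, but,
    as noted above, says nothing about delivery or browser rendering.
    """
    times = defaultdict(list)
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m:
            times[m.group("path")].append(float(m.group("secs")))
    return {path: sum(v) / len(v) for path, v in times.items()}
```

The same parsing can feed path-to-path transition timing, which is where this method shines.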

Organizations often choose one of the methods, and stay with it. This has the effect of seeing the world through hammer goggles: If all you have is a hammer, then every problem you need to solve has to be turned into a nail.

Successful organizations have a complex, correlative approach to effective Web performance analysis: one that takes performance data from multiple inputs and uncovers the relationships between different data sets.

If your team isn’t ready for the correlative approach, then at least keep an open mind. Not every Web performance problem is a nail.

4. Information – How do we make the data useful?

Your team now has a great lump of data, collected in a way that is understood, and providing details about things they care about.
Now what?

Web performance data is simply the raw facts that come out of the measurement systems. It is critical that during the process of determining why, what, and how to measure, you also decide how you are going to process the data to produce metrics that make sense to your team.
Strategies include:

  • Feeding the data into a business analytics tool
  • Producing daily/weekly/monthly reports on the Key Performance Indicators (KPIs) that your team uses to measure Web performance
  • Annotating change, for better or worse
  • Correlating. Correlating. Correlating. Nature abhors a vacuum.

Providing a lot of raw data is the same as a vacuum – a whole bunch of nothing.
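To illustrate turning that lump of raw data into something reportable, here is a minimal daily rollup. The chosen metrics (median, 95th percentile, success rate) are common examples, not a prescription; your team’s KPIs may differ.

```python
from statistics import quantiles

def daily_kpis(samples):
    """Reduce a day's raw measurements to a few reportable KPIs.

    `samples` is a list of (response_seconds, succeeded) tuples, as
    they might come out of a measurement system.
    """
    times = sorted(t for t, _ in samples)
    p95 = quantiles(times, n=20)[-1]  # 95th percentile cut point
    success_rate = 100.0 * sum(1 for _, ok in samples if ok) / len(samples)
    return {
        "median_s": round(times[len(times) // 2], 3),
        "p95_s": round(p95, 3),
        "success_pct": round(success_rate, 1),
    }
```

A one-line summary per day that everyone understands beats a spreadsheet of ten thousand raw measurements that no one opens.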

5. Action – How do we make meaningful Web performance changes?

Data has been collected and processed into meaningful information. People throughout the organization are having a-ha moments, coming up with ideas or realizations about the overall performance of the site. There are cries to just do something.
Stick to the plan. And assume that the plan will evolve in the presence of new information.

Prioritizing Web performance improvements falls into the age-old battle between the behemoths of the online business: business and IT.
Business will want to focus on issues that have the greatest effect on the bottom-line. IT will want to focus on the issues that have the greatest effect on technology.
They’re both wrong. And they’re both right.

Your online business is just that: a business that, regardless of its mission, is based on technology. Effective Web performance relies on these two forces being in balance. The business cannot be successful without a sound and tuned online platform, and the technology needed to deliver the online platform cannot exist without the revenue that comes from the business done on that platform.

Effective Web performance relies on prioritizing issues so that they can be addressed within the business and technology plans. And an effective organization is one that has communicated (there’s that word again) what those plans are. Everyone needs to understand that the business makes decisions that affect technology, and vice versa. And if these decisions are made in isolation, the whole organization will either implode or explode.

Takeaway

Effective Web performance is hard work. It takes a committed organization that understands that running an online business requires that everyone have access to the information they need, collected in a meaningful way, to meet the goals that everyone has agreed to.

Web Performance: On the edge of performance

A decade of working in the Web performance industry can leave one with the idea that no matter how good a site is, there is always the opportunity to be better, be faster. However, I am beginning to believe, just from my personal experience on the Internet, that speed has reached its peak with the current technologies we have.

This does not bode well for an Internet that is shifting toward true read/write, data- and interaction-heavy Web sites. That shift requires home broadband that is not only fast, but offers equal inbound and outbound connection speeds.

But will faster home broadband really make that much of a difference? Or will faster networks just show that even with the best connectivity to the Internet money can buy, Web sites are actually hurting themselves with poor design and inefficient data interaction designs?

For companies on the edge of Web performance, who are trying to push their ability to improve the customer experience as hard as possible, who are moving hard and fast to the read/write web, here are some ways you can ensure that you can still deliver the customer experience your visitors expect.

Confirm your customers’ bandwidth

This is pretty easy. Most reasonably powerful Web analytics tools can confirm this for you, breaking it down by dialup and broadband connection type. It’s a great way to ensure that your preconceptions about how your customers interact with your Web site match the reality of their world.

It is also a way to see just how unbalanced your customers’ inbound and outbound connection speeds are. If it is clear that traffic is coming from connection types or broadband providers that are heavily weighted towards download, then optimization exercises cannot ignore the effect of data uploads on the customer experience.
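As a simple illustration of confirming the bandwidth mix, here is how a connection-type export from an analytics tool might be summarized. The connection labels are hypothetical; use whatever categories your tool reports.

```python
from collections import Counter

def connection_mix(sessions):
    """Summarize the share of visits per connection type.

    `sessions` is an iterable of connection-type labels, one per
    visit, as exported from a Web analytics tool.
    """
    counts = Counter(sessions)
    total = sum(counts.values())
    return {conn: round(100.0 * n / total, 1) for conn, n in counts.items()}
```

If the resulting mix is dominated by asymmetric connections, upload-heavy page designs deserve a second look.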

Design for customers’ bandwidth

Now that you’ve confirmed the structure of your customers’ bandwidth, ensure that your site and data interaction design take it into account. A design that makes a number of inefficient data calls behind the scenes in order to be more AJAXy may hurt itself when it tries to make those calls over a network that’s optimized for download, not upload.

Measure from the customer perspective

Web performance measurement has been around a long time. But understanding how the site performs from the perspective of true (not simulated) customer connectivity, right where they live and work, will highlight how your optimizations may or may not be working as expected.

Measurements from high-throughput, high-quality datacenter connections give you some insight into performance under the best possible circumstances. Measure from the customer’s desktop, however, and even the most thoughtfully planned optimization efforts may turn out to have been like attacking a mammoth with a closed safety pin: ineffective, and it annoys the mammoth [to paraphrase Hugh Macleod].

As well as synthetic measurements, measure performance right from within the browser. Understanding how long pages take to render, how long it takes to show content above the fold, and how long discrete, complex Flash and AJAX events within the page take will give you even more control over finding the things you can fix.

Takeaway

In the end, even assuming your customers have the best connectivity, and you have taken all the necessary precautions to get Web performance right, don’t assume that the technology can save you from bad design and slow applications.
Be constantly vigilant. And measure everything.

Web Performance: How long can you ignore the money?

Web performance is everywhere. People intuitively understand that when a site is slow, something’s wrong. Web performance breeds anecdotal tales of lost carts, broken catalogs, and searches gone wrong. Web performance can get your name in lights, but not in the way you or your company would like.

It’s a mistake to consider Web performance a technology problem. Web performance is really a business problem that has a technological solution.
Business problems have solutions that any mid-level executive can understand. A site that can’t handle the amount of traffic coming in requires tuning and optimization, not the firing of the current VP of Operations and a new marketing strategy.

Can you imagine the fate of the junior executive who suggested that a new marketing strategy was the solution to brick-and-mortar stores that are too small and crowded to handle the number of prospective customers (or former prospective customers) coming in the door?

Every Web performance event costs a company money, in the present and in the future. So when someone presents your company with the reality of your current Web performance, what is your response?

Here are some simple ideas for living with the reality that poor Web performance hurts business.

  1. Be able to explain the issue to everyone in the company and to customers who ask. Gory details and technical mumbo-jumbo make people feel like there is something being hidden from them. Tell the truth, but make it clear what happened.
  2. Do not blame anyone in public. A great way to look bad to everyone is to say that someone else caused the problem. Guess what? All that the people who visited your site during the problem will remember is that your site had the problem. Save frank discussions for behind closed doors.
  3. Be able to explain to the company what the business cost was. While everyone is pointing fingers inside your company, remind them that the outage cost them $XX/minute. Of course, you can only tell them that if you know what that number is. Then gently remind everyone that this is what it cost the whole company.
  4. Take real action. I don’t mean things like “We will be conducting an internal review of our processes to ensure that this is not repeated”. I mean things like listening and understanding what technology or business process failed and got you into this position in the first place. Was it someone just hitting the wrong switch? Or was it a culture of denial that did not allow the reality of Web performance to filter up to levels where real change could be implemented?
  5. Demand quantitative proof that this will never happen again. Load test. Monitor. Measure. Correlate data from multiple sources. Decide how Web performance information will be communicated inside your company. Make the data available so people can ask questions. Be prepared to defend your decisions with real information.
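Point 3 above depends on actually knowing your $XX/minute figure. Here is a back-of-the-envelope sketch; the revenue figure and the abandonment fraction are assumptions you must supply from your own business data, not values anyone can compute for you.

```python
def outage_cost(revenue_per_hour, duration_minutes, lost_fraction=1.0):
    """Back-of-the-envelope cost of a performance event.

    `lost_fraction` hedges between a full outage (1.0) and a slowdown
    where only some visitors abandon (e.g. 0.3). The per-hour revenue
    figure must come from your own business data.
    """
    per_minute = revenue_per_hour / 60.0
    return round(per_minute * duration_minutes * lost_fraction, 2)
```

Even a rough number like this changes the tone of the post-mortem: finger-pointing gets quieter when everyone can see what the outage cost per minute.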

The most successful Web companies have done one thing very well. It is the core of their success, and it is what makes them ruthlessly strive for Web performance excellence.

These companies understood that in order to succeed they needed to create a culture where business performance and Web performance are the same thing.

Web Performance: The Rise of Browser Computing

The next generation of browsers all tout that they are able to more effectively deliver on the concept of cloud computing and Web applications. That may be the case, but it changes the entire world of Web performance measurement and monitoring.

The Web performance focus for most firms is simple: how quickly can code/text/images/Flash be transferred to the desktop?

The question that needs to be asked now is: What effect does my content have on the browser and the underlying OS when it arrives at the desktop?

Emphasis is now put on the speed and efficiency of Web pages inside browsers. How much CPU/RAM does the browser consume? Are some popular pages more efficient than others? Does continuous use of a browser for 8-12 hours a day cripple a computer’s ability to do other tasks?

Performance measurement will come to include instrumenting the browser itself. This will not be to capture content performance, but browser performance. Through extensions, plugins, accelerators, or whatever else, browsers will be able to report the effect of long-term use on the health of the computer, and how it degrades perceived performance over time.

Many solutions for page-performance tracking have been implemented using JavaScript tags and the like. What would be interesting to many developers is to see the long-term effects of the Web on certain browsers. This information could be tagged with specific event markers, DOM events, plugin usage (Flash, Silverlight, Java), and other items that indicate which events truly affect the browser.

Most browsers provide users and developers tools to debug pages. But what if this data was made globally available? What would it tell us about the containers we use to interact with our world?

Why Web Measurements? Part IV: Technical Operations

In the first three parts of this series, the focus has been on the business side of the business: Customer Generation, Customer Retention, and Business Operations. The final component of any discussion of why companies measure their Web performance comes down to Technical Operations.

Why is Technical Operations last?

This part of the conversation is the last, mainly because it is the most mature. A technical audience will understand the basics of a distributed Web performance measurement system, or a Web analytics system, or a QA testing tool without too much explanation. The problems that these tools solve are well-defined and have been around for many years.

Quickly thinking about these types of problems makes it clear, however, that the kind of data needed in a technical operations environment is substantially different from that which is needed at the Business Operations level. Here, the devil is in the details; at Business Operations, the devil is in the patterns and trends.

What are you trying to measure?

The short answer is that a Technical Operations team is trying to measure everything. More data is better data at this level. The key is the ability to correlate multiple sources of system inputs (Web performance data, systems data, network data, traffic data, database queries, etc.) to detect the patterns of behavior which could indicate impending crises or complete system outage, or simply a slower than expected response time during peak business hours.
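The correlation step can start as simply as computing Pearson’s r between two metric series, say, response time against concurrent request count, to spot a relationship that precedes an outage. This is the textbook formula, not any particular product’s method.

```python
def pearson(xs, ys):
    """Pearson correlation between two equally sized metric series,
    e.g. response time vs. concurrent request count. Values near +1
    or -1 indicate a strong linear relationship worth investigating.
    """
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Correlation is not causation, of course, but a response-time series that tracks database queue depth tells you where to look first.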

And while Technical Operations teams thrive on data, they do not thrive on explaining this data very well to others. So the metrics that are important in one organization may not be the key ones in another. Or they may be called by a completely different name. This is why Technical Operations teams sigh and throw up their hands in despair when talking to management working from Business Operations data.

How do you measure it?

Measure early. Measure often.

This sums up the philosophy of most Technical Operations teams. They want to gather as much data as possible; so much, in fact, that gathering it is often one step away from affecting the performance of their own systems. This is how the scientific mind works. So be prepared to balance the urge to measure and instrument everything against the need to ensure that the system remains operationally sound.

Summary

Even in the well-developed area of Technical Operations, there is still opportunity to ensure that you are measuring the right things the right way. Do an audit of your measurements. Ask the question: “why do we measure this, this way?”.
Measure meaningful things in a meaningful way.

Why Web Measurements? Part III: Business Operations

In the Customer Generation and Customer Retention articles of this series, the focus was on Web performance measurements designed to serve an audience outside of your organization. Starting with Business Operations, the focus shifts toward the use of Web performance measurements inside your organization.

Why Business Operations?

When I was initially developing these ideas with my colleague Jean Campbell, the idea was to call this section Reporting and Quality of Service. What we found was that this didn’t completely encompass all of the ideas that fall under these measurements. The question became: which part of the organization do reporting and QoS measurements serve?

What was clear was these were the metrics that reported on the health of the Web service to management and the company as a whole. This was the measurement data that the line of business tied to revenue and analytics data to get a true picture of the health of the online business.

What are you measuring?

Measurements for business operations need to capture the key metrics that are critical for making informed business decisions.

  • How do we compare to our competitors?
  • Are we close to breaching our SLAs?
  • Are the third-parties we use close to breaching their SLAs?
  • What parts of the site affect performance / user experience the most so we can set priorities?
  • How does Web performance correlate with all the other data we use in our online business?

Every company will use different measures to capture this information, and correlate the data in different ways. The key is that you do use it to understand how Web performance ties into the line of business.
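For example, the SLA questions above can be reduced to a simple headroom check. This is a minimal sketch with made-up targets and measurements; substitute your actual contract terms:

```python
# Sketch: flagging SLA breach risk from a daily measurement summary.
# The targets, today's numbers, and the 0.2 warning margin are all
# illustrative examples, not real contract values.
sla_target = {"availability_pct": 99.5, "response_s": 2.0}
today = {"availability_pct": 99.6, "response_s": 1.5}

def sla_headroom(actual, target):
    """Return the margin remaining before a breach (positive = safe)."""
    return {
        # availability must stay ABOVE the target
        "availability_pct": actual["availability_pct"] - target["availability_pct"],
        # response time must stay BELOW the target
        "response_s": target["response_s"] - actual["response_s"],
    }

margin = sla_headroom(today, sla_target)
at_risk = [metric for metric, m in margin.items() if m < 0.2]
print("metrics nearing breach:", at_risk)
```

A daily report built on a check like this answers “are we close to breaching our SLAs?” at a glance, without anyone reading raw measurement logs.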

How often do I look at it?

Well, honestly, most people who work in business operations only need to examine Web performance once a day in a summary business KPI report (your company has a useful daily KPI report that everyone understands and uses, right?), and in greater detail at weekly and monthly management meetings.

The goal of the people examining business operations data is not to solve the technical problems that are being encountered, but to understand how the performance of their site affects the general business health of the company, and how it plays in the competitive marketplace.

What metrics do I need?

Business operations teams need to understand:

  • End-to-end response time for measured business processes
  • Page-level response times for measured business processes
  • Success rate of the transaction during the measurement period
  • How third-parties are affecting performance
  • How Web analytics and Web performance relate
  • How different regions are affected by performance
  • How performance looks from customer ISPs and desktops

Detailed technical data is lost on these people. It is their role to take all of the data they have, and present a picture of the application as it affects the business, and discuss challenges that they face at a technical level in terms of how they affect the business.
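As a sketch of that translation job, raw measurement records can be rolled up into the business-level metrics in the list above. The record fields and values here are invented for illustration:

```python
# Sketch: rolling raw measurement records up into per-region business
# metrics (average response time, success rate). Data is hypothetical.
from collections import defaultdict

records = [
    {"region": "NA", "response_s": 1.2, "success": True},
    {"region": "NA", "response_s": 1.5, "success": True},
    {"region": "NA", "response_s": 9.0, "success": False},
    {"region": "EU", "response_s": 2.1, "success": True},
    {"region": "EU", "response_s": 2.3, "success": True},
]

summary = defaultdict(lambda: {"times": [], "ok": 0, "total": 0})
for rec in records:
    s = summary[rec["region"]]
    s["total"] += 1
    if rec["success"]:  # failed transactions are excluded from timing stats
        s["ok"] += 1
        s["times"].append(rec["response_s"])

report = {
    region: {
        "avg_response_s": round(sum(s["times"]) / len(s["times"]), 2),
        "success_rate": s["ok"] / s["total"],
    }
    for region, s in summary.items()
}
print(report)
```

The regional roll-up is the kind of number that survives contact with a weekly management meeting; the per-record detail does not.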

Summary

For people who work at an extremely detailed level with Web measurement data (the topic for the next part of this series), Business Operations metrics seem light, fluffy, and often meaningless. But these metrics serve a distinct audience: the people who run the company. Frankly, if the senior business leaders at an organization are worried on a daily basis about the minute technical details that go into troubleshooting and diagnosing performance issues, I would be concerned.
The objective of Business Operations measurements is to convey the health of the Web systems that support the business, and correlate that health with other KPIs used by the management team.

Why Web Measurements? Part II: Customer Retention

In the first part of this series, using Web performance measurements to generate new customers was the topic. This article focuses on using the same data to keep the customers you have, and make them believe in the value of your service.

Proving the Point

Getting a customer is the exciting and glamorous work. Resources are often drawn from far and wide in an organization to win over a prospect and make them a customer.

Once the deal is done, the day-to-day business of making the customer believe that they are getting what they paid for is the job of the ongoing benchmarking measurements. CDNs and third-party services need to prove that they are delivering the goods, and this can only be done by an agreed upon measurement metric.

Some people leap right into an SLA / SLO discussion. As a Web performance professional, I can tell you that there are few SLAs that are effective, and even fewer that are enforceable.

Start with what you can prove. Was the performance that was shown to me during the pre-sales process a fluke, or does it represent the true level of service that I am getting for my money?

Measure Often and Everywhere

The Web performance world has become addicted to the relatively clean and predictable measurements that originate from high-quality backbone measurement locations. This perspective can provide a slightly unrealistic view of the Web world.

How many times have you heard from the people around you about site X (maybe this is your site) behaving badly or unpredictably from home connections? Why, when you examine the Web performance data from the backbone, doesn’t this show up?

Web connections to the home are unpredictable, unregulated, and have no QoS target. Home delivery is definitely best effort. This is especially true in the US, where there is no incentive (some would say that there is a barrier) to delivering the best quality performance to the home. But that is where the money is.

As a service provider, you had better be willing to show that your service can surmount these obstacles and deliver Web performance advantages at both the Last Mile and the Backbone.

Don’t ever base SLAs on Last Mile data – this is Web performance insanity. But be ready to prove that you can deliver high quality performance everywhere.
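The variability gap behind that advice can be made concrete with a quick comparison. The sample timings below are invented to illustrate the spread, not real measurements:

```python
# Sketch: why last-mile data is too noisy to anchor an SLA. The same
# hypothetical site measured from backbone nodes vs. home connections.
from statistics import mean, stdev

backbone = [1.1, 1.2, 1.1, 1.3, 1.2]    # clean, predictable (seconds)
last_mile = [1.4, 3.9, 1.8, 7.2, 2.5]   # best-effort home connections

def cv(samples):
    """Coefficient of variation: spread relative to the mean."""
    return stdev(samples) / mean(samples)

# The last-mile spread dwarfs the backbone spread, even when the
# underlying service is identical.
print(f"backbone CV={cv(backbone):.2f}, last-mile CV={cv(last_mile):.2f}")
```

An SLA threshold tuned against the backbone numbers would be breached constantly by the last-mile numbers through no fault of the provider, which is exactly why you prove last-mile quality but contract on backbone data.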

Show me the data

As a customer of your service, I expect you to show me the measurements that you are collecting. I expect you to be honest with me when you encounter a problem. I do not want to hear or see your finger-pointing, especially when you try to push the blame for any performance issues back to me.

As a service provider, you live and die by the Web performance data. And if you see something in the data, not related to your business, but that could make my site faster and better, tell me about it.

Remember that partnership you sold me on during the Customer Generation phase? Show it to me now. If you help me get better, this will be added to the plus column on the decision chart at renewal time, when your competitor comes knocking on my door with a lower price and Web performance data that shows how much you suck.

Shit Happens. Fess up.

The beauty of Web performance measurement is that your customers can replicate exactly the same measurements that you run on their behalf. And, they may actually measure things that you hadn’t thought about.

And sure as shooting, they will show up at a meeting with your team one day with data that shows that your service FUBAR‘d on a massive scale.

It’s the Internet. Bad shit happens on the Internet. I’ve seen it.
If you can show them that you know about the problem, explain what caused it, how you resolved it, and how you are working to prevent it, that’s good.

Better: Call them when the shit happens. Let them know that you know about the problem and that you have a crack team of Web performance commandos deployed worldwide to resolve the problem in non-relativistic time. Blog it. Tweet it. Put a big ‘ol email in their inbox. Call your primary contact, and your secondary contact, and your tertiary contact.

Fess up. You can only hide so much before your customers start talking. And the last thing you want prospects seeing is your existing customers talking smack about your service.

Summary

Web performance measurement doesn’t go away the second you close the deal. In fact, the process has only just begun. It is a crazy, competitive world out there. Be prepared to show that you’re the best and that you aren’t perfect every single day.

Why Web Measurements? Part I: Customer Generation

Introduction to the Series

This is the first of a four-part series focusing on the reasons why companies measure their Web performance. This perspective is substantially different from those posited by others in the field, as it focuses on the meat-and-potatoes reasons, rather than the sometimes harder-to-imagine future effects that measurement will bring.

Reason One: Customer Generation

It is critical that a company be able to show that their Web services are superior to others, especially in the third-party services and delivery sectors of the Web. In this area, Web performance measurement is key to demonstrating the value and advantage of a service versus the option of self-delivering or using another competitor’s service.

Comparative benchmarking that clearly demonstrates the performance of each of the competitive services in the geographic regions that are of greatest interest to the prospect is key to these Web performance measurements. To achieve truly competitive benchmarks and prove the value of a service, measurements must be realistic and flexible.

In the CDN field, a one-object-fits-all approach is no longer valid. CDNs are responsible for delivering not just images or static objects; they may also host an entire application on their edge servers, serving both HTTP and HTTPS content. In other cases, the application may not be hosted at the edge, but the edge server may act as a proxy for the application, using advanced routing algorithms to deliver the requested dynamic content to the visitor more quickly (in theory) than the origin server alone.

This complex range of services means that a CDN has to be willing to demonstrate effective and efficient service delivery before the sale is complete. A CDN has to be willing to expose their system not just to the backbone-based measurements offered in a traditional customer generation process, but to take measurements from the real-user perspective.

Ad-providers have to be willing to show that their service does not affect the overall performance of the site they are trying to place their content on. Web analytics firms face the same challenge. Web analytics firms have one advantage: if their object doesn’t load properly, it may not affect the visitor experience. However, neither ad-providers nor Web-analytics providers can hide from Web measurement collection methods that show all of the bling and the blemishes.

Using Web performance measurements to generate customers is a way that a firm can clearly show that they have faith enough in their service to openly compare it to other providers and to the status quo.

But woe betide the firm that uses Web performance metrics in a way that shows only their good side. Prospects become former prospects very quickly if a firm using Web performance data to generate new business is found to be gaming the system to its advantage. And it will happen.

Customer Generation is a key way that firms use Web performance measurements to clearly show how their service is superior to what a prospect currently has, or is also considering. However, this method does come with substantial caveats, including:

  • The need to measure what is relevant
  • The need to measure from where the prospect has the greatest interest
  • The need to consider that gaming the system to show advantage will cost a firm in the end

Copyright © 2024 Performance Zen
