
Web Caching and Old Age

In 2002, I was invited to speak at an IBM conference in San Francisco. When it came time to give my presentation, no one showed up.

I had forgotten about it until I was perusing the Wayback Machine and found the PDF of my old presentation.

The interesting thing is that the discussion in this doc is still relevant, even though the web is a very different beast than it was in 2002. Caching headers and their deployment have not changed much since they were introduced.

And there are still entities out there who get them wrong.
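The mistakes are not hard to find, either. As a minimal sketch (in Python, using the third-party `requests` library), here is the kind of header sanity check that still flags offenders today; the heuristics are mine, and far from an exhaustive audit.

```python
# A minimal caching-header sanity check. The heuristics below are
# illustrative, not a complete audit of HTTP caching behavior.
import requests

def check_caching_headers(url: str) -> list[str]:
    """Fetch a URL and flag common caching-header mistakes."""
    headers = requests.get(url, timeout=10).headers
    problems = []

    cache_control = headers.get("Cache-Control", "")
    if not cache_control:
        problems.append("No Cache-Control header: caches are left to guess.")
    if "no-cache" in cache_control and "max-age" in cache_control:
        problems.append("Mixed no-cache and max-age: the max-age is largely moot.")
    if "ETag" not in headers and "Last-Modified" not in headers:
        problems.append("No validator (ETag/Last-Modified): revalidation is impossible.")

    return problems

if __name__ == "__main__":
    for issue in check_caching_headers("https://example.com/"):
        print(issue)
```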


If you like ancient web docs, check out what webperformance.org looked like in 2007. [Courtesy of the Wayback Machine]

Performance Trends for 2013 – Smarter Systems

[Image: IF-repair by Yo Mostro (Flickr)]

Most of the trending items that I have discussed in the last two weeks are things that can be done today – problems that we are aware of and know need to be resolved. One item on my trend list, the appearance of smarter performance measurement systems, is something the WPO industry may have to wait a few years to see.

A smarter performance measurement system is one that can learn what, when, and from where items need to be monitored by analyzing the behavior of your customers/employees and your systems. A hypothetical example of a smarter performance measurement system at work lies in the connection between real user monitoring (RUM) and synthetic monitoring. Professionals throughout WPO claim that these must be used together, but the actual configuration still relies on humans to deliver the advantages that come from combining them. If RUM/analytics know where your customers are, what they do, and when they do it, then why can’t these same systems deploy (maybe even create and deploy!) synthetic tests to those regions automatically to capture detailed diagnostic data?
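No vendor offers this out of the box that I know of, but the feedback loop is easy to imagine. A hypothetical Python sketch of the idea follows; `fetch_rum_sessions` and `deploy_synthetic_test` are invented stand-ins for whatever RUM store and synthetic-monitoring API a real system would expose.

```python
# Hypothetical sketch: let RUM data decide where synthetic tests run.
# Both helper functions below are placeholders, not a real vendor API.
from collections import Counter

def fetch_rum_sessions() -> list[dict]:
    # Placeholder: a real system would query its RUM/analytics store.
    return [
        {"region": "us-east", "page": "/checkout", "hour": 14},
        {"region": "eu-west", "page": "/search", "hour": 9},
    ]

def deploy_synthetic_test(region: str, page: str, hour: int) -> None:
    # Placeholder: a real system would call its synthetic-monitoring API.
    print(f"Scheduling synthetic test of {page} from {region} at {hour:02d}:00")

def sync_synthetic_with_rum(top_n: int = 5) -> None:
    """Deploy synthetic tests to where, and when, real users actually are."""
    hotspots = Counter(
        (s["region"], s["page"], s["hour"]) for s in fetch_rum_sessions()
    )
    for (region, page, hour), _count in hotspots.most_common(top_n):
        deploy_synthetic_test(region, page, hour)

if __name__ == "__main__":
    sync_synthetic_with_rum()
```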

Why do measurement systems rely on us to manually configure the defaults for measurements? Why can’t we take a survey when we start with a system (and then every month or so after that) that helps it determine the what/when/where/why/how of the data and information we are looking to collect, and have the system create a set of test-deployment defaults and information displays that match our requirements?
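To make the survey idea concrete, the translation from answers to defaults could be as simple as the following sketch; every field name and default value here is invented for illustration.

```python
# Hypothetical sketch: turn onboarding-survey answers into initial
# monitoring defaults. All field names and values are invented.
def defaults_from_survey(answers: dict) -> dict:
    return {
        "regions": answers.get("customer_regions", ["us-east"]),
        "frequency_minutes": 5 if answers.get("revenue_critical") else 15,
        "pages": answers.get("critical_pages", ["/"]),
        "alert_channel": answers.get("alert_channel", "email"),
    }

# Example: a retailer with customers in two regions.
survey = {
    "customer_regions": ["us-east", "eu-west"],
    "revenue_critical": True,
    "critical_pages": ["/", "/checkout"],
}
print(defaults_from_survey(survey))
```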

The list of questions goes on, but it doesn’t have to. Measurement systems have, for too long, been built to rely on expert humans to configure them and interpret their results. Now we have a chance to step back and ask, “If we built a performance measurement system for a non-expert, what would it look like?”

More data isn’t the goal of performance measurement systems – more information is what we want.

Web Performance Trends 2013 – Third Party Services

Every site has them. Whether they’re for analytics, advertising, customer support, or CDN services, third-party services are here to stay. However, for 2013, I believe that these services will face a level of scrutiny that many have avoided up until now.

Recent performance trends indicate that while web site content has been tested and scaled to meet even the highest levels of traffic, the third-party services that these sites have come to rely on (with a few exceptions) are not yet prepared to handle the largest volumes of traffic that occur when many of their customers experience a peak on the same day.

In 2013, I see web site owners asking their third-party service providers to verify that their systems can handle the highest volumes of traffic on their busiest days, with an additional amount of overhead – I suggest 20% – available for growth and to absorb “super-spikes”. Customer experience is built on the performance of the entire site, so leaving one component of site delivery untested (and definitely unmonitored!) leaves companies exposed to brand and reputation degradation as well as performance degradation.

In your own organizations, make 2013 the year you:

  • Implement tight controls over how outside content is deployed and managed
  • Implement tight change control policies that clearly describe the process for adding third-party content to your site, including the measurement of performance impacts
  • Define clear SLAs and SLOs for your third-party content providers, including the performance levels at which their content will be disabled or removed from the site (a sketch of such a check follows this list).
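To make that last bullet concrete, here is a minimal Python sketch of the kind of SLO check that could drive a disable decision; the provider names, thresholds, and the shape of the measurement data are all assumptions for illustration.

```python
# Hypothetical sketch: flag third-party providers whose p95 response
# time breaches an agreed SLO. Names and thresholds are illustrative.
from statistics import quantiles

# Assumed SLOs: provider -> maximum acceptable p95 response time (ms).
SLO_P95_MS = {
    "analytics.example": 500,
    "ads.example": 800,
}

def providers_to_disable(samples: dict[str, list[float]]) -> list[str]:
    """Return providers whose p95 response time (ms) breaches their SLO."""
    breaches = []
    for provider, times in samples.items():
        slo = SLO_P95_MS.get(provider)
        if slo is None or len(times) < 20:
            continue  # no SLO agreed, or too few samples to judge fairly
        p95 = quantiles(times, n=20)[-1]  # last of 19 cut points ≈ p95
        if p95 > slo:
            breaches.append(provider)
    return breaches
```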

When you speak to your third-party content and service providers about their plans for 2013, ask them to:

  • Explicitly detail how they handled traffic on their busiest days in 2012, and what they plan to do to effectively handle growth in 2013
  • Clearly demonstrate how they are invested in helping their customers deliver successful mobile sites and apps in 2013
  • Lay out how they will provide more transparent access to system performance metrics and what the goals of their performance strategy for 2013 are.

Take control of your third-party content. Don’t let it control you.

Why Web Measurements? Part I: Customer Generation

Introduction to the Series

This is the first of a four-part series focusing on the reasons why companies measure their Web performance. This perspective is substantially different from those posited by others in the field, as it focuses on the meat-and-potatoes reasons rather than the sometimes more difficult-to-imagine future effects that measurement will bring.

Reason One: Customer Generation

It is critical that a company be able to show that its Web services are superior to others’, especially in the third-party services and delivery sectors of the Web. In this area, Web performance measurement is key to demonstrating the value and advantage of a service versus the option of self-delivering or using a competitor’s service.

These measurements hinge on comparative benchmarking that clearly demonstrates the performance of each competing service in the geographic regions of greatest interest to the prospect. To achieve truly competitive benchmarks and prove the value of a service, measurements must be realistic and flexible.
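As a flavor of what realistic benchmarking looks like, here is a minimal Python sketch (using the `requests` library) of the shape of such a comparison. The candidate URLs are hypothetical, and a real benchmark would run from the prospect’s regions of interest rather than from a single machine.

```python
# Minimal comparative benchmark sketch. The URLs are hypothetical;
# a real test would also measure from multiple geographic regions.
import time
import requests

def mean_fetch_seconds(url: str, runs: int = 10) -> float:
    """Average wall-clock time to fetch a URL over several runs."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        requests.get(url, timeout=10)
        total += time.perf_counter() - start
    return total / runs

# Hypothetical competing delivery options for the same object.
candidates = {
    "self-hosted": "https://origin.example.com/app.js",
    "cdn-a": "https://cdn-a.example.net/app.js",
    "cdn-b": "https://cdn-b.example.net/app.js",
}

for name, url in candidates.items():
    print(f"{name}: {mean_fetch_seconds(url):.3f}s mean over 10 runs")
```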

In the CDN field, a one-object-fits-all approach is no longer valid. CDNs are responsible for delivering not just images or static objects; they may also host an entire application on their edge servers, serving both HTTP and HTTPS content. In other cases, the application may not be hosted at the edge, but the edge server may act as a proxy for the application, using advanced routing algorithms to deliver the requested dynamic content to the visitor more quickly (in theory) than the origin server alone could.

This complex range of services means that a CDN has to be willing to demonstrate effective and efficient service delivery before the sale is complete. It has to be willing to expose its system not just to the backbone-based measurements offered in a traditional customer generation process, but also to measurements taken from the real-user perspective.

Ad providers have to be willing to show that their service does not affect the overall performance of the site on which their content is placed. Web analytics firms face the same challenge, though they have one advantage: if their object doesn’t load properly, it may not affect the visitor experience. However, neither ad providers nor Web analytics providers can hide from Web measurement collection methods that show all of the bling and the blemishes.

Using Web performance measurements to generate customers is a way for a firm to clearly show that it has enough faith in its service to openly compare it to other providers and to the status quo.

But woe to the firm that uses Web performance metrics in a way that shows only its good side. Prospects become former prospects very quickly if a firm using Web performance data to generate new business is found to be gaming the system to its advantage. And it will happen.

Customer Generation is a key way that firms use Web performance measurements to clearly show how their service is superior to what a prospect currently has or is also considering. However, this method does come with substantial caveats, including:

  • The need to measure what is relevant
  • The need to measure from where the prospect has the greatest interest
  • The need to consider that gaming the system to show advantage will cost a firm in the end.

Managing Web Performance: A Hammer is a Hammer

Give almost any human being a hammer, and they will know what to do with it. Modern city dwellers, ancient jungle tribes, and most primates would all look at a hammer and understand instinctively what it does. They would know it is a tool to hit other things with. They may not grasp some of the subtleties, such as that it is designed to drive nails into other things and not to beat other creatures into submission, but they would know that this is a tool that is a step up from the rock or the tree branch.

Simple tools produce simple results. This is the foundation of a substantial portion of the Software-as-a-Service (SaaS) model, which allows companies to provide a simple tool in a simple way to lower the cost of the service for everyone.

Web performance data is not simple. Gathering the appropriate data can be as complex as the Web site being measured. The design and infrastructure that support a SaaS site are usually far more complex than the service it presents to the customer. A service that measures the complexity of your site will likely not provide data that is easy to digest and turn into useful information.

As any organization that has purchased a Web performance measurement service, a monitoring tool, or a corporate dashboard expecting instant answers will tell you, there are no easy solutions. These tools are the hammer, and just having a hammer does not mean you can build a house or craft fine furniture.

In my experience, there are very few organizations that can craft a deep understanding of their own Web performance from the tools they have at their fingertips. And the Web performance data they collect about their own site is about as useful to them as a hammer is to a snake.
