Tag: web performance concepts

Web Caching and Old Age

In 2002, I was invited to speak at an IBM conference in San Francisco. When it came time to give my presentation, no one showed up.

I had forgotten about it until I was perusing the Wayback Machine and found the PDF of my old presentation.

The interesting thing is that the discussion in that document is still relevant, even though the web is a very different beast than it was in 2002. Caching headers and the way they are deployed have not changed much since they were introduced.

And there are still entities out there who get them wrong.
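
If you want to check whether a site you care about is one of them, the headers are easy to inspect. Below is a minimal TypeScript sketch, not taken from the old presentation: the helper name and URL are illustrative, and it assumes a runtime with the Fetch API (Node 18+ or a browser console).

// Hypothetical helper: report the caching headers a server sends for a URL.
async function reportCacheHeaders(url: string): Promise<void> {
  const response = await fetch(url, { method: "HEAD" });
  const headersOfInterest = ["cache-control", "expires", "etag", "last-modified", "age"];
  console.log(`Caching headers for ${url}:`);
  for (const name of headersOfInterest) {
    console.log(`  ${name}: ${response.headers.get(name) ?? "(not set)"}`);
  }
  // A response with neither Cache-Control nor Expires is a common sign that
  // caching has never been deliberately configured for this object.
  if (!response.headers.get("cache-control") && !response.headers.get("expires")) {
    console.log("  -> no explicit caching policy; browsers and proxies will guess.");
  }
}

reportCacheHeaders("https://example.com/static/logo.png").catch(console.error);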


If you like ancient web docs, check out what webperformance.org looked like in 2007. [Courtesy of the Wayback Machine]

Web Performance Concepts: Customer Anywhere

Companies are beginning to fully grasp the need to measure performance from all perspectives: backbone, last mile, mobile, etc. But this need is often driven by the operational perspective – “We need to know how our application/app is doing from all perspectives”.

While this is admirable, and better than not measuring at all, turning that motivation around gives companies a whole new view. Measure from all perspectives not just because you can, but because your customers demand performance from all of them.

The modern company needs to always keep in mind the concept of Customer Anywhere. The desire to visit your site, check a reservation, compare prices, produce coupons can now occur at the customer’s whim. Smartphones and mobile broadband have freed customers from the wires for the first time.
If I want to shop poolside, I want your site to be as fast over a mobile connection on my Android as it is on my WiFi iPad as it is on my Alienware laptop on ethernet. I don’t care what the excuse is: If it’s not fast, it’s not revenue.

Knowing how a site performs over the wire, in the browser, and around the world has made “Web” performance a lot harder. The old ways aren’t enough.

How does your “Web” performance strategy work with Customer Anywhere?

Effective Web Performance: What to Manage

One of the traditional areas of frustration for Operations and Development teams in the Web world is that their performance, Web performance, is measured from the outside-in.

The resistance from this camp is strong, and its members will appear without warning, even in the most enlightened of companies.

How can they be recognized?

You will hear their battle-cry, their mantra, their fundamental belief that their application, their infrastructure is a misunderstood victim. That if they could only get their one idea across, the whole of the company would be enlightened.
The fundamental tenet of this group is simple and short.

How can we manage the Internet?

The obvious fallacy of this argument is clear to any Web performance professional or business analyst: Customers get to our business across the Internet, not via psychic modem. In order to keep close tabs on the experience of our customers, the site, application, code must be measured from the outside-in.

In order to prevent making enemies and perpetuating already ossified corporate silos, take the initiative. Gently steer the discussion in a new direction by making this incredibly vast problem into one everyone in the company can understand. By adding a single word to the initial question, the fearful and reactive perspective can be dramatically shifted to one that could make the members of this camp see the light.

When you talk to these teams, change the question: How can we manage for the Internet?

Now the focus of the discussion is proactive – is there something we are missing that could reduce the problems and/or prevent them from ever happening?
Taking the all-encompassing and awe-inspiring challenge that is the Internet and turning it into a Boy Scout moment may reinvigorate the internal conversation, and give people a sense of purpose. Now they will be galvanized to consider whether everything in their power is being done to prevent performance issues before bits hit the Internet.

Effective Web performance hinges on taking the obvious challenges that face all Web sites, and turning them into solutions that mitigate these challenges as much as possible. So, in the next team meeting, the next time you hear someone say that it’s just the Internet, ask what can still be done to manage the application more effectively for the Internet.

Web Performance: On the edge of performance

A decade of working in the Web performance industry can leave one with the idea that no matter how good a site is, there is always the opportunity to be better, be faster. However, I am beginning to believe, just from my personal experience on the Internet, that speed has reached its peak with the current technologies we have.

This does not bode well for an Internet that is shifting more directly to true read/write, data- and interaction-heavy Web sites. That shift needs home broadband that is not only fast, but which offers equal inbound and outbound connection speeds.

But will faster home broadband really make that much of a difference? Or will faster networks just show that even with the best connectivity to the Internet money can buy, Web sites are actually hurting themselves with poor design and inefficient data interaction designs?

For companies on the edge of Web performance, those trying to push their ability to improve the customer experience as hard as possible and moving hard and fast to the read/write web, here are some ways to ensure that you can still deliver the customer experience your visitors expect.

Confirm your customers’ bandwidth

This is pretty easy. Most reasonably powerful Web analytics tools can confirm this for you, breaking traffic down by dial-up and broadband connection type. It’s a great way to ensure that your preconceptions about how your customers interact with your Web site match the reality of their world.

It is also a way to see just how unbalanced your customers’ inbound and outbound connection speeds are. If it is clear that traffic is coming from connection types or broadband providers that are heavily weighted towards download, then optimization exercises cannot ignore the effect of data uploads on the customer experience.
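
If you want a rough first-party view instead of relying only on an analytics vendor, the browser can report a coarse connection class itself. The sketch below is illustrative only: it assumes the Network Information API, which not every browser supports, and the /collect/connection endpoint is a placeholder.

interface ConnectionSample {
  effectiveType: string;       // e.g. "4g", "3g", "2g", "slow-2g"
  downlinkMbps: number | null; // advertised downstream estimate
  roundTripMs: number | null;  // estimated round-trip time
  pageLoadMs: number;
}

function sampleConnection(): ConnectionSample {
  // navigator.connection is not in the default DOM typings, hence the cast.
  const conn = (navigator as any).connection;
  const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
  return {
    effectiveType: conn?.effectiveType ?? "unknown",
    downlinkMbps: conn?.downlink ?? null,
    roundTripMs: conn?.rtt ?? null,
    pageLoadMs: nav ? nav.loadEventEnd - nav.startTime : performance.now(),
  };
}

// Ship the sample to your own collection endpoint when the visitor leaves;
// "/collect/connection" is a placeholder path.
window.addEventListener("pagehide", () => {
  navigator.sendBeacon("/collect/connection", JSON.stringify(sampleConnection()));
});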

Design for customers’ bandwidth

Now that you’ve confirmed the structure of your customers’ bandwidth, ensure that your site and data interaction design are designed with this in mind. Data that uses a number of inefficient data calls behind the scenes in order to be more AJAXy may hurt itself when it tries to make those calls over a network that’s optimized for download and not upload.

Measure from the customer perspective

Web performance measurement has been around a long time. But understanding how the site performs from the perspective of true (not simulated) customer connectivity, right where they live and work, will highlight how your optimizations may or may not be working as expected.

Measurements from high-throughput, high-quality datacenter connections give you some insight into performance under the best possible circumstances. Measure from the customer’s desktop, and even the most thoughtfully planned optimization efforts may turn out to have been like attacking a mammoth with a closed safety pin: ineffective, and it annoys the mammoth [to paraphrase Hugh Macleod].

As well as synthetic measurements, measure performance right from within the browser. Understanding how long pages take to render, how long it takes to show content above the fold, and how long complex Flash and AJAX events within the page take will give you even more control over finding the things you can fix.
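
A minimal sketch of this kind of in-browser timing, using the standard Navigation, Paint, and User Timing APIs, might look like the following; the widget name and the /api/search call are illustrative.

// Report page-level render timings using the Navigation and Paint Timing APIs.
function reportInPageTimings(): void {
  const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
  if (!nav) return;
  const fcp = performance
    .getEntriesByType("paint")
    .find((entry) => entry.name === "first-contentful-paint");
  console.log("DOM content loaded:", nav.domContentLoadedEventEnd, "ms");
  console.log("Full page load:", nav.loadEventEnd, "ms");
  console.log("First contentful paint:", fcp ? fcp.startTime : "n/a", "ms");
}

// Time a discrete in-page event (an AJAX-driven widget update) with the User Timing API.
async function timedWidgetUpdate(): Promise<void> {
  performance.mark("search-widget:start");
  await fetch("/api/search?q=shoes"); // placeholder call
  performance.mark("search-widget:end");
  performance.measure("search-widget", "search-widget:start", "search-widget:end");
  const [measure] = performance.getEntriesByName("search-widget", "measure");
  console.log("search widget update took", measure.duration, "ms");
}

window.addEventListener("load", () => {
  // Defer one tick so loadEventEnd has been recorded.
  setTimeout(reportInPageTimings, 0);
  timedWidgetUpdate().catch(console.error);
});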

Takeaway

In the end, even assuming your customers have the best connectivity, and you have taken all the necessary precautions to get Web performance right, don’t assume that the technology can save you from bad design and slow applications.
Be constantly vigilant. And measure everything.

Methodology Before Measurement

Measure what is measurable, and make measurable what is not so.

Galileo Galilei

The greatest challenge facing companies today is not finding ways to measure performance. The key issue is one of understanding what should be measured and validating that there is agreement on what the purpose of the measurement is.

Organizations are complex. And with complexity arises the need to gather data for different purposes. In my series discussing Why Web Measurements?, I broke organizations down into four groups, each one having distinctly different needs for measurements and data.

While this series focuses on Web performance, the four categories (Customer Generation, Customer Retention, Business Operations, and Technical Operations) can be broadly applied to all aspects of your business.
In each of the four categories, whether it is for Web performance or financial analysis, determining what and why to measure is a critical predecessor to the establishment of measurements and the examination of data.

2009 will be a year of reflection and retrenchment. Companies will be examining all aspects of their business, all of their relationships with vendors, all of the ways they measure themselves. The question that must be asked before succumbing to the rushing panic of cost-cutting and layoffs is: Do you fundamentally understand why and what you measure and what it is really telling you?

Managing Web Performance: A Hammer is a Hammer

Give almost any human being a hammer, and they will know what to do with it. Modern city dwellers, ancient jungle tribes, and most primates would all look at a hammer and understand instinctively what it does. They would know it is a tool to hit other things with. They may not grasp some of the subtleties, such as that it is designed to drive nails into other things and not beat other creatures into submission, but they would know that this is a tool that is a step up from the rock or the tree branch.

Simple tools produce simple results. This is the foundation of a substantial portion of the Software-as-a-Service (SaaS) model. SaaS is a model which allows companies to provide a simple tool in a simple way to lower the cost of the service to everyone.
Web performance data is not simple. Gathering the appropriate data can be as complex as the Web site being measured. The design and infrastructure that supports a SaaS site is usually far more complex than the service it presents to the customer. A service that measures the complexity of your site will likely not provide data that is easy to digest and turn into useful information.

As any organization that has purchased a Web performance measurement service, a monitoring tool, or a corporate dashboard expecting instant solutions will tell you, there are no easy solutions. These tools are the hammer, and just having a hammer does not mean you can build a house or craft fine furniture.

In my experience, there are very few organizations that can craft a deep understanding of their own Web performance from the tools they have at their fingertips. And the Web performance data they collect about their own site is about as useful to them as a hammer is to a snake.

Web Performance: Blogs, Third Party Apps, and Your Personal Brand

The idea that blogs generate a personal brand is as old as the “blogosphere”. It’s one of those topics that rages through the blog world every few months. Inexorably the discussion winds its way to the idea that a blog is linked exclusively to the creators of its content. This makes a blog, no matter what side of the discussion you fall on, the online representation of a personal brand that is as strong as a brand generated by an online business.

And just as corporate brands are affected by the performance of their Web sites, a personal brand can suffer just as much when something causes the performance of a blog Web site to degrade in the eyes of the visitors. For me, although my personal brand is not a large one, this happened yesterday when Disqus upgraded to multiple databases during the middle of the day, causing my site to slow to a crawl.

I will restrain my comments on mid-day maintenance for another time.

The focus of this post is the effect that site performance has on personal branding. In my case, the fact that my blog site slowed to a near standstill in the middle of the day likely left visitors with the impression that my blog about Web performance was not practicing what it preached.

For any personal brand, this is not a good thing.
In my case, I was able to draw on my experience to quickly identify and resolve the issue. Performance returned to normal when I temporarily disabled the Disqus plugin (it has since been reactivated). However, if I hadn’t been paying attention, this performance degradation could have continued, increasing the negative effect on my personal brand.

As with many blogs, Disqus is only one of the outside services embedded in my site design. Sites today rely on AdSense, Lookery, Google Analytics, Statcounter, Omniture, Lijit, and on goes the list. These services have become as omnipresent in blogs as the content itself. What needs to be remembered is that these add-ons are often overlooked as performance inhibitors.

Many of these services are built using the new models of the over-hyped and misunderstood Web 2.0. These services start small and, as Shel Israel discussed yesterday, need to focus on scalability in order to grow and be seen as successful rather than cool but a bit flaky. As a result, these blog-centric services may affect performance to a far greater extent than the third-party apps used by well-established, commercial Web sites.

I am not claiming that any one of these services in and of itself causes any form of slowdown. Each has its own challenges with scaling, capacity, and success. It is the sheer number of services used by blog designers and authors that poses the greatest potential problem when attempting to debug performance slowdowns or outages. The question in these instances, in the heat of a particularly stressful moment, is always: Is it my site or the third party?

The advice I give is that spoken by Michael Dell: You can’t manage what you can’t measure. Yesterday, I initiated monitoring of my personal Disqus community page, so I could understand how this service affected my continuing Web performance. I suggest that you do the same, but not just of this third-party. You need to understand how all of the third-party apps you use affect how your personal brand performance is perceived.
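
If you want a rough view of where that time goes without a commercial monitoring service, the browser’s Resource Timing API is a reasonable starting point. The sketch below sums load time per third-party host; the host list is illustrative, and because resources load in parallel the totals are an indicator of effort, not a wall-clock delay.

// Total time spent loading resources from each third-party host on this page,
// measured in the browser via the Resource Timing API.
const THIRD_PARTY_HOSTS = ["disqus.com", "google-analytics.com", "statcounter.com"];

function thirdPartyLoadTimes(): Record<string, number> {
  const totals: Record<string, number> = {};
  const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
  for (const entry of resources) {
    const host = new URL(entry.name).hostname;
    const match = THIRD_PARTY_HOSTS.find((h) => host === h || host.endsWith(`.${h}`));
    if (match) {
      totals[match] = (totals[match] ?? 0) + entry.duration; // milliseconds
    }
  }
  return totals;
}

window.addEventListener("load", () => {
  console.table(thirdPartyLoadTimes());
});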

Why is this important? In the mind of the visitor, the performance problem is always with your site. As with a corporate site that sees a sudden rise in response times or decrease in availability, it does not matter to the visitor what the underlying cause of the issue is. All they see is that your site, your brand (personal or corporate), is not as strong or reliable as they had been led to believe.

The lesson that I learned yesterday, one that I have taught to so many companies but not heeded myself, is that monitoring the performance of all aspects of your site is critical. And while you as the blog designer or writer might not directly control the third-party content you embed in your site, you must consider how it affects your personal brand when something goes wrong.

You can then make an informed decision on whether the benefit of any one third-party app is outweighed by the negative effect it has on your site performance and, by extension, your personal brand.

Web Performance, Part IX: Curse of the Single Metric

While this post is aimed at Web performance, the curse of the single metric affects our everyday lives in ways that we have become oblivious to.

When you listen to a business report, the stock market indices are an aggregated metric used to represent the performance of a set group of stocks.

When you read about economic indicators, these values are the aggregated representations of complex populations of data, collected from around the country, or the world.

Sports scores are the final tally of an event, but they may not always represent how well each team performed during the match.

The problem with single metrics lies in their simplicity. When a single metric is created, it usually attempts to factor in all of the possible and relevant data to produce an aggregated value that can represent a whole population of results.
These single metrics are then portrayed as a complete representation of this complex calculation. The presentation of a single metric is usually done in such a way that its compelling simplicity is accepted as the truth, rather than as a representation of a truth.

In the area of Web performance, organizations have fallen prey to this need for the compelling single metric: the need to represent a very complex process in terms that can be quickly absorbed and understood by as large a group of people as possible.

The single metrics most commonly found in the Web performance management field are performance (end-to-end response time of the tested business process) and availability (success rate of the tested business process). These numbers are then merged with and transformed by data from a number of sources (external measurements, hit counts, conversions, internal server metrics, packet loss), and this information is bubbled up through the organization. By the time senior management and decision-makers receive the Web performance results, they are likely several steps removed from the raw measurement data.

An executive will tell you that information is a blessing, but only when it speeds, rather than hinders, the decision-making process. A Web performance consultant (such as myself) will tell you that basing your decisions on a single metric that has been created out of a complex population of data is madness.

So, where does the middle ground lie between the data wonks and the senior leaders? The rest of this post is dedicated to introducing a few of the metrics that will, in a small set of numbers, give senior leaders better information to work from when deciding what to do next.

A great place to start this process is to examine the percentile distribution of measurement results. Percentiles are known to anyone who has children. After a visit to the pediatrician, someone will likely state that “My son/daughter is in the XXth percentile of his/her age group for height/weight/tantrums/etc”. This means that XX% of the population of children that age, as recorded by pediatricians, report values at or below the same value for this same metric.

Percentiles are great for a population of results like Web performance measurement data. Using only a small set of values, anyone can quickly see how many visitors to a site could be experiencing poor performance.

If at the median (50th percentile), the measured business process is 3.0 seconds, this means that 50% of all of the measurements looked at are being completed in 3.0 seconds or less.

If the executive then looks up to the 90th percentile and sees that it’s at 16.0 seconds, it can be quickly determined that something very bad has happened to affect the response times collected for the 40% of the population between these two points. Immediately, everyone knows that for some reason, an unacceptable number of visitors are likely experiencing degraded and unpredictable performance when they visit the site.

A suggestion for enhancing averages with percentiles is to use the 90th percentile value as a trim ceiling for the average, and then compare the untrimmed and trimmed averages side by side. For sites with a larger number of response time outliers, the average will decrease dramatically when it is trimmed, while sites with more consistent measurement results will find their average response time is similar with and without the trimmed data.
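
As a concrete illustration of both calculations, here is a minimal TypeScript sketch using the nearest-rank percentile method and a small set of invented response times; a real measurement population would be far larger.

// Nearest-rank percentile over an ascending-sorted array.
function percentile(sortedValues: number[], p: number): number {
  const index = Math.ceil((p / 100) * sortedValues.length) - 1;
  return sortedValues[Math.min(sortedValues.length - 1, Math.max(0, index))];
}

function average(values: number[]): number {
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

// Invented response times, in seconds, sorted ascending.
const responseTimes = [2.1, 2.4, 2.8, 3.0, 3.1, 3.3, 3.6, 4.0, 9.5, 16.2];

const median = percentile(responseTimes, 50); // 3.1 s
const p90 = percentile(responseTimes, 90);    // 9.5 s

// Untrimmed average versus the average with the 90th percentile as a trim ceiling.
const untrimmed = average(responseTimes);                       // 5.0 s
const trimmed = average(responseTimes.filter((v) => v <= p90)); // ~3.8 s

console.log({ median, p90, untrimmed, trimmed });
// The gap between the untrimmed (5.0 s) and trimmed (~3.8 s) averages signals a
// heavy tail of slow outliers hidden behind the single "average" number.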

It is also critical to examine the application’s response times and success rates throughout defined business cycles. A single response time or success rate value hides

  • variations by time of day
  • variations by day of week
  • variations by month
  • variations caused by advertising and marketing

An average is just an average. If, at peak business hours, response times are 5.0 seconds slower than the average, then the average is meaningless: business is being lost to poor performance that the focus on the single metric keeps invisible.
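
A hedged sketch of what examining a business cycle can look like in code: group measurements by hour of day and flag the hours that run well above the overall average. The data shape and the threshold parameter are assumptions for the example.

interface Measurement {
  timestamp: Date;
  responseTimeSec: number;
}

// Average response time for each hour of the day.
function averageByHour(measurements: Measurement[]): Map<number, number> {
  const buckets = new Map<number, { total: number; count: number }>();
  for (const m of measurements) {
    const hour = m.timestamp.getHours();
    const b = buckets.get(hour) ?? { total: 0, count: 0 };
    b.total += m.responseTimeSec;
    b.count += 1;
    buckets.set(hour, b);
  }
  return new Map([...buckets].map(([hour, b]) => [hour, b.total / b.count]));
}

// Hours whose average runs more than thresholdSec above the overall average:
// exactly the variation that a single overall number hides.
function flagSlowHours(measurements: Measurement[], thresholdSec: number): number[] {
  const overall =
    measurements.reduce((sum, m) => sum + m.responseTimeSec, 0) / measurements.length;
  return [...averageByHour(measurements)]
    .filter(([, avg]) => avg - overall > thresholdSec)
    .map(([hour]) => hour);
}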

All of the items discussed above have also fallen prey to their own curse of the single metric: they aggregate the response time of the business process into a single value. The process of purchasing items online is broken down into discrete steps, and different parts of this process likely take longer than others. And one step beyond the discrete steps are the objects and data that appear to the customer during these steps.

It is critical to isolate the performance for each step of the process to find the bottlenecks to performance. Then the components in those steps that cause the greatest response time or success rate degradation must be identified and targeted for performance improvement initiatives. If there are one or two poorly performing steps in a business process, focusing performance improvement efforts on these is critical, otherwise precious resources are being wasted in trying to fix parts of the application that are working well.
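
A minimal sketch of that isolation step, assuming each measurement is already labeled with the business-process step it belongs to; the step names and data shape are illustrative.

interface StepTiming {
  step: string; // e.g. "home", "search", "product detail", "cart", "checkout"
  responseTimeSec: number;
}

// Average response time per step, slowest first, so improvement efforts can be
// aimed at the one or two steps that actually drag the business process down.
function slowestSteps(timings: StepTiming[]): Array<{ step: string; avgSec: number }> {
  const buckets = new Map<string, { total: number; count: number }>();
  for (const t of timings) {
    const b = buckets.get(t.step) ?? { total: 0, count: 0 };
    b.total += t.responseTimeSec;
    b.count += 1;
    buckets.set(t.step, b);
  }
  return [...buckets]
    .map(([step, b]) => ({ step, avgSec: b.total / b.count }))
    .sort((a, b) => b.avgSec - a.avgSec);
}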

In summary, a single metric provides a false sense of confidence, the sense that the application can be counted on to deliver response times and success rates that are nearly the same as those simple, single metrics.

The average provides a middle ground, a line marking the approximate mid-point of the measurement population. There are measurements above and below this average, and you have to plan around the peaks and valleys, not the open plains. It is critical never to fall victim to the attractive charms that come with the curse of the single metric.

Web Performance, Part VIII: How do you define fast?

In the realm of Web performance measurement and monitoring, one of the eternal and ever-present questions remains “What is fast?”. The simple fact is that there is no single answer for this question, as it isn’t a question with one quantitative answer that encompasses all the varied scenarios that are presented to the Web performance professional.

The answer that the people who ask the “What is fast?” question most often hear is “It depends”. And in most cases, it depends on the results of three distinct areas of analysis.

  1. Baselining
  2. Competitive Analysis
  3. Comparative Analysis

Baselining

Baselining is the process of examining Web performance results over a period of time to determine the inherent patterns that exist in the measurement data. It is critical that this process occur over a minimum period of 14 days, as there are a number of key patterns that will only appear within a period at least that long.

Baselining also provides some idea of what normal performance of a Web site or Web business process is. While this will provide some insight into what can be expected from the site, in isolation it provides only a tiny glimpse into the complexity of how fast a Web site should be.

Baselining can identify the slow pages in a business process, or objects that may be causing noticeable performance degradation, but its inherent isolation from the rest of the world is its biggest failing. Companies that rely only on performance data from their own sites to provide the context of what is fast are left with a very narrow view of the real world.

Competitive Analysis

All companies have competition. There is always a firm or organization whose sole purpose is to carve a niche out of your base of customers. It flows both ways, as your firm is trying to do exactly the same thing to other firms.

When you consider the performance of your online presence, which likely accounts for a large (and growing) component of your revenue, why would you leave the effects of poor Web site performance out of your competitive analysis? And how do you know how your site is faring against the other firms you are competing against on a daily basis?

Competitive analysis has been a key component of the Web performance measurement field since it appeared in the mid-1990s. Firms want to understand how they are doing against other firms in the same competitive space. They need to know if their Web site is at a quantitative advantage or disadvantage with these other firms.

Web sites are almost always different in their presentation and design, but they all serve the same purpose: To convert visitors to buyers. Measuring this process in a structured way allows companies to cut through the differences that exist in design and presentation and cut directly to the heart of the matter: Show me the money.
Competitive measurements allow you to determine where your firm is strong, where it is weak, and how it should prioritize its efforts to make it a better site that more effectively serves the needs of the customers, and the needs of the business.

Comparative Analysis

Most astute readers will be wondering how comparative analysis differs from competitive analysis. The differences are, in fact, fundamental to the way they are used. Where competitive analysis provides insight into the unique business challenges faced by a group of firms serving the needs of similar customers, comparative analysis forces your organization to look at performance more broadly.

Your customers and visitors do not just visit your site. I know this may come as a surprise, but it’s true. As a result, they carry with them very clear ideas of how fast a fast site is. And while your organization may have overcome many challenges to become the performance leader in your sector, you can only say that you understand the true meaning of performance once you have stepped outside your comfort zone and compared yourself to the true leaders in performance online.

On a daily basis, your customers compare your search functionality to firms who do nothing but provide search results to millions of people each day. They compare how long it takes to authenticate and get a personalized landing page on your site to the experiences they have at their bank and their favourite retailers. They compare the speed with which specific product pages load.

They may not do this consciously. But these consumers carry with them an expectation of performance, and they know when your site is or is not delivering it.
So, how do you define fast? Fast is what you make it. As a firm with a Web site that is serving the needs of customers or visitors, you have to be ready to accept that there are others out there who have solved many of the problems you may be facing. Broaden your perspective and put your site in the harsh light of these three spotlights, and your organization will be on its way to evolving its Web performance perspective.
