Category: Effective Web Performance

Web Performance, Part IX: Curse of the Single Metric

While this post is aimed at Web performance, the curse of the single metric affects our everyday lives in ways that we have become oblivious to.

When you listen to a business report, the stock market indices are an aggregated metric used to represent the performance of a set group of stocks.

When you read about economic indicators, these values are the aggregated representations of complex populations of data, collected from around the country, or the world.

Sports scores are the final tally of an event, but they may not always reflect how well each team performed during the match.

The problem with single metrics lies in their simplicity. When a single metric is created, it usually attempts to factor in all of the possible and relevant data to produce an aggregated value that can represent a whole population of results.
These single metrics are then portrayed as a complete representation of this complex calculation. They are usually presented in such a way that their compelling simplicity is accepted as the truth, rather than as a representation of a truth.

In the area of Web performance, organizations have fallen prey to this need for the compelling single metric: the need to represent a very complex process in terms that can be quickly absorbed and understood by as large a group of people as possible.

The single metrics most commonly found in the Web performance management field are performance (end-to-end response time of the tested business process) and availability (success rate of the tested business process). These numbers are then merged with and transformed by data from a number of sources (external measurements, hit counts, conversions, internal server metrics, packet loss), and this information is bubbled up in an organization. By the time senior management and decision-makers receive the Web performance results, they are likely several steps removed from the raw measurement data.

An executive will tell you that information is a blessing, but only when it speeds, rather than hinders, the decision-making process. A Web performance consultant (such as myself) will tell you that basing your decisions on a single metric that has been created out of a complex population of data is madness.

So, where does the middle ground lie between the data wonks and the senior leaders? The rest of this post introduces a small set of metrics that will give senior leaders better information to work from when deciding what to do next.

A great place to start this process is to examine the percentile distribution of measurement results. Percentiles are known to anyone who has children. After a visit to the pediatrician, someone will likely state that “My son/daughter is in the XXth percentile of his/her age group for height/weight/tantrums/etc”. This means that XX% of the population of children that age, as recorded by pediatricians, report values at or below the same value for this same metric.

Percentiles are great for a population of results like Web performance measurement data. Using only a small set of values, anyone can quickly see how many visitors to a site could be experiencing poor performance.

If at the median (50th percentile), the measured business process is 3.0 seconds, this means that 50% of all of the measurements looked at are being completed in 3.0 seconds or less.

If the executive then looks up to the 90th percentile and sees that it’s at 16.0 seconds, it can be quickly determined that something very bad has happened to affect the response times collected for the 40% of the population between these two points. Immediately, everyone knows that for some reason, an unacceptable number of visitors are likely experiencing degraded and unpredictable performance when they visit the site.

A suggestion for enhancing averages with percentiles is to use the 90th percentile value as a trim ceiling for the average, and then compare the untrimmed and trimmed averages side by side. For sites with a larger number of response time outliers, the average will decrease dramatically when it is trimmed, while sites with more consistent measurement results will find their average response time is similar with and without the trimmed data.
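To make this concrete, here is a minimal sketch (in Python, against an invented set of response times in seconds) of how the median, the 90th percentile, the untrimmed average, and a 90th-percentile-trimmed average could be computed side by side:

import statistics

def percentile(values, pct):
    # Nearest-rank method: the smallest value at or below which pct percent
    # of the sorted sample falls.
    ordered = sorted(values)
    rank = max(1, int(round(pct / 100.0 * len(ordered))))
    return ordered[rank - 1]

# Hypothetical response times (seconds) for one measured business process.
response_times = [2.1, 2.4, 2.8, 3.0, 3.1, 3.3, 3.9, 5.2, 9.7, 16.0]

median = percentile(response_times, 50)
p90 = percentile(response_times, 90)

untrimmed_avg = statistics.mean(response_times)
# Trim the average at the 90th percentile: drop measurements above that ceiling.
trimmed_avg = statistics.mean([t for t in response_times if t <= p90])

print(f"median={median:.1f}s  90th percentile={p90:.1f}s")
print(f"average={untrimmed_avg:.2f}s  trimmed average={trimmed_avg:.2f}s")

For a site with heavy outliers the trimmed average falls well below the untrimmed one; for a consistent site the two values are nearly identical.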

It is also critical to examine the application’s response times and success rates throughout defined business cycles. A single response time or success rate value eliminates

  • variations by time of day
  • variations by day of week
  • variations by month
  • variations caused by advertising and marketing

An average is just an average. If response times at peak business hours are 5.0 seconds slower than the average, then the average is meaningless: business is being lost to poor performance that the focus on a single metric has hidden.

All of the metrics discussed above have also fallen prey to their own curse of the single metric: they aggregate the response time of an entire business process into a single value. The process of purchasing items online is broken down into discrete steps, and some parts of this process likely take longer than others. And one step beyond the discrete steps are the objects and data that appear to the customer during those steps.

It is critical to isolate the performance for each step of the process to find the bottlenecks to performance. Then the components in those steps that cause the greatest response time or success rate degradation must be identified and targeted for performance improvement initiatives. If there are one or two poorly performing steps in a business process, focusing performance improvement efforts on these is critical, otherwise precious resources are being wasted in trying to fix parts of the application that are working well.
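As a rough illustration of this kind of breakdown, the sketch below (Python, with invented step names and timings) groups per-step measurements and ranks the steps by their 90th-percentile response time, so the step that deserves the performance improvement effort stands out:

from collections import defaultdict

def p90(values):
    # Nearest-rank 90th percentile.
    ordered = sorted(values)
    return ordered[max(0, int(round(0.9 * len(ordered))) - 1)]

# Hypothetical measurements: (business process step, response time in seconds).
measurements = [
    ("home", 1.2), ("home", 1.4), ("search", 2.8), ("search", 3.1),
    ("add_to_cart", 1.0), ("add_to_cart", 1.1), ("checkout", 6.4), ("checkout", 9.8),
]

by_step = defaultdict(list)
for step, seconds in measurements:
    by_step[step].append(seconds)

# Rank the steps by their 90th-percentile response time to expose the bottleneck.
for step, times in sorted(by_step.items(), key=lambda kv: p90(kv[1]), reverse=True):
    print(f"{step:12s} p90={p90(times):.1f}s  samples={len(times)}")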

In summary, a single metric provides a false sense of confidence, the sense that the application can be counted on to deliver response times and success rates that are nearly the same as those simple, single metrics.

The average provides a middle ground, a line that marks the approximate mid-point of the measurement population. There are measurements above and below this average, and you have to plan around the peaks and valleys, not the open plains. It is critical never to fall victim to the attractive charms that come with the curse of the single metric.

Google Chrome: One thing we do know… (HTTP Pipelining)

 

All: If you got here via a search, realize this is an old post (2008) and that Chrome now supports HTTP Pipelining and SPDY HTTP/3.  Thanks, smp.

As a Web performance consultant, I view the release of Google Chrome with slightly different eyes than many. And one of the items that I look for is how the browser will affect performance, especially perceived performance on the end-user desktop.

One thing I have been able to determine is that the use of WebKit will effectively rule out (to the best of my knowledge) the availability of HTTP Pipelining in the browser.

HTTP Pipelining is the ability, defined in RFC 2616, to send multiple HTTP requests across an open TCP connection without waiting for each response, and then handle the responses, returned in the same order, using the features built into the HTTP/1.1 specification.
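For the curious, here is a bare-bones sketch of what pipelining looks like on the wire: two requests written back-to-back on a single TCP connection before any response is read. The host below is just a placeholder, and many servers and intermediaries handle pipelined requests poorly, which is a large part of why browser support has always been shaky.

import socket

HOST = "example.com"  # placeholder: any HTTP/1.1 server listening on port 80

# Two requests, sent back-to-back before reading anything; HTTP/1.1 requires
# the server to return the responses in the same order they were requested.
requests = (
    "GET / HTTP/1.1\r\nHost: {h}\r\nConnection: keep-alive\r\n\r\n"
    "GET /index.html HTTP/1.1\r\nHost: {h}\r\nConnection: close\r\n\r\n"
).format(h=HOST)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(requests.encode("ascii"))
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:  # the second request asked the server to close the connection
            break
        chunks.append(data)

print(f"received {len(b''.join(chunks))} bytes covering both pipelined responses")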

I had an Apple employee in a class I taught a few months back confirm that Safari (which is built on WebKit) cannot use HTTP Pipelining, for reasons that are known only to the OS and TCP stack developers at Apple.

Now, if the team at Google has found a way to circumvent this problem, I will be impressed.

Web Performance, Part VIII: How do you define fast?

In the realm of Web performance measurement and monitoring, one of the eternal and ever-present questions remains “What is fast?”. The simple fact is that there is no single answer for this question, as it isn’t a question with one quantitative answer that encompasses all the varied scenarios that are presented to the Web performance professional.

The answer that the people who ask the “What is fast?” question most often hear is “It depends”. And in most cases, it depends on the results of three distinct areas of analysis.

  1. Baselining
  2. Competitive Analysis
  3. Comparative Analysis

Baselining

Baselining is the process of examining Web performance results over a period of time to determine the inherent patterns that exist in the measurement data. It is critical that this process occur over a minimum period of 14 days, as there are a number of key patterns that will only appear within a period at least that long.

Baselining also provides some idea of what normal performance of a Web site or Web business process is. While this will provide some insight into the what can be expected from the site, in isolation it provides only a tiny glimpse into the complexity of how fast a Web site should be.

While baselining can identify the slow pages in a business process, or objects that may be causing noticeable performance degradation, its inherent isolation from the rest of the world in which the site exists is its biggest failing. Companies that rely only on the performance data from their own sites to provide the context of what is fast are left with a very narrow view of the real world.
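One simple way to surface the patterns that baselining looks for is to bucket the measurements by day of week and hour of day across the baselining window. A minimal sketch, assuming a log of timestamped response times (the sample data here is invented):

from collections import defaultdict
from datetime import datetime
from statistics import mean

# Hypothetical measurement log: (ISO timestamp, response time in seconds).
samples = [
    ("2008-09-01T09:15:00", 2.9), ("2008-09-01T14:30:00", 4.8),
    ("2008-09-02T09:20:00", 3.1), ("2008-09-06T10:05:00", 2.2),
    # ... ideally at least 14 days of data, so weekly patterns can emerge
]

weekday_names = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
buckets = defaultdict(list)
for stamp, seconds in samples:
    when = datetime.fromisoformat(stamp)
    buckets[(when.weekday(), when.hour)].append(seconds)

# The per-bucket averages expose the time-of-day and day-of-week patterns
# that a single overall average hides.
for (day, hour), times in sorted(buckets.items()):
    print(f"{weekday_names[day]} {hour:02d}:00  mean={mean(times):.1f}s  samples={len(times)}")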

Competitive Analysis

All companies have competition. There is always a firm or organization whose sole purpose is to carve a niche out of your base of customers. It flows both ways, as your firm is trying to do exactly the same thing to other firms.

When you consider the performance of your online presence, which is likely accounting for a large (and growing) component of your revenue, why would you leave the effects of poor Web site performance out of your competitive analysis? And how do you know how your site is faring against the other firms you are competing against on a daily basis?

Competitive analysis has been a key component of the Web performance measurement field since it appeared in the mid-1990s. Firms want to understand how they are doing against other firms in the same competitive space. They need to know if their Web site is at a quantitative advantage or disadvantage with these other firms.

Web sites are almost always different in their presentation and design, but they all serve the same purpose: To convert visitors to buyers. Measuring this process in a structured way allows companies to cut through the differences that exist in design and presentation and cut directly to the heart of the matter: Show me the money.
Competitive measurements allow you to determine where your firm is strong, where it is weak, and how it should prioritize its efforts to make it a better site that more effectively serves the needs of the customers, and the needs of the business.

Comparative Analysis

Most astute readers will be wondering how comparative analysis differs from competitive analysis. The differences are, in fact, fundamental to the way they are used. Where competitive analysis provides insight into the unique business challenges faced by a group of firms serving the needs of similar customers, comparative analysis forces your organization to look at performance more broadly.

Your customers and visitors do not just visit your site. I know this may come as a surprise, but it’s true. As a result, they carry with them very clear ideas of how fast a fast site is. And while your organization may have overcome many challenges to become the performance leader in your sector, you can only say that you understand the true meaning of performance once you have stepped outside your comfort zone and compared yourself to the true leaders in performance online.

On a daily basis, your customers compare your search functionality to firms who do nothing but provide search results to millions of people each day. They compare how long it takes to authenticate and get a personalized landing page on your site to the experiences they have at their bank and their favourite retailers. They compare the speed with which specific product pages load.

They may not do this consciously. But these consumers carry with them an expectation of performance, and they know when your site is or is not delivering it.
So, how do you define fast? Fast is what you make it. As a firm with a Web site that is serving the needs of customers or visitors, you have to be ready to accept that there are others out there who have solved many of the problems you may be facing. Broaden your perspective and put your site in the harsh light of these three spotlights, and your organization will be on its way to evolving its Web performance perspective.

Web Performance Concepts Series – Revisited

Two years ago I created a series of five blog articles, aimed at both business and technical readers, with the goal of explaining the basic statistical concepts and methods I use when analyzing Web performance data in my role as a Web performance consultant.

Most of these ideas were core to my thinking when I developed GrabPERF in 2005-2006, as I determined that it was vital that people not only receive Web performance measurement data for their site, but receive it in a way that allows it to inform and shape the business and technical decisions they make on a daily basis.

While I come from a strong technical background, it is critical to be able to present the data that I work with in a manner that can be useful to all components of an organization, from the IT and technology leaders who shape the infrastructure and design of a site, to the marketing and business leaders who set out the goals for the organization and interact with customers, vendors and investors.

Providing data that helps negotiate the almost religious dichotomy that divides most organizations is crucial to providing a comprehensive Web performance solution to any organization.

These articles form the core of an ongoing series of discussions focused on the pitfalls of Web performance analysis, and how to learn from and avoid the errors others have already discovered.

The series went over like a lead balloon, and this left me puzzled. While the basic information in the articles was technical, focused on the role that simple statistics play in Web performance technology and business decisions, I saw it as the core of an ongoing discussion that every organization needs to have to ensure it moves in a single direction, with a single purpose.

I have decided to reintroduce this series, dredging it from the forgotten archives of this blog, to remind business and IT teams of the importance of the Web performance data they use every day. It also serves as a guide to interpreting the numbers that arise from all the measurement methodologies that companies use, a map to extract the most critical information from the raging sea of data.

The five articles are:

  1. Web Performance, Part I: Fundamentals
  2. Web Performance, Part II: What are you calling average?
  3. Web Performance, Part III: Moving Beyond Average
  4. Web Performance, Part IV: Finding The Frequency
  5. Web Performance, Part V: Baseline Your Data

I look forward to your comments and questions on these topics.

The Dichotomy of the Web: Andy King's Website Optimization

[Cover: Andy King's Website Optimization, O'Reilly, 2008]

The Web is a many-splendored thing, with a very split personality. One side is driven to find ways to make the most money possible, while the other is driven to implement cool technology in an effective and efficient manner (most of the time).

Andy King, in Website Optimization (O’Reilly), tries to address these two competing forces in a way that both can understand. This is important because, as we all know from our own lives, most of the time these two competing parts of the same whole are right; they just don’t understand the other side.

I have seen this trend repeated throughout my nine years in the Web performance industry, five of them as a consultant: companies torn asunder, viewing the Business v. Technology interaction as a Cold War, one that occasionally flares up in odd places that serve as proxies between the two.

Website Optimization appears at first glance to be torn asunder by this conflict. With half devoted to optimizing the site for business and the other to performance and design optimization, there will be a cry from the competing factions that half of this book is a useless waste of time.

These are the organizations and individuals who will always be struggling to succeed in this industry: the people and companies who don’t understand that success in both areas is critical to succeeding in a highly competitive Web world.
The first half of the book is dedicated to the optimization of a Web site, any Web site, to serve a well-defined business purpose. Discussing terms such as SEO, PPC, and CRO can curdle the blood of any hardcore techie, but they are what drive the design and business purpose of a Web site. Without a way to get people to a site, and use the information on the site to do business or complete the tasks that they need to, there is no need to have a technological infrastructure to support it.

Conversely, a business with lofty goals and a strategy that will change the marketplace will not get a chance to succeed if the site is slow, the pages are large, and the design makes cat barf look good. Concepts such as HTTP compression, file concatenation, caching, and JS/CSS placement drive this side of the personality, as well as a number of application and networking considerations that are just too far down the rat hole to even consider in a book with as broad a scope as this one.

Although on the surface many people will put this book down as not being business or techie enough, those who do buy it will show that they have a grasp of the wider perspective, the one that drives all successful sites to stand tall in a sea of similarity.

See the Website Optimization book companion site for more information, chapter summaries and two sample chapters.

Dear Apache Software Foundation: FIX THE MSIE SSL KEEPALIVE SETTINGS!

Dear Apache Software Foundation, and the developers of the Apache Web server:

I would like to thank you for developing a great product. I rely on it daily to host my own sites, and a large number of people on the Internet seem to share my love of this software.

However, it appears that you want to maintain a simple flaw in your logic that continues to make me crazy. I am a Web performance analyst, and at least once a week I sigh and shake my head when I stoop to use Microsoft Internet Explorer (MSIE) to visit secure sites.

It seems that in your SSL configurations, you continue to assume that ALL versions of MSIE can’t handle persistent connections under SSL/TLS.
Is this true? Is a bug initially caught in MSIE 5.x (5.0??) still valid for MSIE 6.0/7.0?

The short answer is: I don’t know.

It seems that no one in the Apache server team has bothered to go back and see if the current versions of MSIE — we are trying to track down the last three people who still use MSIE 5.x and help them — still share this problem.

In the meantime, can you change your SSL exclusion RegEx to something more relevant for 2007?

Current RegEx:

SetEnvIf User-Agent ".*MSIE.*" \
	nokeepalive ssl-unclean-shutdown \
	downgrade-1.0 force-response-1.0

Relevant, updated RegEx:

SetEnvIf User-Agent ".*MSIE [1-5].*" \
	nokeepalive ssl-unclean-shutdown \
	downgrade-1.0 force-response-1.0

SetEnvIf User-Agent ".*MSIE [6-9].*" \
	ssl-unclean-shutdown

Please? PLEASE? It’s so easy…and would solve so many performance problems…

Please?

Thank you.

Web Performance: Optimizing Page Load Time

Aaron Hopkins posted an article detailing all of the Web performance goodness that I have been advocating for a number of years.

To summarize:

  • Use server-side compression
  • Set your static objects to be cacheable in browser and proxy caches
  • Use keep-alives / persistent connections
  • Turn your browsers’ HTTP pipelining feature on

These ideas are not new, and neither are the findings in his study. As someone who has worked in the Web performance field for nearly a decade, these are old hat. However, it’s always nice to have someone new inject some life back into the discussion.
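A quick way to see where a site stands on the first two items above is to inspect the response headers it returns. The sketch below (Python standard library only; the URL is a placeholder) reports the compression and caching hints in a single response. Keep-alives and pipelining are connection-level behaviours, so they are better confirmed with a packet or application trace than with a simple fetch.

import urllib.request

def check(url):
    # Advertise gzip support so a compression-capable server can use it.
    req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req) as resp:
        headers = resp.headers
    print("Content-Encoding:", headers.get("Content-Encoding", "(none: not compressed)"))
    print("Cache-Control:   ", headers.get("Cache-Control", "(none: no explicit lifetime)"))
    print("Expires:         ", headers.get("Expires", "(none)"))

check("https://example.com/")  # placeholder test URL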

Performance Improvement From Caching and Compression

This paper is an extension of the work done for another article that highlighted the performance benefits of retrieving uncompressed and compressed objects directly from the origin server. I wanted to add a proxy server into the stream and determine if proxy servers helped improve the performance of object downloads, and by how much.
Using the same series of objects in the original compression article[1], the CURL tests were re-run 3 times:

  1. Directly from the origin server
  2. Through the proxy server, to load the files into cache
  3. Through the proxy server, to avoid retrieving files from the origin.[2]

This series of three tests was repeated twice: once for the uncompressed files, and then for the compressed objects.[3]
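For readers who want to reproduce something similar, the sketch below shows the shape of the test: each URL is timed directly, then through a proxy to load its cache, then through the proxy again to draw from the cache. The proxy address and URL list are placeholders, not the configuration used in the original tests.

import time
import urllib.request

PROXY = "http://192.168.1.10:3128"  # placeholder LAN proxy (e.g. a Squid cache)

def timed_fetch(url, opener):
    start = time.perf_counter()
    with opener.open(url) as resp:
        resp.read()
    return time.perf_counter() - start

direct = urllib.request.build_opener(urllib.request.ProxyHandler({}))  # bypass any proxy
proxied = urllib.request.build_opener(urllib.request.ProxyHandler({"http": PROXY}))

urls = ["http://example.com/doc%d.html" % i for i in range(1, 4)]  # placeholder test set

for url in urls:
    t_direct = timed_fetch(url, direct)
    t_load = timed_fetch(url, proxied)   # first pass through the proxy loads its cache
    t_draw = timed_fetch(url, proxied)   # second pass should be drawn from the cache
    print(f"{url}: direct={t_direct:.3f}s  load={t_load:.3f}s  draw={t_draw:.3f}s")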
As can be seen clearly in the plots below, compression caused web page download times to improve greatly when the objects were retrieved from the source. However, the performance difference between compressed and uncompressed data all but disappears when retrieving objects from a proxy server on a corporate LAN.

[Plot: download time vs. object size, uncompressed pages]
[Plot: download time vs. object size, compressed pages]

Instead of the linear growth between object size and download time seen in both of the retrieval tests that used the origin server (Source and Proxy Load data), the Proxy Draw data clearly shows the benefits that accrue when a proxy server is added to a network to assist with serving HTTP traffic.

Mean download time (seconds)

Uncompressed Pages
  No Proxy     0.256
  Proxy Load   0.254
  Proxy Draw   0.110

Compressed Pages
  No Proxy     0.181
  Proxy Load   0.140
  Proxy Draw   0.104

The data above shows just how much of an improvement a local proxy server, explicit caching directives, and compression can add to a Web site. For sites that do force a great number of requests to be returned directly to the origin server, compression will be of great help in reducing bandwidth costs and improving performance. However, by allowing pages to be cached in local proxy servers, the difference between compressed and uncompressed pages vanishes.

Conclusion

Compression is a very good start when attempting to optimize performance. The addition of explicit caching messages in server responses, which allow proxy servers to serve cached data to clients on their local LANs, can improve performance to an even greater extent than compression can. The two should be used together to improve the overall performance of Web sites.


[1]The test set was made up of the 1952 HTML files located in the top directory of the Linux Documentation Project HTML archive.

[2]All of the pages in these tests announced the following server response header indicating their cacheability:

Cache-Control: max-age=3600

[3]A note on the compressed files: all compression was performed dynamically by mod_gzip for Apache/1.3.27.

Using Client-Side Cache Solutions And Server-Side Caching Configurations To Improve Internet Performance

In today's highly competitive e-commerce marketplace, the performance of a web-site plays a key role in attracting new and retaining current clients. New technologies are being developed to help speed up the delivery of content to customers while still allowing companies to get their message across using rich, graphical content. However, in the rush to find new technologies to improve internet performance, one low-cost alternative to these new technologies is often overlooked: client-side content caching.

This process is often overlooked or dismissed by web administrators and content providers seeking to improve performance. The major concern that is expressed by these groups is that they need to ensure that clients always get the freshest content possible. In their eyes, allowing their content to be cached is perceived as losing control of their message.

This bias against caching is, in most cases, unjustified. By understanding how server software can be used to distinguish unique caching policies for each type of content being delivered, client-side performance gains can be achieved with no new hardware or software being added to an existing web-site system.

Caching

When a client requests web content, this information is either retrieved directly from the origin server, from a browser cache on a local hard drive or from a nearby cache server[1]. Where and for how long the data is stored depends on how the data is tagged when it leaves the web server. However, when discussing cache content, there are three states that content can be in: non-cacheable, fresh or stale.

The non-cacheable state indicates a file that should never be cached by any device that receives it and that every request for that file must be retrieved from the origin server. This places an additional load on both client and server bandwidth, as well as on the server which responds to these additional requests. In many cases, such as database queries, news content, and personalized content marked by unique cookies, the content provider may explicitly not want data to be cached to prevent stale data from being received by the client.

A fresh file is one that has a clearly defined future expiration date and/or does not indicate that it is non-cacheable. A file with a defined lifespan is only valid for a set number of seconds after it is downloaded, or until the explicitly stated expiry date and time is reached. At that point, the file is considered stale and must be re-verified (preferred as it requires less bandwidth) or re-loaded from the origin server.[2]

If a file does not explicitly indicate it is non-cacheable, but does not indicate an explicit expiry period or time, the cache server assigns the file an expiry time defined in the cache server's configuration. When that deadline is reached and the cache server receives a request for that file, the server checks with the origin server to see whether the content has changed. If the file is unchanged, the counter is reset and the existing content is served to the client; if the file is changed, the new content is downloaded, cached according to its settings and then served to the client.

A stale file is a file in cache that is no longer valid. A client has requested information that had previously been stored in the cache and the control data for the object indicates that it has expired or is too stale to be considered for serving. The browser or cache server must now either re-validate the file with the origin server, or retrieve it from the origin server again, before the data can be served to the client.
The state of an item being considered for caching is determined using one or more of 5 HTTP header messages[3]: two server messages, one client message, and two that can be sent by either the client or the server.[4] These headers include: Pragma: no-cache; Cache-Control; Expires; Last-Modified; and If-Modified-Since. Each of these identifies a particular condition that the proxy server must adhere to when deciding whether the content is fresh enough to be served to the requesting client.

Pragma: no-cache is an HTTP/1.0 client and server header that informs caching servers not to serve the requested content to the client from their cache (client-side) and not cache the marked information if they receive it (server-side). This response has been deprecated in favor of the new HTTP/1.1 Cache-Control header, but is still used in many browsers and servers. The continued use of this header is necessary to ensure backwards-compatibility, as it cannot be guaranteed that all devices and servers will understand the HTTP/1.1 server headers.

Cache-Control is a family of HTTP/1.1 client and server messages that can be used to clearly define not only if an item can be cached, but also for how long and how it should be validated upon expiry. This more precise family of messages replaces the older Pragma: no-cache message. There are a large number of options for this header field, but four are especially relevant to this discussion.[5]

Cache-Control: private/public

This setting indicates what type of devices can cache the data. The private setting allows the marked items to be cached by the requesting client, but not by any cache servers encountered en-route. The public setting indicates that any device can cache this content. By default, public is assumed unless private is explicitly stated.

Cache-Control: no-cache

This is the HTTP/1.1 equivalent of Pragma: no-cache and can be used by clients to force an end-to-end retrieval of the requested files and by servers to prevent items from being cached.

Cache-Control: max-age=x

This setting allows the indicated files to be cached either by the client or the cache server for x seconds.

Cache-Control: must-revalidate

This setting informs the cache server that if the item in cache is stale, it must be re-validated before it can be served to the client.

A number of these settings can be combined to form a larger Cache-Control header message. For example, an administrator may want to define how long the content is valid for, and then indicate that, at the end of that period, all new requests must be revalidated with the origin server. This can be accomplished by creating a multi-field Cache-Control header message like the one below.

Cache-Control: max-age=3600, must-revalidate

Expires sets an explicit expiry date and time for the requested file. This is usually in the future, but a server administrator can ensure that an object is always re-validated by setting an expiry date that is in the past; an example of this will be shown below.

Last-Modified can indicate one of several conditions, but the most common is the last time the state of the requested object was updated. The cache server can use this to confirm an object has not changed since it was inserted into the cache, allowing for re-validation, versus completely re-loading, of objects in cache.

If-Modified-Since is a client-side header message that is sent either by a browser or a cache server and is set by the Last-Modified value of the object in cache. When the origin server has not set an explicit cache expiry value and the cache server has had to set an expiry time on the object using its own internal configuration, the Last-Modified value is used to confirm whether content has changed on the origin server.

If the Last-Modified value on an object held by the origin server is newer than that held by the client, the entire file is re-loaded. If these values are the same, the origin server returns a 304 Not Modified HTTP message and the cache object is then served to the client and has its cache-defined counter reset.

Using an application trace program, clients are able to capture the data that flows out of and into the browser application. The following two examples show how a server can use header messages to mark content as non-cacheable, or set very specific caching values.

Server Messages for a Non-Cacheable Object

HTTP/1.0 200 OK
Content-Type: text/html
Content-Length: 19662
Pragma: no-cache
Cache-Control: no-cache
Server: Roxen/2.1.185
Accept-Ranges: bytes
Expires: Wed, 03 Jan 2001 00:18:55 GMT

In this example, the server returns three indications that the content is non-cacheable. The first two are the Pragma: no-cache and Cache-Control: no-cache statements. With most client and cache server configurations, one of these headers on its own should be enough to prevent the requested object from being stored in cache. The web administrator in this example has chosen to ensure that any device, regardless of the version of HTTP used, will clearly understand that this object is non-cacheable.

However, in order to guarantee that this item is never stored in or served from cache, the Expires statement is set to a date and time that is in the past.[6] These three statements should be enough to guarantee that no cache serves this file without performing an end-to-end transfer of this object from the origin server with each request.

Specific Caching Information in Server Messages

HTTP/1.1 200 OK
Date: Tue, 13 Feb 2001 14:50:31 GMT
Server: Apache/1.3.12
Cache-Control: max-age=43200
Expires: Wed, 14 Feb 2001 02:50:31 GMT
Last-Modified: Sun, 03 Dec 2000 23:52:56 GMT
ETag: "1cbf3-dfd-3a2adcd8"
Accept-Ranges: bytes
Content-Length: 3581
Connection: close
Content-Type: text/html

In the example above, the server returns a header message Cache-Control: max-age=43200. This immediately informs the cache that the object can be stored in cache for up to 12 hours. This 12-hour time limit is further guaranteed by the Expires header, which is set to a date value that is exactly 12 hours ahead of the value set in the Date header message.[7]

These two examples present two variations of web server responses containing information that makes the requested content either completely non-cacheable or cacheable only for a very specific period of time.

How does caching work?

Content is cached by devices on the internet, and these devices then serve this stored content when the same file is requested by the original client or another client that uses that same cache. This rather simplistic description covers a number of different cache scenarios, but two will be the focus of this paper: browser caching and caching servers.[8]

For the remainder of this paper, the caching environment that will be discussed is one involving a network with a number of clients using a single cache server, the general internet, and a server network with a series of web servers on it.

Browser Caching

Browser caching is what most people are familiar with, as all web browsers perform this behavior by default. With this type of caching, the web browser stores a copy of the requested files in a cache directory on the client machine in order to help speed up page downloads. This performance increase is achieved by serving stored files from this directory on the local hard drive, rather than retrieving the same files from the web server across a much slower connection, whenever an item that is stored in cache is requested.

To ensure that old content is not being served to the client, the browser checks its cache first to see if an item is in cache. If the item is in cache, the browser then confirms the state of the object in cache with the origin server to see if the item has been modified at the source since the browser last downloaded it. If the object has not been modified, the origin server sends a 304 Not Modified message, and the item is served from the local hard drive and not across the much slower internet.

First Request for a file

REQUEST

GET /file.html HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/vnd.ms-powerpoint, application/vnd.ms-excel, application/msword, application/x-comet, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host: 24.5.203.101
Connection: Keep-Alive

RESPONSE

HTTP/1.1 200 OK
Date: Tue, 13 Feb 2001 20:00:22 GMT
Server: Apache
Cache-Control: max-age=604800
Last-Modified: Wed, 29 Nov 2000 15:28:38 GMT
ETag: "1df-28f1-3a2520a6"
Accept-Ranges: bytes
Content-Length: 10481
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html

In the above example[9], the file is retrieved from the server for the first time, and the server sends a 200 OK response and then returns the requested file. The Cache-Control, Last-Modified and ETag headers are the cache control data sent to the client by the server.

Second Request for a file

REQUEST

GET /file.html HTTP/1.1
Accept: */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
If-Modified-Since: Wed, 29 Nov 2000 15:28:38 GMT
If-None-Match: "1df-28f1-3a2520a6"
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host: 24.5.203.101
Connection: Keep-Alive

RESPONSE

HTTP/1.1 304 Not Modified
Date: Tue, 13 Feb 2001 20:01:07 GMT
Server: Apache
Connection: Keep-Alive
Keep-Alive: timeout=5, max=100
ETag: "1df-28f1-3a2520a6"
Cache-Control: max-age=604800

The second request for a file sees the client send a request for the same object 40 seconds later, but with two additions. The client asks if the file has been modified since the last time it was requested (If-Modified-Since). In case the date in that field cannot be used by the origin server to confirm the state of the requested object, the client also asks whether the object's Etag tracking code has changed, using the If-None-Match header message.[10] The origin server responds by verifying that the object has not been modified, and confirms this by returning the same Etag value that was sent by the client. This rapid client-server exchange allows the browser to quickly determine that it can serve the file directly from its local cache directory.
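The same revalidation exchange can be reproduced programmatically. The sketch below (Python standard library; the URL is a placeholder) fetches an object once, then replays its Last-Modified and ETag values as If-Modified-Since and If-None-Match on a second request; a 304 response means the cached copy can be reused.

import urllib.request
from urllib.error import HTTPError

URL = "http://example.com/file.html"  # placeholder URL

# First request: fetch the object and remember its validators.
with urllib.request.urlopen(URL) as resp:
    cached_body = resp.read()
    last_modified = resp.headers.get("Last-Modified")
    etag = resp.headers.get("ETag")

# Second request: revalidate instead of re-downloading.
conditional_headers = {}
if last_modified:
    conditional_headers["If-Modified-Since"] = last_modified
if etag:
    conditional_headers["If-None-Match"] = etag

try:
    with urllib.request.urlopen(urllib.request.Request(URL, headers=conditional_headers)) as resp:
        cached_body = resp.read()   # 200 OK: the object changed, keep the new copy
        print("object changed, status", resp.getcode())
except HTTPError as err:
    if err.code == 304:
        print("304 Not Modified: the cached copy can be served")
    else:
        raise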

Caching Server

A caching server performs functions similar to those of a browser cache, only on a much larger scale. Where a browser cache is responsible for storing web objects for a single browser application on a single machine, a cache server stores web objects for a larger number of clients or perhaps even an entire network. With a cache server, all web requests from a network are passed through the caching server, which then serves the requested files to the client. The cache server can deliver content either directly from its own cache of objects, or by retrieving objects from the internet and then serving them to clients.[11]

Cache servers are more efficient than browser caches, as this network-level caching process makes the object available to all users of the network once it has been retrieved. With a browser cache, each user and, in fact, each browser application on a specific client must maintain a unique cache of files that is not shared with other clients or applications.

Also, cache servers use additional information provided by the web server in the headers sent along with each web request. Browser caches simply re-validate content with each request, confirming that the content has not been modified since it was last requested. Cache servers use the values sent in the Expires and Cache-Control header messages to set explicit expiry times for objects they store.

First Request for a file through a cache server

REQUEST

GET http://24.5.203.101/file.html HTTP/1.1
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/vnd.ms-powerpoint, application/vnd.ms-excel, application/msword, application/x-comet, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host: 24.5.203.101
Proxy-Connection: Keep-Alive

RESPONSE

HTTP/1.0 200 OK
Date: Tue, 16 Jan 2001 15:46:42 GMT
Server: Apache
Cache-Control: max-age=604800
Last-Modified: Wed, 29 Nov 2000 15:28:38 GMT
ETag: "1df-28f1-3a2520a6"
Content-Length: 10481
Content-Type: text/html
Connection: Close

The first request from the client through a cache server shows two very interesting things.[12] The first is that although the client request was sent out as HTTP/1.1, the server responded using HTTP/1.0. The browser caching example above demonstrated that the responding server uses HTTP/1.1. The change in protocol is the first clue that this data was served by a cache server.

The second item of interest is that the file that is initially served by the proxy server has a Date field set to January 16, 2001. This server is not serving stale data; this is the default time set by the cache server to indicate a new object that has been inserted in the cache.[13]

Second Request for a file through a cache server: Second Browser

REQUEST

GET http://24.5.203.101/file.html HTTP/1.1
Host: 24.5.203.101
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; 0.7) Gecko/20010109
Accept: */*
Accept-Language: en
Accept-Encoding: gzip,deflate,compress,identity
Keep-Alive: 300
Connection: keep-alive

RESPONSE

HTTP/1.0 200 OK
Date: Tue, 16 Jan 2001 15:46:42 GMT
Server: Apache
Cache-Control: max-age=604800
Last-Modified: Wed, 29 Nov 2000 15:28:38 GMT
ETag: "1df-28f1-3a2520a6"
Content-Length: 10481
Content-Type: text/html
Connection: Close

Third Request for a file through a cache server: Second Client Machine

REQUEST

GET http://24.5.203.101/file.html HTTP/1.0
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/vnd.ms-powerpoint,
application/vnd.ms-excel, application/msword, */*
Accept-Language: en-us
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0)
Host: 24.5.203.101
Proxy-Connection: Keep-Alive

RESPONSE

HTTP/1.0 200 OK
Date: Tue, 16 Jan 2001 15:46:42 GMT
Server: Apache
Cache-Control: max-age=604800
Last-Modified: Wed, 29 Nov 2000 15:28:38 GMT
ETag: "1df-28f1-3a2520a6"
Content-Length: 10481
Content-Type: text/html
Connection: Close

A second request through the cache server, using another browser on the same client configured to use the cache server, indicates that this client retrieved the file from the cache server, not from the origin server. The Date field is the same as the initial request and the protocol has once again been swapped from HTTP/1.1 to HTTP/1.0.

The third example shows that the object is now not only available to different browsers on the same machine, but also to different machines on the same network using the same cache server. By requesting the same content from another client machine on the same network, it is clear that the object is served to the client by the cache server, as the Date field is set to the same value observed in the previous two examples.

Why should data be cached?

Many web pages that are downloaded by web browsers today are marked as being non-cacheable. The theory behind this is that there is so much dynamic and personalized content on the internet today that if any of it is cached, people using the web may not have the freshest possible content or they may end up receiving content that was personalized for another client making use of the same cache server.

The dynamic and personalized nature of the web today does make this a challenge, but if the design of a web-site is examined closely, it can be seen that these new features of the web can work hand-in-hand with content caching.

How does caching improve the perceived user experience? In both the browser caching and caching server discussions above, it has been demonstrated that caching helps attack the problem of internet performance on three fronts. First, caching moves content closer to the client, by placing it on local hard-drives or in local network caches. With data stored on or near the client, the network delay encountered when trying to retrieve the data is reduced or eliminated.

Secondly, caching reduces network traffic by serving content that is fresh, as described above. Cache servers will attempt to confirm with the origin server that the objects stored in cache (if not explicitly marked for expiry) are still valid and do not need to be fully re-loaded across the internet. In order to gain the maximum performance benefit from object caching, it is vital to specify explicit cache expiry dates or periods.

The final performance benefit of properly defining the caching configuration of content on an origin server is that server load is reduced. If the server uses carefully planned explicit caching policies, server load can be greatly reduced, improving the user experience.

When examining how the configuration of a web server can be modified to improve content cacheability, it is important to keep in mind two very important considerations. First, the content and site administrators must have a very granular level of control over how the content being served will or won't be cached once it leaves their server. Secondly, within this need to control how content is cached, ways should be found to minimize the impact that client requests have on bandwidth and server load by allowing some content to be cached.

Take the example of a large, popular site that is noted for its dynamic content and rich graphics. Despite having a great deal of dynamic content, caching can serve a beneficial purpose without compromising the nature of the content being served. The primary focus of the caching evaluation should be on the rich graphical content of the site.

If the images of this site all have unique names that are not shared by any other object on the site, or the images all reside in the same directory tree, then this content can be marked differently within the server configuration, allowing it to be cached.[14] A policy that allows these objects to be cached for 60, 120 or 180 seconds could have a large effect on reducing the bandwidth and server strain at the modified site. During this seemingly short period of time, several dozen or even several hundred different requests for the same object could originate from a large corporate network or ISP. If local cache servers can handle these requests, both the server and client sides of the transaction could see immediate performance improvements.
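As a toy illustration of such a path-based policy (this is not the Apache configuration the paper has in mind; the paths, lifetimes and port are invented), a server can simply vary the caching headers it sends by the type of content requested:

from http.server import BaseHTTPRequestHandler, HTTPServer

class CachingPolicyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>demo</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        if self.path.startswith("/images/"):
            # Static, shared content: cacheable by browsers and proxies for 3 minutes.
            self.send_header("Cache-Control", "public, max-age=180")
        else:
            # Dynamic or personalized content: never cached.
            self.send_header("Cache-Control", "no-cache, must-revalidate")
            self.send_header("Pragma", "no-cache")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 8080), CachingPolicyHandler).serve_forever()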

Taking a server header from an example used earlier in the paper, it can be demonstrated how even a slight change to the server header itself can help control the caching properties of dynamic content.

Dynamic Content

HTTP/1.1 200 OK
Date: Tue, 13 Feb 2001 14:50:31 GMT
Server: Apache/1.3.12
Cache-Control: no-cache, must-revalidate
Expires: Sat, 13 Jan 2001 14:50:31 GMT
Last-Modified: Sun, 03 Dec 2000 23:52:56 GMT
ETag: "1cbf3-dfd-3a2adcd8"
Accept-Ranges: bytes
Content-Length: 3581
Connection: close
Content-Type: text/html

Static Content

HTTP/1.1 200 OK
Date: Tue, 13 Feb 2001 14:50:31 GMT
Server: Apache/1.3.12
Cache-Control: max-age=43200, must-revalidate
Expires: Wed, 14 Feb 2001 02:50:31 GMT
Last-Modified: Sun, 03 Dec 2000 23:52:56 GMT
ETag: "1cbf3-dfd-3a2adcd8"
Accept-Ranges: bytes
Content-Length: 3581
Connection: close
Content-Type: text/html

As can be seen above, the only differences in the headers sent with the Dynamic Content and the Static Content are the Cache-Control and Expires values. The Dynamic Content example sets Cache-Control to no-cache, must-revalidate and Expires to one month in the past. This should prevent any cache from storing this data or serving it when a request is received to retrieve the same content.

The Static Content example modifies these two settings, making the requested object cacheable for up to 12 hours (a Cache-Control max-age value of 43,200 seconds), with an Expires value that is exactly 12 hours in the future. After the period specified, the browser cache or caching server must re-validate the content before it can be served in response to local requests.

The must-revalidate item is not necessary, but it does add additional control over content. Some cache servers will attempt to serve content that is stale under certain circumstances, such as if the origin server for the content cannot be reached. The must-revalidate setting forces the cache server to re-validate the stale content, and return an error if it cannot be retrieved.

Differentiating caching policies based on the type of content served allows a very granular level of control over what is not cached, what is cached, and for how long content can be cached and still be considered fresh. In this way, server and web administrators can improve site performance at little or no additional development or capital cost.

It is very important to note that defining specific server-side caching policies will only have a beneficial effect on server performance if explicit object caching configurations are used. The two main types of explicit caching configurations are those set by the Expires header and the Cache-Control family of headers, as seen in the examples above. If no explicit value is set for object expiry, performance gains that might have been achieved are eliminated by a flood of unnecessary client and cache server requests to re-validate unchanged objects with the origin server.

Conclusion

Despite the growth of dynamic and personalized content on the web, there is still a great deal of highly cacheable material that is served to clients. However, many sites do not take advantage of the performance gains that can be achieved by isolating the dynamic and personalized content of their site from the relatively static content that is served alongside it.

Using the inherent ability to set explicit caching policies within most modern web-server applications, objects handled by a web-server can be separated into unique content groups. With distinct caching policies for each defined group of web objects, the web-site administrator, not the cache administrator, has control over how long content is served without re-validation or re-loading. This granular control of explicit content caching policies can allow web-sites to achieve noticeable performance gains with no additional outlay for hardware and software.



Endnotes

[1] The proximity that is referred to here is network proximity, not physical proximity. For example, AOL's network has some of the world's largest cache servers and they are concentrated in Virginia; however, because of the structure of AOL's network, these cache servers are not far from the client.

[2] A re-verify is preferred as it consumes less bandwidth than a full re-load of the object from the origin server. With a re-verification, the origin server just confirms that the file is still valid and the cache server can simply reset the timer on the object.

[3] An HTTP header message is a data-control message sent by a web client or a web server to indicate a variety of data transmission parameters concerning the requests being made. Caching information is included in the information sent to and from the server.

[4] There are actually a substantially larger number of header messages that can be applied to a client or a server data transmission to communicate caching information. The most up-to-date list of the messages can be found in section 13 of RFC 2616, Hypertext Transfer Protocol — HTTP/1.1.

[5] A complete listing of the Cache-Control settings can be found in RFC 2616, Hypertext Transfer Protocol — HTTP/1.1, section 14.9.

[6] The initial file request that generated this header was sent on February 12, 2001.

[7] The Date header message indicates the date and time on the origin server when it responded to the request.

[8] A third type of caching, Reverse Caching or HTTPD Accelerators, is used at the server side to place highly cacheable content into high-speed machines that use solid-state storage to make retrieval of these objects very fast. This reduces the load on the web servers and allows them to concentrate on the generation of dynamic and personalized content.

[9] The data shown here is just application trace data.

[10] The Etag or entity tag is used to identify specific objects on a web server. Each item has a unique Etag value, and this value is changed each time the file is modified. As an example, the Etag for a local web file was captured. This data was re-captured after the file was modified (two carriage returns were inserted).

Test 1 Original File

ETag: "21ccd-10cb-399a1b33"

Test 2 Modified File

ETag: "21ccd-10cd-3a8c0597"

[11] This is where the other name for a cache server comes from, as the cache server acts as a proxy for the client making the request. The term proxy server is outdated, as the term proxy assumes that the device will do exactly as the client requests; this is not always the case, due to the security and content control mechanisms which are a part of all cache servers today. The client isn't always guaranteed to receive the complete content they requested. In fact, many networks do not allow any content into the network that does not first go through the cache devices on that network.

[12] The data shown here is just application trace data. For a more complete example of what the application and network properties of a web object retrieval are, please see Appendix A and B.

[13] All the data captures used in this example were taken on February 11-14, 2001.

[14] The description used here is based on the configuration options available with the Apache/1.3.x server family, which allows caching options to be set down to the file level. Other server applications may vary in their methods of applying individual caching policies to different sets of content on the same server.
