Category: Web Performance

The Dichotomy of the Web: Andy King's Website Optimization

Andy King's Website Optimization (O'Reilly, 2008)

The Web is a many-splendored thing, with a very split personality. One side is driven to find ways to make the most money possible, while the other is driven to implement cool technology in an effective and efficient manner (most of the time).

Andy King, in Website Optimization (O’Reilly), tries to address these two competing forces in a way that both can understand. This is important because, as we all know from our own lives, most of the time these two competing parts of the same whole are right; they just don’t understand the other side.

I have seen this trend repeated throughout my nine years in the Web performance industry, five of them as a consultant: companies torn asunder, viewing the Business v. Technology interaction as a Cold War, one that occasionally flares up in odd places that serve as proxies between the two.

Website Optimization appears at first glance to be torn asunder by this conflict. With half devoted to optimizing the site for business and the other to performance and design optimization, there will be a cry from the competing factions that half of this book is a useless waste of time.

These are the organizations and individuals who will always be fighting to succeed in this industry. These are the people and companies who don’t understand that success in both areas is critical to succeeding in a highly competitive Web world.
The first half of the book is dedicated to the optimization of a Web site, any Web site, to serve a well-defined business purpose. Discussing terms such as SEO, PPC, and CRO can curdle the blood of any hardcore techie, but they are what drive the design and business purpose of a Web site. Without a way to get people to a site, and without a way for them to use the site to do business or complete the tasks they need to, there is no need for a technological infrastructure to support it.

Conversely, a business with lofty goals and a strategy that will change the marketplace will not get a chance to succeed if the site is slow, the pages are large, and the design makes cat barf look good. Concepts such as HTTP compression, file concatenation, caching, and JS/CSS placement drive this side of the personality, as well as a number of application and networking considerations that are just too far down the rat hole to even consider in a book with as broad a scope as this one.

On the surface, many people will put this book down because it isn't business-focused or techie enough for them. But those who do buy it will show that they have a grasp of the wider perspective, the one that drives all successful sites to stand tall in a sea of similarity.

See the Website Optimization book companion site for more information, chapter summaries and two sample chapters.

GrabPERF: Yahoo issues today

Netcraft noted that Yahoo encountered a bit of a headache today. So I fired up my handy-dandy little performance system and had a look.

[GrabPERF chart: Yahoo performance issues, July 6, 2007]

Although for an organization and infrastructure the size of Yahoo’s this may have been a big event, in my experience, this was a “stuff happens on the Internet” sort of thing.

Move along, people; there's nothing to see. It is not the apocalyptic event that Netcraft is making it out to be. Google burps and barfs all the time, and everyone grumbles. But there is no need to run in circles and scream and shout.

Yeesh!

Dear Apache Software Foundation: FIX THE MSIE SSL KEEPALIVE SETTINGS!

Dear Apache Software Foundation, and the developers of the Apache Web server:

I would like to thank you for developing a great product. I rely on it daily to host my own sites, and a large number of people on the Internet seem to share my love of this software.

However, you seem to want to maintain a simple flaw in your logic that continues to make me crazy. I am a Web performance analyst, and at least once a week I sigh and shake my head when I stoop to using Microsoft Internet Explorer (MSIE) to visit secure sites.

It seems that in your SSL configurations, you continue to assume that ALL versions of MSIE can't handle persistent connections under SSL/TLS.
Is this true? Is a bug initially caught in MSIE 5.x (5.0??) still valid for MSIE 6.0/7.0?

The short answer is: I don’t know.

It seems that no one on the Apache server team has bothered to go back and see whether the current versions of MSIE — we are trying to track down the last three people who use MSIE 5.x and help them — still share this problem.

In the meantime, can you change your SSL exclusion RegEx to something more relevant for 2007?

Current RegEx:
SetEnvIf User-Agent ".*MSIE.*" nokeepalive \
	ssl-unclean-shutdown \
	downgrade-1.0 force-response-1.0
Relevant, updated RegEx:
SetEnvIf User-Agent ".*MSIE [1-5].*" \
	nokeepalive ssl-unclean-shutdown \
	downgrade-1.0 force-response-1.0

SetEnvIf User-Agent ".*MSIE [6-9].*" \
	ssl-unclean-shutdown

Please? PLEASE? It’s so easy…and would solve so many performance problems…

Please?

Thank you.

Web Performance: Optimizing Page Load Time

Aaron Hopkins posted an article detailing all of the Web performance goodness that I have been advocating for a number of years.

To summarize:

  • Use server-side compression
  • Set your static objects to be cacheable in browser and proxy caches
  • Use keep-alives / persistent connections
  • Turn your browsers’ HTTP pipelining feature on

These ideas are not new, and neither are the findings in his study. As someone who has worked in the Web performance field for nearly a decade, these are old hat to me. However, it's always nice to have someone new inject some life back into the discussion.
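For reference, here is a minimal sketch of what the first three (server-side) items can look like in an Apache 2.0.x configuration; the directives are standard mod_deflate, mod_expires, and core keep-alive settings, but the MIME types and lifetimes are illustrative assumptions, not recommendations from the article. The fourth item, pipelining, is a browser setting rather than a server one.

 # Server-side compression (mod_deflate)
 AddOutputFilterByType DEFLATE text/html text/plain text/css application/x-javascript

 # Make static objects cacheable in browser and proxy caches (mod_expires)
 ExpiresActive On
 ExpiresByType image/gif "access plus 7 days"
 ExpiresByType image/jpeg "access plus 7 days"
 ExpiresByType text/css "access plus 7 days"

 # Keep-alives / persistent connections
 KeepAlive On
 MaxKeepAliveRequests 100
 KeepAliveTimeout 15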

Performance Improvement From Caching and Compression

This paper is an extension of the work done for another article that highlighted the performance benefits of retrieving uncompressed and compressed objects directly from the origin server. I wanted to add a proxy server into the stream and determine if proxy servers helped improve the performance of object downloads, and by how much.
Using the same series of objects as in the original compression article[1], the curl tests were re-run three times:

  1. Directly from the origin server
  2. Through the proxy server, to load the files into cache
  3. Through the proxy server, to avoid retrieving files from the origin.[2]

This series of three tests was repeated twice: once for the uncompressed files, and then for the compressed objects.[3]
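As a rough sketch, each timing in these passes can be captured with curl along the following lines; the proxy host, port, and file name are assumptions for illustration, not the actual test harness:

 # 1. Directly from the origin server
 curl -s -o /dev/null -w "%{time_total}\n" http://origin.example.com/ldp/HOWTO.html

 # 2. Through the proxy server, loading the file into its cache
 curl -s -o /dev/null -w "%{time_total}\n" -x proxy.example.com:3128 http://origin.example.com/ldp/HOWTO.html

 # 3. Through the proxy server again, so the cached copy is served
 curl -s -o /dev/null -w "%{time_total}\n" -x proxy.example.com:3128 http://origin.example.com/ldp/HOWTO.html

 # Compressed runs add an Accept-Encoding request header
 curl -s -o /dev/null -w "%{time_total}\n" -H "Accept-Encoding: gzip" http://origin.example.com/ldp/HOWTO.html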
As can be seen clearly in the plots below, compression caused Web page download times to improve greatly when the objects were retrieved from the origin server. However, the performance difference between compressed and uncompressed data all but disappears when retrieving objects from a proxy server on a corporate LAN.

[Plots: download time vs. object size for uncompressed pages and for compressed pages]

Instead of the linear growth between object size and download time seen in both of the retrieval tests that used the origin server (Source and Proxy Load data), the Proxy Draw data clearly shows the benefits that accrue when a proxy server is added to a network to assist with serving HTTP traffic.

MEAN DOWNLOAD TIME (seconds)

Uncompressed Pages
Total Time Uncompressed — No Proxy      0.256
Total Time Uncompressed — Proxy Load    0.254
Total Time Uncompressed — Proxy Draw    0.110

Compressed Pages
Total Time Compressed — No Proxy        0.181
Total Time Compressed — Proxy Load      0.140
Total Time Compressed — Proxy Draw      0.104

The data above shows just how much of an improvement a local proxy server, explicit caching directives, and compression can add to a Web site. For sites that do force a great number of requests to be returned directly to the origin server, compression will be of great help in reducing bandwidth costs and improving performance. However, by allowing pages to be cached in local proxy servers, the difference between compressed and uncompressed pages vanishes.

Conclusion

Compression is a very good start when attempting to optimize performance. The addition of explicit caching headers in server responses, which allow proxy servers to serve cached data to clients on remote local LANs, can improve performance to an even greater extent than compression can. The two should be used together to improve the overall performance of Web sites.
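As a sketch of those explicit caching headers, the Cache-Control value used in these tests (see footnote 2) can be emitted by Apache with mod_expires or mod_headers; the file pattern below is an assumption:

 # mod_expires: mark HTML responses as fresh for one hour
 ExpiresActive On
 ExpiresByType text/html "access plus 1 hour"

 # or set the header directly with mod_headers
 <FilesMatch "\.html$">
     Header set Cache-Control "max-age=3600"
 </FilesMatch>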


[1]The test set was made up of the 1952 HTML files located in the top directory of the Linux Documentation Project HTML archive.

[2]All of the pages in these tests returned the following server response header, indicating their cacheability:

Cache-Control: max-age=3600

[3]A note on the compressed files: all compression was performed dynamically by mod_gzip for Apache/1.3.27.

mod_gzip Compile Instructions

The last time I attempted to compile mod_gzip into Apache, I found that the instructions for doing so were not documented clearly on the project page. After a couple of failed attempts, I finally found the instructions buried at the end of the ChangeLog document.

I present the instructions here to preserve your sanity.

Before you can actually get mod_gzip to work, you have to uncomment it in the httpd.conf file module list (Apache 1.3.x) or add it to the module list (Apache 2.0.x).
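On Apache 1.3.x, those httpd.conf entries for a DSO build typically look like the lines below; the module path depends on your installation layout and is an assumption here:

 # mod_gzip is commonly listed last so that it acts on the final output
 LoadModule gzip_module libexec/mod_gzip.so
 AddModule mod_gzip.c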


There are two ways to build mod_gzip: statically compiled into Apache, or as a DSO file for mod_so. If you want to compile it statically into Apache, copy the source into a subdirectory named 'gzip' under the Apache src/modules directory. You can then activate it via a parameter to the configure script.

 ./configure --activate-module=src/modules/gzip/mod_gzip.a
 make
 make install

This will build a new Apache with mod_gzip statically built in.

The DSO version is much easier to build.

 make APXS=/path/to/apxs
 make install APXS=/path/to/apxs
 /path/to/apachectl graceful

The apxs script is normally located inside the bin directory of Apache.
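Building and loading the module is only half the job; it still has to be turned on. A minimal mod_gzip configuration is sketched below. The directive names come from the mod_gzip documentation, while the specific patterns and sizes are illustrative assumptions:

 mod_gzip_on Yes
 mod_gzip_dechunk Yes
 mod_gzip_minimum_file_size 500
 mod_gzip_item_include mime ^text/.*
 mod_gzip_item_include file \.html$
 mod_gzip_item_exclude mime ^image/.*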

Compressing Web Output Using mod_deflate and Apache 2.0.x


A previous paper examined the use of mod_gzip to dynamically compress the output from an Apache server. With the growing use of the Apache 2.0.x family of Web servers, the question arises of how to perform a similar GZIP-encoding function within this server. The developers of Apache 2.0.x have included a module in the server codebase to perform just this task.

mod_deflate is included in the Apache 2.0.x source package, and compiling it in is a simple matter of adding it to the configure command.

	./configure --enable-modules=all --enable-mods-shared=all --enable-deflate

When the server is made and installed, the GZIP-encoding of documents can be enabled in one of two ways: explicit exclusion of files by extension, or explicit inclusion of files by MIME type. These methods are specified in the httpd.conf file.


Explicit Exclusion

SetOutputFilter DEFLATE
DeflateFilterNote ratio
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|bz2|sit|rar)$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary

Explicit Inclusion

DeflateFilterNote ratio
AddOutputFilterByType DEFLATE text/*
AddOutputFilterByType DEFLATE application/ms* application/vnd* application/postscript

Both methods enable the automatic GZIP-encoding of all MIME-types, except image and PDF files, as they leave the server. Image files and PDF files are excluded as they are already in a highly compressed format. In fact, PDFs become unreadable by Adobe’s Acrobat Reader if they are further compressed by mod_deflate or mod_gzip.

On the server used for testing mod_deflate for this article, no Windows executables or compressed files are served to visitors. However, for safety’s sake, please ensure that compressed files and binaries are not GZIP-encoded by your Web server application.

For the file-types indicated in the exclude statements, the server is told explicitly not to send the Vary header. The Vary header indicates to any proxy or cache server which particular condition(s) will cause this response to Vary from other responses to the same request.

If a client sends a request which does not include the Accept-Encoding: gzip header, then the item which is stored in the cache cannot be returned to the requesting client if the Accept-Encoding headers do not match. The request must then be passed directly to the origin server to obtain a non-encoded version. In effect, proxy servers may store 2 or more copies of the same file, depending on the client request conditions which cause the server response to Vary.
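To illustrate, a GZIP-encoded response that forces a proxy to key its cache on the Accept-Encoding request header looks roughly like this (the header values are representative, not captured from the test server):

 HTTP/1.1 200 OK
 Content-Type: text/html
 Content-Encoding: gzip
 Vary: Accept-Encoding
 Cache-Control: max-age=3600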

Removing the Vary response requirement for objects not handled means that if the objects do not vary due to any other directives on the server (browser type, for example), then the cached object can be served up without any additional requests until the Time-To-Live (TTL) of the cached object has expired.

In examining the performance of mod_deflate against mod_gzip, the one item that distinguished the two modules in versions of Apache prior to 2.0.45 was the amount of compression that occurred. The examples below demonstrate that the compression algorithm for mod_gzip produces between 4-6% more compression than mod_deflate for the same file.[1]

Table 1 – /compress/homepage2.html

Compression                 Size           Size (% of original)
No compression              56380 bytes    n/a
Apache 1.3.x/mod_gzip       16333 bytes    29% of original
Apache 2.0.x/mod_deflate    19898 bytes    35% of original

Table 2 – /documents/spierzchala-resume.ps

Compression                 Size           Size (% of original)
No Compression              63451 bytes    n/a
Apache 1.3.x/mod_gzip       19758 bytes    31% of original
Apache 2.0.x/mod_deflate    23407 bytes    37% of original

Attempts to increase the compression ratio of mod_deflate in Apache 2.0.44 and lower using the directives provided for this module produced no further decrease in transferred file size. A comment from one of the authors of the mod_deflate module stated that the module was written specifically to ensure that server performance was not degraded by using this compression method. The module was, by default, performing the fastest compression possible, rather than a mid-range compromise between speed and final file size.

Starting with Apache 2.0.45, the compression level of mod_deflate is configurable using the DeflateCompressionLevel directive. This directive accepts values between 1 (fastest compression speed; lowest compression ratio) and 9 (slowest compression speed; highest compression ratio), with the default value being 6. This simple change makes the compression in mod_deflate comparable to mod_gzip out of the box.
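For example, trading some CPU time for smaller responses is a one-line change in httpd.conf:

 # Apache 2.0.45 and later: 1 = fastest, 9 = smallest output, default is 6
 DeflateCompressionLevel 9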

Using mod_deflate for Apache 2.0.x is a quick and effective way to decrease the size of the files that are sent to clients. Anything that can produce between 50% and 80% in bandwidth savings with so little effort should definitely be considered for any and all Apache 2.0.x deployments wishing to use the default Apache codebase.


[1] A note on the compression in mod_deflate for Apache 2.0.44 and lower: The level of compression can be modified by changing the ZLIB compression setting in mod_deflate.c from Z_BEST_SPEED (equivalent to "gzip -1") to Z_BEST_COMPRESSION (equivalent to "gzip -9"). These defaults can also be replaced with a numeric value between 1 and 9.

More info on hacking mod_deflate for Apache 2.0.44 and lower can be found here.
