Author: spierzchala

Blog Statistics Analysis: Page Views by Day of Week, or When to Post

Since I started self-hosting this blog again on August 6 2008, I have been trying to find more ways to pull traffic toward the content that I put up. Like all bloggers, I feel that I have important things to say (at least in the area of Web performance), and ideas that should be read by as many people as possible.

As well, I have realized that if I invest some time and effort into this blog, it can be a small revenue source that could get me that much closer to my dream of a MacBook Pro.

The Analysis

In a post yesterday morning, Darren Rowse offered some advice on when the best time to release a new post is. Using his ideas as the framework, I pulled the data out of my own tracking database and came up with the chart below, which shows the page view data between September 1 2007 and September 15 2008 based on the day of the week visitors came to the site.

Blog Page Views by Day of Week
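If you want to run the same analysis, the aggregation is a single GROUP BY. Below is a minimal sketch in Python against SQLite; the page_views table and viewed_at column are hypothetical stand-ins for whatever your own tracking database actually uses.

```python
import sqlite3

# Hypothetical schema: page_views(viewed_at TEXT). The table and
# column names are illustrative, not this blog's real tracking schema.
conn = sqlite3.connect("tracking.db")

# SQLite's strftime('%w') returns the day of week, 0 = Sunday.
rows = conn.execute(
    """
    SELECT strftime('%w', viewed_at) AS dow, COUNT(*) AS views
    FROM page_views
    WHERE viewed_at BETWEEN '2007-09-01' AND '2008-09-15'
    GROUP BY dow
    ORDER BY dow
    """
).fetchall()

days = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
for dow, views in rows:
    print(f"{days[int(dow)]}: {views} page views")
```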

Using this data and the general framework that Darren subscribes to, I should be releasing my best and newest thoughts early in the week, on Monday and Tuesday (GMT).
After Wednesday, I should release only less in-depth articles, with a focus on commentary on news and events. And I must learn to breathe, as I suffer from an ailment all too common in bipolars: a lack of patience.

A new post doesn’t immediately find its target audience unless you have hundreds or thousands (Tens? Ones?) of readers who are influential. If you are lucky in this regard, then these folks will leave useful comments and, through their own attention, help gently show people that a new post is worth their valuable attention.

It takes a while for any post to percolate through the intertubes. So patience you must have.

Front-loaded vs. Long-tailed

Unless, of course, your traffic model is completely different from that of a popular blogger.
The one issue that I had with Darren’s guidance is that it applies only to blogs that are front-loaded. A front-loaded blog is one that is incredibly popular, or has a devoted, active audience that helps push page views toward the most recent 3-5 posts. Once the wave has crested, or the blogger has posted something new, the volume of traffic to older posts falls off exponentially, except in the few cases of profound or controversial topics.

When I analyzed my own traffic, I found that most of my traffic volume was aimed at posts from 2005 and 2006. In fact, more recent posts are nowhere near as popular as these older ones. In contrast to the front-loaded blog, mine is long-tailed.

There are a number of influential items in my blog that have proven staying power, and that draw people from around the world. They have had deep penetration into search engines, and are relevant to some aspect of people’s lives that keeps pulling them back.

Summary

I would highly recommend analyzing your traffic to see whether it is front-loaded or long-tailed. I know that I wish this blog was more front-loaded, with an active community of readers and commentators. However, I am also happy to see that I have created a few sparks of content that keep people returning again and again. If your blog is long-tailed, then when you post becomes far less relevant than ensuring the freshness and validity of those few popular posts. Ensure that these are maintained and current so that they remain relevant to as many people as possible.

Web Performance: A Review of Steve Souders' High Performance Web Sites

It’s not often as a Web performance consultant and analyst that I find a book that is useful to so many clients. It’s much rarer to discover a book that can help most Web sites improve their response times and consistency in fewer than 140 pages.

Steve Souders’ High Performance Web Sites (O’Reilly, 2007; Companion Site) captures the essence of one side of the Web performance problem succinctly and efficiently, delivering a strong message to a group he classifies as front-end engineers. It is written in a way that can be understood by marketing, line-of-business, and technical teams, and in a manner designed to provoke discussions within an organization, with the ultimate goal of improving Web performance.
Once these discussions have started, there may be some shock within these very organizations. Not only at the ease with which these rules can be implemented, but at the realization that the fourteen rules in this book will only take you so far.

The 14 Rules

Web performance, in Souders’ world, can be greatly improved by applying his fourteen Web performance rules. For the record, the rules are:

Rule 1 – Make Fewer HTTP Requests
Rule 2 – Use a Content Delivery Network
Rule 3 – Add an Expires Header
Rule 4 – Gzip Components
Rule 5 – Put Stylesheets at the Top
Rule 6 – Put Scripts at the Bottom
Rule 7 – Avoid CSS Expressions
Rule 8 – Make JavaScript and CSS External
Rule 9 – Reduce DNS Lookups
Rule 10 – Minify JavaScript
Rule 11 – Avoid Redirects
Rule 12 – Remove Duplicate Scripts
Rule 13 – Configure ETags
Rule 14 – Make AJAX Cacheable

From the Companion Site [here]
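Several of these rules can be spot-checked from outside a site just by inspecting response headers. As an illustration (not anything from the book itself), here is a minimal Python sketch using the third-party requests library; the URL is a placeholder.

```python
import requests  # third-party package: pip install requests

def spot_check(url: str) -> None:
    """Spot-check a URL against Rules 3 (Expires), 4 (Gzip), and 13 (ETags)."""
    resp = requests.get(url, headers={"Accept-Encoding": "gzip"})
    h = resp.headers
    print("Rule 3  Expires/Cache-Control:",
          h.get("Expires") or h.get("Cache-Control") or "MISSING")
    print("Rule 4  Content-Encoding:", h.get("Content-Encoding", "not compressed"))
    print("Rule 13 ETag:", h.get("ETag", "not configured"))

# Placeholder URL -- point this at your own site.
spot_check("https://www.example.com/")
```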

These rules seem simple enough. And, in fact, most of them are easy to understand and, in an increasingly complex technical world, easy to implement. The most fascinating thing about the lessons in this book, for the people who think about these things every day, is that they are pieces of basic knowledge, tribal wisdom, that have been passed down for as long as the Web has existed.
Conceptually, the rules can be broken down to:

  • Ask for fewer things
  • Move stuff closer
  • Make things smaller
  • Make things less confusing

These four things are simple enough to understand, as they emphasize simplicity over complexity.
For Web site designers, these fourteen rules are critical to understanding how to drive better performance not only in existing Web sites, but in all of the sites developed in the future. They provide a vocabulary to those who are lost when discussions of Web performance occur. The fourteen rules show that Web performance can be improved, and that something can be done to make things better.

Beyond the 14 Rules

There is, however, a deeper, darker world beneath the fourteen rules. A world where complexity and interrelated components make change difficult to accomplish.
In a simple world, the fourteen rules will make a Web site faster. There is no doubt about that. They advocate for the reduction of object size (for text objects), the location of content closer to the people requesting it (CDNs), and the optimization of code to accelerate the parsing and display of Web content in the browser.
Deep inside a Web site live the presentation and application code, the guts that keep a site running. These layers, down below the waterline, are responsible for the heavy lifting: the personalization of a bank account display, the retrieval of semantic search results, and the processing of complex, user-defined transactions. The data that is bounced inside a Web application flows through a myriad of network devices — firewalls, routers, switches, application proxies, etc. — that can be as complex as, if not more complex than, the network involved in delivering the content to the client.
It is fair to say that a modern Web site is the proverbial duck in a strong current.
The fourteen rules are lost down here beneath the Web layer. In these murky depths, far from the flash and glamor, poorly written parsing functions, database tables without indices, and poorly designed internal networks can all wreak havoc on a site that has taken all fourteen rules to heart.
When content that is not directly controlled and managed by the Web site is added into this boiling stew, another layer of possible complexity and performance challenge appears. Third parties, CDNs, advertisers, and helper applications all come from external sources that are relied upon to have taken not only the fourteen rules to heart, but also to have considered how their data is created, presented, and delivered to the visitors of the Web site that appears to contain it.

Remember the Complexity

High Performance Web Sites is a volume (a pamphlet really) that delivers a simple message: there is something that can be done to improve the performance of a Web site. Souders’ fourteen rules capture the items that can be changed quickly, and at low cost.
However, if you ask Steve Souders if this is all you need to do to have a fast, efficient, and reliable Web site, he would say no. The fourteen rules are an excellent start, as they treat a great deal of the visible disease that infects so many Web sites.
However, like the triathlete with an undiagnosed brain tumor, there is a lot more under the surface that needs to be addressed in order to deliver Web performance improvements that can be seen by all, and support rapid, scalable growth.
This is a book that must be read. Then deeper questions must be asked to ensure that the performance of the 90% of a Web site design not seen by visitors matches the 10% that is.

Blog Statistics Analysis – What do your visitors actually read?

Steven Hodson of WinExtra posted a screenshot of his personal WordPress stats for the last three years last night. I then posted my stats for a similar period of time, and Steven shot back with some questions about traffic and the ebbs and flows of readers.

Being the stats nut that I am, I went and pulled the numbers from my own tracking data, and came up with this.

Blog Posts Read Each Month, By Year Posted

I made a conscious choice to analyze what year the posts being read were posted in. I wanted to understand when people read my content, and which content kept people coming back over and over again. The chart above speaks for itself: through most of the last year, it’s clear that the most popular posts were written in 2005.
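The pivot behind the chart is simple: count each page view by the month it happened and the year the post it landed on was published. A minimal sketch, again against a hypothetical SQLite schema (the page_views table and its columns are illustrative):

```python
import sqlite3

# Hypothetical schema: page_views(viewed_at TEXT, post_published TEXT).
conn = sqlite3.connect("tracking.db")

rows = conn.execute(
    """
    SELECT strftime('%Y-%m', viewed_at)   AS month_read,
           strftime('%Y', post_published) AS year_posted,
           COUNT(*)                       AS views
    FROM page_views
    GROUP BY month_read, year_posted
    ORDER BY month_read, year_posted
    """
).fetchall()

for month_read, year_posted, views in rows:
    print(f"{month_read}: {views} views of posts from {year_posted}")
```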

What is also notable is the decreasing interest in 2007 posts as 2008 progressed. Posts from 2006 remained steady, as there are a number of posts in that year that amount to my self-help guides to Web compression, mod_gzip, mod_deflate, and Web caching for Web administrators.

This data is no surprise to me, as I posted my rants against Gutter Helmet and their installation process in 2005. Those posts are still near the top of the Google search response for the term “Gutter Helmet”. And improving the performance of a Web site is of great interest to many Apache server admins and Web site designers.

It is also clear that self-hosting my blog, and the posting renaissance it has provoked, has driven traffic back to my site.

So, what lessons did I learn from this data?

  1. Always remember the long tail. Every blogger wants to be relevant, on the edge, and showing that they understand current trends. The people who follow those trends are a small minority of the people who read blogs. Google and other search engines will expose them to your writings in the time of their choosing, and you may find that the three-year-old post gets as much traffic as the one posted three hours ago.
  2. Write often. I was in a blogging funk when my blog was at WordPress.com. As a geek, I believe that the lack of direct control over the look and feel of my content was the cause of this. In a self-hosted environment, I feel that I am truly the one in charge, and I can make this blog what I want.
  3. Be cautious of your fame. If your posts are front-loaded, i.e. if all your readers read posts from the month and year they are posted in, are you holding people’s long-term attention? What have you contributed to the ongoing needs of those who are outside the technical elite? What will drive them to keep coming to your site in the long run?

So, I post a challenge to other bloggers out there. My numbers are minuscule compared to the blogging elite, but I am curious to get a rough sense of how the long tail is treating you.

Web Performance: GrabPERF Performance Measurement System Needs YOU!

In 2004-2005, as a lark, I created my own Web performance measurement system, using Perl, PHP, and MySQL. In August 2005, I managed to figure out how to include remote agents.
I dubbed it…GrabPERF. An odd name, but an amalgamation of “Grab” and “Performance” that made sense to my mind at the time. I also never thought that it would go beyond my house, a couple of basement servers, and a cable modem.
In the intervening three years, I have managed to:

  • scale the system to handle over 250 individual measurements
  • involve nine remote measurement locations
  • move the system to the Technorati datacenter
  • provide key operational measurement data to system visitors

Although the system lives in the Technorati datacenter and is owned by them, I provide the majority of the day-to-day maintenance on a volunteer basis, if only to try and keep my limited coding skills up.
But this post is not about me. It’s about GrabPERF.
Thanks to the help of a number of volunteers, I have measurement locations in the San Francisco Bay Area, Washington DC, Boston, Portugal, Germany and Argentina.
While this is a good spread, I am still looking to gather volunteers who can host a GrabPERF measurement location. The areas where GrabPERF has the most need are:

  • Asia-Pacific
  • South Asia (India, Pakistan, Bangladesh)
  • UK and Continental Europe
  • Central Europe, including the ancestral homeland of Polska

It would also be great to get a funky logo for the system, so if you are a graphic designer and want to create a cool GrabPERF logo, let me know.
The current measurement system requires Linux, cURL, and a few add-on Perl modules. I am sure that it could work on other operating systems; I just haven’t had the opportunity to experiment.
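For the curious, the heart of an agent like this is just a timed fetch. The sketch below is not GrabPERF’s actual Perl code; it is a rough Python equivalent using the pycurl binding, which exposes the same timing phases that cURL records. The URL is a placeholder.

```python
import pycurl
from io import BytesIO

# Fetch one URL and report the timing phases cURL records. A real
# agent would loop over its assigned targets and ship the results
# to a central collector.
buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, "https://www.example.com/")  # placeholder target
c.setopt(pycurl.WRITEDATA, buf)
c.setopt(pycurl.FOLLOWLOCATION, True)
c.perform()

print(f"DNS lookup:  {c.getinfo(pycurl.NAMELOOKUP_TIME):.3f} s")
print(f"TCP connect: {c.getinfo(pycurl.CONNECT_TIME):.3f} s")
print(f"First byte:  {c.getinfo(pycurl.STARTTRANSFER_TIME):.3f} s")
print(f"Total time:  {c.getinfo(pycurl.TOTAL_TIME):.3f} s")
print(f"Bytes down:  {len(buf.getvalue())}")
c.close()
```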
If you or your organization can help, please contact me using the GrabPERF contact form.

Web Performance: David Cancel Discusses Lookery Performance Strategies

David Cancel and I have had sort of a passing, vague, same-space-and-thought-process, living-in-the-same-metropolitan-area kind of distant acquaintance for about a year.
About 2-3 months ago, he wrote a pair of articles discussing the efforts he has undertaken in order to try and offload some of the traffic to the servers for his new company, Lookery. While they are not current, in the sense that time moves in one direction for most technical people, and is compressed into the events of the past eight hours and the next 30 minutes, these articles provide an insight that should not be missed.
These two articles show how easily a growing company that is trying to improve performance and customer experience can achieve measurable results on a budget that consists of can-recycling money and green stamps.

Measuring your CDN

A service that relies on requesting and downloading a single file from a single location very quickly discovers the limitations that this model imposes as traffic begins to broaden and increase. Geographically diverse users begin to notice performance delays as they attempt to reach a single, geographically-specific server. And the hosting location, even one as large as Amazon S3, can begin to serve as the bottleneck to success.
David’s first article examines the solution path that Lookery chose: moving the tag that drives the entire opportunity for success in their business model onto a CDN. With a somewhat enigmatic title (Using Amazon S3 as a CDN?), he describes how the Lookery team measured the distributed performance of their JS tag using a free measurement service (not GrabPERF) and compared various CDNs against the origin configuration that is based on the Amazon S3 environment.
This deceptively simple test, which is perfect for the type of system that Lookery uses, provided the team with the data they needed to confirm that they had chosen well, and that their CDN was able to deliver improved response times when compared to their origin servers.
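The shape of such a test is easy to sketch: fetch the same file from each candidate host a few times and compare the timings. The Python below uses only the standard library; both hostnames are placeholders, and a real comparison, like Lookery’s, needs agents in multiple geographies.

```python
import time
import urllib.request

# Placeholder URLs -- substitute your own origin and CDN copies
# of the same file.
candidates = {
    "origin (S3)": "https://origin.example.com/tag.js",
    "cdn":         "https://cdn.example.com/tag.js",
}

for name, url in candidates.items():
    samples = []
    for _ in range(5):  # several samples smooth out network jitter
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        samples.append(time.perf_counter() - start)
    median = sorted(samples)[len(samples) // 2]
    print(f"{name}: median {median * 1000:.0f} ms over {len(samples)} fetches")
```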

Check your Cacheability

Cacheability is a nasty word that my spell-checker hates. To define it simply, it refers to the ability of end-user browsers and network-level caching proxies to store and re-use downloaded content based on clear and explicit caching rules delivered in the server response header.
The second article in David’s series describes how, using Mark Nottingham’s Cacheability Engine, the Lookery team was able to examine the way that the CDNs and the origin site informed the visitor’s browser of the cacheability of the JS file being downloaded.
Cacheability doesn’t seem that important until you remember that most small firms are very conscious of their bandwidth outlay. These small startups are very aware when their bandwidth usage reaches the 250 GB/month level (Lookery’s usage at the time the posts were written). Any method that can improve end-user performance while still delivering the service visitors expect is a welcome addition, especially when it is low-cost to free.
In the post, David notes that there appeared to be no way in their chosen CDN to modify the cacheability settings, an issue which seems to have been remedied since the article went up [See current server response headers for the Lookery tag here].
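You can get a rough version of the same report with a few lines of Python. The sketch below only looks at max-age and Expires, where Nottingham’s engine implements the full caching model; the URL is a placeholder.

```python
import urllib.request
from email.utils import parsedate_to_datetime

def cacheability_report(url: str) -> None:
    """Print the caching-related response headers for a URL."""
    with urllib.request.urlopen(url) as resp:
        h = resp.headers
    for name in ("Cache-Control", "Expires", "ETag", "Last-Modified", "Date"):
        print(f"{name}: {h.get(name, '(not set)')}")
    # Freshness lifetime: max-age wins over Expires when both are present.
    cc = h.get("Cache-Control", "")
    if "max-age=" in cc:
        max_age = int(cc.split("max-age=")[1].split(",")[0])
        print(f"Fresh for {max_age} seconds per max-age")
    elif h.get("Expires") and h.get("Date"):
        delta = parsedate_to_datetime(h["Expires"]) - parsedate_to_datetime(h["Date"])
        print(f"Fresh for {delta.total_seconds():.0f} seconds per Expires")

# Placeholder URL -- point this at the object you want to check.
cacheability_report("https://www.example.com/tag.js")
```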

Conclusion

Startups spend a lot of time imagining what success looks like. And when success arrives, sometimes they aren’t ready for it, especially the need to handle increasing loads with their often centralized, single-location architectures.
David Cancel, in these two articles, shows how a little early planning, some clear goals, and targeted performance measurement can provide an organization with the information to get them through their initial growth spurt in style.

Web Performance: Your Teenage Web site

It’s critical to your business. It affects revenue. It’s how people who can’t come to you perceive you.
It’s your Web site.

It’s complex. Abstract. Lots of conflicting ideas and forces are involved. Everyone says they know the best thing for it. Finger-pointing. Door slamming. Screaming.

Am I describing your Web site and the team that supports it? Or your teenager?
If you think of your Web site as a teenager, you begin to recognize the problems that you’re facing. Like a teenager, it has grown physically and mentally, and, as a result, thinks it’s an experienced adult, ready to take on the world. So let’s run with the analogy, and think back to how we, as teenagers (yeah, I’m old), saw the world.

MOM! This doesn’t fit anymore!

Your Web site has grown as all of your marketing and customer service programs bear fruit. Traffic is increasing. Revenue is up. Everyone is smiling.

Then you wake up and realize that your Web site is too small for your business. This could mean that the infrastructure is overloaded, the network is tapped out, your connectivity is maxed, and your sysadmins, designers, and network teams are spending most of their day just firefighting.

Now, how can you grow a successful business, or be the hip kid in school, when your clothes don’t fit anymore?

But, you can’t buy an entire wardrobe every six months, so plan, consider your goals and destinations, and shop smart.

DAD! Everyone has one! I need to have one to be cool!

Shiny.

It’s a word that has been around for a long time, and was revived (with new meaning) by Firefly. It means reflective, bright, and new. It’s what attracts people to gold, mirrors, and highly polished vintage cars. In the context of Web sites, it’s the eye-candy that you encounter in your browsing that makes you go, “Our site needs that”.
Now step back and ask yourself what purpose this new eye-candy will serve.
And this is where Web designers and marketing people laugh, because it’s all about being new and improved.

But can you be new and improved, when your site is old and broken?

Get your Web performance in order with what you have, then add the stuff that makes your site pop.

But those aren’t the cool kids. I don’t hang with them.

Everyone is attracted to the gleam of the cool new Web sites out there that offer to do the same old thing as your site. The promise of new approaches to old problems, lower costs, and greater efficiencies in our daily lives is what prompts many of us to switch.

As parents, we may scoff, realizing that maybe the cool kids never amounted to much outside of high school. But sometimes you have to step back and wonder what makes a cool kid cool.

You have to step back and ask: why are they attracting so much attention while we’re seen as the old guard? What can we learn from the cool kids? Is your way the very best way? And says who?

And once you ask these questions, maybe you agree that some of what the cool kids do is, in fact, cool.

Can I borrow the car?

Trust is a powerful thing, to a person or to a group. Your instinctive response to it depends on who you are, and what your experiences with others have been like in the past.

Trust is something often found lacking when it comes to a Web site. Not between your organization and your customers, but between the various factions within your organization who are trying to interfere with, create, revamp, or manage the site.

Not everyone has the same goals. But sometimes asking a few questions of other people and listening to their reasons for doing something will lead to a discussion that will improve the Web site in a way that improves the business in the long run.
Sometimes asking why a teenager wants to borrow the car will help you see things from their perspective for a little while. You may not agree, but at least now it’s not a yes/no answer.

YOU: How was school today? – THEM: Ok.

Within growing organizations, open and clear communication tends to gradually shrivel and degenerate. Communications become more formal, with what is not said being as important as what is. Trying to find out what another department is doing becomes a lot like determining the state of the Soviet Union’s leadership based on who attends parades in Red Square.

Abstract communication is one of the things that separates humans from a large portion of the rest of the animal kingdom. There is nothing more abstract than a Web site, where physical devices and programming code produce an output that can only be seen and heard.

The need for communication is critical in order to understand what is happening in another department. And sometimes that means pushing harder, making the other person or team answer hard questions that they think you’re not interested in, or that they think are none of your business.

If you are in the same company, it’s everyone’s business. So push for an answer, because working to create an abstract deliverable that determines the success or failure of the entire firm can’t be based on a grunt and a nod.

Summary

There are no easy answers to Web performance. But if you consider your Web site and your teams as a teenager, you will be able to see that the problems that we all deal with in our daily interactions with teens crop up over and over when dealing with Web design, content, infrastructure, networks, and performance.

Managing all the components of a Web site and getting the best performance out of it often requires you to have the patience of Job. But it is also good to carry a small pinch of faith in these same teams; faith that everyone, whether they say it or not, wants to have the best Web site possible.

Thoughts on Web Performance at the Browser

Last week, lost in the preternatural shriek that emerged from the Web community around the release of Google Chrome, John Resig wrote a thoughtful post on resource usage at the browser. In it, he states that the use of the Process Manager in Chrome will change how people see Web performance. In his words:

The blame of bad performance or memory consumption no longer lies with the browser but with the site.

Coming to the discussion from the realm of Web performance measurement, I realize that the firms I have worked with and for have not done a good job of analyzing this, and, in the name of science, have tried to eliminate the variability of Web page processing from the equation.
The company I currently work for has realized that this is a gap and has released a product that measures the performance of a page in the browser.
But all of this misses the point, and goes to one of the reasons why I gave up on Chrome on my older, personal-use computer: Chrome exposes the individual load that a page places on a Web browser.
Resig highlights that browsers that make use of shared resources shift the blame for poor performance onto the browser and away from the design of the page. Technologies that modern designers lean on (Flash, AJAX, etc.) all require substantially greater resource consumption in a browser. Chrome, for good or ill, exposes this load to the user by instantiating a separate, sand-boxed process for each tab, clearly indicating which page is the culprit.
It will be interesting to see if designers take note of this, or ignore it in pursuit of the latest shiny toy that gets released. While designers assume that all visitors run cutting-edge machines, I can show them a laptop that is still plenty useful being completely locked up when their page is handled in isolation.

Browser Wars II: Why I returned to Firefox

Since the release of Google Chrome on September 2, I have been using it as my day-to-day browser. Spending up to 80% of my computer time in a browser means that this was a decision that affected a huge portion of my online experience.

I can say that I put Chrome through its paces on a wide variety of sites, from the simple to the extremely content-rich. From the mainstream to the questionable.
This morning I migrated back to Firefox, albeit the latest Minefield/Firefox 3.1 alpha.

The reasons listed below are mine. Switching back is a personal decision and everyone is likely to have their own reasons to do it, or to stay.

Advertising

I mentioned a few times during my initial use of Chrome that I was having to become used to the re-appearance of advertising in my browsing experience [here and here]. From their early release as extensions to Firefox, I have used AdBlock and AdBlock Plus to remove the annoyance and distraction of online ads from my browsing experience.

When I moved to Chrome, I had to accept that I would see ads. I mean, we were dealing with a browser distributed by one of the largest online advertising agencies. It could only be expected that they were not going to allow people to block ads out of the gate, if ever.

As the week progressed, I realized that I was finding the ads to be a distraction from my browsing experience. Ads impede my ability to find the information I need quickly.

Older Machines

My primary machine for online experiences at home is a Latitude D610. This is a 3-4 year-old laptop, with a single core. It is still far more computing power than most people actually need to enjoy the Web.

While cruising with Chrome, I found that Flash locked up the entire machine on a very regular basis, making it unusable. This doesn’t happen on my much more powerful Latitude D630, provided by my work. However, as I have a personal laptop, I am not going to use my work computer for my personal stuff, especially at home.

I cannot have a browser that locks up a machine when I simply close a tab. It appears that the vaunted QA division at Google overlooked the fact that people don’t all run the latest and greatest machines in the real world.

Auto-Complete

I am completely reliant on form auto-completes. Firefox has been doing this for me for a long time, and it is very handy to simply start typing and have Firefox say “Hey! This form element is called email. Here are some of the other things you have put into form elements called email.”

If you can build something as complex as the OmniBox, surely you can add form auto-completes.

The OmniBox

I hate it. I really do. I like having my search and addresses separate. I also like an address bar that remembers complete URLs (including those pesky parameters!), rather than simply the top-level domain name.

It is a cool idea, but it needs some refining, and some customer-satisfaction focus groups.

I Don’t Use Desktop-replacing Web Applications

I do almost all of my real work in desktop-installed applications. I have not made the migration to Web applications. I may in the future. But until then, I do not need a completely clean browsing experience. I mentioned that the battle between Chrome and Firefox will come down to the Container v. the Desktop – a web application container, or a desktop-replacing Web experience application.
In the last 48 hours, I have fallen back into the Web-desktop camp.

Summary

In the future, I will continue to use Chrome to see how newer builds advance, and how it evolves as more people begin dictating the features that should be available to it.

For my personal use, Chrome takes away too much from, and injects too much noise into, my daily Web experience to continue using it as the default browser. To quote more than a few skeptics of Chrome when it was released: “It’s just another browser”.

DNS: Without it, your site does not exist

In my presentations and consultations on Web performance, I emphasize the importance of a correctly configured DNS system with the phrase: “If people can’t resolve your hostname, your site is dead in the water”.

Yesterday, it appears that the large anti-virus and security firm Sophos discovered this lesson the hard way.

Of course, hindsight is perfect, so I won’t dwell for too long on this single incident. The lesson to be learned here is that DNS is complex and critical, yet sometimes overlooked when considering the core issues of Web performance and end-user experience.

This complexity means that if an organization is not comfortable managing its own DNS, or wants to broaden and deepen its DNS infrastructure, there are a large number of firms who will assist with the process, firms whose entire business is based on managing large-scale DNS implementations for organizations.
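A resolution failure is also trivially easy to monitor for. Here is a minimal Python sketch that times a lookup and reports the addresses returned; the hostname is a placeholder, and a real monitor would run this from several networks on a schedule.

```python
import socket
import time

def check_resolution(hostname: str) -> None:
    """Time a DNS lookup and list the addresses it returns."""
    start = time.perf_counter()
    try:
        infos = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    except socket.gaierror as exc:
        # If this fires, visitors cannot reach your site at all.
        print(f"{hostname}: RESOLUTION FAILED ({exc})")
        return
    elapsed = (time.perf_counter() - start) * 1000
    addresses = sorted({info[4][0] for info in infos})
    print(f"{hostname}: {elapsed:.0f} ms, {addresses}")

# Placeholder hostname -- substitute your own.
check_resolution("www.example.com")
```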

DNS is critical. Never take it for granted.

Joost: A change to the program

In April 2007, I tried out the Joost desktop client. [More on Joost here and here]
I was underwhelmed by the performance, and by the fact that the application completely maxed out my dual-core CPU, my 2 GB of RAM, and my high-speed home broadband. I do remember thinking at the time that it seemed weird to have a desktop client in the first place. Well, as Om Malik reports this morning, it seems that I was not alone.
After this week’s hoopla over Chrome, moving in the direction of the browser seems like a wise thing to do. But I definitely hear far more buzz over Hulu than I do for Joost on the intertubes.

Update

Michael Arrington and TechCrunch weigh into the discussion.
