
GrabPERF Network Outage

Today, there was a network outage that affected the servers from September 21 2008 15:30 GMT until September 22 2008 01:45 GMT.
The data from this period has been cut and hourly averages have been re-calculated.
We apologize for the inconvenience.
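For the curious, the recalculation amounts to dropping the affected window and re-averaging by the hour. A minimal, hypothetical sketch is below; the real data lives in MySQL, so the actual fix was a query rather than code like this, and the field names are made up.

```python
from datetime import datetime

# Hypothetical illustration of excluding an outage window before recomputing
# hourly averages. The real GrabPERF data lives in MySQL; this is just the idea.
OUTAGE_START = datetime(2008, 9, 21, 15, 30)  # GMT
OUTAGE_END = datetime(2008, 9, 22, 1, 45)     # GMT

def hourly_averages(samples):
    """samples: iterable of (timestamp, response_time_in_seconds) pairs."""
    buckets = {}
    for ts, value in samples:
        if OUTAGE_START <= ts <= OUTAGE_END:
            continue  # drop measurements taken during the outage
        hour = ts.replace(minute=0, second=0, microsecond=0)
        buckets.setdefault(hour, []).append(value)
    return {hour: sum(values) / len(values) for hour, values in buckets.items()}
```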

Blog Advertising: Fred Wilson has Thoughts on Targeted Feed-vertising

Fred Wilson adds his thoughts to the conversation about a more intelligent way to target blog and social media advertising. His idea plays right into what I discussed yesterday: that content creators and bloggers can dictate a new and successful advertising strategy by basing advertising rates on the level of interaction an audience has with a post.
Where the model I proposed is based on community and conversation, Fred sees an opportunity for firms that can effectively inject advertising and marketing directly into the conversation rather than adding it on as an afterthought.
Today’s conversations take place in the streams of Twitter and FriendFeed, and are solidly founded on the ideas of community and conversation. They are spontaneous and unpredictable. Marketing into the stream requires a level of conversational intelligence that doesn’t exist in contextual advertising. It is not simply the words on the screen; it is how those ads are being used.
For example, there is no sense trying to advertise a product on a page or in a conversation that is actively engaged in discussing the flaws and failings of that product. It makes an advertiser look cold, insensitive, and even ridiculous.
In his post, Fred presents examples of subtle, targeted advertising that appears in the streams of an existing conversation without redirecting or changing the conversation. As a VC, he recognizes the opportunity in this area.
Community- and conversation-focused marketing is potentially huge and likely very effective, if it is done in a way that does not drive people to filter their content to block such advertising. The advertisers will also have to adopt a clear code of behavior that keeps them from being seen as nothing more than new-age spammers.
Why will it be more effective? It plays right to the marketer’s sweet spot: an engaged group, with a focused interest, creating a conversation in a shared community.
If that doesn’t set off the buzzword bingo alarms, nothing will.
It is, however, also true. And the interest in this new model of advertising is solely driven by one idea: attention. I have commented on the attention economy previously, and I stick to my guns: a post, a conversation, a community that holds a person’s attention in today’s world of media and information saturation is one that marketers need to explore.
Rob Crumpler and the team at BuzzLogic announced their conversation ad service yesterday (September 18 2008). This is likely the first move into this exciting new area, and Fred and his team at Union Square recognize the potential.

Web Performance: A Review of Steve Souders' High Performance Web Sites

It’s not often as a Web performance consultant and analyst that I find a book that is useful to so many clients. It’s rarer still to discover a book that can help most Web sites improve their response times and consistency in fewer than 140 pages.

Steve Souders’ High Performance Web Sites (O’Reilly, 2007; Companion Site) captures the essence of one side of the Web performance problem succinctly and efficiently, delivering a strong message to a group he classifies as front-end engineers. It is written in a way that can be understood by marketing, line-of-business, and technical teams, and in a manner designed to provoke discussions within an organization, with the ultimate goal of improving Web performance.
Once these discussions have started, there may be some shock within these very organizations: not only at the ease with which these rules can be implemented, but at the realization that the fourteen rules in this book will only take you so far.

The 14 Rules

Web performance, in Souders’ world, can be greatly improved by applying his fourteen Web performance rules. For the record, the rules are:

Rule 1 – Make Fewer HTTP Requests
Rule 2 – Use a Content Delivery Network
Rule 3 – Add an Expires Header
Rule 4 – Gzip Components
Rule 5 – Put Stylesheets at the Top
Rule 6 – Put Scripts at the Bottom
Rule 7 – Avoid CSS Expressions
Rule 8 – Make JavaScript and CSS External
Rule 9 – Reduce DNS Lookups
Rule 10 – Minify JavaScript
Rule 11 – Avoid Redirects
Rule 12 – Remove Duplicate Scripts
Rule 13 – Configure ETags
Rule 14 – Make AJAX Cacheable

From the Companion Site [here]

These rules seem simple enough. Most of them are easy to understand and, even in an increasingly complex technical world, easy to implement. In fact, the most fascinating thing about the lessons in this book, for the people who think about these things every day, is that they are pieces of basic knowledge, tribal wisdom, that have been passed down for as long as the Web has existed.
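To see just how visible these rules are, here is a minimal, hypothetical spot-check of two of them, Rule 3 (Add an Expires Header) and Rule 4 (Gzip Components), against a placeholder URL. It is a sketch, not one of the book’s tools.

```python
import urllib.request

# Minimal, hypothetical spot-check for Rule 3 (Add an Expires Header) and
# Rule 4 (Gzip Components). Real tools go much further; this only shows how
# visible the rules are in an ordinary HTTP response. The URL is a placeholder.
def check_basic_rules(url):
    request = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(request) as response:
        headers = response.headers
    expires = headers.get("Expires") or headers.get("Cache-Control")
    gzipped = headers.get("Content-Encoding") == "gzip"
    print("Rule 3 (Expires/Cache-Control):", expires or "missing")
    print("Rule 4 (Gzip Components):", "compressed" if gzipped else "not compressed")

check_basic_rules("https://www.example.com/")
```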
Conceptually, the rules can be broken down to:

  • Ask for fewer things
  • Move stuff closer
  • Make things smaller
  • Make things less confusing

These four things are simple enough to understand, as they emphasize simplicity over complexity.
For Web site designers, these fourteen rules are critical to understanding how to drive better performance not only in existing Web sites, but in all of the sites developed in the future. They provide a vocabulary to those who are lost when discussions of Web performance occur. The fourteen rules show that Web performance can be improved, and that something can be done to make things better.

Beyond the 14 Rules

There is, however, a deeper, darker world beneath the fourteen rules. A world where complexity and interrelated components make change difficult to accomplish.
In a simple world, the fourteen rules will make a Web site faster. There is no doubt about that. They advocate reducing object size (for text objects), locating content closer to the people requesting it (CDNs), and optimizing code to accelerate the parsing and display of Web content in the browser.
Deep inside a Web site live the presentation and application code, the guts that keep a site running. These layers, down below the waterline, are responsible for the heavy lifting: the personalization of a bank account display, the retrieval of semantic search results, and the processing of complex, user-defined transactions. The data that bounces around inside a Web application flows through a myriad of network devices (firewalls, routers, switches, application proxies, and so on) that can be as complex as, if not more complex than, the network path involved in delivering the content to the client.
It is fair to say that a modern Web site is the proverbial duck in a strong current.
The fourteen rules are lost down here beneath the Web layer. In these murky depths, far from the flash and glamor, poorly written parsing functions, database tables without indices, and poorly designed internal networks can all wreak havoc on a site that has taken all fourteen rules to heart.
When the content that is not directly controlled and managed by the Web site is added into this boiling stew, another layer of complexity and performance challenge appears. Third parties, CDNs, advertisers, and helper applications all come from external sources that are relied on not only to have taken the fourteen rules to heart, but also to have considered how their data is created, presented, and delivered to the visitors of the Web site that appears to contain it.

Remember the Complexity

High Performance Web Sites is a volume (a pamphlet, really) that delivers a simple message: there is something that can be done to improve the performance of a Web site. Souders’ fourteen rules capture the items that can be changed quickly and at low cost.
However, if you asked Steve Souders whether this is all you need to do to have a fast, efficient, and reliable Web site, he would say no. The fourteen rules are an excellent start, as they handle a great deal of the visible disease that infects so many Web sites.
Like the triathlete with an undiagnosed brain tumor, though, there is a lot more under the surface that needs to be addressed in order to deliver Web performance improvements that can be seen by all and support rapid, scalable growth.
This is a book that must be read. Then deeper questions must be asked to ensure that the performance of the 90% of a Web site design not seen by visitors matches the 10% that is.

Web Performance: GrabPERF Performance Measurement System Needs YOU!

In 2004-2005, as a lark, I created my own Web performance measurement system, using Perl, PHP, and MySQL. In August 2005, I managed to figure out how to include remote agents.
I dubbed it…GrabPERF. An odd name, but an amalgamation of “Grab” and “Performance” that made sense to my mind at the time. I also never thought that it would go beyond my house, a couple of basement servers, and a cable modem.
In the intervening three years, I have managed to:

  • scale the system to handle over 250 individual measurements
  • involve nine remote measurement locations
  • move the system to the Technorati datacenter
  • provide key operational measurement data to system visitors

Although the system lives in the Technorati datacenter and is owned by them, I provide the majority of the day-to-day maintenance on a volunteer basis, if only to try and keep my limited coding skills up.
But this post is not about me. It’s about GrabPERF.
Thanks to the help of a number of volunteers, I have measurement locations in the San Francisco Bay Area, Washington DC, Boston, Portugal, Germany and Argentina.
While this is a good spread, I am still looking to gather volunteers who can host a GrabPERF measurement location. The areas where GrabPERF has the most need are:

  • Asia-Pacific
  • South Asia (India, Pakistan, Bangladesh)
  • UK and Continental Europe
  • Central Europe, including the ancestral homeland of Polska

It would also be great to get a funky logo for the system, so if you are a graphic designer and want to create a cool GrabPERF logo, let me know.
The current measurement system requires Linux, cURL, and a few add-on Perl modules. I am sure that it could work on other operating systems; I just haven’t had the opportunity to experiment.
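For anyone wondering what a measurement location actually does, the sketch below shows the general idea. It is not the real GrabPERF agent (that one is Perl wrapped around cURL), just an illustration of the per-request timing data an agent might collect using cURL’s write-out variables.

```python
import json
import subprocess

# Hypothetical sketch, not the real GrabPERF agent (that one is Perl + cURL).
# It collects the kind of per-request timing data a measurement location
# could report for a target URL, using curl's --write-out timing variables.
CURL_FORMAT = ('{"dns": %{time_namelookup}, "connect": %{time_connect}, '
               '"first_byte": %{time_starttransfer}, "total": %{time_total}}')

def measure(url):
    result = subprocess.run(
        ["curl", "--silent", "--output", "/dev/null", "--write-out", CURL_FORMAT, url],
        capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

# A real agent would loop over its assigned URLs and report results to the
# central collector; here we just print a single measurement.
print(measure("https://www.example.com/"))
```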
If you or your organization can help, please contact me using the GrabPERF contact form.

Web Performance: David Cancel Discusses Lookery Performance Strategies

David Cancel and I have had a sort of passing, vague, same-space-and-thought-process, living-in-the-same-metropolitan-area kind of distant acquaintance for about a year.
About two or three months ago, he wrote a pair of articles discussing the efforts he has undertaken to offload some of the traffic hitting the servers of his new company, Lookery. While they are not current, in the sense that time for most technical people moves in one direction and is compressed into the events of the past eight hours and the next 30 minutes, these articles provide an insight that should not be missed.
These two articles show how easily a growing company that is trying to improve performance and customer experience can achieve measurable results on a budget that consists of can-recycling money and green stamps.

Measuring your CDN

A service that relies on the request and downloading of a single file from a single location very quickly realizes the limitations that this model imposes as traffic begins to broaden and increase. Geographically diverse users begin to notice performance delays as they attempt to reach a single, geographically-specific server. And the hosting location, even one as large as Amazon S3, can begin to serve as the bottleneck to success.
David’s first article examines the solution path that Lookery chose: moving the tag that drives the entire opportunity for success in their business model onto a CDN. Under a somewhat enigmatic title (Using Amazon S3 as a CDN?), he describes how the Lookery team measured the distributed performance of their JS tag using a free measurement service (not GrabPERF) and compared various CDNs against the origin configuration based on the Amazon S3 environment.
This deceptively simple test, which is perfect for the type of system that Lookery runs, provided the team with the data they needed to confirm that moving to a CDN was the right choice, and that their chosen CDN delivered improved response times compared to their origin servers.
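The mechanics of such a comparison are simple enough that a rough, hypothetical version fits in a few lines. The hostnames below are placeholders rather than Lookery’s real URLs, and a single vantage point is far less meaningful than the geographically distributed measurements a real service provides.

```python
import time
import urllib.request

# Hypothetical version of the comparison described above: fetch the same
# JavaScript tag from the origin (S3) and from a CDN hostname a few times,
# then compare average download times. The hostnames are placeholders.
CANDIDATES = {
    "origin (S3)": "https://origin.example.com/tag.js",
    "cdn": "https://cdn.example.com/tag.js",
}

def average_fetch_time(url, attempts=5):
    timings = []
    for _ in range(attempts):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

for name, url in CANDIDATES.items():
    print(name, round(average_fetch_time(url), 3), "seconds on average")
```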

Check your Cacheability

Cacheability is a nasty word that my spell-checker hates. To define it simply, it refers to the ability of end-user browsers and network-level caching proxies to store and re-use downloaded content, based on clear and explicit caching rules delivered in the server response headers.
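In practice, those rules are just a handful of response headers. Here is a rough, hypothetical sketch (with a placeholder URL, and far less thorough than a real tool) of what a cache looks at when deciding whether it can store and re-use a response:

```python
from email.utils import parsedate_to_datetime
from urllib.request import urlopen

# Rough, hypothetical sketch of the headers a browser or caching proxy
# inspects when deciding whether a response can be stored and re-used.
# The URL is a placeholder; real cacheability checkers are far more thorough.
def describe_cacheability(url):
    with urlopen(url) as response:
        headers = response.headers
    cache_control = headers.get("Cache-Control", "")
    if "max-age" in cache_control:
        print("Freshness from Cache-Control:", cache_control)
    elif headers.get("Expires") and headers.get("Date"):
        lifetime = (parsedate_to_datetime(headers["Expires"])
                    - parsedate_to_datetime(headers["Date"]))
        print("Freshness from Expires:", lifetime)
    else:
        print("No explicit freshness information: caches must revalidate or guess")
    print("Validators:", headers.get("ETag") or "no ETag",
          "/", headers.get("Last-Modified") or "no Last-Modified")

describe_cacheability("https://cdn.example.com/tag.js")
```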
The second article in David’s series describes how, using Mark Nottingham’s Cacheability Engine, the Lookery team was able to examine the way the CDNs and the origin site informed the visitor’s browser of the cacheability of the JS file it was downloading.
Cacheability doesn’t seem that important until you remember that most small firms are very conscious of their bandwidth outlay. These startups are very aware when their bandwidth usage reaches the 250 GB/month level (Lookery’s usage at the time the posts were written). Any method that can improve end-user performance while still delivering the service visitors expect is a welcome addition, especially when it is low-cost or free.
In the post, David notes that there appears to be no way in their chosen CDN to modify the Cacheability settings, an issue which appears to have been remedied since the article went up [See current server response headers for the Lookery tag here].

Conclusion

Startups spend a lot of time imagining what success looks like. And when it arrives, sometimes they aren’t ready for it, especially when it means handling increasing loads with their often centralized, single-location architectures.
David Cancel, in these two articles, shows how a little early planning, some clear goals, and targeted performance measurement can provide an organization with the information to get them through their initial growth spurt in style.

Thoughts on Web Performance at the Browser

Last week, lost in the preternatural shriek that emerged from the Web community around the release of Google Chrome, John Resig published a thoughtful post on resource usage in the browser. In it, he states that the use of the Process Manager in Chrome will change how people see Web performance. In his words:

The blame of bad performance or memory consumption no longer lies with the browser but with the site.

Coming to the discussion from the realm of Web performance measurement, I realize that the firms I have worked with and for have not done a good job of analyzing this and, in the name of science, have tried to eliminate the variability of Web page processing from the equation.
The company I currently work for has realized that this is a gap and has released a product that measures the performance of a page in the browser.
But all of this misses the point, and goes to one of the reasons why I gave up on Chrome on my older, personal-use computer: Chrome exposes the individual load that a page places on a Web browser.
Resig highlights that browsers that make use of shared resources shift the blame for poor performance onto the browser and away from the design of the page. Technologies that modern designers lean on (Flash, AJAX, etc.) all require substantially greater resource consumption in a browser. Chrome, for good or ill, exposes this load to the user by instantiating a separate, sandboxed process for each tab, clearly indicating which page is the culprit.
It will be interesting to see whether designers take note of this or ignore it in pursuit of the latest shiny toy that gets released. While designers assume that all visitors run cutting-edge machines, I can show them a laptop that is still plenty useful being completely locked up when their page is handled in isolation.

Joost: A change to the program

In April 2007, I tried out the Joost desktop client. [More on Joost here and here]
I was underwhelmed by the performance, and by the fact that the application completely maxed out my dual-core CPU, my 2 GB of RAM, and my high-speed home broadband. I do remember thinking at the time that it seemed weird to have a desktop client in the first place. Well, as Om Malik reports this morning, it seems that I was not alone.
After this week’s hoopla over Chrome, moving in the direction of the browser seems like a wise thing to do. But I definitely hear far more buzz over Hulu than I do for Joost on the intertubes.

Update

Michael Arrington and TechCrunch weigh in on the discussion.

GrabPERF: State of the System

This is actually a short post to write, as the state of the GrabPERF system is currently very healthy. There was an eight-hour outage in early August 2008, but that was a fiber connectivity issue, not a system issue.
Over the history of the service, we have been steadily increasing the number of measurements we take each day.

[Chart: GrabPERF measurements per day]

The large leap occurred when a very large number of tests were added to the system on a single day. But based on this data, the system is gathering more than 900,000 measurements every day.
Thanks to all of the people who volunteer their machines and bandwidth to support this effort!

Chrome v. Firefox – The Container and The Desktop

The last two days of using Chrome have had me thinking about the purpose of the Web browser in today’s world. I’ve talked about how Chrome and Firefox have changed how we see browsers, treating them as interactive windows into our daily life, rather than the uncontrolled end of an information firehose.
These applications, that on the surface seem to serve the same purpose, have taken very different paths to this point. Much has been made about Firefox growing out of the ashes of Netscape, while Chrome is the Web re-imagined.
It’s not just that.
Firefox, through the use of extensions and helper applications, has grown to become a Desktop replacement. Back when Windows for Workgroups was the primary end-user OS (and it wasn’t even an OS), Norton Desktop arrived to provide all of the tools that didn’t ship with the OS. It extended and improved on what was there, and made WFW a better place.
Firefox serves that purpose in the browser world. With its massive collection of extensions, it adds the ability to customize and modify the Web workspace. These extensions even allow the incoming content to be modified and reformatted in unique ways to suit the preferences of each individual. These features allow the person using Firefox to feel in control, empowered.
You look at the Firefox installs of the tech elite, and no two installed versions will be configured in the same way. Firefox extends the browser into an aggregator of Web data and information customization.
But it does it at the Desktop.
Chrome is a simple container. There is (currently) no way to customize the look and feel, extend the capabilities, or modify the incoming or outgoing content. It is a simple shell designed to perform two key functions: search for content and interact with Web applications.
There are, of course, the hidden geeky functions that they have built into the app. But those don’t change its core function: request, receive, and render Web pages as quickly and efficiently as possible. Unlike Firefox’s approach, which places the app at the center of the Web experience, Chrome places the Web itself at the center.
There is no right or wrong approach. As with all things in this complicated world we are in, it depends. It depends on what you are trying to accomplish and how you want to get there.
The conflict that I see appearing over the next few months is not between IE and Firefox and Safari and Opera and Chrome. It is a conflict over what the people want from an application that they use all the time. Do they want a Web desktop or a Web container?

Chrome and Advertising – Google's Plan

Since I downloaded and started using Chrome yesterday, I have had to rediscover the world of online advertising. Using Firefox and Adblock Plus for nearly three years has shielded me from its existence for the most part.
Stephen Noble, in a post on the Forrester Blog for Interactive Marketing Professionals, seems to discover that Chrome will be a source for injecting greater personalization and targeting into the online advertising market.
This is the key reason Chrome exists, right now.
While there may be discussions about the online platform and hosted applications, only a small percentage of Internet users rely on hosted, desktop-like applications (excluding email) in their daily work and life.
However, Google’s biggest money-making ventures are advertising and search. With control of AdSense and DoubleClick, there is no doubt that Google controls the vast majority of the targeted and contextual advertising market around the world.
One of the greatest threats to this money-making is a lack of control over the platform through which ads are delivered. There is talk of IE8 blocking ads (well, non-Microsoft ads anyway), and one of the more popular extensions for Firefox is Adblock Plus. While Safari doesn’t have this ability built in natively, it can be supported by any number of applications that, in the name of Internet security, filter and block online advertising using end-user proxies.
This threat to Google’s core revenue source was not ignored in the development of Chrome. One of the browser’s features is DNS pre-fetching. Now, I haven’t thrown up a packet sniffer, but what’s to prevent part of the pre-fetching algorithm from going beyond DNS for certain content and pre-fetching the whole object, so that the ads load really fast and are therefore seen as less intrusive?
OK, so I am noted for having a paranoid streak.
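For reference, plain DNS pre-fetching on its own is unremarkable. The sketch below (hypothetical, and certainly not Chrome’s implementation) shows the basic idea: resolve the hostnames a page references before the user clicks anything, so later requests skip the DNS lookup.

```python
import socket
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

# Illustration of the *concept* of DNS pre-fetching, not Chrome's actual
# implementation: collect the hostnames referenced by a page and resolve
# them ahead of time so the resolver cache is warm for later requests.
class LinkHosts(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                host = urlparse(value).hostname
                if host:
                    self.hosts.add(host)

def prefetch_dns(page_url):
    with urlopen(page_url) as response:
        parser = LinkHosts()
        parser.feed(response.read().decode("utf-8", errors="replace"))
    for host in parser.hosts:
        try:
            socket.gethostbyname(host)  # warms the resolver cache
        except socket.gaierror:
            pass  # unresolvable hosts are simply skipped

prefetch_dns("https://www.example.com/")
```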
However, using the fastest rendering engine and a rocket-ship-fast JavaScript VM is not only good for the new generation of online Web applications; it also plays right into the hands of improved ad delivery.
So, while Chrome is being hailed as the first Web application environment, it is very much a contextual Web advertising environment as well.
It’s how it was built.
