
Real User Measurement – A tool for the whole business

A key trend in web performance measurement is the drive to make Real User Measurement (RUM) part of a web performance measurement strategy. As someone who cut their teeth on synthetic measurements using distributed robots and repeatable scripts, it took me a long time to see the light of RUM, but I am now a complete convert – the richness and completeness of RUM provides data that I was blocked from seeing with synthetic measurements alone.

[UPDATE: I work for Akamai focusing on the mPulse RUM tool.]

The key for organizations is realizing that RUM is complementary to Synthetic Measurements. The two work together when identifying and solving tricky external web performance issues that can be missed by using a single measurement perspective.

The best way to adopt RUM is to use the dimensions already in place to segment and analyze visitors in traditional web analytics tools. The time and effort already invested there can inform RUM configuration by determining:

  • Unique customer populations – registered users, loyalty program levels, etc
  • Geography
  • Browser and Device
  • Pages and site categories visited
  • Etc.

This information needs to bleed through so that it can be linked directly to the components of the infrastructure and codebase that were used when the customer made their visit. Limiting this data pool to the identification and solving of infrastructure, application, and operations issues isolates the information from a potentially huge population of hungry RUM consumers – the business side of any organization.

The Business users who fed their web analytics data into the setup of RUM need to see the benefit of their efforts. By sharing RUM with the teams that use web analytics and aligning the two strategies, companies can directly tie detailed performance data to existing customer analytics. With this combination, they can begin to truly understand the effects of A/B testing, marketing campaigns, and performance changes on business success and health. But business users need a different language to understand the data that web performance professionals consume so naturally.

I don’t know what that language is, but developing it means taking the data into business teams and seeing how it works for them. What companies will find is that the data used by one group won’t be the same as for the other, but there will be enough shared characteristics to allow the two groups to share a dialect of performance when speaking to each other.

This new audience presents the challenge of clearly presenting the data in a form that is easily consumed by business teams alongside existing analytics data. Providing yet another tool or interface will not drive adoption. Adoption will be driven by attaching RUM to the multi-billion dollar analytics industry so that the value of these critical metrics is easily understood by, and made actionable to, the business side of any organization.

So, as the proponents of RUM in web performance, the question we need to ask is not “Should we do this?”, but rather “Why aren’t we doing this already?”.

HTTP Compression – Have you checked ALL your browsers?

Apache has been my web server of choice for more than a decade. It was one of the first things I learned to compile and manage properly on Linux, so I have a great affinity for it. However, there are still a few gotchas out there that make me grateful that I still know my way around the httpd.conf file.

HTTP compression is something I have advocated for a long time (I just Googled my name and compression – I wrote some of that stuff?) as just basic common sense.
Make Stuff Smaller. Go Faster. Use Less Bandwidth. Lower CDN Charges. [Ok, I can’t be sure of the last one.]

But browsers haven’t always played nice – at least up until about 2008. Since then, I can be pretty safe in saying that even the most brain-damaged web and mobile browsers can handle pretty much any compressed content we throw at them.

Oh, Apache! But where were you? There is an old rule still out there, buried deep in the httpd.conf file, that can shoot you in the foot. I actually caught it yesterday while looking at a site using IE8 and Firefox 8 measurement agents at work. The Firefox page weight was about 570K, while IE was nearly 980K. It turns out the server was not compressing CSS and JS files sent to IE due to this little gem:

 BrowserMatch \bMSIE !no-gzip gzip-only-text/html

This was in response to some issues with HTTP Compression in IE 5 and early versions of IE6 – remember them? – and was appropriate then. Guess what? If you still have this buried in your Apache configuration (or any web server or hardware device that does compression for you), break out the chisels: it’s likely your httpd.conf file hasn’t been touched since the stone age.

Take. It. Out. NOW!

Your site shouldn’t see traffic from any browsers that don’t support compression (unless they’re robots and then, oh well!), so having rules that might accidentally deny compression is simply asking for trouble. Turn the old security ACL rule around for HTTP compression:

Allow everything, then explicitly disable compression.
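
For illustration, a minimal sketch of that pattern in an Apache httpd.conf using mod_deflate could look like the following – the MIME type list and the placeholder user-agent are my assumptions, not a drop-in configuration:

 # Compress common text content for every client by default
 AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/javascript

 # Explicitly disable compression only for a client you have confirmed misbehaves
 # ("AncientBot/1.0" is a placeholder, not a real recommendation)
 BrowserMatch "AncientBot/1\.0" no-gzip

The default is to compress; the exceptions are narrow, deliberate, and documented.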

That should help prevent any accidents. Or higher bandwidth bills due to IE traffic.

OCSP and the GoDaddy Event

The GoDaddy DNS event (which I wrote about here) has been the subject of many a post-mortem and water-cooler conversation in the web performance world for the last week. In addition to the many well-publicized issues that have been discussed, there was one more hidden effect that most folks may not have noticed – unless you use Firefox.

Firefox uses OCSP lookups to validate SSL certificates. If you go to a new site and connect using SSL, Firefox has a process to check the validity of the SSL cert. The results of the lookup are cached and stored for some time (I have heard 3 days, but this could be incorrect) before checking again.

Before the security wonks in the audience get upset, realize I’m not an OCSP or SSL expert, and I would love some comments and feedback that help the rest of us understand exactly how this works. What I do know is that anyone who came to a site that relied on an SSL cert provided and/or signed by GoDaddy at some point in its cert validation path discovered a nasty side-effect of this really great idea when the GoDaddy DNS outage occurred: if you can’t reach the cert signer, the performance of your site will be significantly delayed.

Remember this: It was GoDaddy this time; next time, it could be your cert signing authority.

How did this happen? Performing an OCSP lookup requires opening a new TCP connection so that an HTTP request can be made to the OCSP provider. A new TCP connection requires a DNS lookup. If you can’t perform a successful DNS lookup to find the IP address of the OCSP host…well, I think you can guess the rest.
Unlike other third-party outages, these are not ones that can be shrugged off. These are ones that will affect page rendering by blocking the downloading of the mobile or web application content you present to customers.

I am not someone who can comment on the effectiveness of OCSP lookups in increasing web and mobile security. OCSP lookups in Firefox are simply one more indication of how complex the design and management of modern online applications has become.

Learning from the near-disaster state and preventing it from happening again is more important than a disaster post-mortem. The signs of potential complexity collapse exist throughout your applications, if you take the time to look. And while something like OCSP may look like a minor inconvenience, when it affects a discernible portion of your Firefox users, it becomes a very large mouse scaring a very jumpy elephant.

Web Performance: Your opinion is only somewhat relevant

Context is everything. Where you stand when reading or watching something shapes the way you experience it. Just as Einstein explained to us in the Train/Platform Thought Experiment, the position of the observer dictates how the event is described and recorded.

There is no difference with web performance. When a company develops an online application and presents it to customers (it doesn’t matter if they are outside/retail or inside/partner/employee), the perspective of the team that approved, created, tested, and released the application becomes, as a VP at a previous company explained to me, “interesting, but irrelevant”.

Step away from the world of online application performance for a minute, and put yourself in the shoes of the customer; become a consumer. How do you feel when a site, application, or mobile app is slow to give you what you want? I’ll give you some idea:

The stress levels of volunteers who took part in the study rose significantly when they were confronted with a poor online shopping experience, proving the existence of ‘Web Stress’. Brain wave analysis from the experiment revealed that participants had to concentrate up to 50% more when using badly performing websites, while EOG technology* and behavioural analysis of the subjects also revealed greater agitation and stress in these periods. (“Web Stress: A Wake Up Call for European Business”, emphasis mine)

I know it comes from a competitor, but it is true. It applies to me; it applies to you. And web performance professionals need to step away from the screens for a minute and put themselves in the shoes of the people standing on the platform.

Every day, your online applications change, grow, fail, falter, and evolve – the train is always moving. To the people on the platform, all they see is your train and how it’s moving compared to the other trains they have watched go by. You worked hard on your train, polishing the brass, adding new cars, even upgrading the engine. To you, the train is a magnificent achievement that everyone should admire, especially now that the new engine makes it so much faster!

The customer on the platform is measuring how your updated train is moving compared to the MAGLEV bullet train on the super-conducting rail next to you and asking “How come this train is so slow?”

The complexity of a modern web site is astounding, and improving performance by 0.4 seconds is often a feat worthy of applause…among web performance professionals. From the perspective of your customers, that 0.4 second improvement is still not enough.

Web performance is a numbers game. As an industry, we have been focused on one set of numbers for too long. The customer experience, not the stopwatch, has to drive your company to the next level of performance maturity. To do that, you have to step off your online application train and take a cold hard look at what you deliver to your customers, alongside them down on the platform.

Customer Experience: Standing on your own four legs

Tables. They’re pretty ubiquitous. You might even be using one right now (although in the modern mobile world, you may not. LAMP POST!).

A strong business is like a table, supported by four legs.

  • The Business. The reason that resources and people have been gathered together. There is a vision of what the group wants to do and what success looks like.
  • The Design. Don’t think style; think Design/Build. This is where the group takes the business idea and determines how they will make it happen, where the stores will be, what a datacenter looks like, who they will partner with.
  • The Presentation. How the Business and the Design are shown to people. How the shelves are stocked, the landing pages look, the advertising is placed, how the business looks to potential customers.
  • The Delivery. This is the critical part of how the business uses the systems they have designed and the presentation they have crafted to deliver something of value to the potential customer.

Without any one of these, an organization will fail to meet the most critical goal it has set to be successful: an experience that turns a visitor or browser into a customer.

All the Business and MBA grads in the audience are yawning, and slapping their Venti non-fat, no-whip, decaf soy lattés down on the table. This message isn’t for you. Well, it is, but you can stand up and give your chair to one of the people behind you.

Now that I have Dev, QA, and Operations sitting with me (remember, the Business guys are still in the back of the room, tapping away on their Blackberries), tell me what you think of this conceptual table. How does the Table of Customer Experience relate to you?

Ok, put down the Red Bulls and Monsters and listen: Everything that Dev, QA, or Operations does has an effect on the experience (negative or positive) of the potential customer. If one of the table legs is broken (or even shorter than the others), the rippling shockwaves will eventually affect the entire operation.

So, if I were to ask the members of your organization how their daily activities supported the online application in each of these four areas, do you think they could answer?

Grab a white board. This is going to be a long day.


The Joy of the Platform

In the last few months, I have found myself uttering the word platform on an almost daily basis. As I was flying home last night, I began to consider what that actually means.

In the world I work in, customers have traditionally bought a product or a tool. The purchase is driven by a desire to solve a problem, or to prevent a problem from appearing in the first place. The result was a point solution: a single point of entry into an organization that added a very limited amount of value to a siloed compartment of that organization for a limited period of time, before the next shiny toy came along that purported to do the same thing, only better.

Economies of scale be damned, full inefficiencies ahead!

Companies that sell platforms, or that have begun to consider doing more than just paying lip-service to the word, look at the world with a different filter. The customer is seen as a holistic entity, as complex as any patient who comes to a doctor for treatment. If two people come to a doctor with the flu, they don’t always get the same treatment: one patient may be sent home for rest while the other is rushed to hospital, because a compromised immune system means the flu will kill them without specialist care.

The best platforms are those that focus on one to three key aspects of a customer’s business, or way of doing business, and provide a unified way to perform those functions. The customer should not be forced to go to completely different places to use each tool on the platform.

Platforms have unified flows, and customers can expect that using different parts of the platform will be easy to learn, as they all work the same way. An example of a bad platform is Microsoft Office. When you go to the File menu in a Microsoft Office product, you know that, regardless of which product in the suite you are using, the same items will be there. But where Microsoft Office fails as a platform is in the way that the rest of the menus and actions are not unified, with PowerPoint behaving differently from Word, which are both different from Excel. Microsoft Office is the case study of history getting in the way of ease of use, of standalone products loosely linked, like cheap knock-offs of Lego™.

Platforms are truly extensible. If a customer needs an additional component of the platform, it can be enabled (after the appropriate business negotiations) in minutes.

Platforms need to allow simplicity when needed and complexity where required. While a 10-person company and a 10,000-person company have different needs, the same platform should be able to support these needs. Salesforce.com is a classic example of this – in their world, they don’t care what the size of your company is.

And platforms have to be guided by product management teams with a shared vision. Product management has to enforce strong adherence to the core values of focus, unity, extensibility, and simple complexity. A product management team that lacks the leadership to drive these values will produce a broken platform.

How does your platform compare against this checklist?

Compression and the Browser – Who Supports What?

The title is a question I ask because I hear so many different views and perspectives about HTTP compression from the people I work with, colleagues and customers alike.

There appears to be no absolute statement about the compression capabilities of all current (or in-use) browsers anywhere on the Web.
My standard line is: If your customers are using modern browsers, compress all text content — HTML (dynamic and static), CSS, XML, and Javascript. If you find that a subset of your customers have challenges with compression (I suggest using a cross-browser testing tool to determine this before your customers do), write very explicit regular expressions into your Web server or compression device configuration to filter the user-agent string in a targeted, not a global, way.
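
To make that concrete, here is roughly what the targeted approach could look like in an Apache httpd.conf – the commented-out line shows the global rule to avoid, and the user-agent pattern is a made-up placeholder that you would replace with the string you actually identified:

 # Too broad: turns off compression for every Internet Explorer visitor
 # BrowserMatch \bMSIE no-gzip

 # Targeted: only the specific client build with a documented problem
 BrowserMatch "ProblemBrowser/4\.7" no-gzip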

For example, last week I was on a call with a customer and they disabled compression for all versions of Internet Explorer 6, as the Windows XP pre-SP2 version (which they say you could not easily identify) did not handle it well. My immediate response (in my head, not out loud) was that if you had customers using Windows XP pre-SP2, those machines were likely pwned by the Russian Mob. I find it very odd that an organization would disable HTTP compression for all Internet Explorer 6 visitors for the benefit of a very small number of ancient Windows XP installations.

Feedback from readers, experts, and browser manufacturers that would allow me to compile a list of compatible browsers, and any known issues or restrictions with browsers, would go a long way to resolving this ongoing debate.

UPDATE: Aaron Peters pointed me in the direction of BrowserScope which has an extensive (exhaustive?) list of browsers and their capabilities. If you are seeking the final word, this is a good place to start, as it tests real browsers being used by real people in the real world.

UPDATE – 09/24/2012: I found a site today that was still configured incorrectly. Please, please check your HTTP compression settings for ALL the browsers your customers use. Including your MOBILE clients.

Flickr: When the cable breaks…

When I lived in Victoria, BC, there was always a ship idling in the harbour, engine turning over, a low steady hum that was always there when you went to the water.

Well, they have built an on-shore power plant for that ship, and it looks like they may have brought in a new one, but the vessel is always there…waiting.

When a cable breaks out in the North Pacific, this ship is gone in an hour. Apparently there are cable repair ships stationed all over the world…waiting.

Here’s Neal Stephenson’s article on the first segment of FLAG, and the whole submarine cable business.

The end of DNS as we know it?

DNS has been a great hidden mystery to most people who use the Internet regularly. As a Web performance analyst, I see the effects of poorly deployed or improperly maintained DNS services.

Business 2.0 brings this to the rest of you. While sounding a little apocalyptic, it does highlight a problem that those of us who work close to the ground know: DNS is inherently complex and fragile.

Fragile in the sense that a single mis-step can bring down a site like Google, or prevent Comcast users from using the Internet (not just the Web). Complex in the sense that the software, even after being re-written from the ground up for BIND 9, requires an incredible level of knowledge and expertise to configure and maintain correctly.

I run caching BIND servers at my home, because I know how easy it is for a DNS outage to take me off the Internet. But the level of knowledge needed to set up that service for 5 computers is incredible.

Services such as UltraDNS and Akamai have made DNS management for large companies a core component of their service offerings. Nominum, home of Paul Mockapetris (father of BIND and DNS), sells a robust and scalable BIND replacement.

The question now is: what next? What could replace the DNS infrastructure? So far I haven’t been hearing a lot of conversation about this, because without DNS, nothing will work.

DNS and name resolution using DNS are integrated into EVERY operating system, from phones to supercomputers. So perhaps the question is not what will replace DNS, but what will replace BIND?

Don’t know….

The Long Tail Phenomenon

The Long Tail has been the latest phenom here in the blogosphere. Its discussion of the freedom of choice unleashed by online retailers and distributors should be no surprise to anyone who has been online for more than 2 weeks.

My experience with this Long Tail goes back to Christmas 1998, when I bought a copy of Christmas in Connecticut (the original, not the schlocky re-make) from Amazon. Paid duty and shipping to have it sent to Canada. Very few of my peers had ever heard of it, and the only taped copy was an old Betamax version pulled from TV years before.

The whole reason the Internet retail channel was touted in the first place is exactly what Chris Anderson has "discovered" in the Long Tail: all-the-time access to everything in market niche X. So why is the blogosphere heralding this as a new discovery? It has been with us since the beginning. But once someone "invents" a term for it, it becomes a new idea that needs to be discussed.

It is the original idea behind the commercial Internet. It is not news.

Next story please.
