Category: GrabPERF

Compression and the Browser – Who Supports What?

The title is a question I ask because I hear so many different views on HTTP compression from the people I work with, colleagues and customers alike.

There appears to be no absolute statement about the compression capabilities of all current (or in-use) browsers anywhere on the Web.
My standard line is this: if your customers are using modern browsers, compress all text content, meaning HTML (dynamic and static), CSS, XML, and JavaScript. If you find that a subset of your customers has trouble with compression (I suggest using a cross-browser testing tool to discover this before your customers do), write very explicit regular expressions into your Web server or compression device configuration to filter the user-agent string in a targeted, not a global, way.
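
For example, with Apache's mod_deflate, a targeted exclusion might look something like the sketch below. This assumes Apache 2.x, and the user-agent pattern should be built from the clients you have actually seen fail; the SV1 token was added to IE6's user-agent string by XP SP2, so its absence is only a rough proxy for pre-SP2 installs.

    # Compress the usual text types
    AddOutputFilterByType DEFLATE text/html text/css text/xml \
        application/xml application/x-javascript text/javascript

    # Targeted, not global: only IE6 on pre-SP2 Windows XP (NT 5.1
    # without the SV1 token) is excluded, not every IE6 visitor.
    BrowserMatch "MSIE 6\.0; Windows NT 5\.1(?!.*SV1)" no-gzip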

For example, last week I was on a call with a customer who had disabled compression for all versions of Internet Explorer 6, because the Windows XP pre-SP2 version (which they said could not easily be identified) did not handle it well. My immediate response (in my head, not out loud) was that if you have customers on Windows XP pre-SP2, those machines are likely pwned by the Russian Mob. I find it very odd that an organization would disable HTTP compression for all Internet Explorer 6 visitors for the benefit of a very small number of ancient Windows XP installations.

Feedback from readers, experts, and browser manufacturers that would allow me to compile a list of compression-compatible browsers, along with any known issues or restrictions, would go a long way toward resolving this ongoing debate.

UPDATE: Aaron Peters pointed me in the direction of BrowserScope which has an extensive (exhaustive?) list of browsers and their capabilities. If you are seeking the final word, this is a good place to start, as it tests real browsers being used by real people in the real world.

UPDATE – 09/24/2012: I found a site today that was still configured incorrectly. Please, please check your HTTP compression settings for ALL browsers your customers use, including your MOBILE clients.
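
A quick way to spot-check is to request your page with different user-agent strings and see whether a Content-Encoding header comes back. A sketch using curl, where the URL is a placeholder and the mobile user-agent should be one your logs actually show:

    # Desktop-style request: is the response compressed?
    curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' \
         'https://www.example.com/' | grep -i '^content-encoding'

    # Same check with a (hypothetical) mobile user-agent string
    curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' \
         -A 'SomeMobileUserAgent' \
         'https://www.example.com/' | grep -i '^content-encoding'

If either command returns nothing for a user-agent your customers really send, that content is going over the wire uncompressed.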

GrabPERF – Database Failure

The GrabPERF database server failed sometime early this morning. The hosting facility is working to install a new machine, and then will begin the long process of restoring from backups and memory.
Updates will be posted here.

UPDATE – Sep 4 2009 22:00 GMT: The database listener is up; data is flowing into the database and can be viewed in the GrabPERF interface. However, I have lost all of the management scripts that aggregate and drop data. These will be critical, as the new database server has a substantially smaller drive. There is a larger attached drive, and I will try to mount the data there.
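
For context, those scripts did two jobs: roll raw measurements up into daily aggregates, and drop raw rows past a retention window so the smaller drive does not fill. A rough MySQL sketch of the idea, where the table and column names are illustrative rather than the actual GrabPERF schema:

    -- Roll yesterday's raw rows up into a daily summary table
    INSERT INTO measurement_daily (test_id, day, avg_first_byte, avg_total, samples)
    SELECT test_id, DATE(measured_at), AVG(first_byte), AVG(total_time), COUNT(*)
      FROM measurement
     WHERE measured_at >= CURDATE() - INTERVAL 1 DAY
       AND measured_at <  CURDATE()
     GROUP BY test_id, DATE(measured_at);

    -- Drop raw detail older than the retention window
    DELETE FROM measurement
     WHERE measured_at < CURDATE() - INTERVAL 30 DAY;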

It will likely take more time than I have at the moment to maintain and restore GrabPERF to its pre-existing state. You can expect serious outages and changes to the system in the next few weeks.

[Whining removed. Self-inflicted injuries are always the hardest to bear.]

UPDATE – Sep 5 2009 03:30 GMT: The database is back up and absorbing data. Attempts to move it to the larger drive on the system failed, so the entire database is running on an 11GB partition. <GULP>.

The two most vital maintenance scripts are also running the way they should be. I had to rewrite those from very old archives.

Status: Good, but not where I would like it. I will work with Technorati to see if there is something that I’m missing in trying to use the larger partition. Likely it comes down to my own lame-o linux admin skillz.

I want to thank the ops team from Technorati for spending time on this today. They did an amazing job of finding a machine for this database to live on in record time.
I have also learned the hard lesson of backups. May I not have to learn it again.

UPDATE – Sep 5 2009 04:00 GMT: Thanks again to Jerry Huff at Technorati. He pointed out that if I use a symbolic link, I can move the db files over to the large partition with no problem. Storage is no longer an issue.
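
For anyone who lands in the same spot, the move looks roughly like this. A sketch for a CentOS-era box: the paths are illustrative, MySQL must be stopped while the files move, and file ownership (and SELinux contexts, if enforcing) must survive the move.

    service mysqld stop
    # Move the data directory to the big partition...
    mv /var/lib/mysql /data/mysql
    # ...and leave a symlink so MySQL still finds it at the old path
    ln -s /data/mysql /var/lib/mysql
    service mysqld start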

[And why, you ask, is Tara Hunt (@missrogue) pictured on this post? Hey, when I asked Tagaroo for Technorati images, this is what it gave me. It was a bit of a shock after 8 hours of mind-stretching recovery work, but hey, ask and ye shall receive.]

UPDATE – Sep 7 2009 01:00 GMT: Seems that I got myself into trouble by using the default MySQL configuration that came with the CentOS distro. As a result, I ran out of database connections! Something that I have chided others for, I did myself.
The symptom appeared when I reactivated my logging database, which runs against the same MySQL installation, just in a separate database. It started to use up the default pool of connections (100) and the agents couldn’t report in.
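
The fix is a single line in my.cnf; the value below is illustrative and should be sized to your actual agent and logging load.

    # /etc/my.cnf
    [mysqld]
    max_connections = 300

SHOW VARIABLES LIKE 'max_connections' confirms the setting, SHOW STATUS LIKE 'Max_used_connections' shows how close you have come to the ceiling, and SET GLOBAL max_connections can raise it at runtime without a restart.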

This has been resolved and everything is back to normal.

GrabPERF: What and Why

Why GrabPERF?

About four years ago, I had a bright idea: I wanted to learn more about how to build and scale a small Web performance measurement platform. I've worked in the Web performance industry for nearly a decade now, and this was an experimental platform for me to examine and encounter many of the challenges I see on a daily basis.

The effort was so successful and garnered enough attention during the initial blogging boom that I was able to sell the whole platform for a tiny (that is not a typo) sum to Technorati.

The name is taken from another experimental tool I wrote called GrabIT2, which uses the PHP cURL libraries to capture timings and HTML data for HTTP requests. It is an extension of my articles and writings on Web performance that started at Webperformance.org and have since moved to this blog.
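
The heart of that approach is small: make the request through cURL, then read the timing breakdown back out of the handle. A minimal PHP sketch in that spirit (this is not GrabIT2's actual code, and the URL is a placeholder):

    <?php
    // Fetch one object and report the phase timings cURL collects.
    $ch = curl_init('https://www.example.com/');
    curl_setopt_array($ch, array(
        CURLOPT_RETURNTRANSFER => true,  // return the HTML instead of printing it
        CURLOPT_FOLLOWLOCATION => true,  // chase redirects to the final page
        CURLOPT_ENCODING       => '',    // offer gzip/deflate, as a browser would
    ));
    $html = curl_exec($ch);
    if ($html === false) {
        die('Request failed: ' . curl_error($ch) . "\n");
    }

    $info = curl_getinfo($ch);
    printf("DNS lookup:  %.3fs\n", $info['namelookup_time']);
    printf("TCP connect: %.3fs\n", $info['connect_time']);
    printf("First byte:  %.3fs\n", $info['starttransfer_time']);
    printf("Total:       %.3fs\n", $info['total_time']);
    printf("HTTP %d, %d bytes\n", $info['http_code'], strlen($html));
    curl_close($ch);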

What is GrabPERF?

GrabPERF is a multi-location measurement platform, based on PERL, cURL, PHP, and MySQL, that is designed to:

  • Measure the base HTML or a single-object target using HTTP or HTTPS
  • Report the data to a central database located in the San Francisco area (a sketch of such a table follows below)
  • Present the data through a GUI or as a text-based download
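
For the curious, a central measurement table for a system like this might look something like the following. This is illustrative only, not the actual GrabPERF schema:

    CREATE TABLE measurement (
        measurement_id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        test_id        INT UNSIGNED NOT NULL,       -- which target URL
        agent_id       INT UNSIGNED NOT NULL,       -- which measurement location
        measured_at    DATETIME NOT NULL,
        first_byte     FLOAT NOT NULL,              -- seconds to first byte
        total_time     FLOAT NOT NULL,              -- seconds for the full fetch
        http_code      SMALLINT UNSIGNED NOT NULL,
        bytes          INT UNSIGNED NOT NULL,
        PRIMARY KEY (measurement_id),
        KEY idx_test_time (test_id, measured_at)
    );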

Why not Full Pages with all Objects?

Reason 1: I work for a company that already does that. Lawyers and MBAs among you, do the math.

Reason 2: I am an analyst, not a programmer. The best I can say about my measurement script is that it's a hack job.

Why is the GrabPERF interface so clunky?

See reason 2 above.

If you want to write your own interface to the data, let me know.

Why has the interface not changed in nearly three years?

The current interface works. It’s simple, clean, and delivers the data that I and the regular users need to analyze performance issues. If there is something more that you would like to see, let me know!

I like what I see. How can I host a measurement location?

Just contact me, and I can provide you with a list of PERL modules you will need to install on your linux server. In return, I need the static IP address of the machine hosting the measurement agent.

How stable is GrabPERF?

Most of the time, I forget it's even running. I have logged onto the servers, typed in uptime, and discovered that it's been six months or more since they were rebooted.

It was designed to be simple, because that’s all I know how to do. The lack of complexity makes it effectively self-managing.

Shouldn’t all systems be that way?

What if my question isn’t asked / answered here?

You should know the answer to this by now: contact me.

Black Friday 2008: The pain, the horror, the suffering

The GrabPERF Black Friday Dashboard is done for another year, and in the area of Web performance there were two victims that suffered the most at the hands of the onslaught of bargain-hunters.

Some caveats I need to mention about the GrabPERF measurement methodology:

  1. Only the base HTML file of each site is measured, not the page's other objects.
  2. Only the homepage is measured, which means any issues that arose in the shopping process were not captured.

All of the sites in the GrabPERF Holiday Retail Measurement Index can be monitored continually on the GrabPERF Black Friday Dashboard. This page will be available until January 1, 2009.

That said, the two primary performance victims this year were HP Shopping and Sears. We focus here on those that did not do well, because sites that have met the Web performance challenge and survived to fight another year are not as interesting from a learning perspective.

HP Shopping

[Chart: HP Shopping response times, Black Friday 2008]

HP suffered the greatest response-time problems, effectively becoming unresponsive as of 09:00 EST. The largest effect on overall response time came from the First Byte time metric, which is a solid proxy for server or application load, as it measures the time between the initial client HTTP request and the server's HTTP response.

Factor into this analysis that GrabPERF captures data only for the base HTML object. If the performance seen here carried over to the download of all the graphical content on the page, I would be surprised if anyone was able to complete a purchase on the HP Web site on Black Friday.

Today, response times have returned to substantially lower levels, indicating that this application was simply not ready for the amount of traffic it received, or ran into a completely unexpected issue when the load increased.

Recommendation for 2009: Load test the application using this year's traffic metrics as a baseline for validating scalability.

Sears

[Chart: Sears response times, Black Friday 2008]

Sears is a returning visitor from last year's Black Friday measurements. Unfortunately, they return for exactly the same reason they appeared last year: scaling/capacity issues that surface as errors.

And these are the worst kind of errors. As can be seen in the graphic below, the Sears Web site announced to the whole world that it had over-reached and could not handle the incoming volume of traffic.

[Image: Sears error page, Black Friday 2008]

What is interesting is that Sears owns properties that survived the day very well, namely Lands' End. The question that must be posed: why does the parent site fail so badly when the child sites handle the traffic without difficulty?

Recommendation for 2009: Load test for capacity, and meet with the Lands' End team to understand what they do to handle the load.

GrabPERF: Yahoo issues today

Netcraft noted that Yahoo encountered a bit of a headache today. So I fired up my handy-dandy little performance system and had a look.

[Chart: Yahoo performance issues, July 6, 2007]

Although for an organization and infrastructure the size of Yahoo’s this may have been a big event, in my experience, this was a “stuff happens on the Internet” sort of thing.

Move along, people; there's nothing to see. It is not the apocalyptic event that Netcraft is making it out to be. Google burps and barfs all the time, and everyone grumbles. But there is no need to run in circles and scream and shout.

Yeesh!

GrabPERF: Some System Statistics

Over the last year, GrabPERF has caught the fancy of a few in the Blogging/Social Media world. It has given some perspective on how performance can affect business and image in the connected world.

But what of GrabPERF itself? It has been on a development hiatus for the last few months due to pressures from my “real” job and various trips (business and pleasure) that I have been undertaking. Over the last two weeks, I have been trying to clear out the extra measurements and focus the features and attention on the community that appears most interested in the data.

During this process, I heard back from some folks who had been using GrabPERF in stealth mode (even I can't track all the hits!) and who asked, "Hey! Where did my data go?" Glad to hear from all of you.

Just to give everyone some idea of the growth, here is a snapshot of aggregated daily performance and number of measurements.

[Chart: GrabPERF statistics, by day]

The number of measurements shot up until I started culling the unused measurements. Over the last three weeks, average performance became extremely variable, and that's when I began considering the culling. As well, the New York PubSub agent appears to have gone permanently offline as part of that company's winding-down process.

The fact that the system was taking 390,000 measurements per day still astounds me.

The same growth was visible in the number of distinct sites we were measuring.

[Chart: GrabPERF statistics through August 6, 2006]

After the latest cull, we are down to 84 distinct tests, a level last seen on November 27, 2005.

I am pleased that the system has held together as well as it has.

GrabPERF Site Statistics | Web Analytics Index – Mar 08 2006

The Site Statistics | Web Analytics Index measurements have been running now for about 2.5 days, and I wanted to make some general comments on what I am seeing.

The methodology for testing is straightforward. I chose sites | services that allow you to create a free (if limited) account to track your Web visitors, and that let you make those statistics available for anyone to look at. Using this, a measurement was established against the landing page that visitors see if they choose to look at these publicly available statistics.

I am using this blog as the placeholder for the tracking "bugs" used in this index (see the right-hand column).

From the graph above, it is clear that ShinyStat is the performance leader in this space. They have the smallest overall page size as well as the fastest and most reliable performance.

It is important to note that services such as WebTrends, Omniture, WebSideStory and Coremetrics are not included, as they are beyond the reach of most bloggers, and do not provide a public side to their data. Also, Google Analytics is not included, as they do not provide public access to the collected data.

The collected data is available in GrabPERF both as the Site Statistics Index and as individual measurements.

The final move has occurred

So, the move to the new datacenter is complete. We finished off the final changes last night | early this morning, and the Web server and database are now running on a big fat pipe at 365 Main in downtown San Francisco.

How did I spring for new hardware and hot hosting? Well, I had a little help from some Friends of GrabPERF — Technorati.

About 3 months ago, Dave Sifry contacted me when we went through our last financial crisis and offered to host the whole kit and kaboodle. He put me in contact with Adam Hertz, who turned me over to Camille Riddle.

After driving Camille and her team nuts for two months, we switched everything over starting at about 23:30 EST (04:30 GMT) last night. There were a few hiccups and burps, and the DNS propagation may take a while to reach some of the most distant folks, but this morning I issued the final "poweroff" command to my home-based Web server.

I want to thank the Technorati team for all of their help, and I look forward to continuing to deliver quality data to the blog community.

Niall Kennedy talks about the Technorati side here, along with a funky pic of the old GrabPERF datacenter.
