Category: Web performance concepts

Web Performance: The Strength of Corporate Silos

When I meet with clients, I am always astounded by the strength of the silos that exist inside companies. Business, Marketing, IT, Server ops, Development, Network ops, Finance. In the same house, sniping and plotting to ensure that their team has the most power.

Or so it seems to the outsider.

Organizations are all fighting over the same limited pool of resources. Also, the organization of the modern corporation is devised to create this division, with an emphasis on departments and divisions over teams with shared goals. But even the Utopian world of the cross-functional team is a false dream, as the teams begin to fight amongst themselves for the same meager resources at the project, rather than the department, level.

I have no solution for this rather amusing situation. Why is it amusing? As an outsider (at my clients and in my own company) I look upon these running battles as a sign of an organization that has lost its way. Where the need to be managed and controlled has overcome the need to create and accept responsibility.

Start-ups are the villages of the corporate world. Cooperation is high, justice is swift, and creative local solutions abound. Large companies are the Rio de Janeiros of the economy. Communication is so broken that companies have to run private phone exchanges to other offices. Interesting things have to be accomplished in the back-channel.

This has a severe effect on Web performance initiatives. Each group is constantly battling to maintain control over its piece of the system, and to ensure that its need for resources is fulfilled. That means one group wants to test K while another wants to measure Q and yet a third needs to capture data on E.

This leads to a substantial amount of duplication and waste when it comes to solving problems and moving the Web site forward. There is no easy answer for this. I have discussed the need for business and IT to find some level of understanding in previous posts, and have yet to find a company that is able to break down the silos without reducing the control that the organization imposes.

The Dog and The Toolbox: Using Web Performance Services Effectively

The Dog and The Toolbox

One day, a dog stumbled upon a toolbox left on the floor. There was a note on it, left by his master, which he couldn’t read. He was only a dog, after all.

He sniffed it. It wasn’t food. It wasn’t a new chew toy. So, being a good dog, he walked off and lay on his mat, and had a nap.

When the master returned home that night, the dog was happy and excited to see him. He greeted his master with joy, and brought along his favorite toy to play with.
He was greeted with yelling and anger and "bad dog". He was confused. What had he done to displease his master? Why did the master keep yelling at him, and pointing at the toolbox? He had been good and left it alone. He knew that it wasn't his.

With his limited understanding of human language, he heard the words “fix”, “dishwasher”, and “bad dog”. He knew that the dishwasher was the yummy cupboard that all of the dinner plates went in to, and came out less yummy and smelling funny.

He also knew that the cupboard had made a very loud sound that had scared the dog two nights ago, and then had spilled yucky water on the floor. He had barked to wake his master, who came down, yelling at the dog, then yelling at the machine.
But what did fix mean? And why was the master pointing at the toolbox?

The Toolbox and Web Performance

Far too often, I encounter companies that have purchased a Web performance service that they believe will fix their problems. They then pass the day-to-day management of this information on to a team that is already overwhelmed with data.

What is this team supposed to do with this data? What does it mean? Who is going to use it? Does it make my life easier?

When it comes time to renew the Web performance services, the company feels cheated. And they end up yelling at the service company that sold them this useless thing, or at their own internal staff for not using this tool.

To an overwhelmed IT team, Web performance tools are another toolbox on the floor. They know it’s there. It’s interesting. It might be useful. But it makes no sense to them, and is not part of what they do.

Giving your dog the toolbox does not fix your dishwasher. Giving an IT team yet another tool does not improve the performance of a Web site.

Only in the hands of a skilled and trained team does the Web performance of a site improve, or the dishwasher get fixed. As I have said before, a tool is just a tool. The question that all organizations must face is what they want from their Web performance services.

Has your organization set a Web performance goal? How do you plan to achieve your goals? How will you measure success? Does everyone understand what the goal is?

After you know the answers to those questions, you will know that, as amazing as he is, your dog will never be able to fix your dishwasher.

But now you know who can.

Managing Web Performance: A Hammer is a Hammer

Give almost any human being a hammer, and they will know what to do with it. Modern city dwellers, ancient jungle tribes, and most primates would all look at a hammer and understand instinctively what it does. They would know it is a tool to hit other things with. They may not grasp some of the subtleties, such as that it is designed to drive nails into other things and not to beat other creatures into submission, but they would know that this is a tool that is a step up from the rock or the tree branch.

Simple tools produce simple results. This is the foundation of a substantial portion of the Software-as-a-Service (SaaS) model. SaaS is a model which allows companies to provide a simple tool in a simple way to lower the cost of the service to everyone.
Web performance data is not simple. Gathering the appropriate data can be as complex as the Web site being measured. The design and infrastructure that supports a SaaS site is usually far more complex than the service it presents to the customer. A service that measures the complexity of your site will likely not provide data that is easy to digest and turn into useful information.

As any organization that has purchased a Web performance measurement service, a monitoring tool, or a corporate dashboard expecting instant solutions will tell you, there are no easy solutions. These tools are the hammer, and just having a hammer does not mean you can build a house or craft fine furniture.

In my experience, there are very few organizations that can craft a deep understanding of their own Web performance from the tools they have at their fingertips. And the Web performance data they collect about their own site is about as useful to them as a hammer is to a snake.

Web Performance: Managing Web Performance Improvement

When starting with new clients, finding the low-hanging fruit of Web performance is often the simplest thing that can be done. By recommending a few simple configuration changes, these early stage clients can often reap substantial Web performance improvement gains.

The harder problem is that organizations struggle to build on these early wins and create an ongoing culture of Web performance improvement. Stripping away the simple fixes often exposes deeper, more basic problems that may have nothing to do with technology. In some cases, there is no Web performance improvement process simply because of the pressure and resource constraints that the organization faces.

In other cases, a deeper, more profound distrust between the IT and Business sides of the organization leads to a culture of conflict, a culture where it is almost impossible to help a company evolve and develop more advanced ways of examining the Web performance improvement process.

I have written on how Business and IT appear, on the surface, to be a mutually exclusive dichotomy in my review of Andy King’s Website Optimization. But this dichotomy only exists in those organizations where conflict between business and technology goals dominate the conversation. In an organization with more advanced Web performance improvement processes, there is a shared belief that all business units share the same goal.

So how can a company without a culture of Web performance improvement develop one?

What can an organization crushed between limited resources and demanding clients do to make sure that every aspect of their Web presence performs in an optimal way?

How can an organization plagued by a lack of transparency and open distrust between groups evolve to adopt an open and mutually agreed upon performance improvement process? Experience has shown me that a strong culture of Web performance improvement is built on three pillars: Targets, Measurements, and Involvement.

Targets

Setting a Web performance improvement target is the easiest part of the process to implement. It is almost ironic that it is also the part of the process that is most often ignored.

Any Web performance improvement process must start with a target. It is the target that defines the success of the initiative at the end of all of the effort and work.

If a Web performance improvement process does not have a target, then the process should be immediately halted. Without a target, there is no way to gauge how effective the project has been, and there is no way to measure success.

Measurements

Key to achieving any target is the ability to measure the success in achieving the target. However, before success can be measured, how to measure success must be determined. There must be clear definitions on what will be measured, how, from where, and why the measurement is important.

Defining how success will be measured ensures transparency throughout the improvement process. Allowing anyone who is involved or interested in the process to see the progress being made makes it easier to get people excited and involved in the performance improvement process.

Involvement

This is the component of the Web performance improvement process that companies have the greatest difficulty with. One of the great themes that defines the Web performance industry is the openly hostile relationships between IT and Business that exist within so many organizations. The desire to develop and ingrain a culture of Web performance improvement is lost in the turf battles between IT and Business.

If this energy could be channeled into proactive activity, the Web performance improvement process would be seen as beneficial to both IT and Business. But what this means is that there must be greater openness to involve the two parts of the organization in any Web performance improvement initiative.

Involving as many people as is relevant requires that all parts of the organization agree on how improvement will be measured, and what defines a successful Web performance improvement initiative.

Summary

Targets, Measurements, and Involvement are critical to Web performance initiatives. The highly technical nature of a Web site and the complexities of the business that this technology supports should push companies to find the simplest performance improvement process that they can. What most often occurs, however, is that these three simple process management ideas are quickly overwhelmed by time pressures, client demands, resource constraints, and internecine corporate warfare.

Web Performance: Blogs, Third Party Apps, and Your Personal Brand

The idea that blogs generate a personal brand is as old as the “blogosphere”. It’s one of those topics that rages through the blog world every few months. Inexorably the discussion winds its way to the idea that a blog is linked exclusively to the creators of its content. This makes a blog, no matter what side of the discussion you fall on, the online representation of a personal brand that is as strong as a brand generated by an online business.

And just as corporate brands are affected by the performance of their Web sites, a personal brand can suffer just as much when something causes the performance of a blog Web site to degrade in the eyes of the visitors. For me, although my personal brand is not a large one, this happened yesterday when Disqus upgraded to multiple databases during the middle of the day, causing my site to slow to a crawl.

I will restrain my comments on mid-day maintenance for another time.

The focus of this post is the effect that site performance has on personal branding. In my case, the fact that my blog site slowed to a near standstill in the middle of the day likely left visitors with the impression that my blog about Web performance was not practicing what it preached.

For any personal brand, this is not a good thing.
In my case, I was able to draw on my experience to quickly identify and resolve the issue. Performance returned to normal when I temporarily disabled the Disqus plugin (it has since been reactivated). However, if I hadn’t been paying attention, this performance degradation could have continued, increasing the negative effect on my personal brand.

Like many blogs, Disqus is only one of the outside services I have embedded in my site design. Sites today rely on AdSense, Lookery, Google Analytics, Statcounter, Omniture, Lijit, and on goes the list. These services have become as omnipresent in blogs as the content. What needs to be remembered is that these add-ons are often overlooked as performance inhibitors.

Many of these services are built using the new models of the over-hyped and mis-understood Web 2.0. These services start small, and, as Shel Israel discussed yesterday, need to focus on scalability in order to grow and be seen as successful, rather than cool, but a bit flaky. As a result, these blog-centric services may affect performance to a far greater extent than the third-party apps used by well-established, commercial Web sites.

I am not claiming that any one of these services in and of itself causes any form of slowdown. Each has its own challenges with scaling, capacity, and success. It is the sheer number of services used by blog designers and authors that poses the greatest potential problem when attempting to debug performance slowdowns or outages. The question in these instances, in the heat of a particularly stressful moment in time, is always: Is it my site or the third-party?

The advice I give is that spoken by Michael Dell: You can’t manage what you can’t measure. Yesterday, I initiated monitoring of my personal Disqus community page, so I could understand how this service affected my continuing Web performance. I suggest that you do the same, but not just of this third-party. You need to understand how all of the third-party apps you use affect how your personal brand performance is perceived.

Why is this important? In the mind of the visitor, the performance problem is always with your site. As with a corporate site that sees a sudden rise in response times or decrease in availability, it does not matter to the visitor what the underlying cause of the issue is. All they see is that your site, your brand (personal or corporate), is not as strong or reliable as they had been led to believe.

The lesson that I learned yesterday, one that I have taught to so many companies but not heeded myself, is that monitoring the performance of all aspects of your site is critical. And while you as the blog designer or writer might not directly control the third-party content you embed in your site, you must consider how it affects your personal brand when something goes wrong.

You can then make an informed decision on whether the benefit of any one third-party app is outweighed by the negative effect it has on your site performance and, by extension, your personal brand.

Web Performance: Your Teenage Web site

It’s critical to your business. It affects revenue. It’s how people who can’t come to you perceive you.
It’s your Web site.

It's complex. Abstract. Lots of conflicting ideas and forces are involved. Everyone says they know the best thing for it. Finger-pointing. Door slamming. Screaming.

Am I describing your Web site and the team that supports it? Or your teenager?
If you think of your Web site as a teenager, you begin to realize the problems that you're facing. Like a teenager, it has grown physically and mentally, and, as a result, thinks it's an experienced adult, ready to take on the world. However, let's think of your site as a teenager, and think back to how we, as teenagers (yeah, I'm old), saw the world.

MOM! This doesn’t fit anymore!

Your Web site has grown as all of your marketing and customer service programs bear fruit. Traffic is increasing. Revenue is up. Everyone is smiling.

Then you wake up and realize that your Web site is too small for your business. This could mean that the infrastructure is overloaded, the network is tapped out, your connectivity is maxed, and your sysadmins, designers, and network teams are spending most of their day just firefighting.

Now, how can you grow a successful business, or be the hip kid in school, when your clothes don’t fit anymore?

But, you can’t buy an entire wardrobe every six months, so plan, consider your goals and destinations, and shop smart.

DAD! Everyone has one! I need to have one to be cool!

Shiny.

It’s a word that has been around for a long time, and was revived (with new meaning) by Firefly. It means reflective, bright, and new. It’s what attracts people to gold, mirrors, and highly polished vintage cars. In the context of Web sites, it’s the eye-candy that you encounter in your browsing, and go “Our site needs that”.
Now step back and ask yourself what purpose this new eye-candy will serve.
And this is where Web designers and marketing people laugh, because it’s all about being new and improved.

But can you be new and improved, when your site is old and broken?

Get your Web performance in order with what you have, then add the stuff that makes your site pop.

But those aren’t the cool kids. I don’t hang with them.

Everyone is attracted to the gleam of the cool new Web sites out there that offer to do the same old thing as your site. The promise of new approaches to old problems, lower cost, and greater efficiencies in our daily lives are what prompt many of us to switch.

As a parent, we may scoff, realizing that maybe the cool kids never amounted to much outside of High School. But, sometimes you have to step back and wonder what makes a cool kid cool.

You have to step back and say, why are they attracting so much attention and we’re seen as the old-guard? What can we learn from the cool kids? Is your way the very best way? And says who?

And once you ask these questions, maybe you agree that some of what the cool kids do is, in fact, cool.

Can I borrow the car?

Trust is a powerful thing to someone, or to a group. Your instinctive response depends on who you are, and what your experiences with others have been like in the past.

Trust is something often found lacking when it comes to a Web site. Not between your organization and your customers, but between the various factions within your organization who are trying to interfere or create or revamp or manage the site.

Not everyone has the same goals. But sometimes asking a few questions of other people and listening to their reasons for doing something will lead to a discussion that will improve the Web site in a way that improves the business in the long run.
Sometimes asking why a teenager wants to borrow the car will help you see things from their perspective for a little while. You may not agree, but at least now it’s not a yes/no answer.

YOU: How was school today? – THEM: Ok.

Within growing organizations, open and clear communication tends to gradually shrivel and degenerate. Communications become more formal, with what is not said being as important as what is. Trying to find out what another department is doing becomes a lot like determining the state of the Soviet Union’s leadership based on who attends parades in Red Square.

Abstract communication is one of the things that separates humans from a large portion of the rest of the animal kingdom. There is nothing more abstract than a Web site, where physical devices and programming code produce an output that can only be seen and heard.

The need for communication is critical in order to understand what is happening in another department. And sometimes that means pushing harder, making the other person or team answer hard questions that they think you're not interested in, or that they think are none of your business.

If you are in the same company, it’s everyone’s business. So push for an answer, because working to create an abstract deliverable that determines the success or failure of the entire firm can’t be based on a grunt and a nod.

Summary

There are no easy answers to Web performance. But if you consider your Web site and your teams as a teenager, you will be able to see that the problems that we all deal with in our daily interactions with teens crop up over and over when dealing with Web design, content, infrastructure, networks and performance.

Managing all the components of a Web site and getting the best performance out of it often requires you to have the patience of Job. But it is also good to carry a small pinch of faith in these same teams; faith that everyone, whether they say it or not, wants to have the best Web site possible.

Web Performance, Part IX: Curse of the Single Metric

While this post is aimed at Web performance, the curse of the single metric affects our everyday lives in ways that we have become oblivious to.

When you listen to a business report, the stock market indices are an aggregated metric used to represent the performance of a set group of stocks.

When you read about economic indicators, these values are the aggregated representations of complex populations of data, collected from around the country, or the world.

Sport scores are the final tally of an event, but they may not always represent how well each team performed during the match.

The problem with single metrics lies in their simplicity. When a single metric is created, it usually attempts to factor in all of the possible and relevant data to produce an aggregated value that can represent a whole population of results.
These single metrics are then portrayed as a complete representation of this complex calculation. The presentation of this single metric is usually done in such a way that their compelling simplicity is accepted as the truth, rather than as a representation of a truth.

In the area of Web performance, organizations have fallen prey to this need for the compelling single metric: the need to represent a very complex process in terms that can be quickly absorbed and understood by as large a group of people as possible.

The single metrics most commonly found in the Web performance management field are performance (end-to-end response time of the tested business process) and availability (success rate of the tested business process). These numbers are then merged and transformed by data from a number of sources (external measurements, hit counts, conversions, internal server metrics, packet loss), and this information is bubbled up in an organization. By the time senior management and decision-makers receive the Web performance results, they are likely several steps removed from the raw measurement data.

An executive will tell you that information is a blessing, but only when it speeds, rather than hinders, the decision-making process. A Web performance consultant (such as myself) will tell you that basing your decisions on a single metric that has been created out of a complex population of data is madness.

So, where does the middle-ground lie between the data wonks and the senior leaders? The rest of this post introduces a few metrics that, taken together as a small set, give senior leaders better information to work from when deciding what to do next.

A great place to start this process is to examine the percentile distribution of measurement results. Percentiles are known to anyone who has children. After a visit to the pediatrician, someone will likely state that “My son/daughter is in the XXth percentile of his/her age group for height/weight/tantrums/etc”. This means that XX% of the population of children that age, as recorded by pediatricians, report values at or below the same value for this same metric.

Percentiles are great for a population of results like Web performance measurement data. Using only a small set of values, anyone can quickly see how many visitors to a site could be experiencing poor performance.

If at the median (50th percentile), the measured business process is 3.0 seconds, this means that 50% of all of the measurements looked at are being completed in 3.0 seconds or less.

If the executive then looks up to the 90th percentile and sees that it’s at 16.0 seconds, it can be quickly determined that something very bad has happened to affect the response times collected for the 40% of the population between these two points. Immediately, everyone knows that for some reason, an unacceptable number of visitors are likely experiencing degraded and unpredictable performance when they visit the site.

A suggestion for enhancing averages with percentiles is to use the 90th percentile value as a trim ceiling for the average. Then the untrimmed and trimmed averages can be compared side-by-side. For sites with a larger number of response time outliers, the average will decrease dramatically when it is trimmed, while sites with more consistent measurement results will find their average response time is similar with and without the trimmed data.
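As a rough sketch of this idea, the following Python snippet computes the median, the 90th percentile (using a simple nearest-rank method), and the untrimmed versus 90th-percentile-trimmed averages. The measurement values are invented for illustration only:

```python
def percentile(values, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Invented response time measurements, in seconds, with two outliers.
measurements = [2.1, 2.4, 2.6, 2.8, 3.0, 3.1, 3.4, 3.9, 16.0, 24.5]

median = percentile(measurements, 50)
p90 = percentile(measurements, 90)

untrimmed_avg = sum(measurements) / len(measurements)
trimmed = [m for m in measurements if m <= p90]   # trim above the 90th pct
trimmed_avg = sum(trimmed) / len(trimmed)

print(f"median:        {median:.1f}s")
print(f"90th pct:      {p90:.1f}s")
print(f"untrimmed avg: {untrimmed_avg:.2f}s")     # dragged up by outliers
print(f"trimmed avg:   {trimmed_avg:.2f}s")
```

With these invented numbers, the untrimmed average sits well above the median because of the outliers, and the gap between the two averages is the tell-tale sign of an outlier-heavy measurement population.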

It is also critical to examine the application’s response times and success rates throughout defined business cycles. A single response time or success rate value eliminates

  • variations by time of day
  • variations by day of week
  • variations by month
  • variations caused by advertising and marketing

An average is just an average. If at peak business hours, response times are 5.0 seconds slower than the average, then the average is meaningless, as business is being lost to poor performance which has been lost in the focus on the single metric.
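A small sketch makes this concrete: grouping measurements by hour of day exposes the peak-hour degradation that one overall average hides. The hourly values below are invented for illustration:

```python
from collections import defaultdict

# Invented (hour, response time in seconds) samples across one day.
samples = [
    ("02:00", 1.8), ("02:00", 2.0),   # overnight lull
    ("10:00", 6.5), ("10:00", 7.1),   # peak business hours
    ("14:00", 6.8), ("14:00", 7.3),
    ("22:00", 2.2), ("22:00", 2.1),
]

overall_avg = sum(v for _, v in samples) / len(samples)

by_hour = defaultdict(list)
for hour, value in samples:
    by_hour[hour].append(value)

for hour in sorted(by_hour):
    hourly_avg = sum(by_hour[hour]) / len(by_hour[hour])
    print(f"{hour}: {hourly_avg:.1f}s (overall average {overall_avg:.2f}s)")
```

Here the overall average lands between the overnight lull and the peak, describing neither; the peak-hour visitors, the ones spending money, see response times several seconds worse than the single metric suggests.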

All of these items have also fallen prey to their own curse of the single metric. All of the items discussed above aggregate the response time of the business process into a single metric. The process of purchasing items online is broken down into discrete steps, and different parts of this process likely take longer than others. And one step beyond the discrete steps are the objects and data that appear to the customer during these steps.

It is critical to isolate the performance for each step of the process to find the bottlenecks to performance. Then the components in those steps that cause the greatest response time or success rate degradation must be identified and targeted for performance improvement initiatives. If there are one or two poorly performing steps in a business process, focusing performance improvement efforts on these is critical, otherwise precious resources are being wasted in trying to fix parts of the application that are working well.
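The per-step isolation described above can be sketched in a few lines: break the aggregate business-process time into its steps, then rank each step's share of the total. The step names and timings are invented for illustration:

```python
# Invented per-step response times (seconds) for a purchase process.
steps = {
    "home page": 1.2,
    "search results": 1.8,
    "product page": 2.1,
    "checkout": 7.4,
}

total = sum(steps.values())
bottleneck = max(steps, key=steps.get)   # step with the worst response time

for name, seconds in steps.items():
    share = seconds / total * 100
    print(f"{name:15s} {seconds:4.1f}s ({share:4.1f}% of {total:.1f}s total)")
print(f"bottleneck: {bottleneck}")
```

In this invented example a single step accounts for more than half the total, so that is where a performance improvement initiative should spend its resources first.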

In summary, a single metric provides a false sense of confidence, the sense that the application can be counted on to deliver response times and success rates that are nearly the same as those simple, single metrics.

The average provides a middle ground, a line that marks the approximate mid-point of the measurement population. There are measurements above and below this average, and you have to plan around the peaks and valleys, not the open plains. It is critical never to fall victim to the attractive charms that come with the curse of the single metric.

Google Chrome: One thing we do know… (HTTP Pipelining)

 

All: If you got here via a search, realize this is an old post (2008) and that Chrome now supports HTTP Pipelining and SPDY.  Thanks, smp.

As a Web performance consultant, I view the release of Google Chrome with slightly different eyes than many. And one of the items that I look for is how the browser will affect performance, especially perceived performance on the end-user desktop.

One thing I have been able to determine is that the use of WebKit will effectively rule out (to the best of my knowledge) the availability of HTTP Pipelining in the browser.

HTTP Pipelining is the ability, defined in RFC 2616, to send multiple HTTP requests over an open TCP connection without waiting for each response, and then handle the responses as they arrive, using the features built into the HTTP/1.1 specification.

I had an Apple employee in a class I taught a few months back confirm that Safari (which is built on WebKit) cannot use HTTP Pipelining, for reasons that are known only to the OS and TCP stack developers at Apple.

Now, if the team at Google has found a way to circumvent this problem, I will be impressed.

Web Performance, Part VIII: How do you define fast?

In the realm of Web performance measurement and monitoring, one of the eternal and ever-present questions remains "What is fast?". The simple fact is that there is no single answer for this question, as it isn't a question with one quantitative answer that encompasses all the varied scenarios that are presented to the Web performance professional.

The answer that the people who ask the “What is fast?” question most often hear is “It depends”. And in most cases, it depends on the results of three distinct areas of analysis.

  1. Baselining
  2. Competitive Analysis
  3. Comparative Analysis

Baselining

Baselining is the process of examining Web performance results over a period of time to determine the inherent patterns that exist in the measurement data. It is critical that this process occur over a minimum period of 14 days, as there are a number of key patterns that will only appear within a period at least that long.

Baselining also provides some idea of what normal performance of a Web site or Web business process is. While this will provide some insight into what can be expected from the site, in isolation it provides only a tiny glimpse into the complexity of how fast a Web site should be.

Baselining can identify the slow pages in a business process, or identify objects that may be causing noticeable performance degradation, but its inherent isolation from the rest of the world is its biggest failing. Companies that rely only on the performance data from their own sites to provide the context of what is fast are left with a very narrow view of the real world.
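A minimal baselining sketch shows why a 14-day minimum matters: two full weeks are the shortest window in which a weekly pattern repeats and can be confirmed. The daily median values below are invented for illustration:

```python
# Invented daily median response times (seconds) over 14 days,
# with a weekly pattern: slower weekdays, faster weekends.
medians = [
    3.1, 3.2, 3.3, 3.2, 3.4, 2.1, 2.0,   # week 1: Mon..Sun
    3.2, 3.1, 3.4, 3.3, 3.5, 2.2, 2.1,   # week 2: Mon..Sun
]
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

for i, day in enumerate(days):
    pair = [medians[i], medians[i + 7]]          # same weekday, both weeks
    baseline = sum(pair) / len(pair)
    print(f"{day}: baseline {baseline:.2f}s")
```

With only one week of data, the Saturday dip could be noise; seeing it repeat in the second week is what turns an observation into a baseline pattern.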

Competitive Analysis

All companies have competition. There is always a firm or organization whose sole purpose is to carve a niche out of your base of customers. It flows both ways, as your firm is trying to do exactly the same thing to other firms.

When you consider the performance of your online presence, which is likely accounting for a large (and growing) component of your revenue, why would you leave the effects of poor Web site performance out of your competitive analysis? And how do you know how your site is faring against the other firms you are competing against on a daily basis?

Competitive analysis has been a key component of the Web performance measurement field since it appeared in the mid-1990s. Firms want to understand how they are doing against other firms in the same competitive space. They need to know if their Web site is at a quantitative advantage or disadvantage with these other firms.

Web sites are almost always different in their presentation and design, but they all serve the same purpose: To convert visitors to buyers. Measuring this process in a structured way allows companies to cut through the differences that exist in design and presentation and cut directly to the heart of the matter: Show me the money.
Competitive measurements allow you to determine where your firm is strong, where it is weak, and how it should prioritize its efforts to make it a better site that more effectively serves the needs of the customers, and the needs of the business.

Comparative Analysis

Most astute readers will be wondering how comparative analysis differs from competitive analysis. The differences are, in fact, fundamental to the way they are used. Where competitive analysis provides insight into the unique business challenges faced by a group of firms serving the needs of similar customers, comparative analysis forces your organization to look at performance more broadly.

Your customers and visitors do not just visit your site. I know this may come as a surprise, but it’s true. As a result, they carry with them very clear ideas of how fast a fast site is. And while your organization may have overcome many challenges to become the performance leader in your sector, you can only say that you understand the true meaning of performance once you have stepped outside your comfort zone and compared yourself to the true leaders in performance online.

On a daily basis, your customers compare your search functionality to firms that do nothing but deliver search results to millions of people each day. They compare how long it takes to authenticate and reach a personalized landing page on your site to the experiences they have at their bank and their favourite retailers. They compare the speed with which specific product pages load.

They may not do this consciously. But these consumers carry with them an expectation of performance, and they know when your site is or is not delivering it.

So, how do you define fast? Fast is what you make it. As a firm with a Web site serving the needs of customers or visitors, you have to accept that others out there have already solved many of the problems you may be facing. Broaden your perspective and put your site under the harsh light of these three spotlights, and your organization will be on its way to evolving its Web performance perspective.

Web Performance Concepts Series – Revisited

Two years ago I created a series of five blog articles, aimed at both business and technical readers, with the goal of explaining the basic statistical concepts and methods I use when analyzing Web performance data in my role as a Web performance consultant.

Most of these ideas were core to my thinking when I developed GrabPERF in 2005-2006, as I determined that it was vital that people not only receive Web performance measurement data for their site, but that they receive it in a way that informs and shapes the business and technical decisions they make on a daily basis.

While I come from a strong technical background, it is critical to be able to present the data that I work with in a manner that can be useful to all components of an organization, from the IT and technology leaders who shape the infrastructure and design of a site, to the marketing and business leaders who set out the goals for the organization and interact with customers, vendors and investors.

Providing data that helps negotiate the almost religious dichotomy that divides most organizations is crucial to providing a comprehensive Web performance solution to any organization.

These articles form the core of an ongoing series of discussions focused on the pitfalls of Web performance analysis, and on how to learn from and avoid the errors others have already made.

The series went over like a lead balloon, which left me puzzled. While the information in the articles was technical, focused on the role that simple statistics play in shaping Web performance technology and business decisions inside an organization, it formed the core of what I saw as an ongoing discussion that organizations need to have to ensure they move in a single direction, with a single purpose.

I have decided to reintroduce this series, dredging it from the forgotten archives of this blog, to remind business and IT teams of the importance of the Web performance data they use every day. It also serves as a guide to interpreting the numbers produced by the many measurement methodologies companies use: a map for extracting the most critical information from a raging sea of data.

The five articles are:

  1. Web Performance, Part I: Fundamentals
  2. Web Performance, Part II: What are you calling average?
  3. Web Performance, Part III: Moving Beyond Average
  4. Web Performance, Part IV: Finding The Frequency
  5. Web Performance, Part V: Baseline Your Data

I look forward to your comments and questions on these topics.

Copyright © 2024 Performance Zen
