Month: September 2008

Metrics in Conversational and Community Marketing

There is clear dissatisfaction with the current state of marketing among the social media mavens.

So what can be done? Jeff Jarvis points out that the problem lies with measurement. I agree, as there is only value in a system where all of the people involved agree on what the metric of record will be, and how it can be validly captured.

Currently, CPM is the agreed-upon metric. In a feed-based online world, how does a CPM model work? And, most importantly, why would I continue to place your ads on my site if all you’re doing is advertising to people based on the words on the page, rather than who is looking at the page and how often that page is looked at?

In effect, advertisers should be the ones trying to figure out how to get into the community, get into the conversation. As an advertiser, don’t you want to be where the action is? But how do you find an engaged audience in an online world that makes a sand castle on the beach in a hurricane look stable?

The challenge for advertisers is to be able to find the active communities and conversations effectively. The challenge for content creators and communities is to understand the value of their conversations, the interactions that people who visit the site have with the content.

In effect, a social media advertising model turns the current model on its head. Site owners and community creators gain the benefit of being attractive to advertisers because of the community, not because of the content. And site owners who understand who visits their site, what content most engages them, and how they interact with the system will be able to reap the greatest rewards by selling their community as a marketable entity.

And Steven Hodson rounds out the week’s thinking on communities by throwing out the subversive idea that communities are not always free (as in ‘beer’, not as in ‘land of’). If a community has paid for the privilege of coming together to participate in communal events and discussions, can’t site owners use that to further control the cost of advertising on their site?

While reduced or absent marketing content is a draw of many for-pay communities, site owners can still offer advertisers access to the for-pay community at the cost of higher ad rates and smaller ads. The free community plays by a completely different set of rules, but there, too, some areas are of higher value than others.

In summary, the current model is broken. But there is no way to measure the value of a Twitter stream, a FriendFeed conversation, a Disqus thread, or a Digg rampage. And until there is, we are stuck with an ad model that is based on the words on the page, and not the community that created the words.

Blog Advertising: Fred Wilson has Thoughts on Targeted Feed-vertising

Fred Wilson adds his thoughts to the conversation about a more intelligent way to target blog and social media advertising. His idea plays right into what I discussed yesterday: that a new and successful advertising strategy can be dictated by content creators and bloggers by basing advertising rates on the level of interaction that an audience has with a post.
Where the model I proposed is one based on community and conversation, Fred sees an opportunity for firms that can effectively inject advertising and marketing directly into the conversation, not added on as an afterthought.
Today’s conversations take place in the streams of Twitter and FriendFeed, and are solidly founded on the ideas of community and conversation. They are spontaneous, unpredictable. Marketing into the stream requires a level of conversational intelligence that doesn’t exist in contextual advertising. It is not simply the words on the screen; it is how those words are being used.
For example, there is no sense trying to advertise a product on a page or in a conversation that is actively engaged in discussing the flaws and failings of that product. It makes an advertiser look cold, insensitive, and even ridiculous.
In his post, Fred presents examples of subtle, targeted advertising that appears in the streams of an existing conversation without redirecting or changing the conversation. As a VC, he recognizes the opportunity in this area.
Community and conversation focused marketing is potentially huge and likely very effective, if done in a way that does not drive people to filter their content to prevent such advertising. The advertisers will also have to adopt a clear code of behavior that prevents them from being seen as anything more than new-age spammers.
Why will it be more effective? It plays right to the marketer’s sweet spot: an engaged group, with a focused interest, creating a conversation in a shared community.
If that doesn’t set off the buzzword bingo alarms, nothing will.
It is, however, also true. And the interest in this new model of advertising is solely driven by one idea: attention. I have commented on the attention economy previously, and I stick to my guns that a post, a conversation, a community that holds a person’s attention in today’s world of media and information saturation is one that needs to be explored by marketers.
Rob Crumpler and the team at BuzzLogic announced their conversation ad service yesterday (September 18 2008). This is likely the first move in this exciting new area, and Fred and his team at Union Square recognize its potential.

Blog Advertising: Toward a Better Model

This week, I have been discussing the different approaches to blog analytics that can be used to determine which posts from a blog’s archive are most popular, and whether a blog is front-loaded or long-tailed. The thesis is that the words in a blog are not always the most important thing about it.

In a guest post this morning at ProBlogger, Skellie discusses how the value of social media visitors is different from, and inherently more complex than, the value of visitors generated by traditional methods, such as search and feed readers. Her eight points further support my idea that the old advertising models are not best suited to the new blogging world.

Stepping away from the existing advertising models that have been used since blogging popularity exploded in 2005 and 2006, it is clear that the new, interactive social web model requires an advertising approach that centers on community and conversation, rather than the older idea of context and aggregated readership.

The Current Model

Current blog advertising falls into two categories:

  1. Contextual Ads. This is the Google model, and is based on the ad network auctioning off keywords and phrases to advertisers for the privilege of seeing their ad links or images appear on pages that contain those words or phrases.
  2. Sponsored Ads. Once a blog is popular enough and can prove a well-developed audience, the blogger can offer to sell space on his blog to advertisers who wish to have their products, offerings or companies presented to the target audience.

In my opinion, these two approaches fail blog owners.

Contextual ads understand the content of the page, but do not understand the popularity of the page, or its relationship to the popularity of other pages in the archive. Contextual ads lack a sense of community, a sense of conversation. While the model has proven successful, it does not maximize the reach that a blog has with its own audience.

Sponsored ads understand the audience that the blog reaches, but do not account for posts that draw the readers’ attention for the longest time, both in terms of time spent reading and thinking about the post as well as over time in an historical sense. The sponsored ad model assumes that all posts get equal attention, or drive community and conversation to the same degree.

The New Model

In the new model, more effective use of visitor analytics is vital to shaping the type and value of the ads sold. Studying the visitor statistics of a blog will allow the owners to see whether the blog is, in general, front-loaded or long-tailed.

If the blog has a front-loaded audience, the most recent posts are of higher value and could be auctioned off at higher prices. In order for this to work, both the ad-hoster and the advertiser would have to agree on the value of the most recent posts using a proven and open statistical analysis methodology. In the case of front-loaded blogs, this methodology would have to demonstrate that there is a higher traffic volume for posts that are between 0-3 days old (setting a hypothetical boundary on front-loading).

For blogs that are long-tailed, those posts that continue to draw consistent traffic would be valued far more highly than those that fall back into the general ebb and flow of a blogger’s traffic. These posts have proven historically that they rank highly in search results and are visited often.
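
To make the idea concrete, here is a rough sketch of the kind of open analysis I have in mind. It assumes a hypothetical tracking table, page_views(post_id, post_date, view_date), and uses the 0-3 day boundary from above; the schema, the database name, and the 50% cutoff are all illustrative assumptions, not a proven methodology.

    import sqlite3

    # Hypothetical schema: page_views(post_id TEXT, post_date TEXT, view_date TEXT)
    # with dates stored as ISO-8601 strings.
    conn = sqlite3.connect("blog_stats.db")

    fresh_views, total_views = conn.execute(
        """
        SELECT
          SUM(CASE WHEN julianday(view_date) - julianday(post_date) <= 3
                   THEN 1 ELSE 0 END),
          COUNT(*)
        FROM page_views
        """
    ).fetchone()

    ratio = fresh_views / total_views if total_views else 0.0

    # If most views land on posts under 3 days old, call the blog front-loaded;
    # otherwise the long tail is doing the work. The 0.5 cutoff is arbitrary.
    label = "front-loaded" if ratio > 0.5 else "long-tailed"
    print(f"{fresh_views}/{total_views} views within 3 days -> {label}")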

In addition to the posts themselves, the comment stream has to be considered. Posts that generate an active conversation are far more valuable than those that don’t. Again, showing the value of the conversation relies on the ability to track the number of people in the conversation (through Disqus or some other commenting system).

This model can be further augmented by using a tool like Lookery that helps to clearly establish the demographics of the blog audience. Being able to pinpoint not only where on a blog to advertise, but also who the visitors viewing those pages are, provides a further selling point for this new model and helps build faith in the virtues of a blog that sells space using this new, more effectively targeted advertising pricing structure.

Now, I have separated front-loaded and long-tailed blogs as if they were distinct categories.

In reality, both patterns apply to nearly every blog: there are new posts that suddenly capture the imagination of an audience, and there are older posts that continue to provide specific information that draws a steady stream of traffic.

Summary

This is a very early stage idea, one that has no code or methodology to support it. However, I believe that the current contextual advertising model, one based solely on the content of the post, is not allowing the content creators and blog entities to take advantage of their most valuable resource – their own posts and the conversations that they create.

I also believe that blog owners are not taking advantage of their own best resource, Web analytics, to help determine the price for advertising on their site. Not all blog posts are created or read equally. Being able to show very clearly what drives the most eyeballs to your site is a selling point that can be used in a variable-price advertising model.

By providing tools to blog owners that intimately link the analytics they already gather and the advertising space they have to sell, a new advertising model can arise, one that is uniquely suited to the new Web. This advertising model will be founded in the concepts of conversation and community, providing more discretely targeted eyeballs to advertisers, and higher ad revenues to blog owners and content creators.

UPDATES

It appears that BuzzLogic has already started down this path. VentureBeat has commentary here.

Web Performance: Blogs, Third Party Apps, and Your Personal Brand

The idea that blogs generate a personal brand is as old as the “blogosphere”. It’s one of those topics that rages through the blog world every few months. Inexorably the discussion winds its way to the idea that a blog is linked exclusively to the creators of its content. This makes a blog, no matter what side of the discussion you fall on, the online representation of a personal brand that is as strong as a brand generated by an online business.

And just as corporate brands are affected by the performance of their Web sites, a personal brand can suffer just as much when something causes the performance of a blog Web site to degrade in the eyes of the visitors. For me, although my personal brand is not a large one, this happened yesterday when Disqus upgraded to multiple databases during the middle of the day, causing my site to slow to a crawl.

I will restrain my comments on mid-day maintenance for another time.

The focus of this post is the effect that site performance has on personal branding. In my case, the fact that my blog site slowed to a near standstill in the middle of the day likely left visitors with the impression that my blog about Web performance was not practicing what it preached.

For any personal brand, this is not a good thing.
In my case, I was able to draw on my experience to quickly identify and resolve the issue. Performance returned to normal when I temporarily disabled the Disqus plugin (it has since been reactivated). However, if I hadn’t been paying attention, this performance degradation could have continued, increasing the negative effect on my personal brand.

As on many blogs, Disqus is only one of the outside services I have embedded in my site design. Sites today rely on AdSense, Lookery, Google Analytics, Statcounter, Omniture, Lijit, and on goes the list. These services have become as omnipresent in blogs as the content itself. What needs to be remembered is that these add-ons are often overlooked as performance inhibitors.

Many of these services are built using the new models of the over-hyped and misunderstood Web 2.0. These services start small and, as Shel Israel discussed yesterday, need to focus on scalability in order to grow and be seen as successful, rather than cool but a bit flaky. As a result, these blog-centric services may affect performance to a far greater extent than the third-party apps used by well-established, commercial Web sites.

I am not claiming that any one of these services in and of itself causes any form of slowdown. Each has its own challenges with scaling, capacity, and success. It is the sheer number of services used by blog designers and authors that poses the greatest potential problem when attempting to debug performance slowdowns or outages. The question in these instances, in the heat of a particularly stressful moment, is always: is it my site or the third party?

The advice I give is that spoken by Michael Dell: You can’t manage what you can’t measure. Yesterday, I initiated monitoring of my personal Disqus community page, so I could understand how this service affected my continuing Web performance. I suggest that you do the same, but not just of this third-party. You need to understand how all of the third-party apps you use affect how your personal brand performance is perceived.
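
For those wondering what that monitoring looks like in practice, a minimal sketch follows. The URL and the five-minute interval are placeholders; the real point is simply to time the fetch of each third-party asset you embed, on a schedule, and keep the results.

    import time
    import urllib.request

    # Placeholder third-party asset; substitute the script and widget URLs
    # that your own site actually embeds.
    URL = "https://thirdparty.example.com/widget.js"

    def time_fetch(url, timeout=30):
        """Return (elapsed_seconds, http_status) for one fetch of url."""
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            resp.read()
            status = resp.status
        return time.monotonic() - start, status

    while True:
        try:
            elapsed, status = time_fetch(URL)
            print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {status} {elapsed:.3f}s")
        except Exception as exc:  # a slow or failing third party shows up here
            print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} ERROR {exc}")
        time.sleep(300)  # one sample every five minutes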

Why is this important? In the mind of the visitor, the performance problem is always with your site. As with a corporate site that sees a sudden rise in response times or decrease in availability, it does not matter to the visitor what the underlying cause of the issue is. All they see is that your site, your brand (personal or corporate), is not as strong or reliable as they had been led to believe.

The lesson that I learned yesterday, one that I have taught to so many companies but not heeded myself, is that monitoring the performance of all aspects of your site is critical. And while you as the blog designer or writer might not directly control the third-party content you embed in your site, you must consider how it affects your personal brand when something goes wrong.

You can then make an informed decision on whether the benefit of any one third-party app is outweighed by the negative effect it has on your site performance and, by extension, your personal brand.

Blog Statistics Analysis: Page Views by Day of Week, or When to Post

Since I started self-hosting this blog again on August 6 2008, I have been trying to find more ways to pull traffic toward the content that I put up. Like all bloggers, I feel that I have important things to say (at least in the area of Web performance), and ideas that should be read by as many people as possible.

As well, I have realized that if I invest some time and effort into this blog, it can be a small revenue source that could get me that much closer to my dream of a MacBook Pro.

The Analysis

In a post yesterday morning, Darren Rowse had some advice on when the best time to release new posts is. Using his ideas as the framework, I pulled the data out of my own tracking database and came up with the chart below. This shows the page view data between September 1 2007 and September 15 2008 based on the day of the week visitors came to the site.

Blog Page Views by Day of Week
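
The aggregation behind a chart like this is straightforward. A minimal sketch, again assuming a hypothetical page_views table with ISO-8601 view dates; only the table layout is invented, and the date range is the one used for the chart.

    import sqlite3

    conn = sqlite3.connect("blog_stats.db")

    # In SQLite, strftime('%w', ...) yields 0 (Sunday) through 6 (Saturday).
    rows = conn.execute(
        """
        SELECT strftime('%w', view_date) AS dow, COUNT(*) AS views
        FROM page_views
        WHERE view_date BETWEEN '2007-09-01' AND '2008-09-15'
        GROUP BY dow
        """
    ).fetchall()

    days = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
    for dow, views in sorted(rows):
        print(f"{days[int(dow)]}: {views}")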

Using this data and the general framework that Darren subscribes to, I should be releasing my best and newest thoughts early in the week, on Monday and Tuesday (GMT).
After Wednesday, I should release only less in-depth articles, with a focus on commentary on news and events. And I must learn to breathe, as I suffer from an ailment all too common in bipolars: a lack of patience.

A new post doesn’t immediately find its target audience unless you have hundreds or thousands (Tens? Ones?) of readers who are influential. If you are lucky in this regard, then these folks will leave useful comments, and through their own attention, help gently show people that a new post is something they should devote their valuable attention towards.

It takes a while for any post to percolate through the intertubes. So patience you must have.

Front-loaded v Long-tailed

Unless, of course, your traffic model is completely different from that of a popular blogger.
The one issue that I had with Darren’s guidance is that it applies only to blogs that are front-loaded. A front-loaded blog is one that is incredibly popular, or has a devoted, active audience who help push page views toward the most recent 3-5 posts. Once the wave has crested, or the blogger has posted something new, the volume of traffic to older posts falls off exponentially, except in the few cases of profound or controversial topics.

When I analyzed my own traffic, I found that most of my traffic volume was aimed at posts from 2005 and 2006. In fact, more recent posts are nowhere near as popular as these older posts. In contrast to the front-loaded blog, mine is long-tailed.

There are a number of influential items in my blog which have proven staying power, and which draw people from around the world. They have had deep penetration into search engines, and are relevant to some aspect of peoples’ lives that keeps pulling them back.

Summary

I would highly recommend analyzing your traffic to see if it is front-loaded or long-tailed. I know that I wish this blog was more front-loaded, with an active community of readers and commentators. However, I am also happy to see that I have created a few sparks of content that keep people returning again and again. If your blog is long-tailed, then when you post becomes far less relevant than ensuring the freshness and validity of those few popular posts. Ensure that these are maintained and current so that they remain relevant to as many people as possible.

Web Performance: A Review of Steve Souders' High Performance Web Sites

It’s not often as a Web performance consultant and analyst that I find a book that is useful to so many clients. It’s much more rare to discover a book that can help most Web sites improve their response times and consistency in fewer than 140 pages.

Steve Souders’ High Performance Web Sites (O’Reilly, 2007; Companion Site) captures the essence of one side of the Web performance problem succinctly and efficiently, delivering a strong message to a group he classifies as front-end engineers. It is written in a way that can be understood by marketing, line-of-business, and technical teams, and in a manner designed to provoke discussions within an organization, with the ultimate goal of improving Web performance.
Once these discussions have started, there may be some shock within these very organizations: not only at the ease with which these rules can be implemented, but at the realization that the fourteen rules in this book will only take you so far.

The 14 Rules

Web performance, in Souders’ world, can be greatly improved by applying his fourteen Web performance rules. For the record, the rules are:

Rule 1 – Make Fewer HTTP Requests
Rule 2 – Use a Content Delivery Network
Rule 3 – Add an Expires Header
Rule 4 – Gzip Components
Rule 5 – Put Stylesheets at the Top
Rule 6 – Put Scripts at the Bottom
Rule 7 – Avoid CSS Expressions
Rule 8 – Make JavaScript and CSS External
Rule 9 – Reduce DNS Lookups
Rule 10 – Minify JavaScript
Rule 11 – Avoid Redirects
Rule 12 – Remove Duplicate Scripts
Rule 13 – Configure ETags
Rule 14 – Make AJAX Cacheable

From the Companion Site [here]

These rules seem simple enough. Most of them are easy to understand and, even in an increasingly complex technical world, easy to implement. The most fascinating thing about the lessons in this book, for the people who think about these things every day, is that they are pieces of basic knowledge, tribal wisdom, that have been passed down for as long as the Web has existed.
Conceptually, the rules can be broken down to:

  • Ask for fewer things
  • Move stuff closer
  • Make things smaller
  • Make things less confusing

These four things are simple enough to understand, as they emphasize simplicity over complexity.
For Web site designers, these fourteen rules are critical to understanding how to drive better performance not only in existing Web sites, but in all of the sites developed in the future. They provide a vocabulary to those who are lost when discussions of Web performance occur. The fourteen rules show that Web performance can be improved, and that something can be done to make things better.
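
Several of the rules can even be spot-checked from the response headers alone. Here is a small sketch that audits one URL against Rules 3, 4, and 13; the target URL is a placeholder, and this is a quick homemade check, not a substitute for proper tooling.

    import urllib.request

    def audit(url):
        """Print a quick header audit of url against Rules 3, 4, and 13."""
        req = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
        with urllib.request.urlopen(req, timeout=30) as resp:
            headers = {k.lower(): v for k, v in resp.getheaders()}

        # Rule 3 - Add an Expires Header (a Cache-Control max-age also counts)
        print("Expires/Cache-Control:",
              headers.get("expires") or headers.get("cache-control") or "MISSING")
        # Rule 4 - Gzip Components
        print("Content-Encoding:", headers.get("content-encoding", "not compressed"))
        # Rule 13 - Configure ETags
        print("ETag:", headers.get("etag", "none"))

    audit("https://example.com/")  # placeholder URL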

Beyond the 14 Steps

There is, however, a deeper, darker world beneath the fourteen rules. A world where complexity and interrelated components make change difficult to accomplish.
In a simple world, the fourteen rules will make a Web site faster. There is no doubt about that. They advocate the reduction of object size (for text objects), the placement of content closer to the people requesting it (CDNs), and the optimization of code to accelerate the parsing and display of Web content in the browser.
Deep inside a Web site lives the presentation and application code, the guts that keep a site running. These layers, down below the waterline, are responsible for the heavy lifting: the personalization of a bank account display, the retrieval of semantic search results, and the processing of complex, user-defined transactions. The data bounced around inside a Web application flows through a myriad of network devices (firewalls, routers, switches, application proxies, and so on) that can be as complex, if not more so, than the network path involved in delivering the content to the client.
It is fair to say that a modern Web site is the proverbial duck in a strong current.
The fourteen rules are lost down here beneath the Web layer. In these murky depths, far from the flash and glamor, parsing functions that are written poorly, database tables without indices, and internal networks that are poorly designed can all wreak havoc on a site that has taken all fourteen rules to heart.
When content that is not directly controlled and managed by the Web site is added into this boiling stew, another layer of complexity and performance challenge appears. Third parties, CDNs, advertisers, and helper applications all come from external sources that are relied on not only to have taken the fourteen rules to heart, but also to have considered how their data is created, presented, and delivered to the visitors of the Web site that appears to contain it.

Remember the Complexity

High Performance Web Sites is a volume (a pamphlet, really) that delivers a simple message: there is something that can be done to improve the performance of a Web site. Souders’ fourteen rules capture the items that can be changed quickly, and at low cost.
However, if you ask Steve Souders if this is all you need to do to have a fast, efficient, and reliable Web site, he should say no. The fourteen rules are an excellent start, as they handle a great deal of the visible disease that infects so many Web sites.
However, like the triathlete with an undiagnosed brain tumor, there is a lot more under the surface that needs to be addressed in order to deliver Web performance improvements that can be seen by all, and support rapid, scalable growth.
This is a book that must be read. Then deeper questions must be asked to ensure that the performance of the 90% of a Web site design not seen by visitors matches the 10% that is.

Blog Statistics Analysis – What do your visitors actually read?

Steven Hodson of WinExtra posted a screenshot of his personal WordPress stats for the last three years last night. I then posted my stats for a similar period of time, and Steven shot back with some questions about traffic, and the ebbs and flows of readers.

Being the stats nut that I am, I went and pulled the numbers from my own tracking database, and came up with this.

Blog Posts Read Each Month, By Year Posted

I made a conscious choice to analyze what year the posts being read were posted in. I wanted to understand when people read my content, and which content kept people coming back over and over again. The chart above speaks for itself: through most of the last year it’s clear that the most popular posts were made in 2005.

What is also interesting is the decreasing interest in 2007 posts as 2008 progressed. Posts from 2006 remained steady, as there are a number of posts in that year that amount to my self-help guides to Web compression, mod_gzip, mod_deflate, and Web caching for Web administrators.
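
For anyone who wants to reproduce this cut of the data, the analysis is little more than a group-by. A sketch, assuming a hypothetical CSV export with one row per page view, view date first and the original publication date of the post second:

    import csv
    from collections import Counter

    # Hypothetical export; each row looks like: 2008-09-01,2005-11-14
    views_by_post_year = Counter()
    with open("page_views.csv", newline="") as f:
        for view_date, post_date in csv.reader(f):
            views_by_post_year[post_date[:4]] += 1  # bucket by year posted

    for year, views in sorted(views_by_post_year.items()):
        print(f"posts from {year}: {views} views")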

This data is no surprise to me, as I posted my rants against Gutter Helmet and their installation process in 2005. Those posts are still near the top of the Google search results for the term “Gutter Helmet”. And improving the performance of a Web site is of great interest to many Apache server admins and Web site designers.

It is also clear that self-hosting my blog, and the posting renaissance it has provoked, has driven traffic back to my site.

So, what lessons did I learn from this data?

  1. Always remember the long tail. Every blogger wants to be relevant, on the edge, and showing that they understand current trends. The people who follow those trends are a small minority of the people who read blogs. Google and other search engines will expose them to your writings in the time of their choosing, and you may find that the three-year-old post gets as much traffic as the one posted three hours ago.
  2. Write often. I was in a blogging funk when my blog was at WordPress.com. As a geek, I believe that the lack of direct control over the look and feel of my content was the cause of this. In a self-hosted environment, I feel that I am truly the one in charge, and I can make this blog what I want.
  3. Be cautious of your fame. If your posts are front-loaded, i.e. if all your readers read posts from the month and year they are posted in, are you holding people’s long-term attention? What have you contributed to the ongoing needs of those who are outside the technical elite? What will drive them to keep coming to your site in the long run?

So, I post a challenge to other bloggers out there. My numbers are minuscule compared to the blogging elite, but I am curious to get a rough sense of how the long tail is treating you.

Web Performance: GrabPERF Performance Measurement System Needs YOU!

In 2004-2005, as a lark, I created my own Web performance measurement system, using Perl, PHP and MySQL. In August 2005, I managed to figure out how to include remote agents.
I dubbed it…GrabPERF. An odd name, but an amalgamation of “Grab” and “Performance” that made sense to my mind at the time. I also never thought that it would go beyond my house, a couple of basement servers, and a cable modem.
In the intervening three years, I have managed to:

  • scale the system to handle over 250 individual measurements
  • involve nine remote measurement locations
  • move the system to the Technorati datacenter
  • provide key operational measurement data to system visitors

Although the system lives in the Technorati datacenter and is owned by them, I provide the majority of the day-to-day maintenance on a volunteer basis, if only to try and keep my limited coding skills up.
But this post is not about me. It’s about GrabPERF.
Thanks to the help of a number of volunteers, I have measurement locations in the San Francisco Bay Area, Washington DC, Boston, Portugal, Germany and Argentina.
While this is a good spread, I am still looking to gather volunteers who can host a GrabPERF measurement location. The areas where GrabPERF has the most need are:

  • Asia-Pacific
  • South Asia (India, Pakistan, Bangladesh)
  • UK and Continental Europe
  • Central Europe, including the ancestral homeland of Polska

It would also be great to get a funky logo for the system, so if you are a graphic designer and want to create a cool GrabPERF logo, let me know.
The current measurement system requires Linux, cURL and a few add-on Perl modules. I am sure that it could work on other operating systems; I just haven’t had the opportunity to experiment.
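
The real agents are Perl wrapped around cURL, but the core of what a measurement location does is simple enough to sketch in a few lines of Python. The target list here is invented, and the production system records to the central database rather than printing.

    import time
    import urllib.request

    # Invented targets; the real system distributes its own measurement list.
    TARGETS = ["https://example.com/", "https://example.org/index.html"]

    def measure(url):
        """Fetch url once; return (first_byte_secs, total_secs, bytes, status)."""
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=60) as resp:
            first = time.monotonic() - start  # headers received ~ first byte
            body = resp.read()
            total = time.monotonic() - start
            return first, total, len(body), resp.status

    for url in TARGETS:
        try:
            first, total, size, status = measure(url)
            print(f"{url} status={status} first={first:.3f}s "
                  f"total={total:.3f}s bytes={size}")
        except Exception as exc:
            print(f"{url} FAILED: {exc}")
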
If you or your organization can help, please contact me using the GrabPERF contact form.

Web Performance: David Cancel Discusses Lookery Performance Strategies

David Cancel and I have had sort of a passing vague, same space and thought process, living in the same Metropolitan area kind of distant acquaintance for about a year.
About 2-3 months ago, he wrote a pair of articles discussing the efforts he has undertaken in order to try and offload some of the traffic to the servers for his new company, Lookery. While they are not current, in the sense that time moves in one direction for most technical people, and is compressed into the events of the past eight hours and the next 30 minutes, these articles provide an insight that should not be missed.
These two articles show how easily a growing company that is trying to improve performance and customer experience can achieve measurable results on a budget that consists of can-recycling money and green stamps.

Measuring your CDN

A service that relies on requesting and downloading a single file from a single location very quickly reveals the limitations that this model imposes as traffic begins to broaden and increase. Geographically diverse users begin to notice performance delays as they attempt to reach a single, geographically-specific server. And the hosting location, even one as large as Amazon S3, can begin to serve as the bottleneck to success.
David’s first article examines the solution path that Lookery chose: moving the tag that drives the entire opportunity for success in their business model onto a CDN. With a somewhat enigmatic title (Using Amazon S3 as a CDN?), he describes how the Lookery team measured the distributed performance of their JS tag using a free measurement service (not GrabPERF) and compared various CDNs against the origin configuration based on the Amazon S3 environment.
This deceptively simple test, which is perfect for the type of system that Lookery uses, provided the team with the data they needed to confirm that a CDN was a good choice, and that their chosen CDN delivered improved response times compared to their origin servers.
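
You don’t need a commercial service to run a first cut of this comparison yourself. A crude sketch of the idea, with both URLs invented; a real test would also need to run from several geographies, which is exactly what the measurement services provide.

    import statistics
    import time
    import urllib.request

    # Invented URLs for the same file served two ways.
    ORIGIN = "https://origin.example.com/tag.js"
    CDN = "https://cdn.example.com/tag.js"

    def median_fetch_time(url, samples=9):
        """Median wall-clock time to fully download url over several samples."""
        times = []
        for _ in range(samples):
            start = time.monotonic()
            with urllib.request.urlopen(url, timeout=30) as resp:
                resp.read()
            times.append(time.monotonic() - start)
        return statistics.median(times)

    print(f"origin median: {median_fetch_time(ORIGIN):.3f}s")
    print(f"cdn median:    {median_fetch_time(CDN):.3f}s")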

Check your Cacheability

Cacheability is a nasty word that my spell-checker hates. To define it simply, it refers to the ability of end-user browsers and network-level caching proxies to store and re-use downloaded content based on clear and explicit caching rules delivered in the server response header.
The second article in David’s series describes how, using Mark Nottingham’s Cacheability Engine, the Lookery team was able to examine the way that the CDNs and the origin site informed the visitor’s browser of the cacheability of the JS file being downloaded.
Cacheability doesn’t seem that important until you remember that most small firms are very conscious of their bandwidth outlay. These small startups are very aware when their bandwidth usage reaches the 250 GB/month level (Lookery’s usage at the time the posts were written). Any method that can improve end-user performance while still delivering the service users expect is a welcome addition, especially when it is low-cost to free.
In the post, David notes that there appears to be no way in their chosen CDN to modify the Cacheability settings, an issue which appears to have been remedied since the article went up [See current server response headers for the Lookery tag here].
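
If you want a quick read on what freshness lifetime your own responses actually grant, the headers tell the story. A small sketch, assuming standard Cache-Control and Expires semantics; the URL is a placeholder.

    import urllib.request
    from email.utils import parsedate_to_datetime

    def cache_lifetime(url):
        """Best-effort freshness lifetime, in seconds, granted by the headers."""
        with urllib.request.urlopen(url, timeout=30) as resp:
            headers = {k.lower(): v for k, v in resp.getheaders()}

        # Cache-Control max-age takes precedence over Expires.
        for directive in headers.get("cache-control", "").split(","):
            directive = directive.strip()
            if directive.startswith("max-age="):
                return int(directive.split("=", 1)[1])

        if "expires" in headers and "date" in headers:
            expires = parsedate_to_datetime(headers["expires"])
            date = parsedate_to_datetime(headers["date"])
            return int((expires - date).total_seconds())

        return 0  # no explicit freshness: caches must revalidate

    print(cache_lifetime("https://example.com/tag.js"), "seconds of freshness")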

Conclusion

Startups spend a lot of time imagining what success looks like. And when it comes, sometimes they aren’t ready for it, especially when it comes to the ability to handle increasing loads with their often centralized, single-location architectures.
David Cancel, in these two articles, shows how a little early planning, some clear goals, and targeted performance measurement can provide an organization with the information to get them through their initial growth spurt in style.

Web Performance: Your Teenage Web site

It’s critical to your business. It affects revenue. It’s how people who can’t come to you perceive you.
It’s your Web site.

It’s complex. Abstract. Lots of conflicting ideas and forces are involved. Everyone says they know the best thing for it. Finger-pointing. Door slamming. Screaming.

Am I describing your Web site and the team that supports it? Or your teenager?
If you think of your Web site as a teenager, you begin to realize the problems that you’re facing. Like a teenager, it has grown physically and mentally, and, as a result, thinks it’s an experienced adult, ready to take on the world. So let’s treat your site as a teenager, and think back to how we, as teenagers (yeah, I’m old), saw the world.

MOM! This doesn’t fit anymore!

Your Web site has grown as all of your marketing and customer service programs bear fruit. Traffic is increasing. Revenue is up. Everyone is smiling.

Then you wake up and realize that your Web site is too small for your business. This could mean that the infrastructure is overloaded, the network is tapped out, your connectivity is maxed, and your sysadmins, designers, and network teams are spending most of their day just firefighting.

Now, how can you grow a successful business, or be the hip kid in school, when your clothes don’t fit anymore?

But, you can’t buy an entire wardrobe every six months, so plan, consider your goals and destinations, and shop smart.

DAD! Everyone has one! I need to have one to be cool!

Shiny.

It’s a word that has been around for a long time, and was revived (with new meaning) by Firefly. It means reflective, bright, and new. It’s what attracts people to gold, mirrors, and highly polished vintage cars. In the context of Web sites, it’s the eye-candy you encounter in your browsing that makes you go “Our site needs that”.
Now step back and ask yourself what purpose this new eye-candy will serve.
And this is where Web designers and marketing people laugh, because it’s all about being new and improved.

But can you be new and improved, when your site is old and broken?

Get your Web performance in order with what you have, then add the stuff that makes your site pop.

But those aren’t the cool kids. I don’t hang with them.

Everyone is attracted to the gleam of the cool new Web sites out there that offer to do the same old thing as your site. The promise of new approaches to old problems, lower cost, and greater efficiencies in our daily lives is what prompts many of us to switch.

As a parent, we may scoff, realizing that maybe the cool kids never amounted to much outside of High School. But, sometimes you have to step back and wonder what makes a cool kid cool.

You have to step back and say: why are they attracting so much attention while we’re seen as the old guard? What can we learn from the cool kids? Is your way the very best way? And says who?

And once you ask these questions, maybe you agree that some of what the cool kids do is, in fact, cool.

Can I borrow the car?

Trust is a powerful thing to someone, or to a group. Your instinctive response depends on who you are, and what your experiences with others have been like in the past.

Trust is something often found lacking when it comes to a Web site. Not between your organization and your customers, but between the various factions within your organization who are trying to interfere with, create, revamp, or manage the site.

Not everyone has the same goals. But sometimes asking a few questions of other people and listening to their reasons for doing something will lead to a discussion that will improve the Web site in a way that improves the business in the long run.
Sometimes asking why a teenager wants to borrow the car will help you see things from their perspective for a little while. You may not agree, but at least now it’s not a yes/no answer.

YOU: How was school today? – THEM: Ok.

Within growing organizations, open and clear communication tends to gradually shrivel and degenerate. Communications become more formal, with what is not said being as important as what is. Trying to find out what another department is doing becomes a lot like determining the state of the Soviet Union’s leadership based on who attends parades in Red Square.

Abstract communication is one of the things that separates humans from a large portion of the rest of the animal kingdom. There is nothing more abstract than a Web site, where physical devices and programming code produce an output that can only be seen and heard.

The need for communication is critical in order to understand what is happening in another department. And sometimes that means pushing harder, making the other person or team answer hard questions that they think you’re not interested in, or that they think are none of your business.

If you are in the same company, it’s everyone’s business. So push for an answer, because working to create an abstract deliverable that determines the success or failure of the entire firm can’t be based on a grunt and a nod.

Summary

There are no easy answers to Web performance. But if you consider your Web site and your teams as a teenager, you will be able to see that the problems that we all deal with in our daily interactions with teens crop up over and over when dealing with Web design, content, infrastructure, networks and performance.

Managing all the components of a Web site and getting the best performance out of it often requires you to have the patience of Job. But it is also good to carry a small pinch of faith in these same teams; faith that everyone, whether they say it or not, wants to have the best Web site possible.
