Real-time data will alter how brands’ sites work | Opinion | New Media Age

 

JONATHAN BRIGGS

Real-time data will alter how brands’ sites work

24 February 2011 | By Jonathan Briggs

The ‘web of now’ is probably as significant as the mobile and social web, and is highly linked to both

We’ve come a long way since the web of static documents. Today’s internet is a complex place of streaming video, cloud-hosted services, social activity, retail experiences and mobile apps, all buzzing with viewers or customers and all generating vast quantities of data and metadata. From status updates to dynamic price changes, the web has become live.

This ‘web of now’ is probably as significant as the mobile and social web, and is highly linked to both, but has received less attention. That will soon change, and it will affect how we work with clients. There’ll be more night shifts as 24/7 customer service becomes more common, and effective brands will join news organisations in reacting to world events.

Pieces of the real-time web are emerging everywhere from Google Instant to Facebook monitoring, from Twitter-powered news to live analytics and dashboards. Keeping track of the pulse of transactional data and consumer comment will be an essential tool in every business.

Digital agencies will need to design data-collection systems to monitor the events inside the sites, apps and campaigns they create, as well as the rest of the internet where it affects the client. Luckily, the data from all of these can increasingly be captured and mashed up using APIs and low-cost services. Taking customer, sales and stock data from inside the business and combining it with web or mobile activity will move from being the preserve of the large enterprise to the SME. The challenge is deciding what to monitor and what to ignore, and this will vary from business to business.
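As a toy illustration of that kind of mash-up, a few lines of Python can merge a hypothetical in-house sales feed with a hypothetical web-analytics feed into one per-product summary (all field names and figures here are invented, not any particular vendor's API):

```python
from collections import defaultdict

def merge_streams(sales, web_events):
    """Combine in-house sales records with web activity, keyed by product.

    Both inputs are lists of dicts -- stand-ins for what an agency might
    pull from a commerce API and an analytics API respectively.
    """
    summary = defaultdict(lambda: {"units_sold": 0, "page_views": 0})
    for sale in sales:
        summary[sale["product"]]["units_sold"] += sale["units"]
    for event in web_events:
        if event["type"] == "page_view":
            summary[event["product"]]["page_views"] += 1
    return dict(summary)

# Hypothetical feeds: a real dashboard would poll these continuously.
sales = [{"product": "kettle", "units": 3}, {"product": "toaster", "units": 1}]
web = [{"product": "kettle", "type": "page_view"},
       {"product": "kettle", "type": "page_view"},
       {"product": "toaster", "type": "page_view"}]

print(merge_streams(sales, web))
```

The point is only that the joining logic is now trivial once the feeds exist; the hard part, as the article says, is deciding which feeds to monitor at all.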

Visualisation of this real-time data will also be necessary, and configurable dashboards, such as Geckoboard, show what’s possible. Already some brands, such as Gatorade, have designed Mission Control-style rooms where teams watch and react to the flow of activity.

Both monitoring and visualisation present tremendous opportunities for digital agencies if we can persuade clients of the value that these insights will give them. We need to become more data-savvy and learn to provide deeper insights into what the data means. We’ve already seen the difference between agencies that simply install Google Analytics as part of a web build and those which offer their clients detailed interpretations and suggested actions.

The next step will be agents. These are computer systems that react to changes in data. At their simplest, they operate like Google Alerts but could be programmed to change a price or update an AdWords bid. At their most complex, they may become ‘intelligent’ rule-based systems that collaborate to reason about the world on our behalf.
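A rule-based agent of the simple kind described here can be sketched in a few lines of Python; the metrics, thresholds and actions below are invented for illustration, not a real bidding API:

```python
def price_agent(observed, rules):
    """Return the action for every rule whose condition the observation
    triggers. `observed` is a dict of live metrics; `rules` pairs a
    predicate with an action description."""
    return [action for condition, action in rules if condition(observed)]

# Illustrative rules: react to a competitor undercutting us, or to our
# ad slipping down the page.
rules = [
    (lambda m: m["competitor_price"] < m["our_price"],
     "lower price to match competitor"),
    (lambda m: m["ad_position"] > 3,
     "raise AdWords bid"),
]

actions = price_agent({"competitor_price": 9.99, "our_price": 12.50,
                       "ad_position": 5}, rules)
print(actions)
```

A production agent would of course act on live feeds and write changes back through an API, but the shape (condition, then action) is the same.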

Big companies have had agent-based systems for a few years but it’ll be the opportunity to take these mainstream using newly emerging tools that will be important for agencies and their clients.


    Article on 'web of now' and implications for companies and what this means for data use and insight.

    Personas Are Back In Style | Social Media Today

    Personas are making a comeback in strategic planning.

    As the social internet fragments audiences, understanding the coherent behaviors of a growing number of audience segments is becoming the critical first step in account planning.

    The modern use of personas, as far as my research has found, dates to the 1980s. Angus Jenkinson began to use them to develop the first CRM strategies as the CEO of a database marketing firm in the UK, and they were adopted by OgilvyOne as the centerpiece of its strategy development.

    He is generally credited with inventing the practice as part of customer research. Personas appeared shortly after “account planning” was established in the UK as the third leg of the ad agency stool: copywriter, art director, and, using marketing research to represent the needs of the customer, the account planner.

    The persona as a concept was integral to the work of Sigmund Freud and Carl Jung. But it wasn’t until OgilvyOne adopted personas as “Customer Prints” that they became a framework for organizing all customer understanding.

    Personas were especially useful for interactive marketing because they could allow for imaginative integration of different customer behaviors and tasks, acting like use cases.

    Since the 1990s, other types of customer definition have been layered onto market research. Behavioral analysis focused on end results, much as direct marketing had. All that mattered was which behaviors led to a customer buying. Whether that customer was a woman, or Hispanic, or living in a household with kids under age five didn’t matter as much.

    Once a general target is defined, marketers can focus closer and closer on the customers who are buying.  A/B testing of creative, re-targeting customers once they are identified as good prospects, and trying alternative forms of offer all evolve from the idea of defining what a company has to do to get a specific customer to buy.

    Interactive online engagement makes it cheap and easy to chase a single customer until they buy.

    Personas have fallen into disrepute since the turn of the century as marketing has become more data-driven. But they can still retain their original value: as proxies for real customers and their real wants, needs and behaviors.

    I think this is even more important now because many, many people inside business organizations haven’t been exposed directly to the customer or to the marketing strategies the organization uses to speak to them.  Personas are valuable for making clear who the customers are and why they’re interested in the product.

    This is what I call the “inside out” effect.  Organizations will find more and more employees having greater direct communications with customers, prospects and strangers.  These employees need a way to get on the same page.  Personas can be a useful central tool in creating a model of the audiences the employees will be speaking with.

    Sumeet Kanwar is a brilliant analytics leader at OMD in Chicago.  He began his career at Sears, working in product management.  One line of products he worked with was dishwashers.  He told me they found out there were precisely three different audiences that bought dishwashers, each with a unique motivation:

    People re-modeling their existing kitchen.  They had some time to think about the different models and features and often were in the market for a better dishwasher.

    Second were people moving into a new home.  Usually there was a dishwasher in place but it was either broken or inadequate.  They needed to buy a solid dishwasher and get a recommendation for someone to install it.

    Third were people whose dishwasher had blown up.  They were at Sears to buy a dishwasher right now, and get it installed as soon as possible.

    I don’t know if there was collateral demographic information about each of these audiences, for example whether the re-modelers were $75,000+ household income, whether the decision maker was female or male, and so on.  But it strikes me that these are very valid personas of a kind I would call behavioral personas.  Some college kids living in a frat house might have their dishwasher blow up, and my next door neighbors, an elderly retired couple, might as well.  Both fit the “disaster” persona, and they share a strong common decision-making process. One might buy a more expensive dishwasher than the other, but the key for the salesperson was to recognize the urgency and go into total-solution-provider mode: don’t worry, we’ll take care of you right away.
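    The behavioral-persona assignment in the dishwasher story can be sketched as a simple rule-based classifier; the field names and rules below are invented for illustration, not Sears’ actual segmentation logic:

```python
def assign_persona(customer):
    """Map observed shopping behavior to one of the three dishwasher
    personas from the story. Urgency is checked first because it
    dominates the decision process regardless of demographics."""
    if customer["needs_install_today"]:
        return "disaster"      # the unit just died; urgency drives the sale
    if customer["moving_house"]:
        return "new-home"      # replacing a broken or inadequate unit
    return "remodeler"         # time to browse models and compare features

print(assign_persona({"needs_install_today": True, "moving_house": False}))
# disaster
```

    Note the classifier never asks who the customer is, only what situation they are in, which is exactly what makes these behavioral rather than demographic personas.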

    Sears might want to emphasize somewhere in its advertising that “same day” delivery and installation was possible.

    Best Buy was renowned for its four personas: Buzz, Jill, Barry and Ray.  Buzz was the single male nerd with disposable income who had to have the latest media and game technology, and so on.  Jill bought the electronics in the living room or family room, and often bought the family camera.

    I’m thinking about the value of personas in developing social media strategies for organizations.  General Mills has a newsletter called “Dinner for Two,” and another for its “Box Tops for Education” program, in which parents’ organizations pool box tops to earn money and supplies for their kids’ school.

    These are two very distinct groups of people with strong collateral social media habits. A strategy to reach people cooking for two would be very different from one trying to engage parents and encourage them to work with Box Tops for Education in their school.

    I know some firms, like Organic, have reputations for blowing out personas to the nth degree: whole rooms decorated the way Buzz or Jill would live, and so on.

    The key to using them is not to be afraid to let the natural organization rule.  If it’s behavioral, so be it.  The customers have to be understood in their actual contexts.  It’s tempting (and fun) to get a glimpse of a customer segment and then just start making things up, painting a picture of them as Henry James might, full of rich details from people you recall who seem just like this.

    This is where the use of personas can fall apart: you have to do the hard digging through data analysis, interviews, observation, and research.  We have to let the audiences speak for themselves, and capture that, in order to create personas that can drive effective customer segmentation.

    Customer segmentation is the mandate for what Dakota Reese calls the “Contextual Web,” his term for Web 3.0. (Web 1.0 he calls the “Presentation Web” and Web 2.0 the “Transactional Web,” since these are the essential functions added at each evolution.)   Customers are coming to expect that companies will understand and respond to their needs in the context of their daily lives.

    Mobile and social communications speed this up and make it more complex.  The answer isn’t to simplify, it’s to clarify customer segments.

    As Einstein said, “Make things as simple as possible, but no simpler.”

     

    Them user stories sure are catching on...

    Quora on Designing an Organization That Designs Better Products | Co.Design

    Quora on Designing an Organization That Designs Better Products

    Rebekah Cox, employee number one at Quora, writes about what it means to be a designer at this new start-up.

    Quora is a community-sourced site with a collection of continually improving questions and answers.

    Quora’s first employee, Rebekah Cox, a product designer and manager at the company, and a former product design lead at Facebook, wanted to take her experience as a designer and as a manager to create a new type of design role at Quora.

    Here, she writes about how her previous work experience shaped that role at Quora.

    One thing you learn when you've worked in many different organizations (higher education, corporate, startup) and had many different roles (programmer, designer, design manager, interim director) is how little a single great designer can do on their own in a large organization. There are so many forces exerting pressure on a design that even truly remarkable designers are often sidelined and fall victim to political pressures, limited authority, executive whims, and eventually, reduced motivation.

    And yet, in all cases, the designer is ultimately responsible for the design. If something is awkward or doesn’t work, it’s the designer's fault.

    Witnessing the forces of pressure on either side of the equation as a designer is heartbreaking. On the one hand, when you are the actual designer, you're vulnerable and suspicious of your own output, but on the other, as the design manager, you have to answer to stakeholders and fulfill responsibilities that are beyond the realm of pure product design.

    I've designed the Quora product, and also how product design fits in the company.

    A lot of companies have tried to support designers by giving them "a seat at the table." What this usually means in practice, however, is that the designer is sitting at the table well after the important product decisions that influence the design have been made. This is usually where complicated and muddy designs are born. Someone wants to do X, while constraint Y can only afford compromise Z, and then it is all packaged up and given to a designer to implement. It's hard for a designer to argue against the forces that led to that compromise because it's already too late. This existing system, in my opinion, is fundamentally broken.

    Knowing this, and understanding the amazing opportunity of being the first employee of what I believe will become an important company (one that is trusted with helping everyone share their knowledge), I've spent a considerable amount of time not only designing the Quora product, but also designing how product design fits within the organization.

    To start that process, I asked myself several questions:

    - Where and how can a designer be most effective in creating a product and an experience that best meets strategic goals?
    - How can the designer be best suited to interact with engineering, product management, executives, etc.?
    - What are the strengths unique to designers? What are their weaknesses? How can both their strengths and weaknesses be leveraged to produce the best output?
    - How can I use that understanding to craft a simple, powerful role?

    And, most importantly:

    - How can I help designers avoid all the obstacles that they typically have to deal with and nurture them to achieve their best?

    To start, you really have to understand what design means, and in the context of a web product, it can mean a lot of things.

    For Quora, it means designing for whys (the product) and taking the most straightforward route possible for the hows (the interactions).

    Whys are questions like:

    - Why does a feature need to exist?
    - Why does a user want/need to take some action?
    - Why can't they use some other, similar feature?

    The hows are then driven by the answers to the whys—after all, why a user must enter a flow dictates how they progress through it.

    This has all sorts of ramifications on what it means to be a designer at Quora. It means that you must understand the craft of creating something that is not only aesthetically pleasing, but also functionally pleasing, and at its best, subconsciously pleasing.

    Some tools to accomplish these goals include:

    - Having a designer involved as early as possible and with as much authority and responsibility as possible
    - Giving designers some expectations normally reserved for product managers
    - Setting the expectation that designers should care deeply about the product
    - Fostering ownership of the rules and rationale for every UI component
    - Actively resisting the urge to build UI for UI's sake, and instead focusing singularly on the product's goals

    Designers at Quora are expected to care about everything related to the products they help create and they have the ability to do anything necessary to make each great. Supporting designers in that role is where organizational design is most important: designers need the authority and the tools to operate most effectively.

    This is not for everyone and, frankly, not for most designers. Designers who are trained and optimized for the hows, and those who have ignored the whys, are not great fits for this role. However, designers who are curious, passionate and talented enough to handle tremendous amounts of responsibility and authority are great fits for us. At many companies even directors don't have much ability to make something really great, but at Quora a designer can.

    Finally, Quora Product Designer isn't simply a title, it's an amazing role at an amazing company, with as much care baked into the organization as has been baked into the product itself. At Quora, we're working on creating not only the best possible product, but also the best possible environment for designers.

    [This post originally appeared on Quora; Top image: "Clearing the Way" by Kenny Louie]


    Rebekah Cox

    Rebekah previously worked at the arcade in the mall where she was quickly promoted to cashier. Many years later she was a Product Design Lead at Facebook, building and designi...

    Visit us at TFMA on stand E16 and win a Nintendo 3DS

    Interesting headline messaging

    From: smartFOCUS [mailto:marketing@smartfocusmarketing.com]
    Sent: 25 February 2011 11:37
    Subject: Visit us at TFMA on stand E16 and win a Nintendo 3DS


    Dear Phil,

    With TFM&A only a few days away, don't miss the opportunity to come and meet us on stand E16 and enter our free prize draw for the chance to win a fabulous new Nintendo 3DS.

    All you have to do for a chance to win is complete our form and place it in the special box on our stand.

    Don't miss the opportunity to also come and see us present at the following seminar:

    The 3 big issues: Manage & optimise marketing investment to improve business results

    When: Tuesday 1 March, 1:30-2:00pm

    Where: Data Marketing Theatre

    Speaker: Tom Ratkovich, President, smartFOCUS Astech

    Topic: Intelligent Marketing



    We look forward to seeing you there and hope we can help you turn your information into insight...!!

    About smartFOCUS

    smartFOCUS solutions deliver the most powerful analysis and customer insight, enabling communication across all marketing channels. More than 700 companies worldwide rely on smartFOCUS to inspire customers, improve results and transform marketing performance. Clients and partners include AAA, ABN AMRO, Centre Parcs, EasyJet, EMI, Equifax, Eurocamp, Harrods, Landal Green Park, lastminute.com, Manchester United, QVC, Rabobank, RCI-GVN, Société Générale and Sony Europe.

    Visit our website to find out more about smartFOCUS

     


     

    Cross-Visit Participation [Inside Omniture SiteCatalyst] | Omniture Industry Insights

    Cross-Visit Participation [Inside Omniture SiteCatalyst]

    Monday, 9 March 2009 @ 7:23, by Adam Greco

    In one of my earliest blog posts, I discussed how Conversion Variables (eVars) have different types of allocation.  For example, if someone comes to your site multiple times from a few different campaign tracking codes, you can choose to give credit for any Success Event to the first tracking code or the last tracking code using allocation.  The same is true for every eVar, but there may be times when you want to see all of the values passed to an eVar prior to the Success Event taking place.  In this post I will explain how to do this in SiteCatalyst through Cross-Visit Participation.

    Cross-Visit Participation
    So what is Cross-Visit Participation?  In Omniture-speak, it is a JavaScript plug-in that concatenates a series of values into an eVar for attribution purposes.  As alluded to above, the most common use of this plug-in is for what we call Campaign Stacking.  For example, let’s say that a visitor comes to your site from a CNN display ad with the tracking code “cnn_123”, the same visitor comes back the next day from a Google paid search keyword with the tracking code “ggl_456”, and then applies for a credit card.  In this case, if you were using the Campaign Conversion Variable (eVar) and the allocation was set to “Most Recent”, the credit card application success event would be attributed to the Google keyword.  However, that visitor may never have conducted a search on Google had he/she not seen the display advertisement on CNN, so we may inadvertently reduce the budget for CNN and increase it for Google, only to be surprised later that we are not getting as many credit card applications as before.  Sometimes a little (incorrect) data is worse than no data at all!

    Omniture is well aware of this attribution issue and is taking steps to improve the overall accuracy in the future.  In the meantime, the Omniture Consulting team has created the Cross-Visit Participation plug-in (sometimes referred to as Campaign Stacking) that lets you see the entire picture.  So in the preceding example, while the Campaign eVar would always have the latest tracking code value, the plug-in would pass the historical string to a custom eVar:

    By doing this, when the credit card application success event is fired, the Google tracking code will get credit in the Campaigns report, but in the eVar12 report the string “cnn_123:ggl_456” would get credit, so you, as the online marketer, could see what percentage of all credit card applications involved each combination of tracking codes.  This information can then be moved to Excel using the ExcelClient, where more in-depth analysis can take place.

    Variations of Cross-Visit Participation
    However, the value of this plug-in reaches far beyond the realm of campaigns.  There are many fun and creative ways you can use this plug-in such as:

    1. Tracking which site tools/calculators visitors used prior to success
    2. Tracking which products visitors viewed prior to success
    3. Tracking which videos visitors viewed prior to success
    4. Tracking which search terms visitors entered on your site prior to success

    The list goes on and on.  Any time you want to see a concatenated list of values in the order that they took place, you can consider using the Cross-Visit Participation plug-in.

    Important Things To Know About Cross-Visit Participation
    The following are some important things to know about Cross-Visit Participation:

    1. You need to work with Omniture Consulting to set up this plug-in
    2. You must specify the maximum number of values you want to concatenate and the time window during which values are considered relevant to conversion
    3. The plug-in will not store duplicate values
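    A minimal Python sketch of that stacking behavior (a re-implementation for illustration only; the real plug-in is JavaScript and must be set up with Omniture Consulting) could look like this:

```python
def stack_value(stacked, new_value, max_values=5, delimiter=":"):
    """Append a campaign code to the cross-visit stack, mimicking the
    plug-in's documented behavior: no duplicates, capped length."""
    values = stacked.split(delimiter) if stacked else []
    if new_value in values:           # rule 3: duplicates are not stored
        return stacked
    values.append(new_value)
    values = values[-max_values:]     # rule 2: keep only the most recent N
    return delimiter.join(values)

# Replay the article's example: CNN display ad, then a Google keyword,
# then the Google keyword again on the converting visit.
stack = ""
for code in ["cnn_123", "ggl_456", "ggl_456"]:
    stack = stack_value(stack, code)
print(stack)  # cnn_123:ggl_456
```

    The final string is exactly what the custom eVar would report at conversion time, preserving the CNN touch that “Most Recent” allocation throws away.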

    Real-World Example
    In this real-world example, I will build upon the preceding campaign example by using the Cross-Visit Participation plug-in to capture Marketing Channels.  Let’s imagine that your CMO comes to you and wants to know how often Greco Inc. is paying to drive people to its site from various Marketing Channels.  The CMO is primarily interested in seeing if paid acquisition converters are always coming to the site first from Paid Search and then Paid Display or vice versa.   Due to budget cuts, ad spending may have to be cut back and the CMO needs to justify which channels and channel combinations are important to continuing to drive applications.

    To answer this question, you would need to classify Campaign Tracking Codes by Marketing Channel and use Cross-Visit Participation to “stack” Marketing Channels (e.g. Paid Search, Display Ads, E-mail).  The classification of Campaign Tracking Codes by Marketing Channel should be straightforward (if you have read the SAINT Classifications post).  The second item needed is a concatenated string of “stacked” Marketing Channels so you can see the interplay between them prior to website conversion.  There are many ways to “stack” the channels, ranging from creating SAINT Classifications of stacked campaign tracking codes to capturing the channel using the Get Query Parameter plug-in (I suggest you work with Omniture Consulting to identify the best approach for your organization).

    Once you have both of these set up, you are ready for analysis, so let’s dig in.  Let’s say that Greco Inc. has a Campaign Channel Conversion report, which is the report of classified campaign tracking codes, set to “Most Recent” allocation as follows:

    As you can see in this report, only 16.49% of Completed Applications are coming from Paid Display, compared to 66.59% from Paid Search (in this case, let’s assume e-mails are inexpensive), so your first thought might be to cut the Paid Display budget.  However, if we look at the report of the “stacked” Marketing Channels described above, we can see the interactions visitors had with all Marketing Channels prior to completing an application:

    When we look at this report we notice something interesting.  Display Ads are often part of the mix that leads to success, but Paid Search is most often the last touch prior to conversion.  In fact, if we add up all of the times that Paid Display is part of the mix (or used individually), the percent impacted is 47.43%.  If you only looked at the first report (Campaign Channels) you would see a skewed picture, since the “Most Recent” allocation would attribute most of the success to Paid Search.  Thus, you have to ask yourself: how much of my success would still come through if Paid Display were not in the mix?  At 16.49% you might be willing to take the risk to save some money, but at 47.43% you may not be, and you might consider some more in-depth testing prior to making any rash marketing-spend decisions!
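    The “percent impacted” arithmetic can be sketched in a few lines of Python; the stacked-report numbers below are invented for illustration and do not reproduce the Greco Inc. figures:

```python
def participation_share(stacked_counts, channel):
    """Fraction of conversions whose channel stack includes `channel`.

    `stacked_counts` maps a stacked-channel string (delimited with ":")
    to its conversion count."""
    total = sum(stacked_counts.values())
    involved = sum(n for stack, n in stacked_counts.items()
                   if channel in stack.split(":"))
    return involved / total

# Invented report: 1,000 conversions split across channel combinations.
report = {"Paid Search": 500,
          "Display Ad:Paid Search": 300,
          "Display Ad": 100,
          "E-mail:Paid Search": 100}
print(participation_share(report, "Display Ad"))  # 0.4
```

    Here last-touch allocation would credit Display Ads with only 10% of conversions, while the stacked view shows it participating in 40%, which is the gap the article is warning about.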

    This is just one example of how Cross-Visit Participation can add more context to your analyses so that you can make the best marketing decisions possible.

    Have a question about anything related to Omniture SiteCatalyst?  Is there something on your website that you would like to report on, but don’t know how?  Do you have any tips or best practices you want to share?  If so, please leave a comment here or send me an e-mail at insidesitecatalyst@omniture.com and I will do my best to answer it right here on the blog so everyone can learn! (Don’t worry - I won’t use your name or company name!).  If you are on Twitter, you can follow me at http://twitter.com/Omni_man.

    Learn more about Omniture Consulting
    Learn more about Omniture University

    Attribution Modeling ~ Banners, Blondes, & the Bottom Line | Last Click News

    Attribution Modeling ~ Banners, Blondes, & the Bottom Line

    Published on February 22, 2011 by Mark Hughes

    C3 Metrics CEO on solving the mystery of your missing profits

    Raymond Chandler was the archetypal detective storyteller, fond of using hard-boiled heroes, cold-blooded “thugs,” and hot-to-trot “dames” as his characters. One of his favorite quotes was: “I do a great deal of research…particularly in the apartments of tall blondes.”

    But solving mysteries with diligent research is also part of our daily regimen in online advertising. Every network, keyword, and overall campaign is a puzzle that sometimes changes week to week. The puzzle is solved with attribution modeling, but we’re just now scratching the surface.

    We’d like to think the research methods we employ provide facts relevant to “our case.” But incorrect facts will “never hold up in court.” Today, the facts are wrong.

    Although internet advertising (a $70 billion-a-year global industry) is the most trackable form of advertising on earth, the facts determining the success of those billions of dollars won’t hold up much longer.

    Why? Sadly, today’s outdated online ad tracking systems erroneously give 100% credit to the very last clicked or last viewed ad before a transaction.

    Mark Hughes, CEO
    C3 Metrics

    Example: if four internet ads contribute to a transaction, today’s outdated systems allocate the entire credit to the very last ad, completely ignoring the first three ads, which actually drove the revenue.

    Zero credit to revenue drivers, and 100% credit to the last ad placed. It’s like discovering the prosecutor put away the wrong guy. Bad facts, bad decisions, bad outcome.

    Members of the jury…this is a $20 billion global problem.

    Now enter attribution modeling: at C3 Metrics (disclosure: I’m the CEO), a robust attribution model takes an enormous number of ingredients and reduces complexity to simplicity.

    At a basic level, C3 Metrics’ attribution modeling system assigns credit to Originators, Assists, and Converters within a transaction. An attribution model should capture every online media source, from the top of the funnel where sales originate down to the very bottom of the funnel. So in a $100 transaction, an Originator would receive a fraction of the $100 attributed to them, and the Assist and Converter would each receive fractional credit of the $100 as well.

    100% of revenue credit is attributed and split among Originators, Assists, and Converters, accounting for the actual drivers of revenue. Then revenue and the respective costs from paid media sources converge in a single, elegant ratio in the attribution model: the Attributed Revenue-to-Spend Ratio (ARSR™).

    It’s a simple ratio any marketer can grasp: attributed revenue divided by corresponding spend. If you have a 4.0 ratio for a specific keyword or a specific display campaign, you’re getting $4.00 in revenue for every dollar spent on that particular media source. If you have an ARSR of 1.25 for a particular media buy, you’re getting $1.25 in revenue for every dollar spent there.
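    A minimal sketch of the ARSR arithmetic in Python, with invented attribution weights (the article does not publish C3 Metrics’ actual Originator/Assist/Converter split):

```python
def split_revenue(amount, weights):
    """Split one transaction's revenue across attribution roles.

    `weights` is a dict of role -> fraction; the fractions here are
    arbitrary illustrative values that sum to 1, not C3 Metrics' model."""
    return {role: amount * w for role, w in weights.items()}

def arsr(attributed_revenue, spend):
    """Attributed Revenue-to-Spend Ratio for one media source."""
    return attributed_revenue / spend

weights = {"originator": 0.4, "assist": 0.2, "converter": 0.4}
print(split_revenue(100, weights))
print(arsr(400.0, 100.0))  # 4.0 -> $4 of attributed revenue per $1 spent
```

    The key design point is that the numerator of the ratio is *attributed* revenue, so a media source that only originates sales still earns its share of the total.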

    Brands that don’t transact dollars on their site simply assign a revenue value to events such as a dealer zip-code lookup, configuring a vehicle online, or scheduling an appointment online.

    ARSR delivers knowledge ready to act on, versus information barely ready for analysis. The special sauce of the attribution model is the numerator of the ratio (attributed revenue). Media buyers easily identify media sources with high ratios to scale, and low ratios to cut or improve. Instead of taking weeks to analyze, it’s about an hour.

    But the jury wants the facts, and here they are: in the longest-running attribution modeling study of its kind (two years), the results are enough to get anyone promoted:

    a) 25%+ higher revenue on the same ad spend, producing millions of dollars in incremental profit
    b) Display ROI improvement of 160%
    c) Search ROI improvement of 98%
    d) An accurate economic model to measure affiliate performance

    Case closed. The verdict: millions in profit added to the bottom line.

    Are you ready to solve your case with the right facts?


    Good Software Takes Ten Years. Get Used To it. - Joel on Software

    Joel on Software

    Good Software Takes Ten Years. Get Used To it.

    by Joel Spolsky
    Saturday, July 21, 2001

    Have a look at this little chart:

    [Chart: installed seats of Lotus Notes, 1989–2000. Source: Iris Associates]

    This is a chart showing the number of installed seats of the Lotus Notes workgroup software, from the time it was introduced in 1989 through 2000. In fact when Notes 1.0 finally shipped it had been under development for five years. Notice just how dang long it took before Notes was really good enough that people started buying it. Indeed, from the first line of code written in 1984 until the hockey-stick part of the curve where things really started to turn up, about 11 years passed. During this time Ray Ozzie and his crew weren't drinking piña coladas in St Barts. They were writing code.

    The reason I'm telling you this story is that it's not unusual for a serious software application. The Oracle RDBMS has been around for 22 years now. Windows NT development started 12 years ago. Microsoft Word is positively long in the tooth; I remember seeing Word 1.0 for DOS in high school (that dates me, doesn't it? It was 1983.)

    To experienced software people, none of this is very surprising. You write the first version of your product, a few people use it, they might like it, but there are too many obvious missing features, performance problems, whatever, so a year later, you've got version 2.0. Everybody argues about which features are going to go into 2.0, 3.0, 4.0, because there are so many important things to do. I remember from the Excel days how many things we had that we just had to do. Pivot Tables. 3-D spreadsheets. VBA. Data access. When you finally shipped a new version to the waiting public, people fell all over themselves to buy it. Remember Windows 3.1? And it positively, absolutely needed long file names, it needed memory protection, it needed plug and play, it needed a zillion important things that we can't imagine living without, but there was no time, so those features had to wait for Windows 95.

    But that's just the first ten years. After that, nobody can think of a single feature that they really need. Is there anything you need that Excel 2000 or Windows 2000 doesn't already do? With all due respect to my friends on the Office team, I can't help but feel that there hasn't been a useful new feature in Office since about 1995. Many of the so-called "features" added since then, like the reviled ex-paperclip and auto-document-mangling, are just annoyances and O'Reilly is doing a nice business selling books telling you how to turn them off.

    So, it takes a long time to write a good program, but when it's done, it's done. Oh sure, you can crank out a new version every year or two, trying to get the upgrade revenues, but eventually people will ask: "why fix what ain't broken?"


    Failure to understand the ten-year rule leads to crucial business mistakes.

    Mistake number 1. The Get Big Fast syndrome. This fallacy of the Internet bubble has already been thoroughly discredited elsewhere, so I won't flog it too much. But an important observation is that the bubble companies that were trying to create software (as opposed to pet food shops) just didn't have enough time for their software to get good. My favorite example is desktop.com, which had the beginnings of something that would have been great if they had worked on it for 10 years. But the build-to-flip mentality, the huge overstaffing and overspending of the company, and the need to raise VC every ten minutes made it impossible to develop the software over 10 years. And the 1.0 version, like everything, was really morbidly awful, and nobody could imagine using it. But desktop.com 8.0 might have been seriously cool. We'll never know.

    Mistake number 2. The Overhype syndrome. When you release 1.0, you might want to actually keep it kind of quiet. Let the early adopters find it. If you market it and promote it too heavily, when people see what you've actually done, they will be underwhelmed. Desktop.com is an example of this; so are Marimba and Groove: they had so much hype on day one that people stopped in and actually looked at their 1.0 release, trying to see what all the excitement was about, but like most 1.0 products, it was about as exciting as watching grass dry. So now there are a million people running around who haven't looked at Marimba since 1996, and who think it's still a dorky list box that downloads Java applets that was thrown together in about 4 months.

    Keeping 1.0 quiet means you have to be able to break even with fewer sales. And that means you need lower costs, which means fewer employees, which, in the early days of software development, is actually a really great idea, because if you can only afford 1 programmer at the beginning, the architecture is likely to be reasonably consistent and intelligent, instead of a big mishmash with dozens of conflicting ideas from hundreds of programmers that needs to be rewritten from scratch (like Netscape, according to the defenders of the decision to throw away all the source code and start over).

    Mistake number 3. Believing in Internet Time. Around 1996, the New York Times first noticed that new Netscape web browser releases were coming out every six months or so, much faster than the usual 2 year upgrade cycle people were used to from companies like Microsoft. This led to the myth that there was something called "Internet time" in which "business moved faster." Which would be nice, but it wasn't true. Software was not getting created any faster, it was just getting released more often. And in the early stages of a new software product, there are so many important things to add that you can do releases every six months and still add a bunch of great features that people Gotta Have. So you do it. But you're not writing software any faster than you did before. (I will give the Internet Explorer team credit. With IE versions 3.0 and 4.0 they probably created software about ten times faster than the industry norm. This had nothing to do with the Internet and everything to do with the fact that they had a fantastic, war-hardened team that benefited from 15 years of collective experience creating commercial software at Microsoft.)

    Mistake number 4. Running out of upgrade revenues when your software is done. A bit of industry lore: in the early days (late 1980s), the PC industry was growing so fast that almost all software was sold to first time users. Microsoft generally charged about $30 for an upgrade to their $500 software packages until somebody noticed that the growth from new users was running out, and too many copies were being bought as upgrades to justify the low price. Which got us to where we are today, with upgrades generally costing 50%-60% of the price of the full version and making up the majority of the sales. Now the trouble comes when you can't think of any new features, so you put in the paperclip, and then you take out the paperclip, and you try to charge people both times, and they aren't falling for it. That's when you start to wish that you had charged people for one year licenses, so you can make your product a subscription and have permission to keep taking their money even when you haven't added any new features. It's a neat accounting trick: if you sell a software package for $100, Wall Street will value that at $100. But if you can sell a one year license for $30, then you can claim that you're going to get recurring revenue of $30 for the next, say, 10 years, which is worth $200 to Wall Street. Tada! Stock price doubles! (Incidentally, that's how SAS charges for their software. They get something like 97% renewals every year.)
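
    Joel's back-of-the-envelope valuation holds up: discounting a $30-a-year stream over ten years at a conventional rate lands right around $200. A minimal sketch, assuming an 8% discount rate (an assumption; the essay names no rate):

```python
def present_value(payment, rate, years):
    """Present value of a level annual payment received at the end of each year."""
    return payment * (1 - (1 + rate) ** -years) / rate

one_off = 100.0                               # perpetual license, booked once
subscription = present_value(30.0, 0.08, 10)  # $30/yr for 10 years at 8%
print(f"one-off license:   ${one_off:.0f}")
print(f"subscription flow: ${subscription:.0f}")  # roughly $201
```

    The "accounting trick" is just that Wall Street values the whole discounted stream, roughly double the one-off price, even though each individual year costs the customer far less.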

    The trouble is that with packaged software like Microsoft's, customers won't fall for it. Microsoft has been trying to get their customers to accept subscription-based software since the early '90s, and they get massive pushback every single time. Once people get used to the idea that they "own" the software they bought, and don't have to upgrade if they don't want the new features, that's a big problem for a software company trying to sell a product that is already feature complete.

    Mistake number 5. The "We'll Ship It When It's Ready" syndrome. Which reminds me. What the hell is going on with Mozilla? I made fun of them more than a year ago because three years had passed and the damn thing was still not out the door. There's a frequently-obsolete chart on their web site which purports to show that they now think they will ship in Q4 2001. Since they don't actually have anything like a schedule based on estimates, I'm not sure why they think this. Ah, such is the state of software development in Internet Time Land.

    But I'm getting off topic. Yes, software takes 10 years to write, and no, there is no possible way a business can survive if you don't ship anything for 10 years. By the time you discount that revenue stream from 10 years in the future to today, you get bupkis, especially since business analysts like to pretend that everything past 5 years is just "residual value" when they make their fabricated, fictitious spreadsheets that convince them that investing in sock puppets at a $100,000,000 valuation is a pretty good idea.

    Anyway, getting good software over the course of 10 years assumes that for at least 8 of those years, you're getting good feedback from your customers, and good innovations from your competitors that you can copy, and good ideas from all the people that come to work for you because they believe that your version 1.0 is promising. You have to release early, incomplete versions -- but don't overhype them or advertise them on the Super Bowl, because they're just not that good, no matter how smart you are.

    Mistake number 6. Too-frequent upgrades (a.k.a. the Corel Syndrome). At the beginning, when you're adding new features and you don't have a lot of existing customers, you'll be able to release a new version every 6 months or so, and people will love you for the new features. After four or five releases like that, you have to slow down, or your existing customers will stop upgrading. They'll skip releases because they don't want the pain or expense of upgrading. Once they skip a release, they'll start to convince themselves that, hey, they don't always need the latest and greatest. I used Corel PhotoPaint 6.0 for 5 years. Yes, I know, it had all kinds of off-by-one bugs, but I knew all the off-by-one bugs and compensated by always dragging the selection one pixel to the right of where I thought it should be.


    Make a ten year plan. Make sure you can survive for 10 years, because the software products that bring in a billion dollars a year all took that long. Don't get too hung up on your version 1 and don't think, for a minute, that you have any hope of reaching large markets with your first version. Good software, like wine, takes time.





    About the author.

    I’m Joel Spolsky, co-founder of Fog Creek Software, a New York company that proves that you can treat programmers well and still be highly profitable. Programmers get private offices, free lunch, and work 40 hours a week. Customers only pay for software if they’re delighted. We make FogBugz, an enlightened bug tracker designed to help great teams develop brilliant software, Kiln, which simplifies source control and code review, and Fog Creek Copilot, which makes remote desktop control easy. I’m also the co-founder of Stack Overflow.

    © 2000-2011 Joel Spolsky
    joel@joelonsoftware.com