Twitter is worth a lot, Twitter advertising is not, Bad journalism is worthless

I set out to write a quick correction to a bad article that was discussed on the NY-Tech mailing list earlier this week, but this ended up being half about why technology journalists and bloggers should just stop – as they rarely know what they’re talking about.

The article “How Much Are Twitter’s Tweets Really Worth?” on BusinessWeek.com has been gaining a bit of buzz across the industry this week. It’s a pretty good summation of how advertising works on Twitter – not because it’s a concise overview, but because it’s about as mindless and poorly conceived an article as the concepts it speaks about. The writer, Spencer E. Ante, is an associate editor for Business Week. He has an impressive resume and articles behind him, so perhaps this was a postmodern experiment, or maybe he was just hungover from New Year’s Eve. Whatever the explanation is, I’d love to hear it – as it’s the worst-written article I’ve read in ages. The article is no longer online, so I’ll have to use quotes from a cached version in my criticism below. Let’s all take a moment and thank the “Fair Use” clause of US Copyright Law.

# UPDATE
The article’s disappearance was not because of a paywall issue, but because it was – indeed – a steaming pile of shit. Businessweek now states:
> This story contained a factual error that rendered its premise incorrect. The story is no longer available. We regret the error.

I’m keeping this up, not to “rub it in”, but to note that “factual errors” and an “incorrect premise” are pandemic to technology journalism. Writers at BusinessWeek, TechCrunch, Mashable, etc. rarely know what they’re talking about – and giving them a podium to stand on is just… dangerous.

# Bad journalism is worthless, Twitter is worth a lot

The first half of Ante’s story is a schizophrenic overview of the recent search deals Twitter signed with Google and Microsoft. Ante starts:
> Google and Microsoft are paying Twitter $25 million to crawl the short posts, or tweets, that users send out on the micro-blogging service. It sounds like big money.

Sounds like big money? That **is** big money – Twitter is making $25 million to give two search engines a ToS license and access to index its data. In a world where Search Engine Optimization is a skillset or service, Twitter is getting paid by the major engines so they can optimize themselves. This is pretty much unheard of.

For whatever reason though, Ante then goes on to comment:
> But do the math and the payments look less impressive. Last year, Twitter’s 50 million users posted 8 billion tweets, according to research firm Synopsos, which means Google and Microsoft are paying roughly 3¢ for every 1,000 tweets. That’s a pittance in the world of online advertising.

This is where Ante shows that he must be drunk, hungover, or a complete idiot: this deal has absolutely nothing to do with online advertising. Google and Microsoft aren’t paying to advertise on Twitter; they’re paying to be able to show tweets in their own search engines. In fact, given how the integration of this deal works – where Tweets appear in the search engine results with a link back to Twitter – it should be Twitter who is paying the search engines. This is a syndication deal, not an advertising one. And this is to syndicate user-generated content, not editorial! Twitter now has a giant ad, at the top of most search engine pages, as syndicated content – and they got **paid** for it! Getting paid to advertise your brand, instead of paying for it, isn’t a pittance – it’s brilliant, revolutionary, and (dare I say) mavericky.
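For what it’s worth, Ante’s own figures don’t even survive a back-of-the-envelope check. A quick sketch, taking the $25 million and 8 billion tweet numbers quoted above at face value:

```python
# Sanity check on the quoted figures: a $25M deal, 8B tweets in the year.
deal_value = 25_000_000        # dollars, per the article
tweets = 8_000_000_000         # annual tweets, per the research figure quoted

per_tweet = deal_value / tweets        # dollars per tweet
per_thousand = per_tweet * 1000        # the "per 1,000 tweets" figure Ante cites

# per_thousand works out to roughly $3.13 -- not 3 cents.
```

Which, given the editor’s note above, may be exactly the “factual error” BusinessWeek regrets.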

One of my companies is a media site. We’re not a “top media site” yet, but we’re hoping to grow there. Handling technology and operations, I deal with advertising networks from the publisher side a lot. Another one of my companies is advertising oriented, with a focus on optimizing online media buying and selling. Suffice to say, I know the industry well – which is why I find Ante’s next bit of information troubling:
> Top media sites often get $10 or $20 per thousand page views; even remnant inventory, leftover Web pages that get sold through ad networks, goes for 50¢ to $1 per thousand.

Here’s a quick primer. If you’re a media site with a decent enough brand or demographic, regardless of being at the “top”, you’re getting a fairly decent CPM. I don’t think Ante’s numbers are “right” for “top media sites” – in reality, top media destinations are a bit higher per inventory slot. Additionally, most web pages have multiple slots, which together create a “Page CPM” that is the sum of all the slots. While each slot might get $10-20, an average of 2 slots on a page would net $20-40. If you look at ad networks that publish their rates (like the premier blog network FederatedMedia.net), or speak to a friend in the industry, you’ll get instant confirmation on this.

In terms of the remnant inventory, I think these numbers are even more off. Remnant inventory for random, run-of-the-mill websites and social networks will absolutely run in the 10¢ to $1 range. “Top” media sites are of a different caliber, and will monetize their remnant inventory at a higher range, usually in the $2-8 range, or utilize a behavioral tracking system that will net CPMs in that similar $2-8 range.

My main issue with this passage has nothing to do with numbers. What I find even more inappropriate, and wholly irresponsible, is that Twitter is not a “Top Media Site”. Twitter is undoubtedly a “Top Site”; however, it is a social network or service. Twitter is not about providing media or content, it is about transactional activity and user-generated content. This is a big difference in terms of online advertising. For a variety of reasons (which mostly tie in to consumer attention span and use cases), Social Networks have a significantly lower CPM – with most monetizing at a sub-$2 CPM rate, and a few occasionally breaking into a $2-8 range.

Ante’s comparisons just aren’t relevant in the slightest, across the entirety of his article. But hey, there’s a quote to support this:
> The deals put “almost no value” on Twitter’s data, says Donnovan Andrews, vice-president of strategic development for the digital marketing agency Tribal Fusion.

Really? Really? A $25 million deal to syndicate user-generated content puts “almost no value” on that data? Either this quote was taken out of context, Donnovan Andrews has no idea what he’s talking about, or I just haven’t been given keys to the kool-aid fountain yet. Since Donnovan and I have a lot of friends in common (we’ve never met), and journalists tend to do this sort of thing… I’m going to guess that the quote is out of context.

# Twitter advertising is not (worth a lot)

The second half of Ante’s article is a bit more interesting, and shows the idiocy of Twitter advertisers:
> A few entrepreneurs are showing ways to advertise via Twitter. Sean Rad, chief executive of Beverly Hills-based ad network Ad.ly, has signed up 20,000 Twitter users who get paid for placing ads in their tweets. To determine the size of the payments, the startup has developed algorithms that measure a person’s influence. Reality TV star Kim Kardashian, with almost 3 million followers, gets $10,000 per tweet, while business blogger Guy Kawasaki fetches $900 per tweet to his 200,000 fans.

Using Twitter for influence marketing like “Paid Tweets” is a great idea – however these current incarnations are heavily favoring the advertising network, not the advertiser.

There is absolutely no way, whatsoever, to measure “reach” on Twitter – the technology, the service, and the usage patterns render this completely impossible. The number of Followers/Fans is a figure that merely represents “potential reach”; trying to discern the effective reach of each tweet is just a crapshoot.

When an advertiser purchases a CPM for an ad, they purchase 1000 impressions of the ad in a user’s browser. Software calculates the delivery of each ad to a browser, and those programs are routinely audited by respected accounting firms to ensure accuracy. Most advertisers – and all premium-rate (as above) advertisers – have strict requirements as to how many ads can be on a page (standard: max 2-3) and their position (requiring ads to be “above the fold”). 1000 deliveries roughly equates to 1000 impressions.

When an advertiser purchases a CPM on an email, they purchase 1000 deliveries of the email, featuring their ad, to users’ inboxes. When emails bounce or are undeliverable, they don’t count against this number – only valid addresses do. The 1000 deliveries are, usually, successful email handoffs. A term called the “Open Rate” refers to the percentage of those 1000 emails that are actually opened by the user and load the pixel-tracking software (this method isn’t absolute, but it’s good enough). Typical Open Rates vary by industry, but tend to hover around a global 25%, with content-based emails around 35% and marketing messages at 15%. With these figures in mind, 1000 email deliveries roughly equates to 250 impressions.

When an advertiser purchases a CPM on Twitter, they merely purchase a branded endorsement (which is very valuable in its own right) that has a potential reach of X followers. This number of followers does not equate to the number of people who will see the tweet “above the fold”, nor does it equate to the number of people who will see the tweet on their page at all. Twitter has absolutely no offerings (at the current time) to count the number of people exposed to a tweet on their website – either at all, or in accordance with an optimal advertising situation. Twitter has itself stated that 80% of their traffic comes from their API – which makes those capabilities technically impossible for that traffic.
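To put the three channels side by side, here is a minimal sketch of the “effective impressions” arithmetic (the 25% open rate is the global average cited above; `None` for Twitter reflects that no measurement exists, not a zero):

```python
def effective_impressions(purchased, view_rate):
    """Estimate how many purchased deliveries become real impressions.

    view_rate is the fraction of deliveries actually seen;
    None means the channel offers no way to measure it at all.
    """
    if view_rate is None:
        return None  # potential reach only: unmeasurable
    return int(purchased * view_rate)

# Browser display ads: audited delivery, roughly 1:1 with impressions.
display = effective_impressions(1000, 1.0)   # 1000
# Email: ~25% global open rate, per the figures above.
email = effective_impressions(1000, 0.25)    # 250
# Twitter: follower count is potential reach; impressions can't be counted.
twitter = effective_impressions(1000, None)  # None
```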

Gauging the number of Tweets sent out over the API won’t work either — Twitter applications built on the API tend to have “filtering” capabilities, designed to help users make sense of potentially hundreds of Tweets that come in every hour. When these client-side lists or filters are used, sponsored tweets may be delivered to the application – but they are never rendered on screen.

Looking at common use patterns of Twitter users, if someone is following a handful of active users, all Tweets that are at least an hour old will fall below the fold… and tweets that are older than two hours will fall onto additional pages. This means that Twitter users would effectively need to be “constantly plugged in” to ensure a decent percentage of impressions on the sponsored tweets.

A lot of research has gone into understanding usage patterns in Twitter, as people try to derive what “real” users are: a significant number of Twitter accounts are believed to be “inactive” or “trials” – users who are following or followed-by less than 5-10 users; the projected numbers for “spam” accounts fluctuates daily. Even in the most conservative figures, these numbers are well into the double digits.

Social Marketing company Hubspot did a “State of the Twittersphere, June 2009” report. Some of their key findings make these “pay per tweet” concepts based on the number of followers even more questionable. Most notably, Hubspot determined that a “real” Twitter user tweets about once per day (the actual number is .97). Several different Twitter audits have pegged the average number of accounts followed by ‘seemingly real’ accounts (based on number of followers, followings, engagement with the platform, etc.) to be around 50 – so an average user should expect about 50 subscribed Tweets daily as well. The Twitter.com site shows 5 tweets “above the fold” (and represents 20% of their traffic; a quick poll of Twitter clients shows an average of 7). Assuming Tweets are spread out evenly during the day, an average user would need to visit Twitter about 9 times a day in order to ensure seeing sponsored Tweets. In the online publishing and social media world, expecting 9 visits per day, every day, by users is… ridiculously optimistic. Realistically, users likely experience a backlog of older, unseen tweets on login – and sponsored tweets get lost in the mix.
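Running Hubspot’s numbers through makes the point plainly. A rough sketch (the per-view count of 6 tweets is my own midpoint of the 5-on-site / 7-in-client figures above):

```python
import math

followed_accounts = 50   # accounts a "real" user follows, per the audits above
tweets_per_account = 1   # ~0.97 tweets/day per Hubspot, rounded up
tweets_per_view = 6      # rough midpoint of 5 (twitter.com) and 7 (clients)

daily_tweets = followed_accounts * tweets_per_account  # ~50 subscribed tweets/day
visits_needed = math.ceil(daily_tweets / tweets_per_view)

print(visits_needed)  # → 9 visits a day to see everything above the fold
```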

As I stated before, the “celebrity advocacy” concept of a sponsored Tweet is a very desirable concept for advertisers — and one that would decidedly command a higher rate than other forms of advertising. However, the concept of “Actual Reach” on Twitter is nebulous at best. A better pricing metric for Twitter-based advertising would be CPC (cost per click) or CPA (cost per action), where tweeters would be paid based on how many end-users clicked a link or fully completed a conversion process.

The "Bra Colors Facebook Status Meme" isn't really about Breast Cancer.

A bunch of Social Media blogs and journalists are reporting that there is a viral Social Media Breast Cancer Awareness Campaign, in which women post the colors of their bras as their Facebook status.

It’s a neat idea for a story, but it’s not true.

Aside from the fact that this viral campaign isn’t organized by any Breast Cancer Awareness non-profit or advertising agency – and that it’s a really bad idea for a Breast Cancer awareness campaign [ a) it’s more appropriate for lingerie designers, b) it dilutes the association with pink that the Susan G. Komen foundation has been fighting for ] – one only needs to do a quick web search to discover that this is really a weeks-old chain-letter meme that is constantly morphing and getting hijacked.

A week ago, someone posted a question on [Answers.Yahoo.com](http://answers.yahoo.com/question/index;_ylt=Ahbp.aJJpkF6aYN1JYX.rdIjzKIX;_ylv=3?qid=20091229223537AAHTqYE), and a respondent copy-pasted the text as it appeared then:
> right girls let’s have some fun. write the color of the bra you’re wearing right now as your status on fb and dont tell the boys. they will be wondering what all the girls are doing with colors as their status. forward this to all the girls online

Several other respondents confirmed this was the letter in that posting, and this is only one of dozens of similar explanations of this across the internet dated last week.

At some point over the last few days, someone decided to hijack the meme and make it a little more socially responsible – and they added the Breast Cancer bit to it. It’s nice, and it’s sweet, and it’s a great way to turn a stupid internet joke into something serious. If someone looks at one of these Facebook status postings today, no matter the author’s intent, they’ll associate it with Breast Cancer, since that’s what current media coverage states.

Nevertheless, the meme is not necessarily about Breast Cancer awareness. It’s currently getting interpreted as such, but only some participants share that intent.

Use Case Scenarios are important for product development: The "Search" Feature

Whenever a new project starts, we do a few standard things:

– Identify the general product / idea
– Identify several classes of users it appeals to
– Draft Use Case Scenarios for each user class

If, for example, your project is a “game”:

– you might identify the general idea as a game played on a court where two teams each try to sink a ball into a basket;
– the user classes would be children; competitive sports players (high school, college, professional); and casual adults;
– a use case scenario might be an adult goes to a gym to work out and sees 5 other friends who want to play a game together.

Use cases can really help you focus on specific product features — figuring out which have the greatest utility, broadest appeal, or largest differentiators against competitive goods and services. They’re often created both during team brainstorming sessions and as homework for the various client ‘stakeholders’ in a project. The stakeholders who best represent the end consumers should create at least 1/3 of the Use Cases, and should sign off on all of them. In a startup/corporate environment, that would mean the Product Manager and perhaps some C-Level executives; in an agency environment that would mean the Client and their team, not the internal strategist or team. Why? Because when the stakeholders drive the Use Case creation, you have better insight into the core business goals, market opportunity, and targeted user demographics.

Like everything else in your project, your Use Cases will shift with time as your product matures and you get a better idea of who your actual audience is — so you’ll always have to revisit them to update and add new scenarios. Despite this changing nature, it is unbelievably important to really think things through and create detailed use cases. In the past year alone, I’ve been part of three projects that all became seriously derailed and stressed because of bad Use Case Scenarios on the same exact product feature — the “Search” function — so I’ll use that as a paradigm.

In every situation, the original use cases described something very simple, like:

– “I type in /chocolate/ and it shows me a list of recipes that match chocolate. Like in the title.”

But then they progressed as the stakeholders used the first version:

– “When I type in /chocolate/ it should show me a list of recipes that have chocolate in the title, or as an ingredient.”

And then they progress a little more:

– “There is chocolate in the description of this item, and it’s not showing up in search. I meant for the description to be part of it too.”

And then…:

– “Someone commented and said this recipe could be good with chocolate, that should be in the search results. But it should go later in the results.”

Oh no:

– “Wait a second… why am I not seeing chefs/authors who write about chocolate. they’re most certainly relevant.”

And then, overload…:

– “This kinda works. But I should be able to narrow these results down, like in Yahoo or Google. And we should show more info from the recipe in here. What about a picture ? And misspellings / near spellings ? It should detect those. People spell certain ingredients differently. We have a lot of Europeans searching, how will é ç and other characters match in search or recipes ? This seems to be broken. It is broken. This sucks, you’re wasting my time and money.”

To the stakeholder, there is no difference between these requests — they specified a search function, and they expected it to work a certain way; the product team failed at each interval to deliver on their expectations. To the stakeholder, the search function is a “black box” — they don’t know and don’t care if the mechanics behind each iteration are different… it’s a search box!

To the product team though, each iteration was a completely different product and each one required vastly different amounts of resources.

The first iteration — searching on the title — was a simple and straightforward search on a single field… and described as such, a team would just search directly on the database. The resources allocated to this would be minimal – it’s literally a few lines of code to implement.
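A hypothetical sketch of that first iteration (table and column names are made up for illustration; this is the “few lines of code” version, not anyone’s real schema):

```python
import sqlite3

# Illustrative in-memory recipe table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE recipes (id INTEGER PRIMARY KEY, title TEXT)")
db.executemany(
    "INSERT INTO recipes (title) VALUES (?)",
    [("Chocolate Cake",), ("Banana Bread",), ("Chocolate Chip Cookies",)],
)

def search(term):
    """Iteration 1: match the term against the title field only."""
    rows = db.execute(
        "SELECT title FROM recipes WHERE title LIKE ? ORDER BY id",
        (f"%{term}%",),
    )
    return [title for (title,) in rows]

print(search("chocolate"))  # → ['Chocolate Cake', 'Chocolate Chip Cookies']
```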

As the search use case gets refined, the product design moves from searching on a single field to searching on multiple fields — probably using joins and views — and calculating search results. By the end of the product refinement, it’s quite clear that a simple in-house search solution can’t deliver the experience or results the stakeholder actually wants, so we need to look into other solutions like Solr/Lucene, Sphinx, or Xapian. These advanced options aren’t terribly difficult to implement — but they go beyond a single search function into running and maintaining separate search servers, configuring the engines, creating services to index documents, creating resultset rules for sorting, creating error-handlers for when the search system is down, etc. The simple “Search” button grew from a few lines of code into a considerable undertaking that requires dedicated people, days of work, and constant tailoring of the resultset rules.
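Even a toy version of the later iterations shows why: once multiple fields are involved, you need per-field weighting and sort rules (the “resultset rules” above). The field names and weights here are illustrative, not from any real engine:

```python
# Toy multi-field ranking: weight a match by the field it occurs in,
# so title hits outrank comment hits (per the stakeholder's requests).
FIELD_WEIGHTS = {"title": 10, "ingredients": 5, "description": 2, "comments": 1}

recipes = [
    {"title": "Chocolate Cake", "ingredients": "flour chocolate eggs",
     "description": "rich and dark", "comments": ""},
    {"title": "Banana Bread", "ingredients": "banana flour",
     "description": "moist", "comments": "might be good with chocolate"},
]

def search(term, docs):
    term = term.lower()
    scored = []
    for doc in docs:
        score = sum(weight for field, weight in FIELD_WEIGHTS.items()
                    if term in doc[field].lower())
        if score:
            scored.append((score, doc["title"]))
    # Higher-weighted matches (e.g. title hits) sort first.
    return [title for _score, title in sorted(scored, reverse=True)]

print(search("chocolate", recipes))  # → ['Chocolate Cake', 'Banana Bread']
```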

Eventually the product teams will scream “Feature Creep!” and a manager will flatly say “Out of Scope.” Items like this are unfortunately both — but they shouldn’t be. The intent and expectations of the stakeholder rarely changed in this process; they just failed to articulate their wants and expectations. The blame, however, is shared: the client should have better described their needs; the product manager should have asked better questions and better managed the stakeholder’s “Use Case homework”.

With a properly written-out Use Case Scenario — in which the stakeholder actually illustrates the experience they expect — the product team would likely recommend the latter scenario up front, and offer tiered suggestions leading up to the desired expectations, with the resources/costs at each point.

Unfortunately the status quo is for stakeholders to half-ass the Use Case. Few product or project managers will pick up on the shortcoming, and the tech team will never pick up on it. So “Search” — or any other feature — gets reduced to a line item with little description or functional specification, and when development begins it gets built in the easiest/simplest way that satisfies the request. This predictably results in failed expectations and a derailed project. Not only do the simplest and the most robust solutions to “search” get built, but every single step in between — costing dollars and immeasurable team spirit and energy.

The old adage about medication — an ounce of prevention is worth a pound of cure — holds extremely well as a truth about product development. Articulating exactly what you want and need to accomplish before development begins will save dollars and countless hours of stress.

10 Startup / Interactive Lessons ( which I learned the hard way )

Over the past 12 years, I learned these 10 things the hard way.

# 10 You and your team are not your core audience.
You’re a super user, which probably corresponds to a 5-10% demographic of product traffic, and to where you want your users to one day be. You’ve got great insights and direction, but you can’t make a product that is only “for you”. Remember that other 90% – unless your business model suggests you can ignore them all! Generally speaking, 20% of your users will account for 80% of your traffic – so try to remember the other 10-15% of that heavy-use group who aren’t like you. You should also track metrics every few weeks — see how your users break down into usage patterns, and see where your team falls in there.
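One way to run that breakdown, sketched with hypothetical visit counts (the 80/20 split is a rule of thumb, not a law):

```python
# Hypothetical per-user visit counts for one tracking period.
visits = [120, 90, 15, 10, 8, 6, 4, 3, 2, 2]

visits.sort(reverse=True)
top_20pct = visits[: max(1, len(visits) // 5)]  # the heaviest 20% of users
share = sum(top_20pct) / sum(visits)

print(f"Top 20% of users drive {share:.0%} of traffic")  # → 81% here
```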

# 9 If your team isn’t using your product on a daily basis – you need a new team, a new product, or both.
You’ve got a huge issue if your team isn’t using your product on a daily basis. They’re going to have different usage patterns than your core demographic, but if you’re not building something that they want to – or can – use on a daily basis… you’ve either got the wrong team, the wrong product, or both. Don’t accept excuses, don’t try to rationalize behavior. The bottom line is that if your team isn’t full of passionate and dedicated users of your product, and you can’t sell them on it… how can you expect your team to convince consumers and investors? You can’t.

# 8 “If you build it they will come” == bullsh*t
You need a solid marketing plan, for your site or your new features. Just putting something out there won’t suffice — people need to learn that your product is awesome. If you don’t have the resources to drive people to your product, rethink your resource allocations *immediately* — maybe you can scale back your vision to save some resources for marketing. People need to know that your product exists, and they’ll learn how to use it by good example — those are two tasks that your team needs to lead on. Also remember that despite what you think and how hard you work, whatever you build won’t be the most amazing thing in the world — so make sure you have resources budgeted to be nimble and respond to users…

# 7 Jack be Nimble, Jack be Quick…
If you’re a consumer-oriented product, you’ll often need to change direction, add features, etc. many, many times after launching. You need a technology platform and internal process that lets you do that. People love to talk about getting their startup going by outsourcing and offshoring the development. This is an incredibly bad idea. To illustrate, try to count the number of startups you know of that outsourced their product development and had a successful exit. I can count them all on a single hand — and still have fingers left.
Why? If you go the outsource route, it means you’ve decided “This is what our product MUST be” — but when your users help you realize what your product SHOULD be… you’re facing change orders, new contracts, and even trying to reserve some other company’s time. Then you have to deal with the transfer of knowledge and technology when you eventually need to move in-house — figuring out how your internal team can support and extend a product that someone else built. If you’re going to contract something out, do a prototype or a microsite or a feature — but don’t have someone else build your core product for you; it’s a proven recipe for failure.
In simpler terms, you can’t outsource your core business competency.

# 6 Listen to your lawyers, don’t obey them.
It’s easy to forget that lawyers give legal advice, not legal rules — and that at their very cores, lawyers mitigate risk while entrepreneurs take risks. I don’t mean to suggest that you should be doing anything specifically “risky” or illegal, but that you remember your lawyers will always push you towards solutions approaching 0% risk – which means you may miss many marketing, product, and business opportunities. Good marketing and successful products often push the limits of what is allowed; opening up your company to some amount of liability may be a risk that offers a far greater reward than any penalty you can incur.

# 5 Product Management is not Project Management
This confusion seems to afflict folks on the East Coast and in the Advertising / Interactive fields (if you’re from a West Coast software background, you’re probably immune). A Project Manager handles resource allocation and makes sure that deliverables and commitments keep to a schedule. A Product Manager makes sure that the deliverables actually make sense, and represent/understand the Business Goals, Market Opportunity, Competitive Advantage, and End Users. Product Management is a role — Project Management is a task. Whether you’re working on a startup, online product, or interactive campaign: you need a capable Product Manager who is part of the day-to-day check-in process. You also need to make sure that the people who handle resource allocation understand the roles, responsibilities, and workflow of each person they’re managing — otherwise you have some departments slacking off while others are completely overloaded, trying to meet deadlines that were either unreasonably imposed on them or that they agreed to without understanding the full scope.

# 4 If you have a good idea, it’ll probably get stolen.
This is just how things work – people are often inspired by someone else, or they’re ruthless and copy it verbatim. The exception is when someone else had the same good idea on their own — but then you’ll probably have people trying to steal that idea too, effectively doubling the rampant thievery going on. Arrgh! If you’ve been out in the market for a while and no one is competing with you, you may want to ask yourself why. Competition doesn’t just validate your idea, it also gives you the chance to better measure the market opportunity and how the audience responds by looking at your competitors. If you stole your idea from someone else, you know all this already, so there’s no need to address you too.

# 3 Nothing is confidential. Trust is an arbitrary term. Respect is earned.
The only people you can trust to keep a secret are your lawyers, because they’d be disbarred and lose their careers. Proving that someone leaked a secret, shared a “confidential” presentation, violated an NDA, etc. is not only hard to do, but very costly — which is why people do it all the time. If you’re honest and forthcoming in all your dealings, word will spread and you’ll increasingly meet more people who are similar. You shouldn’t expect that anyone will keep a secret just because you asked them to — you should always be prepared for the worst and expect the opposite.
This isn’t to say that you shouldn’t bother with privacy contracts, but that you should be smart about what you share. The vast majority of potential partners and investors will scoff at an NDA in preliminary meetings, but as your relationship progresses and they need access to more proprietary information — your internal numbers, market research, bookkeeping, etc — negotiating for an NDA is commonplace. You should always ask yourself if you really think this group is serious about working with you, or trying to do market research of their own for another project or investment with a competitor.

# 2 When it comes to a market opportunity, you can trust your gut – the experts aren’t always right.
Two charming examples of how “I was right” and “they were wrong” involved music and net experts telling me that “there will only be MySpace, and no other sites will ever be relevant for music”, and internet experts mandating that social network walls will never come down, so portable identity / users will never happen. I’m not trying to flatter myself with this — neither of those companies had a successful exit, just a series of patent applications, and legal headaches on one of them trying to keep the products afloat. I do mean to suggest that this is a very common situation — and Bessemer Venture Partners has a quirky take on it: they maintain an “anti-portfolio” of successful companies they turned down. As a word of caution – while the experts may be wrong about your market opportunity… they may be right about the monetization / business viability. Any time someone shoots down your ideas, you should use their arguments both to try to build a better/stronger product and to disprove the viability — because they could be right, and may have just saved you from a lot of headaches, grief, and capital losses.

# 1 Listen to your users — but be smart about how you proceed.
These days everyone says “Listen to your users” — and you should; it’s a good mantra. However, please remember that you need to analyze what your users say, not just take it at face value. One of my companies makes a lot of product decisions based on user feedback, and we do extensive “User Acceptance Testing” and Focus Groups whenever we want to test out an idea or launch something new. We always profile/qualify the users who give us feedback to determine what kind of user they are (ie: super user, industry insider, mass market, etc), and make note of both what they say and what they do. It never ceases to amaze me how many people think that they’re a super-user — when they’re barely a casual/incidental user; or how many users say that they really love a particular feature, that it is the most important, and that they want more things like it — while their usage patterns and other interview questions show a strong preference for, and reliance on, another feature. Listening to your users isn’t just keeping track of what they say — it encompasses understanding what they mean, discovering what they forgot to say, and working with them to enrich their experience.

# Note

I didn’t learn these all at once, and I didn’t make all the mistakes myself. I did make some myself; others were imposed on me by management or partners. In every situation my life was complicated by these issues – and I can only hope others don’t repeat these mistakes.

If NewsCorp really were to recuse itself from Search Engines…

I see a week playing out like this:

Monday
NewsCorp Publisher sites block search engines. Their traffic plummets.

Tuesday
Search Engines drop MySpace, IGN, Beliefnet… because they can and need to humble NewsCorp. Their traffic plummets too.

Wednesday
Analysts give bleak outlook for NewsCorp strategy, scream outrage, lower rating of stock.

Thursday
Rumors circulate that Rupert Murdoch is begging to get re-indexed. His peons start making phone calls.

Friday
No one can be bothered to answer the phone or email. Seeing as it’s Friday, everyone decides to just make NewsCorp sweat it out. The web properties are officially operating at a loss, Advertisers are not happy, and the traffic is jeopardizing Advertiser and Ad network relations.

Saturday
People are damn glad it’s not a trading day.

Sunday
On the 7th day, he rested. He was not an employee of NewsCorp, who are going batshit crazy trying to up their traffic.

Monday
NewsCorp gets reindexed. But not until a few hours /after/ the start of trading… because that’s what people like to do.

Tuesday
It turns out that Murdoch bought most of the devalued NewsCorp stock the previous day, upping his ownership to 50+%. Analysts raise the rating back to previous levels, and the value rises.

And perhaps…
Police find the mangled carcass of a newborn baby in a dumpster close to the NewsCorp offices. Its heart has been clawed and chewed out, and it looks as if someone had been drinking tears straight from its eyes.

OpenID is bad for Registration

OpenID is a really useful protocol that allows users to login and authenticate — and I’m all for providing users with services based on it — but I’ve ultimately decided that it’s a bad idea when Registration is involved.

The reason is simple: in 99% of implementations, OpenID merely creates a consumer of your services; it does not create a true user of your system — it does not create a customer.

Allowing for OpenID registrations only gives you a user that is authenticated to another service. That’s it. You don’t have an authenticated contact method – like an email address, phone number, screen name, inbox, etc; you don’t have a channel to contact that customer for business goals like customer retention / marketing, or legal issues like security alerts or DMCA notices.

The other 1% of implementations are a tricky issue. OpenID 1.0 has something called “Simple Registration Extensions”; some of this has been bundled into 2.0 along with “Attribute Exchange”. These protocols allow for the transfer of profile data, such as an email address, from one party to another — so the fundamental technology is there.
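For a sense of what this looks like on the wire, SREG just appends extra parameters to the OpenID authentication request (a sketch per the SREG 1.1 draft; which fields a provider actually returns, and whether they are truthful, is entirely up to the provider):

```
openid.ns.sreg=http://openid.net/extensions/sreg/1.1
openid.sreg.required=email
openid.sreg.optional=nickname,fullname
```

A cooperating provider echoes back openid.sreg.email in its response, but nothing in the protocol verifies that the address actually works.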

What does not exist is a concept of verifiability or trust. There is no way to ensure that the email address or other contact method provided to you is valid — the only thing that OpenID proves is that the user is authoritatively bound to their identity URL.

The only solution to this problem is for websites to limit what systems can act as trusted OpenID providers — meaning that my website may trust an OpenID registration or data from a large provider like MySpace or Facebook, but not from a self-hosted blog install.

While this seems neat on some levels, it quickly reduces OpenID to merely being a mechanism for interacting with established social sites — or, perhaps better stated, a more Open Standards way of implementing “Facebook Connect” across multiple providers. A quick audit of sites providing users with OpenID logins limited to trusted partners showed them overwhelmingly offering logins only through OpenID board members. In itself, this isn’t necessarily bad. My company FindMeOn has been offering similar registration bootstrapping services based on a proprietary stack mixed with OpenID for several years; this criticism is partially just a retelling of how others had criticized our products — that it builds as much user-loyalty into the Identity Providing Party as it does into the Identity Requesting Party. In layman’s terms, that means that offering these services strengthens the consumer’s loyalty to the company you authenticate against as much as it offers you a chance to convert that user. In some situations this is okay — but as these larger companies continue to grow and compete with the startups and publishers that build off their platforms, questions arise as to whether this is really a good idea.

This also means that if you’re looking at OpenID as a registration method with some sort of customer contact method ensured, you’re inherently limited to a subset of major trusted providers OR going out and signing contracts with additional companies to ensure that they can provide you with verified information. In either situation, OpenID becomes more about being a Standards Based way of doing authentication than it is about being a Distributed Architecture.

But consider this — if you’re creating some sort of system that leverages a large-scale social network to provide identity information, OpenID may be too limiting. You may get to work with more networks by using the OpenID standard, but your interaction will be minimal; if you were to use the network integration APIs, you could support fewer networks, but you’d be able to have a richer — and more viral — experience.

Ultimately, using OpenID for registration is a business decision that everyone needs to make for their own company — and that decision will vary depending on a variety of factors.

My advice is to remember these key points:

– If the user interaction you need is simply commenting or ‘responding’ to something, binding to an authoritative URL may suffice

– If the user interaction you need requires creating a customer, you absolutely need a contact method: whether it’s an email, a verified phone number, an ability to send a message to the user on a network, etc.

– If you need a contact method, OpenID is no longer a Distributed or Decentralized framework — it is just a standards based way of exchanging data, and you need to rely on B2B contracts or published public policies of large-scale providers to determine trust.

– Because of limited trust, Network Specific APIs may be a better option for registration and account linking than OpenID — they can provide for a richer and more viral experience.

5 Quick Tips to Make Your Mac Faster

While my MacBook is getting repaired, I’m back to using an old iBook G4 as my portable device.

These are some tips I’ve found to make it run a little bit faster.

If you’ve got any others, let me know.

# Disable Spotlight

For many people, Spotlight is a great utility. For everyone I know, it’s a feature they never use, powered by a background task that runs needlessly.

If you don’t use Spotlight, you can shut it off easily: in System Preferences, select Spotlight and add your actual hard drive, not a folder, to the list of ‘private’ items that won’t be indexed.
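If you’d rather not dig through System Preferences, the `mdutil` command can switch indexing off for a volume outright. A minimal sketch, assuming the boot volume is the one you care about (requires an admin password):

```shell
# Turn off Spotlight indexing for the boot volume
sudo mdutil -i off /

# To turn it back on later:
# sudo mdutil -i on /
```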

# Disable Dashboard

Dashboard is neat, but it’s always running… taking up memory and CPU.

To kill it off, enter this in Terminal:

```
defaults write com.apple.dashboard mcx-disabled -boolean YES
killall Dock
```

Should you ever want to re-enable it (why? for the weather or calculator?!?):

```
defaults write com.apple.dashboard mcx-disabled -boolean NO
killall Dock
```

# Disable Junk Mail Filtering

Most folks I know have a really good server-side junk mail sorter. That means once mail comes in, it’s already been sorted… so your Mac is just analyzing it again, and it’s not very fast at it. In Mail.app’s preferences, you can enable / disable junk mail sorting. Turning it off makes my computers significantly happier when messages come in.

# Kill Your Shadows

If you’re on 10.4, you can run [ShadowKiller](http://unsanity.com/haxies/shadowkiller), a neat app that simply enables/disables shadows. The shadows on OS X are a great feature for telling one window from another — but if you can do without them, your machine will be noticeably faster.

# Disable Transparency in Toolbar Widgets

I use the awesome iStat package from iSlayer.com to show the current CPU / memory loads and network traffic on my machines. This application, along with many others, offers the option to make its widget opaque. Rendering opaque elements is just about always faster than rendering transparency/shading. If you’ve got any apps that offer this option, take it.

How to Plot an Address with the Yahoo Maps API

The Yahoo Maps API is powerful, but largely undocumented.

I’m guessing all the docs / marketing materials were written by developers and project managers, with little product management considered — because it’s damn near impossible to do the simplest things.

My needs weren’t intense; I merely wanted to do this:

– Include a map on a web page that plots an address

One would be amazed at how assbackwards the Yahoo and Google APIs are. Doing something simple like that is a complete PITA. Neither offers the ability to do it “out-of-the-box”.

At the end of this posting is a sample of code that will do this.

Before I get into that, I’ll talk about my learnings from fighting with APIs and a lack of documentation for over 3 hours.

The main issue I had was with the difference between an address and a geopoint. Yahoo will let you instantiate a map from an address, however placing markers must be done with a geopoint. The Yahoo library is asynchronous — so when you render the map, it has no idea what the geopoint is… so you’ll need to leverage their callback chain hooks to later derive the geopoint for the address, or map center. Of course, none of this behavior is documented. Nor are any of the class methods documented in full.

I eventually came up with two possibilities:

– generate a map from an address, in a callback query the center geopoint & label it
– generate a geopoint from an address, in a callback draw the map and label it

I ended up going with the latter. Here is the code:
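(A minimal sketch of that second approach against the old Yahoo Maps AJAX API; the container id, address, and marker label below are all placeholder values.)

```javascript
// Geocode the address first; draw the map, center it, and add a labeled
// marker inside the asynchronous geocoder callback.
var map = new YMap(document.getElementById('mapContainer'));

// Fires once Yahoo's geocoder has resolved the address into a geopoint
YEvent.Capture(map, EventsList.onEndGeoCode, function (result) {
    if (!result.success) { return; }
    map.drawZoomAndCenter(result.GeoPoint, 3);   // center on the derived geopoint
    var marker = new YMarker(result.GeoPoint);
    marker.addLabel('Our Office');               // placeholder label
    map.addOverlay(marker);
});

map.geoCodeAddress('701 First Ave, Sunnyvale, CA');
```

This only runs in a page that has loaded the Yahoo Maps script tag (with a valid appid), since `YMap`, `YEvent`, and `YMarker` are globals it defines.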


The great IE6 debate, make your business your perspective

The world’s least favorite browser, Internet Explorer 6 (IE6), is in the headlines again as a new “movement” of web developers seek to drop all support for it.

For those that are somehow unfamiliar with the situation, IE6 is the bastard offspring of Microsoft’s failed attempt at dominating the “browser wars” of the early 2000s. In an attempt to get developers to adopt the Microsoft way, the company decided to forgo industry standards and create their own. The plan failed. Miserably.

It also cost companies countless (and needless) dollars to support both the industry standards and Microsoft variations… and greatly stifled innovation and progress in user experience. Instead of developers moving forward in projects, they had to spend more time (and budgets) defining and testing IE6 compliance.

In a perfect world, undoubtedly, we would be without IE6. Like everyone else I loathe IE6. But, sadly, our world is far from perfect. It is, in fact, downright cruel. Microsoft not only gave [cursed] the world IE6, but they also gave [shafted] us no easy way to rid ourselves of it.

Ay, there’s the rub.

IE6 is the last supported Microsoft browser for the Windows NT 4, 98, 2000 and ME systems. Those systems cannot run a more current Internet Explorer unless a costly Operating System upgrade is pursued.

On first thought, one might assume that a free option — like the [kickass] Mozilla Firefox browser — would be the obvious choice for these systems. Unfortunately these systems suffer from the next wound in our situation: much like later versions of Windows, such as XP and 2003, they are often plagued with administrative security controls that complicate or completely prohibit upgrades and installations. A large number of these IE6 installs are what is termed “institutional” — large organizations that have Windows installed throughout entire departments, buildings, or companies. When the operators of computers in these institutional situations do not have administrative rights to install software on their own, they cannot install updates (SP/IE7) or free downloads (Firefox).

A few weeks ago, at a State Department Town Hall Meeting [ [news article](http://www.theregister.co.uk/2009/07/13/firefox_and_us_state_department/) ] a worker asked Secretary of State Hillary Clinton:

“Can you please let the staff use an alternative web browser called Firefox?”

The next two lines are very telling of the same exact situation that keeps IE6 around:

“The answer is, at the moment: It’s an expense question,” Kennedy said. Then someone in the audience pointed out that Firefox is free.

“Nothing is free,” Kennedy responded. “It’s a question of the resources to manage multiple systems. It is something we’re looking at… It has to be administered. The patches have to be loaded. It may seem small, but when you’re running a worldwide operation and trying to push, as the Secretary rightly said, out FOBs [for remote log-ins] and other devices, you’re caught in the terrible bind of triage of trying to get the most out that you can, but knowing you can’t do everything at once.”

What people often forget is that even free software and updates have integration and management costs associated with their installations. As organizations grow in size, or their security needs grow in complexity, the integration costs for software deployment — free or not — grow as well. It’s worth mentioning that many companies have based significant portions of their revenue streams on providing professional services. In fact, it is not uncommon for Open Source companies like RedHat to offer software for free, while they charge for support and maintenance and vie for enterprise contracts.

The culmination of all this is that we’re stuck with IE6. For how long? Who knows… Much of this will probably depend on how hardware and software cycle through their usage periods within institutions. It could conceivably stick around for months… but possibly linger for years. I worked on a contract in 2003 for a Fortune 100 company where rooms full of employees worked on Windows 95 or earlier; some machines only had DOS screens.

In an effort to more quickly rid the world of IE6, designers and developers have banded together to suggest ways of dropping support — ranging from blocking browsers to forcing upgrades. Discussions like this are dangerous and irresponsible.

Whether IE6 should or should not be supported is not a decision for a bunch of advocates to lead — it is something that every organization should decide on a per-project basis by carefully weighing their goals and audience metrics.

The overall market share of IE6 was estimated to be somewhere around 2% in 2009… but this number is only a global installation base, and not indicative of traffic. The number of IE6 visits to a particular site varies drastically by site, topic, and userbase demographic.

Several consulting engagements I had in 2009 on high traffic content publication sites showed double-digits for IE6 visitors; one site had a whopping 24%, another 17%. A high-traffic financial services company I consulted with in 2008 factored in 32% of visitors on IE6 into their application design. A friend messaged me today, his high volume online shopping company averaged 14% of unique users on IE6.

Conversely, friends working on fancy Web2.0 interactive projects often showed 1% or less of IE6 visits. The social news site Digg.com [released some numbers last month](http://blog.digg.com/?p=878) on their IE6 usage. Digg actually derived patterns by breaking their numbers down, and showed that 10% of visitors, 5% of page views and 1% of ‘transactions’ happened via the IE6 browser; Digg then announced that, based on these learnings, they would eventually phase out support for ‘transactions’ while maintaining support for reading.

Suffice to say, IE6 impacts websites quite differently from one another.

Before getting into the details of dealing with IE6, let’s outline some points on why we all hate it:

– IE6 is not standards compliant, meaning that it renders differently than everything else and extra work must be taken to make simple things look halfway decent
– IE6 is bad technology
– IE6 is old technology
– IE6 stifles innovation from its limitations and necessary extra work

That’s out of the way. We agree. We don’t like it. Let’s hold hands. Rainbows.

But we have a problem… people still use IE6, and we can’t change that — even if we close our eyes and wish real hard.

IE6 is very much like abortion – everyone agrees that they’re against it, they just fight over what to do about it.

The fight is largely drawn between those passionate about web development, and those concerned with business.

I’m an entrepreneur. I handle technology, operations and businessy things like marketing, product management, and P&L sheets… so I approach this problem with that mindset, and say you need to be responsible and address it properly. Don’t follow a movement blindly; be an adult — find out if you have a problem and deal with it.

The first step in dealing with a potential IE6 problem is to start looking at your site logs and marketing research for your demographic. Find out what systems your current users are using, what your market opportunity is, and what/where your projected growth is. Your current users may not use IE6, but as your business scales you may pick up a significant number who do. On the flip side, you may have a huge percentage of IE6 visits now… but in 6 months your planned expansion will make that percentage tiny. The point is to look at the present and the future… you might be surprised to learn that IE6 is simply not an issue for you, or that supporting it will make or break your project.

The second step is to remember that these are only numbers, and they may not tell the entire story and value of browsers to your project. By that, I mean that when looking at browser statistics, one must keep in mind what the purpose of their website is.

If a website is designed to generate revenue through advertising or shopping cart sales, dropping IE6 support could equate to dropping 15% of gross revenue. The question to ask then becomes: what is more costly – 15% of revenue, or the resources needed to make something IE6 compatible? In most situations I’ve encountered, a 15% yearly revenue drop was projected to cost far more than a few extra weeks of design & development. Recent engagements I was involved in dealt with situations like these:

– $40k IE6 development retrofit when two year projections showed $3MM in lost potential revenue
– $8k in additional costs to build in IE6 support from the outset, when 6 month projections showed a 30% difference in potential users based on marketing partnerships
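The arithmetic behind decisions like these is simple enough to sketch out; the figures below are hypothetical illustrations, not numbers from any actual engagement:

```javascript
// Back-of-the-envelope: revenue lost by dropping IE6 vs. the cost of supporting it.
// All inputs are hypothetical placeholders.
function lostIe6Revenue(annualRevenue, ie6RevenueShare, years) {
  return annualRevenue * ie6RevenueShare * years;
}

var retrofitCost = 40000;                        // one-time IE6 retrofit
var lost = lostIe6Revenue(1500000, 0.15, 2);     // $1.5MM/yr, 15% via IE6, 2 years

console.log(lost);                  // 450000
console.log(lost > retrofitCost);   // true: supporting IE6 wins in this scenario
```

Whenever the projected lost revenue dwarfs the support cost, the “drop IE6” argument is being made with someone else’s money.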

Many websites aren’t designed for revenue generation — they’re a component of a marketing/advertising campaign. In these situations, the question becomes: what is more valuable – a larger audience, or a smaller audience with a richer experience? In many situations I’ve been in, the smaller audience with a richer experience has been worth far more to the organization.

This isn’t to suggest that there are only two scenarios; there are thousands of possibilities, which is why IE6 support must be addressed on a per-project basis. Some web properties would require egregious amounts of capital to provide the same level of IE6 support; for others, it would be wholly impossible to deliver a comparable experience within the browser.

The inability to factor in situations like these is why I refer to the anti-IE6 crowd as zealots, cultish, and often ignorant or idiotic — they’re unable to reconcile or even acknowledge significant IE6 usage or business importance, even when those numbers are staring them in the face or driving the revenue that pays their salaries.

Being costly to support, limiting, stifling-innovation, or being old-software aren’t valid reasons to drop browser support — they’re just motivations for rhetoric and dogma.

Web projects exist for one of two primary reasons:
– making money
– delivering an awesome experience

Most projects strive to achieve both, but one — and only one — can be the primary driving factor: a website is either a business or a hobby first and foremost, while the other reason is secondary.

If your project is a hobby and you want to say “Screw IE6”… then sure, do it!

If you’re a business, weigh your options and think about the impact intelligently. Don’t think of movements. Don’t think of dogma. Think of dollars and cents.

If your goal is to make money, you need to decide who/what your audience is — and whether fostering a better relationship with other users is worth writing IE6 users off. This might be a viable concept, but unless you’re doing something aggressively web2.0 for that demographic, it probably won’t be.

Once you’ve realized the presence and extent of IE6 issues, think of smart ways to handle them. Remember that dealing with IE6 doesn’t mean that you do full support or full deny — but that you can tier user experience. Take cues from folks like [Toby Boudreaux of The Barbarian Group](http://www.tobyjoe.com/2009/08/dont-stop-supporting-ie6/) who suggests that you “can think of IE6 as a perfectly viable user agent for consuming content, but cost prohibitive for rendering top-tier experience design,” and “understand the complexities facing the people your cocky designers and lazy developers want to patronize or abandon”.
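One cheap way to implement that kind of tiering is IE’s own conditional comments: serve IE6 a basic stylesheet for consuming content, and withhold the heavy styling and scripting (a sketch; the file paths are placeholders):

```html
<!-- IE6 and older get the plain, readable tier -->
<!--[if lte IE 6]>
  <link rel="stylesheet" href="/css/basic.css" />
<![endif]-->

<!-- IE7+ and every non-IE browser get the full experience -->
<!--[if gt IE 6]><!-->
  <link rel="stylesheet" href="/css/rich.css" />
  <script src="/js/rich-experience.js"></script>
<!--<![endif]-->
```

The second block uses the downlevel-revealed form, so browsers that don’t understand conditional comments still load the rich tier.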

Also remember that if you are in fact a business, your design and development teams (both in house and vendors) are often completely clueless about your operations and goals. You should always listen to them about web stuff — they know their areas well (that’s why you hired them!). But remember that they work for you, and if your business needs to support certain products or experiences for specific demographics, it’s their job to get it done — and that’s a job that you can easily give to someone else.

You may also remind them, politely, that the 15% drop in revenue they suggest your company make for their cause correlates very well with what your company spends on their salaries and benefits.

Finally, remember that the costs of retrofitting support versus designing with it at the outset can be extremely different as well. A recent retrofit I was involved with, in which IE6 was discounted at the outset of the project, took 8 weeks of development time to achieve 2/3 of the functionality after launch. Had IE6 compatibility been decided before work began, the design, UX and development of the project would have proceeded very differently, and far more cheaply.

The world would be a much better place without IE6. Developers and designers could do cooler things, sites would work faster, things would be more awesome, maybe even unicorns would return to the Earth again. A world without IE6 would really and truly be that amazing. But dropping support is simply not a viable or responsible business option in many situations — and this is something that developers and designers will have to deal with just as harshly as those who write the checks for all the extra time spent on support.

Is Apple in Danger of Anti-Trust Proceedings?

Update:
*I just learned that Apple dropped all DRM – and the two-tiered system… which completely invalidates my arguments below. So… nevermind!*

Apple just blocked the Palm Pre’s ability to interface with iTunes, as reported in [Information Week](http://www.informationweek.com/news/personal_tech/smartphones/showArticle.jhtml?articleID=218500862).

Granted – Palm’s interfacing with iTunes was questionable and a hack — it basically emulated being an iPod to trick the software into compatibility. Apple’s reasoning for the update is that it “addresses an issue with verification of Apple devices”, which would often be fair for most companies and software makers…

However, one needs to think about Apple’s market share and new positioning as a vertical mobile provider. Not only is Apple the dominant player in the marketplace, but they’ve fixtured themselves into the entire chain, locking out competition at every step — they’ve become the primary force controlling the Hardware and Retail of music devices, the Software to install on them, and even the music distribution.

Sound familiar? Just a few years ago, the US and the EU took Microsoft to task for something similar — bundling and tying Windows & the IE browser together, and then into PCs at reduced rates, to block out competing operating systems and browsers.

Palm’s approach was largely unethical in many ways and definitely a dirty hack. It’s also something that can — and will — probably be re-introduced as developers play cat & mouse with Apple to work around device ID checks, just as others have been re-enabling jailbreaks on the iPhone with every OS update.

But Palm’s recent situation brings to light some larger questions — through iTunes, Apple bundles the following things together: iPhone / iPod support and management, music purchasing, and music management.

By integrating all those things together, Apple has created what is essentially not only a distinct market advantage, but an anti-competitive practice:

– users cannot put the songs purchased through Apple on a non-Apple device, unless they pay a premium for DRM-free content
– users need to run additional software in order to install non-Apple procured music from their library onto a non-Apple device
– owners of non-Apple devices are obviously penalized for owning their device — through needing to handle countless workarounds and paying more for content — and are given incentives to abandon their current setups for a complete Apple solution

While Palm’s approach to the situation is largely questionable, Apple’s handling of it illustrates a lot of parallels to what forced Microsoft to unbundle its software… and I’d wager that Apple may very well have ‘shot themselves in the foot’ with this, since few people knew/thought/cared about it before today.

Within the context of US and EU anti-trust laws — and not thinking about ‘fairness’ to Apple… it seems to me like Palm could push the Justice Department for Anti-Trust measures against Apple — and that could affect not just the iTunes support of devices, but the entirety of Apple’s music retail business.

I’m not a lawyer – nor do I pretend to be one. But this seems to have less to do with interpretations of US statutes and laws than with history and case law. A good lawyer may very well be able to give Apple a free pass on this – but you don’t need to be a lawyer to see that there are more than enough correlations between this situation and successfully prosecuted anti-competitive practices for this to make it to the courts.

For more info on US anti-trust practices, check out this interesting article on the tying/bundling distinctions and their applicability, with some great case studies on what is considered illegally Anti-Competitive and what is not: [Antitrust In Distribution – Tying, Bundling and Loyalty Discounts, Resale Pricing Restraints, Price Discrimination – Part I](http://www.metrocorpcounsel.com/current.php?artType=view&artMonth=April&artYear=2006&EntryNo=4751)