What a Product Manager Is and Isn't, and Why You Should Probably Stop Trying to Hire One.

I’ve had a lot of people contact me over the past two years trying to recruit me for a Product Manager role or looking for referrals to qualified candidates. I have a solid network and am well respected in NY Technology, Advertising and Publishing circles — so I’m used to constant pings from Executives I’ve consulted with or recruiters I’ve worked with, and I’m happy to help when I can.

I feel compelled to write a post because out of several dozen inquiries for positions titled with some variation of “Product Manager”, only one was actually involved with any sort of product management. The rest? Sigh…

There’s been a huge conflation of terms with regard to “product management” in the past few years, and it seems to be over-represented in the NYC area. This conflation really needs to stop. Now.

The role of a Product Manager has a bit of variation in its definition, but it’s usually something along the lines of “the person who is ultimately responsible for a product”. In a large organization, Product Managers are essentially divisional GMs or ‘micro-CEOs’; in smaller ( and tech ) organizations, they tend to be inter-disciplinary people who might report to a “head of product” or directly to the CEO.

Generally speaking: Product Managers are highly skilled and highly experienced professionals, often with extensive background across one or more areas, who are tasked with developing or fine-tuning what a ‘product’ should be to best achieve business goals.

Most “Product Managers” I’ve known can be categorized like this:

* Most have 10+ years of professional experience, with pretty impressive track records; rarely do they have less than 5 years of experience;
* They either have advanced degrees like an MBA, MS, or PhD, or a work-based equivalent, i.e. a C/VP/D level employee who has done some stellar work;
* All are experts / authorities in at least one discipline — and can somewhat function in whatever roles they oversee/interact with, as they’ve quite a bit of experience working across them. They understand when the Engineers are slacking off or overworking, when the Marketers have a ridiculous request, and when the project managers are over/under promising.

Sometimes people have a strong technical background – but that’s not a requirement; it’s a bonus on top of their experience leading teams and deeply understanding the marketplace they’re working in.

To give some quick examples:

1. I was recently at eConsultancy’s Digital Cream NYC event, in a room full of 150 people who were mostly Chief Marketing Officers / VPs of Marketing. If I were a technology company in the advertising space or a publisher looking to sell innovative new ad solutions, I would want to recruit a Product Manager from the attendee list. This is rather simple – the person who could best manage my advertising product would be an expert in advertising. Few (if any) people there had any coding experience whatsoever.

2. Several publications that I know of built out Editorial Product departments staffed with former Senior Editors and Operational Editors. What better way to deliver on editorial needs than by hiring a seasoned journalist?

3. A friend literally wrote the book on a certain technology, and is often called in to advise on different implementations of it — addressing the costs to scale/iterate, user behaviors, implementations, etc. He tends to advise people in a very “product management” capacity.

4. When Facebook buys a startup, their executive staff tend to be acquired as Product Managers to own a section of the Facebook experience.

Some of the things a Product Manager typically does are:

* Understand and manage the business goals: identify the best business opportunities, create and push products to address them.
* Understand the functionality and scope of the product: if it’s technology, they can code; if it’s a marketing product, they understand how and why advertising is bought.
* Understand the customers: make sure people will want to consume the product.
* Make decisions and be qualified to make them: balance a mix of Strategic Decisions ( into markets or users ) and Operations ( costs to iterate – both financially and team morale ).
* Manage the process: work with P&L sheets, quarterback the scope/design/build/deploy/sales process.
* other things I’m too tired to note. Product Managers are tasked with balancing the goals of the Organization against the needs of multiple types of Consumers and the people/resources to build them. It’s a lot of work, but it’s amazing fun for a lot of us.

The scores of “Product Manager” positions that are plentiful in NYC right now are nothing like my descriptions above – they tend to be a hybrid of skills belonging to a Digital Producer ( in the advertising world ) and Project Manager ( in, well, any industry ). They are mostly what I consider entry level – with a max of 3 total years of work experience, but often in the 1-2 range.

These positions tend to be highly administrative, require no expertise or inter-disciplinary skills, and don’t even have access to see budgets — much less manage them or try to affect revenue operations. Sometimes they’ll include a bit of customer development work, but most often they don’t. These positions completely lack a “Strategy” component, tending to be either a very entry level position or a mislabeling of the most incredibly experienced and talented Project Manager you’ve ever met.

Almost always, these roles become filled by someone who honestly shouldn’t have that job. One of my favorite “Product Manager” interactions was with someone who had just assumed the new role as their second-ever job, with their first job being several years as a Customer Service representative. If the company provided Customer Service, it would have been a really good fit — but the company provided a very technical service, and their “Product Manager” was really functioning more like a mix of an “Account Manager” and “Digital Producer”; they were visibly out of their element and unable to understand the needs of their clients or the capabilities of their team.

This is really a disservice to everyone involved.

* It makes potential employers look foolish to actual Product Managers, and gets them labeled as companies to avoid.
* It skips over a huge pool of extremely talented Digital Producers and Project Managers who would excel at these roles.
* It creates a generation of early-career professionals with the title of a Product Manager, but without the relevant experience or skills to back it up.

Because “Product Manager” is so often a role that an experienced professional transitions into, it’s not uncommon to see someone with 1-2 years of “Product Manager” in their title, but a resume that shows 3 years as a Vice President and 5 years as a Director at a previous employer. You might even see someone with 3 years of “Product Manager” as a title — but an additional 9 years of “Digital Producer” or “Project Manager” experience behind them as well. Plenty of professionals from the Production side transition into Product Management too, once they’re well versed in their respective industries.

Mindless recruiters ( and certain nameless conglomerates ) of NYC don’t understand this though. They just focus on buzz-words: if someone has been in “product” for 2 years, they target them as if they’ve only been a professional for that long. It’s all too common for the salary cap of a not-really-a-product-manager position to be 1/4 the targeted recruit’s current salary. The compensation package and role should be commensurate with the full scope of someone’s work — i.e. 12 years, not 3 years.

So my point is simple – if you’re hiring a “Product Manager” you should really think about what you expect out of the role.

* If you’re really looking for a “Project Manager” or “Digital Producer” — which you most likely are — change your posting and recruit that person. You’ll find a great employee and give them a job they really want and care about. If you manage to get a Product Manager in that role, they’re going to be miserable and walk out the door.

* If you realize that you’re looking for a role that is both strategic and operational — and is going to be one of the most important hires for your organization or division — then either hire someone with relevant Product Management experience OR hire a relevant expert to be your “Product Manager”.

Dreamhost UX Creates Security Flaw

Last week I found a security flaw on Dreamhost caused by the User Experience on their control panel. I couldn’t find a security email, so I posted a message on Twitter. Their Customer Support team reached out and assured me that I would receive a response via email. Six days later I’ve heard nothing from them, so I feel forced to do a public disclosure.

I was hoping that they would do the responsible thing, and immediately fix this issue.

## The issue:

If you create a Subversion repository, there is a checkbox option to add on a “Trac” interface – which is a really great feature, as it can be a pain to set up on their servers yourself (something I’ve usually done in the past).

The exact details of how the “one-click” Trac install works aren’t noted though, and the integration doesn’t “work as you would probably expect” from the User Experience path.

If you had previous experience with Trac, and you were to create a “Private” SVN repository on Dreamhost – one that limits access to a set of username/passwords – you would probably assume that access to the Trac instance is handled by the same credentials as the SVN instance, as Trac is tightly integrated into Subversion.

If you had no experience with Trac, you would probably be oblivious to the fact that Trac has its own permissions system, and assume your repository is secured from the option above.

The “one click” Trac install from Dreamhost is entirely unsecured – the immediate result of checking the box to enable Trac on a “private” repository is that you are publicly publishing that repo through the Trac source browser.

For example, if you were to install a private subversion and one-click Trac install onto a domain like this:

my.domain.com/svn
my.domain.com/trac

The /svn source would be private; however, it would be publicly available under /trac/browser due to the default one-click install settings.

Here’s a marked-up screenshot of the page that shows the conflicting options ( also on http://screencast.com/t/A2VQT5gOVkK )

I totally understand how the team at Dreamhost that implemented the Trac installer would think their approach was a good idea, because in a way it is. A lot of people who are familiar with Trac want to fine-tune the privileges using Trac’s own very-robust permissions system, deciding who can see the source / file tickets / etc. The problem is that there is absolutely no mention of an alternate permissions system contained within Trac – or that someone may need to fine-tune the Trac permissions. People unfamiliar with Trac have NO IDEA that their code is being made public, and those familiar with Trac would not necessarily realize that a fully unsecured setup is being created. I’ve been using Trac for over 8 years, and the thought of the default integrations being set up like this is downright silly – it’s the last thing I would expect a host to do.

I think it would be totally fine if there were just a “Warning!” sign next to the “enable Trac” option — with a link to Trac’s wiki for customization, or instructions ( maybe even a checkbox option ) on how a user can have Trac use the same authorization file as Subversion.

But, and this is a huge BUT, people need to be warned that clicking the ‘enable Trac’ button will publish code until Trac is configured. People who are running Trac via an auto-install need to be alerted of this immediately.

This can be a huge security issue depending on what people store in Subversion. Code put in Subversion repositories tends to contain Third Party Account Credentials ( Amazon AWS Secrets/Keys, Facebook Connect Secrets, Paypal/CreditCard Providers, etc ), SSH Keys for automated code deployment, full database connection information, administrator/account default passwords — not to mention the exact algorithms used for user account passwords.

## The fix

If you have a one-click install of Trac tied to Subversion on Dreamhost and you did not manually set up permissions, you need to do the following IMMEDIATELY:

### Secure your Trac installation

If you want to use Trac’s own privileges, you should create this .htaccess file in the meantime to disable all access to the /trac directory

deny from all

Alternately, you can restrict access to your Trac install using the same Subversion password file, with a .htaccess like this:

AuthType Basic
AuthUserFile /home/##SHELL_ACCOUNT_USER##/svn/##PROJECT_NAME##.passwd
AuthName "##PROJECT_NAME##"
require valid-user

### Audit your affected code and services.

* All Third Party Credentials should be immediately trashed and regenerated.
* All SSH Keys should be regenerated
* All Database Accounts should be reset.
* If you don’t have a secure password system in place, you need to upgrade

## What are the odds of me being affected?

Someone would need to figure out where your trac/svn repos are to exploit this. Unless you’ve got some great obscurity going on, it’s pretty easy to guess. Many people still like to deploy using files served out of Subversion (it was popular with developers 5 years ago before build/deploy tools became the standard); if that’s the case and Apache/Nginx aren’t configured to reject .svn directories, your repo information is public.
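
If you want a quick way to check whether you're exposed, here's a minimal sketch ( the URLs are placeholders based on the my.domain.com example above — swap in your own paths ). It just probes the Trac source browser and the .svn metadata over HTTP:

# exposure_check.py - rough check for a publicly readable Trac browser or .svn metadata
# The URLs below are placeholders from the example layout above; change them to your own.
try:
    from urllib.request import urlopen                  # Python 3
    from urllib.error import HTTPError, URLError
except ImportError:
    from urllib2 import urlopen, HTTPError, URLError    # Python 2

URLS = [
    "http://my.domain.com/trac/browser",   # the Trac source browser
    "http://my.domain.com/.svn/entries",   # working-copy metadata, if you deploy from a checkout
]

for url in URLS:
    try:
        response = urlopen(url)
        print("%s -> %s : publicly readable!" % (url, response.getcode()))
    except HTTPError as e:
        print("%s -> %s : blocked or missing" % (url, e.code))
    except URLError as e:
        print("%s -> unreachable ( %s )" % (url, e.reason))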

When it comes to security, play it safe. If your repo was accidentally public for a minute, you should wipe all your credentials.

Want to win? Make it easier, not harder.

In March of 2011 I represented Newsweek & The Daily Beast at the Harvard Business School / Committee of Concerned Journalists “Digital Leaders Summit”. Just about every major media property sent an executive there, and I was privileged enough to represent the newly formed NewsBeast (Newsweek+TheDailyBeast had recently merged, but have since split).

Over the course of two days, we covered a lot of concerns across the industry – analyzing who was doing things right and how/why others were making mistakes.

On the first day of the summit we looked at how Amazon was posturing itself for digital book sales – where they hoped their profits would come from, where they expected losses, and strategies for finding the optimal price structure for digital goods.

Inevitably, the conversation sidetracked to the Apple Ecosystem, which had just announced Subscriptions and their eBooks plan — consequently becoming a new competitor.

One of the other 30 or so people in attendance was Jeffrey Zucker from NBC, who went into his then-famous “digital pennies vs. analog dollars” diatribe. He made a compelling, intelligent, and honest argument that captivated the minds and attention of the entire room. Well, most of the room.

I vehemently disagreed with all his points and quickly spoke up to grab the attention of the floor… “apologizing” for breaking with the conventional view of this subject, and asking people to look at the situation from another point of view. Yes, it was true as Zucker stated that Apple standardized prices for digital downloads and set the pricing on their terms – not the producer’s. Yet, it was also true that Apple allowed for records to be purchased “in part” and not as a whole – shifting purchase patterns – and yes to a lot of other things.

And yes – Jeffrey Zucker didn’t say anything that was “wrong” – everything he said was right. But it was analyzed from the wrong perspective. Simply put, Zucker and most of the other delegates were only looking at a portion of the scenario and the various mechanics at play. The prevailing wisdom in the room was way off the mark… by miles.

Apple didn’t gain dominance in online music because of their pricing system or undercutting retailers – which everyone believed. Plain and simple, Apple took control of the market because they made it fundamentally easier and faster for someone to legally buy music than to steal it. When they first launched (and still in 2012) it takes under a minute for someone to find and buy an Album or Single in the iTunes store. Let me stress that – discovery, purchase and delivery takes under a minute. Apple’s servers were relatively fast at the start as well – an entire album could be downloaded within an hour.

In contrast, to legally purchase an album at a physical store would take at least two hours – and at the time they first launched, encoding an album to work on an MP3 player would take another hour. To download a record at that time would take even longer: services like Napster (already dead by the iTunes launch) could take a day to download; torrent systems could take a day; while file upload sites were generally faster, they suffered from another issue that torrents and other options did as well – mislabeled and misdirected files.

Possibly the only smart thing the Media Industry has ever done to curb piracy is what I call the “I Am Spartacus” method — wherein “crap” files are mislabeled to look like Top 40 hits. For example: in expectation of a new Jay-Z record, internet filesharing sites are flooded with uploads that bear the name of the record… but contain white noise, another record, or an endless barrage of insults (ok, maybe not the last one… but they should).

I pretty much shut the room up at that point, and began a diatribe of my own – which I’ll repeat and continue here…

At the conference, Jeffrey Zucker and some other media executives tended to look at the digital economy like this: If there are 10 million Apple downloads of the new Beyonce record or the 2nd Season of “Friends”, those represent 10 million diverted sales of a $17.99 CD – or 10MM diverted sales of a $39.99 DVD. If Apple were to sell the CD for $9.99 with a 70% cut, the rights holder is only seeing $7 in revenue for every $17.99 — 10 million times over. Similarly, if 10MM people are watching Friends for $13.99 (or whatever cost) on AppleTV instead of buying $29.99 box sets, that’s about $20 lost per viewer — 10 million times.

To this point, I called bullshit.

Digital goods such as music and movies have incredibly diminished costs for incremental units, and for most of these products they are a secondary market — records tend to recoup their various costs within the first few months, and movies/tv-shows tend to have been wildly profitable on-TV / in-Theaters. The music recording costs $17.99 and the DVD $29.99, not because of fixed costs and a value chain… but because $2 of plastic, or .02¢ of bandwidth, is believed by someone to be able to command that price.

Going back to our real-life example, 10MM downloads of “Friends” for $13.99 doesn’t equate to 10MM people who would have purchased the DVD for $39.99. While a percentage of the 10MM may have been willing to purchase the DVDs for the higher price, another — larger — percentage would not have. By lowering the price from $39.99 to $13.99, the potential market had likely changed from 1MM consumers to 10MM. Our situation is not an “apples-to-apples” comparison — while we’re generating roughly one third the revenue per unit, we’re moving ten times as many units and at a significantly lower cost (no warehousing, mfg, transit, buybacks, etc).
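
To put rough numbers on that ( these are the illustrative figures from the paragraph above, not real sales data ), the comparison looks something like this:

# illustrative revenue comparison -- the figures come from the example above, not real sales data
dvd_price = 39.99           # old model: a high-priced physical copy...
dvd_buyers = 1000000        # ...bought by a relatively small audience
digital_price = 13.99       # digital model: a much cheaper download...
digital_buyers = 10000000   # ...bought by a much larger audience
rights_holder_share = 0.70  # assuming the typical 70/30 storefront split

physical_revenue = dvd_price * dvd_buyers
digital_revenue = digital_price * digital_buyers * rights_holder_share

print("Physical revenue: ${:,.0f}".format(physical_revenue))   # ~ $40 million
print("Digital revenue:  ${:,.0f}".format(digital_revenue))    # ~ $98 million

Even after the storefront takes its cut, the cheaper digital good comes out well ahead once the larger audience is factored in.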

While hard copies are priced to cover the actual costs associated with manufacturing and distributing the media, digital media is flexibly priced to balance convenience with maximized revenue.

Typical retail patterns release a product at a given introductory price (e.g. $10) for a promotional period, raise it to a sustained premium for an extended period of time (e.g. $17), then lower it via deep discounted promotions for holiday sales or clearance attempts (e.g. $5). Apple ignored the constant re-pricing and went for a standardized plan at simple price-points.

Apple doesn’t charge 99¢ for a song, or $1.99 for a video, because of some nefarious plan to undervalue media — they came up with those prices because those numbers can generate significant revenue while being an inconsequential purchase. At 99¢ a song or $9.99 an album, consumers simply don’t think. We’re talking about a dollar for a song, or a ten dollar bill for a record.

Let me rephrase that: we’re talking about a fucking dollar for a song. A dollar is a magical number, because while it’s money, it’s only a dollar. People lose dollar bills all the time, and rationalize the most ridiculous of purchases away… because it’s only a dollar. It’s four quarters. You could find that in the street or in your couch. A dollar is not a barrier or a thought. You’ll note that a dollar is not far off from the price of a candy bar; retailers incidentally realized long ago: “Hey – let’s put candy bars next to the cash registers and keep the prices relatively low, so people make impulse buys and just add them onto their carts”.

Do you know what happens when you charge a dollar for something? People just buy it. At $13.99 – $17.99 for a CD, people look at that as a significant purchase — one that competes with food, vacations, their children’s college savings. When you charge a dollar a song – or ten dollars a record – people don’t make those comparisons… they just buy.

And buy, and buy, and buy. Before you know it, people end up buying more goods — spending more money overall on media than they would have under the old model. Call me crazy, but I’d rather sell 2 items with little incremental cost at $9.99 each than 1 item at $13.99 — or even 1 item at $17.99.

Unfortunately, the current stable of media executives – for the most part – just don’t get this. They think a bunch of lawyers, lobbyists, and payoffs to politicians for sweetheart legislation are the best solution. Maybe that worked 50 years ago, but in this day and age of transparency and immediacy, it just doesn’t.

Today: you need to swallow your pride, realize that people are going to steal, that the ‘underground’ will always be ahead of you, and instead of wasting time + money + energy with short-term bandaids which try to remove piracy ( and need to be replaced every 18 months ) — you should invest your time and resources into making it easier and cheaper to legally consume content. Piracy of goods will always exist; it is an economic and human truth. You can fight it head-on, but why? There will always be more pirates to fight; they’re motivated to free content, and they’re doubly motivated to outsmart a system. Fighting piracy is like a Chinese finger trap.

Instead of spending millions of dollars chasing 100% market share that will never happen (and I can’t stress that enough, it will never happen), you could spend thousands of dollars addressing the least-likely pirates and earn 90% of the market share — in turn generating billions more in revenue each year.

Until decision makers swallow their pride and admit they simply don’t understand the economics behind a digital world, media companies are going to constantly and mindlessly waste money. Almost every ( if not EVERY ) attempt at Digital Rights Management by major media companies has been a catastrophe – with most just being a waste of money, while some have resulted in long term compliance costs. I can’t say this strongly enough: nearly the entire industry of Digital Rights Management is a complete failure and not worth addressing.

Today, the media industry is at another crossroads. Intellectual property rights holders are getting incredibly greedy, and trying to manipulate markets which they clearly don’t understand. In the past 12 hours I’ve learned how streaming rights to Whitney Houston movies were pulled from major digital services after her death to increase DVD sales [ I would have negotiated with digital companies for an incremental ‘fad’ premium, expecting the hysteria to die down before physical goods could be made ], and read a dead-on comic by The Oatmeal on how it has – once again – become easier to steal content than to legally purchase it [ http://theoatmeal.com/comics/game_of_thrones ].

As I write this (Feb 2012) it is faster to steal a high quality MP3 (or FLAC) of a record than it is to either: a) rip the physical CD to the digital version or b) download the item from iTunes ( finding/buying is still under a minute ). Regional release dates for music, movies and TV are unsynchronized (on purpose!), which results in the perverse scenario where people in different regions become incentivized to traffic content to one another — i.e. a paying subscriber of a premium network in Europe would illegally download an episode when it first airs on the affiliate in the United States, one month before the European date.

Digital economics aren’t rocket science, they’re drop-dead simple:

  1. If you make things fast and easy to legally purchase, people will purchase them.
  2. If you make things cheap enough, people will buy them – without question, concern, or weighing the purchase into their financial plans.
  3. If you make it hard or expensive for people to legally purchase something, they will turn to “the underground” and illegal sources.
  4. Piracy will always exist, innovators will always work to defy Digital Rights Management, and as much money as you throw at creating anti-piracy measures… there will always be a large population of brilliant people working to undermine them.

My advice is simple: pick your battles wisely. If you want to win in digital media, focus on the user experience and maximizing your revenue generating audience. If your content is good, people will either buy it or steal it – if your content is bad, they’re going somewhere else.

I’m glad to no longer be in corporate publishing. I’m glad to be back in a digital-only world, working with startups, advertising agencies, and media companies that are focused on building the future… not trying to save an ancient business model.

2016 Update

Re-reading this, I can’t help but draw the parallels to the explosion of Advertising and Ad Blocking technologies in recent years. Publishers have gotten so greedy trying to extract every last cent of Advertising revenue and including dozens of vendor/partner javascript tags, that they have driven even casual users to use Ad Blocking technologies.

Python Fun: Upgrading to 2.7 on OSX; Installing Mysql-Python on OSX against MAMP (ruby gem too)

I needed to upgrade from Python 2.6 to 2.7 and ran into a few issues along the way. Learn from my encounters below.

# Upgrading Python
Installing the new version of Python is really quick. Python.org publishes a .dmg installer that does just about everything for you. Let me repeat “Just about everything”.

You’ll need to do 2 things once you install the dmg, however the installer only mentions the first item below:

1. Run “/Applications/Python 2.x/Update Shell Command”. This will update your .bash_profile to look in “/Library/Frameworks/Python.framework/Versions/2.x/bin” first.

2. After you run the app above, in a NEW terminal window do the following:

* Check to see you’re running the new python with `python --version` or `which python`
* once you’re running the new python, re-install anything that installed an executable in bin. THIS INCLUDES SETUPTOOLS, PIP, and VIRTUALENV

It’s that second thing that caught me. I make use of virtualenv a lot, and while I was building new virtualenvs for some projects I realized that my installs were building against `virtualenv` and `setuptools` from the stock Apple install in “/Library/Python/2.6/site-packages”, and not the new Python.org install in “/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages”.
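
A quick way to see which interpreter and packages you’re actually picking up is to ask Python directly ( a small sketch — it assumes you’ve already re-installed setuptools and virtualenv for the new Python ):

# which_python.py - print where the active interpreter and key packages live
import sys
print(sys.executable)   # should point into /Library/Frameworks/Python.framework/... after the upgrade
print(sys.version)

# these imports assume setuptools and virtualenv have been (re)installed for the new Python
import setuptools
import virtualenv
print(setuptools.__file__)   # should NOT be under /Library/Python/2.6/site-packages
print(virtualenv.__file__)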

It’s worth noting that if you install setuptools once you’ve upgraded to the Python.org distribution, it just installs into the “/Library/Frameworks/Python.framework” directory — leaving the stock Apple version untouched (basically, you can roll back at any time).

# Installing Mysql-Python (or the ruby gem) against MAMP

I try to stay away from Mysql as much as I can [ i <3 PostgreSQL ], but occasionally I need to run it: when I took over at TheDailyBeast.com, they were midway through a relaunch on Rails, and I have a few consulting clients who are on Django. I tried to run cinderella a while back ( http://www.atmos.org/cinderella/ ) but ran into too many issues. Instead of going with MacPorts or Homebrew, I've opted to just use MAMP ( http://www.mamp.info/en/index.html ).

There's a bit of a problem though - the people responsible for the MAMP distribution decided to clear out all the mysql header files, which you need in order to build the Python and Ruby modules. You have 2 basic options:

1. Download the "MAMP_components" zip (155MB) and extract the mysql source files. I often used to do this, but the Python module needed a compiled lib and I was lazy so...
2. Download the tar.gz version of Mysql compiled for OSX from http://dev.mysql.com/downloads/mysql/

Whichever option you choose, the next steps are generally the same:

## Copy The Files

### Where to copy the files?

mysql_config is your friend. At least the MAMP one is. Make sure you can call the right mysql_config, and it'll tell you where the files you copy should be stashed. Since we're building against MAMP, we need to make sure we're referencing MAMP's mysql_config:
iPod:~jvanasco$ which mysql_config
/Applications/MAMP/Library/bin/mysql_config

iPod:~jvanasco$ mysql_config
Usage: /Applications/MAMP/Library/bin/mysql_config [OPTIONS]
Options:
--cflags [-I/Applications/MAMP/Library/include/mysql -fno-omit-frame-pointer -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDONT_DECLARE_CXA_PURE_VIRTUAL]
--include [-I/Applications/MAMP/Library/include/mysql]
--libs [-L/Applications/MAMP/Library/lib/mysql -lmysqlclient -lz -lm]
--libs_r [-L/Applications/MAMP/Library/lib/mysql -lmysqlclient_r -lz -lm]
--plugindir [/Applications/MAMP/Library/lib/mysql/plugin]
--socket [/Applications/MAMP/tmp/mysql/mysql.sock]
--port [3306]
--version [5.1.44]
--libmysqld-libs [-L/Applications/MAMP/Library/lib/mysql -lmysqld -ldl -lz -lm]

### Include

Into /Applications/MAMP/Library/include you need to place the mysql include files, in a subdirectory called “mysql”:


mkdir -p /Applications/MAMP/Library/include
cp -Rp MySQL-Distribution/include /Applications/MAMP/Library/include/mysql

### Lib

Into /Applications/MAMP/Library/lib you need to place the mysql lib files, in a subdirectory called “mysql”:


mkdir -p /Applications/MAMP/Library/lib
cp -Rp MySQL-Distribution/lib /Applications/MAMP/Library/lib/mysql

## Configure the Env / Installers

Note: if you’re installing for a virtualenv, this needs to be done after it’s been activated.

Set the archflags on the commandline:


export ARCHFLAGS="-arch $(uname -m)"

### Python Module

I found that the only way to install the module is by downloading the source ( off sourceforge! ).

I edited site.cfg to have this line:


mysql_config = /Applications/MAMP/Library/bin/mysql_config

Basically, you just need to tell the installer to use the MAMP version of mysql_config, and it will figure everything else out itself.

The next steps are simply:


python setup.py build
python setup.py install

If you get any errors, pay close attention to the first few lines.

If you see something like the following within the first 10-30 lines, it means the various files we placed in the step above are not where the installer wants them to be:


_mysql.c:36:23: error: my_config.h: No such file or directory
_mysql.c:38:19: error: mysql.h: No such file or directory
_mysql.c:39:26: error: mysqld_error.h: No such file or directory
_mysql.c:40:20: error: errmsg.h: No such file or directory

If you look up a few lines, you might see something like this:


building '_mysql' extension
gcc-4.0 -fno-strict-aliasing -fno-common -dynamic -g -O2 -DNDEBUG -g -O3 -arch i386 -Dversion_info=(1,2,3,'final',0) -D__version__=1.2.3 -I/Applications/MAMP/Library/include/mysql -I/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c _mysql.c -o build/temp.macosx-10.5-i386-2.7/_mysql.o -fno-omit-frame-pointer -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ -DIGNORE_SIGHUP_SIGQUIT -DDONT_DECLARE_CXA_PURE_VIRTUAL

Note how we see “/Applications/MAMP/Library/include/mysql” in there. When I quickly followed some instructions online that had all the files in /include — and not in that subdir — this error popped up. Once I changed the directory structure to match what my mysql_config wanted, the package installed perfectly.

### Ruby Gem

Assuming you’re using bundler:


bundle config build.mysql \
--with-mysql-include=/Applications/MAMP/Library/include/mysql/ \
--with-mysql-lib=/Applications/MAMP/Library/lib \
--with-mysql-config=/Applications/MAMP/Library/bin/mysql_config

and then before you do a `bundle install`, set the env vars


> export ARCHFLAGS="-arch x86_64"

or


> export ARCHFLAGS="-arch $(uname -m)"

## Test it

If things install nicely, let’s make sure it works…


ipod:~ jvanasco$ python
>>> import _mysql

Oh, crap:


Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "build/bdist.macosx-10.5-i386/egg/_mysql.py", line 7, in <module>
  File "build/bdist.macosx-10.5-i386/egg/_mysql.py", line 6, in __bootstrap__
ImportError: dlopen(/Users/jvanasco/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.5-i386.egg-tmp/_mysql.so, 2): Library not loaded: libmysqlclient.18.dylib
  Referenced from: /Users/jvanasco/.python-eggs/MySQL_python-1.2.3-py2.7-macosx-10.5-i386.egg-tmp/_mysql.so
  Reason: image not found

Basically what’s happening is that as you run it, mysql_python drops a shared object in your userspace. That shared object references the location where the Mysql.org distribution places its library files — which differs from where we placed them in MAMP.

There’s a quick fix — add this to your bash profile, or run it before starting mysql/your app:

export DYLD_LIBRARY_PATH="$DYLD_LIBRARY_PATH:/Applications/MAMP/Library/lib/mysql"
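
With that in place, a quick sanity check looks something like this ( a sketch: the socket path comes from the mysql_config output above, while root/root is only MAMP's usual default login — adjust it to your setup ):

# verify_mysql.py - confirm the module loads and can reach MAMP's MySQL server
import MySQLdb

# the socket path is taken from the mysql_config output above;
# root/root is MAMP's usual default login -- change it if yours differs
conn = MySQLdb.connect(unix_socket="/Applications/MAMP/tmp/mysql/mysql.sock",
                       user="root", passwd="root")
cursor = conn.cursor()
cursor.execute("SELECT VERSION()")
print(cursor.fetchone())   # e.g. ('5.1.44',), matching the mysql_config output above
conn.close()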

# Conclusion

There are too many posts on this subject matter to thank. A lot of people posted variations of this method – to which I’m very grateful – however no one addressed troubleshooting the Python process, which is why I posted this.

I also can’t stress enough the simple fact that if the MAMP distribution contained the header and built library files, none of this would be necessary.

Facebook Developer Notes – Javascript SDK and Asynchronous Woes

I’m quickly prototyping something that needs to interact with Facebook’s API and got absolutely lost by all their documentation – which is plentiful, but poorly curated.

I lost a full day of time trying to figure out why my code wasn’t doing what I wanted it to do, trying to understand how it works so I could figure out what I was actually telling it to do. I eventually hit the “ah ha!” moment where I realized that by following the Facebook “getting started” guides, I was telling my code to do embarrassingly stupid things. This all tends to dance around the execution order, which isn’t really documented at all. Everything below should have been very obvious — and would have been obvious, had I not gone through the “getting started” guides, which really just throw you off track.

Here’s a collection of quick notes that I’ve made.

## Documentation Organization

Facebook has made *a lot* of API changes over the past few years, and all the information is still up on their site… and out on the web. While they’re (thankfully) still supporting deprecated features, their documentation doesn’t always say which is the preferred method – and the countless 3rd party tutorials and StackOverflow activity don’t either. The “Getting Started” documentation and on-site + github code samples also don’t tie together well with the API documentation. If you go through the tutorials and demos, you’ll see multiple ways to handle a login/registration button… yet none seem to resemble what is going on in the API. There’s simply no uniformity, consistency, or ‘official recommendations’.

I made the mistake of going through their demos and trying to “learn” their API. That did more damage than good. Just jump into the [Javascript SDK API Reference Documentation](https://developers.facebook.com/docs/reference/javascript/) itself. After 20 minutes reading the API docs themselves, I realized what was happening under the hood… and got everything I needed to do working perfectly within minutes.

## Execution Order

The Javascript SDK operates in the following manner:

1. Define what happens on window.fbAsyncInit – the function the SDK will call once Facebook’s javascript code is fully loaded. This requires, at the very least, calling the FB.init() routine. FB.init() registers your app against the API and allows you to actually do things.
2. Load the SDK. This is the few lines of code that start with “(function(d){ var js, id = ‘facebook-jssdk’;…”.
3. Once loaded, the SDK will call “window.fbAsyncInit”
4. window.fbAsyncInit will call FB.init() , enabling the API for you.

The important things to learn from this are:

1. If you write any code that touches the FB namespace _before_ the SDK is fully loaded (Step 3), you’ll get an error.
2. If you write any code that touches the FB namespace _before_ FB.init() is called (Step 4), you’ll get an error.
3. You should assume that the entire FB namespace is off-limits until window.fbAsyncInit is executed.
4. You should probably not touch anything in the FB namespace until you call FB.init().

This means that just about everything you want to do either needs to be:

1. defined or run after FB.init()
2. defined or run with some sort of callback mechanism, after FB.init()

That’s not hard to do, once you actually know that’s what you have to do.

## Coding Style / Tips

The standard way to integrate the Facebook API is to drop in a few lines of script. The problem is that the how and why of this isn’t documented well, and is not linked to properly on their site. Unless you’re trying to do exactly what the tutorials are for – or want to write specific Facebook API code on every page – you’ll probably get lost trying to get things to run in the order that you want.

Below I’ll mark up the Facebook SDK code and offer some ideas on how to get coding faster than I did… I wasted a lot of time going through the Facebook docs, reading StackOverflow and reverse engineering a bunch of sites that had good UX integrations with Facebook to figure this out.

// before loading the Facebook SDK, load some utility functions that you will write

One of the more annoying things I encountered is that Facebook has that little, forgettable line in their examples that reads:

// Additional initialization code here

You might have missed that line, or not understood its meaning. It’s very easy to do, as it’s quite forgettable.

That line could really be written better as:

// Additional initialization code here
// NEARLY EVERYTHING YOU WRITE AGAINST THE FACEBOOK API NEEDS TO BE INITIALIZED / DEFINED / RUN HERE.
// YOU EITHER NEED TO INCLUDE YOUR CODE IN HERE, OR SET IT TO RUN AFTER THIS BLOCK HAS EXECUTED ( VIA CALLBACKS, STACKS, ETC ).
// (sorry for yelling, but you get the point)

So, let’s explore some ways to make this happen…

In the code above I called fb_Utils.initialize() , which would have been defined in /js/fb_Utils.js (or any other file) as something like this:

// grab a console for quick logging
var console = window['console'];


// i originally ran into a bunch of issues where a function would have been called before the Facebook API inits.
// the two ideas i had were to either:
// 1) pass calls through a function that would ensure we already initialized, or use a callback to retry on intervals
// 2) pass calls through a function that would ensure we already initialized, or pop calls into an array to try after initialization
// seems like both those ideas are popular, with dozens of variations on each used across popular sites on the web
// i'll showcase some of them below

var fb_Utils= {
	_initialized : false
	,
	isInitialized: function() {
		return this._initialized;
	}
	,
	// wrap all our facebook init stuff within a function that runs post async, but is cached across the site
	initialize : function(){
		// if you wanted to , you could migrate into this section the following codeblock from your site template:
		// -- FB.init({
		// --    appId : 'app_id'
		// --    ...
		// -- });
		// i looked at a handful of sites, and people are split between calling the facebook init here, or on their templates
		// personally i'm calling it from my templates for now, but only because i have the entire section driven by variables


		// mark that we've run through the initialization routine
		this._initialized= true;

		// if we have anything to run after initialization, do it.
		while ( this._runOnInit.length ) { (this._runOnInit.pop())(); }
	}
	,
	// i checked StackOverflow to see if anyone had tried a SetTimeout based callback before, and yes they did.
	// link - http://facebook.stackoverflow.com/questions/3548493/how-to-detect-when-facebooks-fb-init-is-complete
	// this works like a charm
	// just wrap your facebook API commands in a fb_Utils.ensureInit(function_here), and they'll run once we've initialized
	ensureInit :  function(callback) {
		if(!fb_Utils._initialized) {
			setTimeout(function() {fb_Utils.ensureInit(callback);}, 50);
		} else {
			if(callback) { callback(); }
		}
	}
	,
	// our other option is to create an array of functions to run on init
	_runOnInit: []
	,
	// we can then wrap items in fb_Utils.runOnInit(function_here), and they'll either run immediately (if we've initialized) or get queued to run once initialization completes
	runOnInit: function(f) {
		if(this._initialized) {
			f();
		} else {
			this._runOnInit.push(f);
		}
	},
	// a few of the Facebook demos use a function like this to illustrate the api
	// here, we'll just wrap the FB.getLoginStatus call , along with our standard routines, into fb_Utils.handleLoginStatus()
	// the benefit/point of this, is that you have this routine nicely compartmentalized, and can call it quickly across your site
	handleLoginStatus : function(){
			FB.getLoginStatus(
				function(response){
					console.log('FB.getLoginStatus');
					console.log(response);
					if (response.authResponse) {
						console.log('-authenticated');
					} else {
						console.log('-not authenticated');
					}
				}
			);
		}
	,
	// this is a silly debug tool , which we'll use below in an example
	event_listener_tests : function(){
		FB.Event.subscribe('auth.login', function(response){
		  console.log('auth.login');
		  console.log(response);
		});
		FB.Event.subscribe('auth.logout', function(response){
			  console.log('auth.logout');
			  console.log(response);
		});
		FB.Event.subscribe('auth.authResponseChange', function(response){
			  console.log('auth.authResponseChange');
			  console.log(response);
		});
		FB.Event.subscribe('auth.statusChange', function(response){
			  console.log('auth.statusChange');
			  console.log(response);
		});
	}
}

So, with some fb_Utils code like the above, you might do the following to have all your code nicely asynchronous:

1. Within the body of your html templates, you can call functions using ensureInit()

fb_Utils.ensureInit(fb_Utils.handleLoginStatus)
fb_Utils.ensureInit(function(){alert("I'm ensured, but not insured, to run sometime after initialization occurred.");})

2. When you activate the SDK – probably in the document ‘head’ – you can decree which commands to run after initialization:

window.fbAsyncInit = function() {
	// just for fun, imagine that FB.init() is located within the fb_Utils.initialize() function
	FB.init({});
	fb_Utils.runOnInit(fb_Utils.handleLoginStatus)
	fb_Utils.runOnInit(function(){alert("When the feeling is right, i'm gonna run all night. I'm going to run to you.");})
	fb_Utils.initialize();
};

## Concluding Thoughts

I’m not sure if I prefer the timeout based “ensureInit” or the stack based “runOnInit” concept more. Honestly, I don’t care. There’s probably a better method out there, but these both work well enough.

In terms of what kind of code should go into the fb_Utils and what should go in your site templates – that’s entirely a function of your site’s traffic patterns — and your decision of whether-or-not a routine is something that should be minimized for the initial page load or tossed onto every visitor.

OMG! Apple is trying to patent someone's app! [ no they're not ]

A tumblr posting just popped up on my radar about Apple trying to patent an app that is identical to one by the company Where-To [Original Posting Here]

The author shows an image comparing a line drawing in Apple’s patent to a screenshot of an application called “Where-To”. The images are indeed strikingly similar.

The author then opens:
> It’s pretty easy to argue that software patents are bad for the software industry.

Well yes, it is pretty easy to argue that. It’s also pretty easy to argue that Software Patents are really good for the software industry. See, you can cherry-pick edge cases for both arguments and prove either point. You can make an easy argument out of anything, because it’s easier to do that and argue from black-and-white philosophical beliefs than it is to think about complex systems.

That’s a huge problem with bloggers though – they don’t like to think. They just like to react.

The author continues:

>Regardless of where you stand on that issue, however, it must at least give you pause when Apple, who not only exercises final approval over what may be sold on the world’s largest mobile software distribution platform, but also has exclusive pre-publication access (by way of that approval process) to every app sold or attempted to be sold there, quietly starts patenting app ideas.

> But even if you’re fine with that, how about this: one of the diagrams in Apple’s patent application for a travel app is a direct copy, down to the text and the positions of the icons, of an existing third-party app that’s been available on the App Store for years.

Believe it or not this happens ALL THE TIME. It’s not uncommon to see major technology companies have images from their biggest competitors in their patent diagrams. Patent diagrams are meant to illustrate concepts, and if someone does something very clear — then you copy it. So you might see a Yahoo patent application that shows advertising areas that read “Ads by Google” ( check out the “interestingness” application Flickr filed a few years ago ), or you might have an Apple patent application that shows one very-well-done user interface by another company being used as an example to convey an idea. This isn’t “stealing” ( though I wonder how someone can argue both against and for intellectual property in the same breath ) – it’s just conveying a concept. Conveying a concept or an interface in a patent doesn’t mean that you’re patenting it, it just means you’re using it to explain a larger concept.

The blogger failed to mention a few really key facts:

1. This was 1 image out of 10 images.
2. Other screenshots included a sudoku game, an instant message, a remote control for an airline seat’s console, a barcoded boarding pass, and a bunch of other random things.
3. The Patent Application is titled “Systems And Methods For Accessing Travel Services Using A Portable Electronic Device” — it teaches about integrating travel services through a mobile device. Stuff like automating check-in, boarding, inflight services and ground options for when you land. The Where-To app shows interesting things based on geo-location.

You don’t need to read the legalese claims to understand the two apps are entirely unrelated — you could just read the title, the abstract, or the layman’s description. If someone did that, they might learn this was shown as an interface to navigate airport services:

> In some embodiments, a user can view available airport services through the integrated application. As used herein, the term “airport services” can refer to any airport amenities and services such as shops, restaurants, ATM’s, lounges, shoe-shiners, information desks, and any other suitable airport services. Accordingly, through the integrated application, airport services can be searched for, browsed, viewed, and otherwise listed or presented to the user. For example, an interface such as interface 602 can be provided on a user’s electronic device. Through interface 602, a user can search for and view information on the various airport services available in the airport.

Apple’s patent has *nothing* to do with the design or functionality of the Where-To app. They’re not trying to patent someone else’s invention, nor are they trying to patent a variation of the invention or any portion of the app. They just made a wireframe of a user interface that they liked (actually, it was probably their lawyer or draftsman) to illustrate an example screen.

One of 2 things happened:

1. The blogger didn’t bother reading the patent, and just rushed to make conclusions of his own.
2. The blogger read the patent, but didn’t care — because there was something in there that could be controversial.

Whichever reason it was doesn’t matter — both illustrate my underlying point that 99% of people who are talking about software patents should STFU because they’re unable or unwilling to address complex concepts. Whenever patent issues come up, the outspoken masses have knee-jerk reactions based on ideology (on all sides of the issue), and fail to actually read or investigate an issue.

There was even a comment where someone noted:

> Filing date is December 2009….which means Apple’s priority date is December 2008. From what I can see, this app went on sale in mid 2009….going to be hard to argue it is prior art.

They didn’t bother reading the application either. On the *very first line*, we see:

> [0001]This application claims the benefit of U.S. Provisional Patent Application No. 61/147,644, filed on Jan. 27, 2009, which is hereby incorporated by reference herein in its entirety.

How the commenter decided that *December 2008* was a priority date bewilders me. The actual priority date is written in that very first line! They also brought up the concept of ‘Priority’ – which is interesting because it suggests they understand how the USPTO works a bit. “Priority” lets an applicant use an earlier date as their official filing date under certain conditions — either a provisional application is turned into a non-provisional application, or a non-provisional application is split into multiple applications. In both of these cases no new material can be submitted to the USPTO after the ‘priority date’ – it’s just a convenient way to let inventors file information about their invention quickly, and have a little more time to get the legal format into full compliance. A provisional application does have 1 year to be turned into a non-provisional application — but there’s no backwards clock to claim priority based on your filing date.

I’ve been growing extremely unsatisfied with Apple over the past few years, and I’d love to see them get ‘checked’ by the masses over an issue. Unfortunately, there is simply no issue here.

*Update: The brilliant folks at TechCrunch have just stoked the fire on this matter too, citing the original posting and then improperly jumping to their own conclusions. They must be really desperate for traffic today. Full Article Here*

And the biggest Brand mistake of the month goes to — Target.

Congratulations to Target on being the dumbest Brand of the month — possibly the year.

After the Supreme Court decision that rendered corporate campaign contributions legal and limitless, Target made a contribution to a Minnesota politician named Tom Emmer. Emmer is against gay marriage — and while I disagree with his beliefs — he does have a right to them.

Target’s contribution, however, has created a serious issue for their brand that may snowball out of control. While many politicians are smart enough to avoid hot-button issues like marriage – for both electability and contributions – Emmer embraces them. Instead of making donations to a generic candidate who happens to oppose Gay Rights, Target stupidly entered the fray of the Gay Marriage debate by funding someone who is actively campaigning against them. Brilliant.

To make things even worse, Emmer is a supporter ( both financially and personally ) of Bradlee Dean, an unconventional minister / rock musician with some fairly extreme views on homosexuality, including supporting the practice of executing gays and lesbians.

So Target contributed money to Emmer, Emmer said some things that are offensive to many of their customers, and then Emmer gave some money in turn to Dean, who said things that are beyond offensive to even more of their customers. That’s a fine mess they’re in.

Target is going to be giving tons of money to hundreds of candidates, because we live in a society where cash contributions mean political access and favors. Few people will have the foresight, or ability, to figure out which of the people they need to support to get some patronage are – for lack of a better phrase – polarizing assholes. This is a sad fact, but it’s unavoidable.

Anyone in PR and branding with half a brain knows that mistakes happen and people can forgive. But instead of condemning the situation, saying “This is awful – as are the comments”, backstepping out of the situation, and then making a 10x contribution to a politically related yet entirely non-offensive charity ( like a halfway house for at-risk LGBT teens ), Target said nothing. Days later they issued a statement that basically says “So what? Deal with it. We’ll contribute equally to politicians on both sides as we see fit, and this isn’t our fault.”.

Sorry, that’s not good enough. In fact this is bad, downright stupid, and will hurt the Target brand dearly. Instead of distancing themselves from hate-speech and a politicized situation, Target is defending their actions. Consumers are now becoming outraged not only at the politics of the situation, but the arrogance of the corporate stance.

In a few weeks, Target will probably be forced to make amends and have a press conference where they apologize for hurting customers while insisting they did no real wrong, and then make some sort of token goodwill gesture or contribution. It will be a touching moment that is perfectly executed after being orchestrated by a PR fix-it consultancy, along with a gay lobby group that makes them realize how severely this can hurt the brand and bottom-line. Unfortunately this will be a forced moment – and one that should have come much sooner.

Making contributions to candidates is a dangerous game; your brand can become tied up in political nightmares no one should face. Most large contributors are smart enough to donate to Political Action Committees (PACs) that are rather nebulous — Save the Earth, Save the Environment, Save the Puppies, etc — and then let them deal with funneling money to political campaigns. In fact, many PACs are nothing but intermediaries and shell groups designed to make political contributions to candidates with controversial stances non-offensive. Contributions like this can ensure candidates get their payoffs, and contributors get their patronage. Why Target strayed from this puzzles me.

Target injected its brand into a heated political topic, and shouldn’t have. Target had a lot of opportunities to backstep and pull out and they didn’t – in fact, they made things much worse. The subject matter of the debate is irrelevant — this could have been healthcare, sick puppies, immigration, or really anything — a mass-market brand should always come across as politically neutral.

Don't Get Too Excited About the LOC's Copyright Decisions

Today the Library of Congress announced new laws ( perhaps more accurately, interpretations of existing laws, as their rulings created ‘exemptions’ to the DMCA ) designed to strengthen the concept of “Fair Use” as it applies to the corpus of U.S. Copyright law.

The LOC’s decisions are both shocking and enlightening — few expected such an interpretation could ever be possible given the extensive amount of lobbying special interests spend before lawmakers.

Honestly, while I’d agree that their decisions are “correct” and within the spirit of the law, I’m completely fucking floored they had the balls to do the right thing. This is – undoubtedly – a HUGE day in U.S. law.

In a nutshell, the Library of Congress said that it does not violate the DMCA or Copyright Law to circumvent digital protections — that is to say that one is free to descramble a DVD for legal use, jailbreak a digital device (i.e. an iPhone), or circumvent a hardware dongle for legally obtained software. For years people have said that a common-sense and fair interpretation of the law should allow for these things — but industry lobbies used highly paid lawyers with bizarre reasoning and countless campaign donations to influence the development of laws to suit their interests.

While I’m very excited about this win for democracy and fairness, I’m not entirely sure that the decisions are anything to be excited about in terms of ‘resolution’ to these issues.

While the Library of Congress has clarified the law to allow for these types of uses as *not* a violation of Copyright , they have not (nor are they probably able to) ensured that these are rights that may not be given up through contract law.

For decades, lawyers have relied upon contract law to make up for deficiencies in copyright law – creating new protections for their clients by sidestepping any arguments around copyright. For example, while it would not be a violation of US Copyright Law (under the new interpretations) for a user to modify Apple’s software, there could exist a contractual clause — like an End User License Agreement [EULA] or Terms of Service [TOS] between a consumer and Apple or their cellphone carrier to make modification of the device prohibited. Apple could then sue customers based not on Copyright, but on Contract Law.

If you don’t think these types of contracts would come into play, look at the full text of TOS and EULA of software that you buy… or websites that you use like Facebook or MySpace. You might note numerous passages that talk about who can access the servers and under what conditions — large media companies like these routinely use Contract law to chip away at access to fair-use content. Expecting industries to become more relaxed at this practice, while they lose certain copyright protections they believed they had, is nothing short of ridiculous.

It would have been truly remarkable if Congress were to ensure that people have irrevocable rights to circumvent copy protections and modify devices — rights that can not be given up or outlawed within any contract. Sadly we don’t have that yet. However, this decision also means that the numerous lawsuits that the media lobby might bring up in these areas would not be in federal courts and handled by federal investigative agencies — but that they would be in civil courts with the plaintiffs responsible for their entire bill. I’ll drink to that!

Everyone's talking about the need for a privacy oriented Open Source solution for an open social graph

And a lot of people are asking me “Weren’t you doing that four years ago?”

Well yes, I was. In fact I still do.

My company, FindMeOn, open-sourced a lot of technology that enables a private, security-based open social graph back in 2006.

The [findmeon node standard](http://findmeon.org/projects/findmeon_node_standard/index.html) allows people to create ad-hoc links between nodes in a graph. Cryptographic key signing allows publicly unconnected links to be verifiably joined together to trusted parties.

Our commercial service manages node generation and traversing the graph. Even when using an account linked to a third party such as ourselves, privacy is maintained.

– [A syntax highlighted example is on the corporate site](http://findmeon.com/tour/?section=illustrated_example)
– [The way the commercial + open source stuff melds is explained in this image](http://findmeon.com/tour/?section=abstracted)

There’s also a bunch of graphics and images related to security based inter-network social graphs on my/our Identity research site. A warning though, half of it is about monetizing multi-network social graphs:

– [IdentityResearch](http://www.destructuring.net/IdentityResearch)