Each quarter, I try to put out a competitor analysis document for the main product I work on, GeoService. It’s a little bit special for the company, and it uses a framework I came up with about a year ago.
The sales teams like bits of this, as does, it turns out, our board of directors. Certain readers will always want feature comparisons.
I try to warn people away from those, as they’re not very effective and they get the business too focused on shipping features instead of outcomes.
“It’s not an arms race,” I usually say.
This is how I put it together.
1. Start with a chairman’s letter
I got this idea a few months back when I was involved in our IPO process. I spent a long time with the rest of the product and design team working on our prospectus, and at the front of that document is something called a chairman’s letter.
At the beginning of our competitor analysis document, I figured I could do the same kind of thing.
I write something similar to the rest of the company about the state of play: what’s changing, what they need to be aware of, and what the overall vision is for each product.
I have even toyed with the idea of talking about why we think the OKRs for the teams will help us address some of the challenges listed, just to ensure everyone is aligned.
This part of the letter is something that the sales team really requested. Something to get excited about. So I write the letter with the aim of setting the tone and giving some confidence that what we’re doing isn’t crazy.
One thing I covered in our recent letter is what Warren Buffett calls a wide moat of competitive advantage. As a product manager, I think about this endlessly, and our goal is to have a stream of people working on features that focus on this principle.
What’s a wide moat? Something that will create a moat around your business and protect it from competition. This is straight out of Peter Theil’s playbook. The theory goes that for something to fit the definition, it needs to be truely useful, hard to replicate, have an element of intellectual property, and of a magnitude of 10x better than the competition.
You want to build a monopoly, not a competitor.
2. Primary vs Secondary Competitors
I think it’s easy to get caught up in who your competition is, and generally speaking, I don’t think you should worry about them much at all. Focus on the customer, not the competition.
In that regard, I think Jason Fried has the right idea. Focus on the customers, and the rest will fall into line.
But for the purposes of putting out a document like this, I think it’s important to make a distinction between two groups: primary, the ones you come across a lot, and secondary, the ones you have to really hunt for.
There is probably room to include true disruptors too. Uber is a competitor to a taxi, sure, but teleportation might be the true black swan.
I like to pick the top five for the primary group. By top, I mean the ones that come up most in sales calls when people are comparing solutions, and the ones that come up in churn data, the ones you might be losing customers to. But here is the special bit, and we’ll cover this later: I have a way to work out who the true top five are, and that’s what really counts. Kind of…
3. A holistic comparison
There are lots of ways to compare (and evaluate) a company. This is by no means perfect, but I have found it represents the most honest and customer-centric view, and the people who’ve seen this document say it’s pretty good. Keep in mind, this might not work for all industries or all products.
First you have sections of interest:
- Growth, product, service, leadership.
Then, in each section, you have attributes: 19 things that matter.
1. Market share (What % of the market do they own? If you don’t know this, it’s pretty simple to estimate: find their user numbers, find the overall size of the market, then express one as a percentage of the other. i.e. if there are 100,000 total customers in the market and your competitor has 5,000 users, then your competitor owns 5% of the total market share. Then, to score it out of 5, you plot all of the competitors on a bell curve and put them in groups; the lowest group scores 1, the top group scores 5.)
2. Employee-to-user ratio (This is my favourite metric and the one I’m most proud of. I remember reading Benjamin Graham’s The Intelligent Investor a while back and realising the importance of return on equity. This number is a reflection of that. The idea is that if you have a team of 5 and your competitor has a team of 50, and you can land 100,000 users with that many people while your competitor only has 5,000 users, then your ratios are better. You represent it like this:
1:20,000. So for every 1 employee, you earn 20,000 users.
It’s kind of like a technical user version of return on equity. I call it the ETU ratio.)
3. Enterprise support (Can they do big accounts? This can matter over the long term, because bigger accounts mean bigger ETU scores and it’s more likely they will become profitable faster.)
4. High-profile accounts (Do they have any whales? Big whale accounts can attract a better return on marketing spend. Small ones can also be a detractor for enterprise customers: if you are BP and you visit a vendor’s website and the customer logos are all small businesses you’ve never heard of, you might be scared away.)
5. Usability (Is it easier to use? Careful with this one. Be objective; lots of bias danger here.)
6. On-boarding (Is it easy to learn in the first few days? Very different to usability. The same bias dangers apply.)
7. Mobility coverage (how many mobile platforms do they cover)
8. App satisfaction (How do customers, and perhaps the agency that works for them, rate their apps? Represented as an average.)
9. Integrations (What’s their coverage like? Unless you’re Slack, everyone tends to score low here.)
10. Feature set (Do they have heaps of coverage on the feature matrix? Sometimes having lots of functionality is useful, but we weight it lower than other attributes. Less is, indeed, more.)
11. Affordability (This is different to price. Does it seem affordable, given everything you know about it?)
12. Overall product quality (This is more or less a proxy for NPS data. The best way to get an accurate representation is to bring potential customers into a room, show them the top 5 products, and have them rank them. But that can be a bit tricky to organise.)
13. Support Response times (how long does it take to get a response on a problem.)
14. Support coverage (Do they do 24x7, that kind of thing?)
15. Velocity (How fast can they ship software? One that’s hard to measure, but if you put in the work you can do it. The way I do it is to go through blogs and release notes for each release for the last few months or year, and get a sense of how many BIG features they ship on the regular. A startup I think does this best is Product Board. They seem to be able to ship huge features of high quality, often.)
16. Innovation (How innovative are they? Some companies just work on the feature arms race, and others pursue what Peter Thiel would call zero-to-one ideas. Companies that are innovative tend to perform better over the long run.)
17. Management (What is the quality of their management like? This is another one from The Intelligent Investor: the quality of a company’s management is paramount. It is also difficult to score without bias. To do this, I usually look at past performance and tenure. Have the management and board worked on big startups before? Is there a track record of delivery, or are they learning as they go? Glassdoor reviews can speak volumes here. Also, if the company is public, I like to get data from Simply Wall.st. It’s a great app that turns a lot of this data into a simple infographic.)
18. Industry focus (How focused are they on a market? The more a company narrows its focus, the better it performs, generally speaking. I like companies that know who they are.)
19. Company age (How old is it? New companies tend to be faster and more focused. They also tend to be focused on the customer, whereas after about 7–8 years, companies tend to become more internally focused, trying to work out how to fix things inside. Young is good, old is good; the middle is probably where you suffer the most. There are exceptions to this everywhere, so again, we weight this low. The only reason we included this attribute is that when we interviewed customers, it was important to them.)
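Two of the attributes above are purely arithmetic, so here is a minimal sketch of how they could be computed. All user counts and the market size are invented for illustration, and the 1–5 scoring uses a simple rank-based grouping as a rough stand-in for the bell-curve grouping described in attribute 1.

```python
market_size = 100_000  # total customers in the market (your estimate)

# Invented user counts for illustration only.
competitor_users = {
    "Competitor 1": 5_000,
    "Competitor 2": 22_000,
    "Competitor 3": 1_200,
    "Competitor 4": 14_000,
    "Us": 9_000,
}

# Attribute 1: market share as a % of the total market,
# then scored 1-5 by rank group (lowest group 1, top group 5).
shares = {name: users / market_size * 100 for name, users in competitor_users.items()}
ranked = sorted(shares, key=shares.get)  # lowest share first
scores = {name: 1 + (i * 5) // len(ranked) for i, name in enumerate(ranked)}

# Attribute 2: the ETU (employee-to-user) ratio, formatted as "1:N".
def etu_ratio(employees: int, users: int) -> str:
    """Users served per employee."""
    return f"1:{users // employees:,}"

for name in competitor_users:
    print(f"{name}: {shares[name]:.1f}% share, score {scores[name]}/5")
print(etu_ratio(5, 100_000))  # a 5-person team with 100,000 users -> 1:20,000
```

The grouping here just splits the ranked companies into even buckets; with real data you would want to sanity-check the groups by eye before scoring.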
4. Then comes the scary bit
You score your company too. Then I display two sets of spider (radar) charts side by side, to show a holistic representation of how you and the competition compare across a wide range of areas.
5. Weight the attributes
So you have 19 attributes you are scoring. Nice work. But the problem is they aren’t all equal, and it is hard to work out how much weight to put on each one.
For instance, overall product quality is usually far more important than support coverage. But by how much?
To start, you won’t really know.
The best way to get this is to ask customers directly. We went out to our customers, and potential customers, and asked them how important each attribute was. This gave us a baseline.
Let’s say you give a weighting of 4x to the attribute overall product quality, and one of your competitors scores 3 out of 5 for that attribute.
You simply take 3 (their score) and multiply it by 4 (their weighting), giving you a weighted score of 12 for that particular attribute.
Your list might look like this:
Overall product quality = 4x (score 3) = 12
Company age = 3x (score 5) = 15
Usability = 10x (score 2) = 20
Affordability = 3x (score 5) = 15
Total competitor score = 62
This is what I call the primary score.
- So what you’ll end up with is spider charts that compare areas,
- and primary scores telling you who to watch out for.
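The weighting arithmetic is just a weighted sum, which can be sketched in a few lines. The weights and scores below mirror the worked example above.

```python
# Customer-derived weightings for each attribute (from the example above).
weights = {
    "Overall product quality": 4,
    "Company age": 3,
    "Usability": 10,
    "Affordability": 3,
}

# One competitor's raw scores out of 5 for the same attributes.
competitor_scores = {
    "Overall product quality": 3,
    "Company age": 5,
    "Usability": 2,
    "Affordability": 5,
}

def primary_score(scores: dict, weights: dict) -> int:
    """Sum of (attribute score x attribute weight) across all attributes."""
    return sum(weights[attr] * s for attr, s in scores.items())

print(primary_score(competitor_scores, weights))  # 12 + 15 + 20 + 15 = 62
```

In practice you would run this once per competitor (and once for yourself) over all 19 attributes, not just the four shown here.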
6. Why primary scores give you your true competitors
For each company, I use the above attributes and their weights to issue a primary score. My theory goes that the primary scores and the primary competitors should correlate pretty closely. I’ve always wanted to try to build an algorithm or something that removes the anomalies and outliers to make that true, but baby steps, I guess.
Another way to look at it: the primary scores are your primary competitors. If the ones you wrote down are different from the ones the scores produce, then either the sales stream is giving you the wrong intel, or you’re competing so far down the ladder that the real competitors don’t even know who you are, or my model is wrong, which is possible too.
Or you are picking the wrong attributes. The best way to work out which attributes are important is, again, to ask your customers.
So, you show the primary score evaluations like this:
Competitor 1 = 739 (Rank #1)
You = 721 (Rank #2)
Competitor 2 = 451 (Rank #3)
Competitor 3 = 412 (Rank #4)
Competitor 4 = 398 (Rank #5)
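Under the hood, the ranked display is just a descending sort by primary score. A quick sketch, using the made-up figures from the list above (with the last entry labelled Competitor 4):

```python
# Primary scores for each company (invented figures for illustration).
primary_scores = {
    "Competitor 1": 739,
    "You": 721,
    "Competitor 2": 451,
    "Competitor 3": 412,
    "Competitor 4": 398,
}

# Sort descending by score and print the ranked display.
ranking = sorted(primary_scores.items(), key=lambda kv: kv[1], reverse=True)
for rank, (name, score) in enumerate(ranking, start=1):
    print(f"{name} = {score} (Rank #{rank})")
```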
7. Feature Comparison
Next you do the thing you shouldn’t do: you compare features, as if that were a useful comparison. Because no matter what, someone will ask for it. So you may as well do it.
8. Then finally
You do something useful: you write up genuinely useful advice for the sales team so they can compete. I write it like this:
Advantages they have
They have a killer reporting feature.
Weaknesses they have
They are slow to respond to customers, so yeah…
How we position ourselves against them
We do this, because it’s good. They don’t do this, and that’s bad for you.
How we define them in a tweet
One of my favourite books is “Eating the Big Fish”, about building a challenger brand. A comment in the book rings true here: you want to define your brand, because if you don’t, your competitors will. And this is the opportunity to do just that: a quick sentence, about the size of a tweet, that’s catchy and that everyone can remember.
You want to craft a clear message that defines your competition in a way that strengthens your advantages and exploits their weaknesses. The sales team can stay on message with this in competitive-displacement opportunities, and after a while, if you do it right, the market will probably echo the sentiment. Politicians call this framing the argument. It works.
How to circulate it
We put it all in a flashy InDesign document and circulate it to the entire business.
The whole process takes about a week and a half once you get your spreadsheets and stuff sorted out.
What this doesn’t cover that I wish it did.
So there are some attributes that are just too difficult to get from the outside:
- ARPU (Average Revenue Per User)
- LTV (Lifetime value)
- NPS (Net Promoter Score)
- and CAC (Cost to Acquire a Customer)
I’d love to find a way of seeing and comparing those. To be honest, they are probably much more useful than what I do anyway.
If you have any feedback I’d love to hear it.