Really, really weird.
Imagine a company with 2 million customers. Reasonable-sized, not huge.
How many people do you know well? Maybe 100 people? Think about the absolute weirdest person you know. That person is roughly a one-in-a-hundred oddball. That company has customers who are literally 100 times weirder, one in 10,000, and with two million customers that works out to 200 of them.
It's a bad idea to think you know what customers are going to do without testing, measuring, and finding out.
Saturday, June 27, 2009
Bozoing Campaign Measurements V
Here's a classic: toss all negative results.
Clearly, everything we do is positive, right?
Nope. Anything that can have an effect can have a negative effect.
(I've met a number of marketing people who really, truly believe that people wait at home looking forward to their telemarketing calls, and that calling something 'viral' in a PowerPoint is enough to actually create a viral marketing campaign.)
There's another factor. Depressingly, a lot of marketing campaigns do absolutely nothing. Random noise takes over; half will be a little positive and half will be a little negative. Toss the negative results and you're left with a bunch of positive results. Add them up and suddenly you've got significant positive results from random noise. This is bad.
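Here's a quick sketch of how that plays out, with made-up numbers: fifty campaigns, every one of which does exactly nothing, measured with ordinary noise. Sum everything and you land near zero; toss the negatives first and you manufacture an impressive "lift".

```python
import numpy as np

rng = np.random.default_rng(0)

# Fifty campaigns, every one with a true effect of exactly zero.
# Each measured "lift" is pure noise (mean 0, sd 1, arbitrary units).
measured_lift = rng.normal(loc=0.0, scale=1.0, size=50)

honest_total = measured_lift.sum()                    # hovers around zero
bozo_total = measured_lift[measured_lift > 0].sum()   # negatives tossed first

print(f"All fifty summed:          {honest_total:+.1f}")
print(f"Negatives tossed, summed:  {bozo_total:+.1f}")
# The second number is always comfortably positive, even though
# every single campaign did exactly nothing.
```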
I've seen an interesting variant on this technique from a very well-paid consultant. Said VWPC analyzed 20 different campaigns and reported extensively on the one campaign that had results that were significant at a 5% level.
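For scale, here's a throwaway simulation with invented numbers, nothing from the actual engagement: twenty campaigns with zero true effect, each tested at the 5% level. Expect roughly one of them to come up "significant" anyway, which is precisely the one that gets the glossy writeup.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Twenty campaigns, none of which does anything: test and control
# both respond at a flat 10%, with 1,000 customers on each side.
significant = 0
for _ in range(20):
    test = rng.binomial(1, 0.10, 1000)
    control = rng.binomial(1, 0.10, 1000)
    _, p_value = stats.ttest_ind(test, control)
    if p_value < 0.05:
        significant += 1

print(f"'Significant' do-nothing campaigns: {significant} of 20")
# Expect about one of the twenty to clear the 5% bar by luck alone;
# that lucky campaign is the one that gets reported.
```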
Sunday, June 7, 2009
Bozoing Campaign Measurements - IV
Another installment in the "How to Bozo Simple Campaign Analysis" series. I've got a lot of them. It's amazing how inventive people get when it comes to messing up data.
Anyway, this is from a customer onboarding program. When the company got a new customer, they would give them a call in a month to see how things were going. There was a carefully held out control group. The reporting, needless to say, wasn't test and control. It was "total control" vs. "the test group that listened to the whole onboarding message". The goal was to enhance customer retention.
The program directors were convinced that the "receive the call or not" decision was completely random; and given that it was completely random, the reporting should concentrate only on those who were actually affected by the program (that idea again; it's amazing how often it comes up).
Clearly, the decision to respond to telemarketing is not random, and I have no idea what lonely neurons fired in the directors' brains to make them think it was. To start with, someone who is home to take a call during business hours belongs to a very different population than people who go to work. More importantly, a person who thinks highly of a company is much more likely to listen to its call than someone who isn't that fond of it.
Unsurprisingly, the original reporting showed a strong positive result. When I finally did the test/control analysis, the result showed that there was no real effect from the campaign.
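Here's a rough simulation of why the "total control vs. customers who listened" comparison is guaranteed to flatter the program. All the numbers below are invented for illustration; the only thing that matters is that answering the call is correlated with already liking the company, while the call itself changes nothing.

```python
import numpy as np

rng = np.random.default_rng(2)

# 100,000 new customers; 90% slated for the onboarding call, 10% held out.
# The call itself does NOTHING to retention in this simulation.
n = 100_000
likes_company = rng.random(n) < 0.5      # unobserved attitude toward the company
in_test = rng.random(n) < 0.9            # random assignment to the call program

# Customers who like the company are far more likely to sit through the call.
listened = in_test & (rng.random(n) < np.where(likes_company, 0.6, 0.2))

# Retention depends only on attitude, never on the call.
retained = rng.random(n) < np.where(likes_company, 0.90, 0.70)

bozo_test = retained[listened].mean()    # "listened to the whole message"
honest_test = retained[in_test].mean()   # everyone assigned to get the call
control = retained[~in_test].mean()      # total control group

print(f"Bozo comparison:   {bozo_test:.3f} vs. control {control:.3f}")
print(f"Honest comparison: {honest_test:.3f} vs. control {control:.3f}")
# The bozo comparison shows a healthy lift from a call that did nothing;
# the honest test-vs-control comparison shows none.
```

The point of the sketch: the people who listen to the whole message were already your happier customers, so comparing them to the total control group credits the call with retention it never caused.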