Another installment in the "How to Bozo Simple Campaign Analysis" series. I've got a lot of them. It's amazing how inventive people get when it comes to messing up data.
Anyway, this is from a customer onboarding program. When the company got a new customer, they would call a month later to see how things were going. There was a carefully held-out control group. The reporting, needless to say, wasn't test vs. control. It was "total control" vs. "the test group that listened to the whole onboarding message". The goal was to improve customer retention.
The program directors were convinced that the "receive the call or not" decision was completely random, and that, since it was completely random, the reporting should concentrate only on those who were actually affected by the program (that one again -- it's amazing how often the idea comes up).
Clearly, the decision to respond to a telemarketing call is not random, and I have no idea what lonely neurons fired in the directors' brains to make them think it was. To start with, someone who is at home to take a call during business hours is a very different population than people who go to work. More importantly, a person who thinks highly of a company is much more likely to listen to its call than someone who isn't that fond of the company.
Unsurprisingly, the original reporting showed a strong positive result. When I finally did the test/control analysis, the result showed that there was no real effect from the campaign.
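To make the selection-bias point concrete, here is a minimal simulation sketch. The numbers and the "fondness" variable are made up for illustration, not taken from the actual program; the point is only that when something like fondness drives both "listened to the call" and retention, the "listened vs. total control" comparison shows a lift even when the call itself does nothing, while a straight test-vs-control (intent-to-treat) comparison correctly shows roughly zero.

# Illustrative simulation (invented numbers): selection bias in the
# "listened vs. total control" comparison when the true effect is zero.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent "fondness" for the company drives BOTH the chance of listening
# to the onboarding call AND baseline retention.
fondness = rng.uniform(0, 1, n)

# Random assignment to test (gets a call attempt) or control (no call).
in_test = rng.random(n) < 0.5

# Only fonder customers tend to stay on the line; the call itself does nothing.
listened = in_test & (rng.random(n) < 0.2 + 0.6 * fondness)

# Retention depends on fondness only -- zero true campaign effect.
retained = rng.random(n) < 0.4 + 0.5 * fondness

# Biased reporting: listeners vs. total control shows a sizable "lift".
print("Listened vs. total control:",
      retained[listened].mean() - retained[~in_test].mean())

# Proper reporting: everyone assigned to test vs. control shows ~0 lift.
print("Test vs. control (intent-to-treat):",
      retained[in_test].mean() - retained[~in_test].mean())

Running this, the biased comparison reports a double-digit retention "lift" while the intent-to-treat comparison comes out near zero, which is essentially the pattern described above.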