Tuesday, May 19, 2009

Our next contestant comes from the telecom world.
The group was evaluating marketing campaigns over the course of several years. Does an attrition-prevention campaign have any effect after three years? This is an absolutely wonderful question to ask, of course; the problem was how they went about answering it.
The campaigns were a series of mailings that went out to customers who were about to go off contract, and the offer was a monetary reward for renewing the contract for a year. Each campaign had a carefully selected control group.
The dead-obvious thing to do is to compare the treatment group to the control group, but that's not what got done. Instead, the analysis compared the whole control group to only those customers in the treatment group who renewed their contract, because clearly "customers that didn't renew their contract weren't affected by the campaign".
Sound familiar?
Why doing the analysis this way is a bad idea: before the mailing on contract renewal, customers already have a certain basic affinity towards the company. Some love it, some hate it, some are on the fence. When the offer arrives, the ones who already hate the company will toss it, the ones who love the company will take free money for staying with a company they like, and the ones on the fence may or may not take the offer and have their future behavior change. So, to a good extent a retention program like this isn't changing behavior; it's sorting customers into buckets based on how they already feel about the company.

Comparing "total control group" to "contract renewers" confounds two effects: the customers' pre-existing predisposition towards the company, and the effect of having some customers renew their contracts for a reward. Moreover, this comparison doesn't actually answer the real question: does the program have a meaningful, measurable impact on churn? To answer the real question the right way, Keep Things Simple and Statistical and do a straight treatment vs. control comparison.
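To make that concrete, here is a minimal sketch of the straight treatment-vs.-control read, with a pooled two-proportion z-test on the renewal rates. The counts are invented for illustration; the point to notice is that the treatment side includes everyone who was sent the mailing, renewers and non-renewers alike.

```python
import math

# Made-up counts for illustration. The treatment group is everyone who was
# *sent* the mailing, whether or not they renewed.
treat_n, treat_renewed = 50_000, 12_400
ctrl_n, ctrl_renewed = 50_000, 11_800

p_t = treat_renewed / treat_n   # renewal rate, treatment
p_c = ctrl_renewed / ctrl_n     # renewal rate, control
lift = p_t - p_c                # the campaign's measured effect

# Pooled two-proportion z-test: is the lift distinguishable from zero?
p_pool = (treat_renewed + ctrl_renewed) / (treat_n + ctrl_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / treat_n + 1 / ctrl_n))
z = lift / se

print(f"treatment {p_t:.1%} vs. control {p_c:.1%}, lift {lift:.2%}, z = {z:.2f}")
```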
Monday, May 18, 2009
How to Bozo Campaign Measurements
You know, at their heart statistical measurements are basically the easiest thing in the world to do, especially when it comes to direct marketing. Set up your test, randomly split the population, run the test, measure the results. It pretty much takes serious work to mess this up. It's amazing how many bright people leap at the chance to go the extra mile and find an inventive way to bozo a measurement.
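For the record, the "randomly split the population" step really is a few lines of code. A minimal sketch, with the control fraction and seed made up for illustration:

```python
import random

def split_population(customer_ids, control_fraction=0.5, seed=2009):
    """Shuffle the population once, up front, and cut it into two groups."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    ids = list(customer_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * control_fraction)
    return ids[cut:], ids[:cut]  # (treatment, control)

treatment, control = split_population(range(10_000), control_fraction=0.1)
```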
The first exhibit is a database expert working for a customer contact project at a bank. A customer comes in, talks to the teller, and the system 1) randomly assigns the customer to treatment or control if this is the first time the customer has hit the system, otherwise looks up the customer's existing status, and then 2) makes a suggestion for a product cross-sell. The teller may or may not use the suggestion, depending on how appropriate the teller thinks the offer is for the customer, how busy the branch is, and whether there is time available to talk to the customer.
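In code, the assignment logic might look something like the sketch below. The names and the 10% control fraction are my own illustration, not the bank's actual system.

```python
import random

assignments = {}  # customer_id -> "treatment"/"control"; stands in for a database table

def get_group(customer_id, control_fraction=0.1):
    """First contact: assign at random and remember it. Later contacts: look it up."""
    if customer_id not in assignments:
        assignments[customer_id] = (
            "control" if random.random() < control_fraction else "treatment"
        )
    return assignments[customer_id]

def handle_visit(customer_id):
    if get_group(customer_id) == "treatment":
        # Placeholder for the real cross-sell engine; the teller still decides
        # whether to actually deliver the suggestion.
        print(f"suggest a cross-sell to customer {customer_id}")

handle_visit(42)
handle_visit(42)  # same customer, same group on every visit
```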
So now we've got the simplest test/control situation possible. What the DBA decided to do was toss out all the customers where no offer was made, on the theory that if no offer was made then the program had no effect. So all the reporting was done on "total control group" vs. "treatment group that received the offer", creating a confounding effect: the teller's decision to make the offer or not was highly non-random. The kind of person who comes in at rush hour (when the teller's primary concern is handling customers and keeping wait times down) is going to be very different from the kind of person who comes in during the slow time in the middle of the afternoon.
The project team understood this confounding, that in their reporting they were mixing up two different effects, and talked for over two years about how to overcome it, when all they had to do was be lazier and report on the original random split.
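The lazier, correct report is just an intent-to-treat comparison: group customers by their original random assignment, offer delivered or not. A minimal sketch, with made-up records:

```python
# Hypothetical per-customer records: the original random assignment, whether
# the teller actually made the offer, and the outcome we care about.
customers = [
    {"group": "treatment", "offer_made": True,  "bought": True},
    {"group": "treatment", "offer_made": False, "bought": False},
    {"group": "control",   "offer_made": False, "bought": True},
    # ... one record per customer
]

def response_rate(group):
    """Response rate by *original assignment*, offer delivered or not."""
    rows = [c for c in customers if c["group"] == group]
    return sum(c["bought"] for c in rows) / len(rows)

print(f"treatment {response_rate('treatment'):.1%} "
      f"vs. control {response_rate('control'):.1%}")
```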
Friday, February 13, 2009
The Data Daemon
Apropos of "Murphy's Laws of Data", I find it useful to imagine that data is created by a little daemon whose job is to make me look like a durn fool.
Saturday, May 3, 2008
Learning from LTV at LTC: It's About Understanding
Ultimately, success is about understanding. Build teams that will take the time to understand the business and all parts of the project, where every member of the team understands the project as a whole; share this understanding in full with anybody who wants to learn, and carry this detailed understanding forward in the enterprise.
Learning from LTV at LTC: Build Complete Teams
Typically, projects are done by assembling cross-functional teams from different areas, each person with a narrow responsibility. This is a very efficient way of handling day-to-day business but an ineffective way of getting business-changing projects done. This is especially true if the project is going to be going on for a while.
The key to our success was having a complete team that could handle all phases of the project. There was no point in the project that we threw the project over the wall to another team, or caught something that another team was throwing at us. When we were working with other teams we established working relationships with them and brought those teams into the project. Every member on the LTV team could speak to all aspects of the project and have meaningful input into all aspects of the project.
Let me give an example of what can happen with fragmented, siloed teams. I was working on updating a project that had been launched several years before. There was one team that extracted the data from a datamart, another that took the data and loaded it into a staging area, and a third team that loaded the data from the staging area into the application. I asked the question: “who can guarantee that the data in the application is right?” Thunderous silence. No one could guarantee that the final data was right, or even that their own step was correct; all they could promise was that their scripts had run without obvious error.
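What was missing was any end-to-end check that the data survived the trip. Something as simple as the sketch below, which compares a row count and a control total between the source and the final application, would have let someone make that guarantee. All of the database, table, and column names here are hypothetical, and I'm assuming SQLite purely for illustration.

```python
import sqlite3

def reconcile(source_db, app_db, table, key_col, amount_col):
    """End-to-end sanity check: a row count and a control total have to
    survive the extract -> staging -> application pipeline intact."""
    def summarize(path):
        with sqlite3.connect(path) as conn:
            return conn.execute(
                f"SELECT COUNT({key_col}), COALESCE(SUM({amount_col}), 0)"
                f" FROM {table}"
            ).fetchone()

    src_count, src_total = summarize(source_db)
    app_count, app_total = summarize(app_db)
    assert src_count == app_count, f"row count drift: {src_count} vs {app_count}"
    assert src_total == app_total, f"control total drift: {src_total} vs {app_total}"
```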
If I had to give a name to this approach I'd call it the “A-Team” approach: complete functional teams that understand each other's areas.
Labels: data mining, information engineering, lifetime value, projects
Learning from LTV at LTC: Tell Everything
In a project like this the team gains a great deal of understanding about how the business works, and there is always the temptation to keep that understanding within the team. The argument I have heard is that by keeping all the details hidden, the team maintains control over the results of the project. What I've seen actually happen is that when a team tries to keep secrets, others just don't believe them.
In the LTV project we made the decision to explain every detail to anybody who asked. The result was that people had a great deal of faith in what we produced. Even if people disagreed with the decisions that we made in the project, they understood and could respect the decisions.
Labels: data mining, information engineering, lifetime value, projects
Learning from LTV at LTC: Build Understanding
Projects that change an organization demand that the project group build a substantial understanding of what the business is, what it could be, and how the project can help the business get there. That understanding needs to stay within the organization after the project is officially complete. There is a vast difference between the understanding that comes from seeing a presentation on a project and the understanding that comes from actually doing the work.
Projects that are important to the company need to be living, evolving things and that means that the detailed understanding of the project needs to stay accessible to the organization. With LTV, as soon as it came out people wanted additional work and we could do it because we knew the nuts and bolts.
Labels: data mining, information engineering, lifetime value, projects