Saturday, May 3, 2008
Learning from LTV at LTC: It's About Understanding
Ultimately, success is about understanding. Build teams that take the time to understand the business and every part of the project, where each member of the team understands how all the parts fit together as a whole. Share that understanding in full with anybody who wants to learn, and carry that detailed understanding forward in the enterprise.
Learning from LTV at LTC: Build Complete Teams
Typically projects are done by assembling cross-functional teams from different areas, each person with a narrow responsibility. This is a very efficient way of handling day-to-day business but an ineffective way of getting business-changing projects done. This is especially true if the project is going to be going on for a while.
The key to our success was having a complete team that could handle all phases of the project. There was no point in the project that we threw the project over the wall to another team, or caught something that another team was throwing at us. When we were working with other teams we established working relationships with them and brought those teams into the project. Every member on the LTV team could speak to all aspects of the project and have meaningful input into all aspects of the project.
Let me give an example of what can happen with fragmented, siloed teams. I was working on updating a project that had been launched several years before. There was one team that extracted the data from a datamart, another that took the data and loaded it into a staging area, and a third team that loaded the data from the staging area into the application. I asked the question “who can guarantee that the data in the application is right”? Thunderous silence. No one could guarantee that the final data was right, or even that their step was correct; all they could promise was that their scripts had run without obvious error.
If I had to give a name to this approach I'd call it the “A-Team” approach: complete functional teams that understand each other's areas.
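To make that gap concrete, here is a minimal sketch in Python (the record layout and "balance" field are hypothetical) of the kind of end-to-end check that none of the three silos owned: reconcile record counts and a control total between the source extract and what finally landed in the application, rather than just confirming that each step's script ran.

```python
# Minimal end-to-end reconciliation sketch. The record layout and the
# "balance" field are hypothetical; the point is that one team checks
# the final application data against the original source, not just
# that each intermediate script ran without error.

def control_totals(records, amount_field="balance"):
    """Return (row count, summed control total) for a batch of records."""
    count, total = 0, 0.0
    for rec in records:
        count += 1
        total += float(rec.get(amount_field, 0.0))
    return count, total

def reconcile(source_records, application_records, tolerance=0.01):
    """Compare the source extract against what landed in the application."""
    src_count, src_total = control_totals(source_records)
    app_count, app_total = control_totals(application_records)
    ok = (src_count == app_count) and abs(src_total - app_total) <= tolerance
    return ok, {"source": (src_count, src_total),
                "application": (app_count, app_total)}

if __name__ == "__main__":
    source = [{"customer_id": 1, "balance": 120.0},
              {"customer_id": 2, "balance": -35.5}]
    loaded = [{"customer_id": 1, "balance": 120.0},
              {"customer_id": 2, "balance": -35.5}]
    ok, detail = reconcile(source, loaded)
    print("data reconciles" if ok else f"mismatch: {detail}")
```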
Labels:
data mining,
information engineering,
lifetime value,
projects
Learning from LTV at LTC: Tell Everything
In a project like this the team gains a great deal of understanding about how the business works and there is always the temptation to keep that understanding within the team. The argument I have heard is that by keeping all the details hidden then the team will maintain control over the results of the project. What I've seen actually happen is that when a team tries to keep secrets others just don't believe them.
In the LTV project we made the decision to explain every detail to anybody who asked. The result was that people had a great deal of faith in what we produced. Even if people disagreed with the decisions that we made in the project, they understood and could respect the decisions.
Labels:
data mining,
information engineering,
lifetime value,
projects
Learning from LTV at LTC: Build Understanding
Projects that change an organization demand that the project group build a substantial understanding of what the business is, what it could be, and how the project can help the business get there. That understanding needs to stay within the organization after the project is officially complete. There is a vast difference between the understanding that comes from seeing a presentation on a project and the understanding that comes from actually doing the work.
Projects that are important to the company need to be living, evolving things and that means that the detailed understanding of the project needs to stay accessible to the organization. With LTV, as soon as it came out people wanted additional work and we could do it because we knew the nuts and bolts.
Labels:
data mining,
information engineering,
lifetime value,
projects
Saturday, April 26, 2008
LTV at LTC: Learning from it: Design Rules
In software it's all about the implementation – actually writing the code. In business intelligence projects actually doing the implementation isn't that big a deal. There are lots of packages to make implementation easy compared to writing software from scratch. What that means is that business intelligence projects are all about the design, and the design team needs to be in control and actively involved in all stages of the project.
Labels:
data mining,
design,
lifetime value,
projects
Thursday, April 24, 2008
LTV at LTC: The Large Activity Based Costing (ABC) Project
During and after the LTV project, there was yet a fourth value-based project at LTC. The Finance department brought in a large consulting company to design a database for activity-based costing to help LTC get a handle on their operational expenses. The goal was to build an ABC database where a manager could look at expenses, drill down into the specific line items, and then drill into the company and customer activity that was causing those expenses and so have a clear grasp of the actions needed to manage expenses.
The project started out by having the consultants come in and hold roughly a year of large meetings on what should go into the system. This was done without considering implementation issues. At the end of the meetings a large and detailed specification was developed, which was then handed off to the LTC IT department. The LTC IT department estimated that implementation would cost several million dollars, and the project was killed right then and there.
In many respects, the ABC project was the antithesis of the LTV project.
- Instead of identifying a group within the company to build the project, an outside consultant was brought in to run the project. This meant that the understanding that comes from doing a project like this left LTC with the consultants.
- There was a complete disconnect between the design and implementation teams. This meant that implementation issues were not considered during the design, and that the design could not be modified later to take implementation factors into consideration.
- Instead of a small group working to understand the business, ABC had large meetings to poll people on their issues. This meant that every possible issue was included in the project design. Because the design was simply thrown over a fence to implementation there wasn't any negotiation over project scope to achieve what was reasonable.
Labels:
data mining,
information engineering,
lifetime value,
projects
Tuesday, April 22, 2008
LTV at LTC: The International Consulting Company (ICC)
At the same time as our project was going on, an International Consulting Company was brought in to do pretty much an identical project: lifetime value for customers. We were able to work fairly closely together and our projects wound up being very similar. The ICC team was very valuable to us in that ICC was working with the CMO directly, and so our project gained tremendous credibility through association, and to some degree confusion, with the ICC project.
ICC and our group had slightly different methodologies; ours was adopted because we had resources to deploy the results in the data warehouse and the ICC didn't.
Labels:
data mining,
information engineering,
lifetime value,
projects
Thursday, April 17, 2008
Pay Close Attention to What Everybody Tells You to Ignore
Organizations develop blind spots. The drill is: people decide something isn't important, so all of the reporting ignores it, so nobody thinks about it, so nobody gets it put on their goals, and the cycle reinforces itself. Opportunities develop that everyone ignores.
A good example is involuntary attrition (attrition due to bad debt) at LTC. People concentrated on voluntary attrition and ignored involuntary attrition, forgetting that involuntary attrition was very much the result of a voluntary choice on the customer's part. As a result, there were actually more opportunities for helping LTC with involuntary attrition than with voluntary attrition.
Wednesday, April 16, 2008
LTV at LTC: After the Project -- Education and Explanations
When the LTV project was rolled out and data was being published I immediately found myself with two new tasks: educating the company about the LTV project and explaining why particular customers got negative value.
I anticipate that education will be part of any analytic project. The most important decision we made about education was to explain everything. There was no part of the LTV system that we did not discuss and even give specific parameters for. Explaining everything allowed people to understand the LTV system.
What really made people accept the LTV system was being able to answer why particular customers had negative scores. In particular we got a number of calls from Customer Care. LTV had been integrated into the Customer Care system and it affected what kind of equipment offers could be made to customers. The Customer Care department needed to know why some high-revenue customers were getting low or negative value.
We were able to answer questions like this easily and convincingly. As it turned out, the usual reason high revenue customers had negative LTV was because they hadn't actually paid their bill in a number of months. Being able to answer these questions went a long way to establishing our credibility.
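As a rough illustration of how such a question can be answered on the spot, here is a minimal sketch assuming a simple additive value formula with hypothetical component names (this is not LTC's actual formula): it decomposes a customer's score and reports the component that dragged it negative, such as months of unpaid bills.

```python
# Hypothetical additive decomposition of a customer's value score.
# Component names and figures are illustrative, not LTC's actual formula.

def explain_value(components):
    """components: dict of component name -> signed dollar contribution."""
    total = sum(components.values())
    # Rank contributions from most negative to most positive.
    ranked = sorted(components.items(), key=lambda kv: kv[1])
    worst_name, worst_amount = ranked[0]
    verdict = "negative" if total < 0 else "positive"
    return (f"Total value {total:.2f} ({verdict}); "
            f"largest drag: {worst_name} at {worst_amount:.2f}")

# A high-revenue customer who simply has not paid their bill in months:
customer = {
    "revenue collected": 0.0,        # high billed revenue, but nothing collected
    "bad debt written off": -900.0,
    "cost to serve": -150.0,
}
print(explain_value(customer))
```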
LTV at LTC: Building the System
Building the LTV system took a small team approximately two months out of the year spent on the project, from building the lifetime models to coding the formulas to finally building a system of monthly HTML reports. Ironically, actually building the LTV system was the simplest part of the whole project.
Monday, April 14, 2008
LTV at LTC: Alarms and Diversions; the New Media Department (NMD)
At LTC we had a department dedicated to exploring new technologies and new media applications. The technology needed to really make NMD's projects go wasn't slated to go live until the year after the LTV project, but they were still very interested in the LTV project. Their interest culminated in a meeting that nearly ended the LTV project.
NMD had segmented the customer base and had identified the segment they wanted to market to. NMD was horrified that one of their potential customers might get a poor score, and so perhaps not get the best possible service. Never mind the equal possibility that their potential customers might get good scores and receive preferential treatment – NMD was terrified at the possibility of anything bad happening to their potential base. The most vivid quote of the meeting was “We have to stop this!”
If NMD had really tried to stop the LTV project, I am fairly sure that we could have overcome their resistance, but I'm certain that if the meeting had ended there we would have had a lot of unnecessary turmoil. What I did was put my Project Designer hat back on and let NMD specify a value formula just for them that would identify the customers NMD most wanted. This approach was successful because I was able to promise right then and there that NMD could design the formula the way NMD wanted and that it would be published along with the other LTV scores.
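A rough sketch of what that promise amounts to technically, with hypothetical function and field names: value formulas kept as named, pluggable functions, so a department-specific score can be computed and published alongside the standard LTV score each month.

```python
# Pluggable value formulas, published side by side each month.
# The formulas and field names here are hypothetical illustrations.

def standard_ltv(c):
    return c["expected_future_value"] + c["total_past_value"]

def nmd_value(c):
    # A department-specific formula emphasizing whatever NMD cares about
    # (e.g. new-media usage); purely illustrative.
    return 2.0 * c["new_media_revenue"] - c["cost_to_serve"]

VALUE_FORMULAS = {"ltv": standard_ltv, "nmd_value": nmd_value}

def score_customers(customers):
    """Return one row per customer with every published score."""
    return [
        {"customer_id": c["customer_id"],
         **{name: round(f(c), 2) for name, f in VALUE_FORMULAS.items()}}
        for c in customers
    ]

print(score_customers([{
    "customer_id": 42,
    "expected_future_value": 250.0,
    "total_past_value": 400.0,
    "new_media_revenue": 30.0,
    "cost_to_serve": 20.0,
}]))
```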
In a typical project situation there would have been an initial meeting with NMD, their concerns would have been taken back to the larger group, possible solutions discussed, project forms filled out and signed off on, all over the course of several weeks. During those weeks NMD would have solidified their position, and the LTV project would have been threatened by a protracted political fight that would have weakened the project at best and conceivably stopped it altogether.
Sunday, April 13, 2008
Effective vs. Efficient Teams
There's a difference between 'effectiveness' and 'efficiency'.
An organization is efficient when each person knows their job, does their job, and there is no slack in the organization. Every person knows exactly what they need to know, no less and no more.
This is typically what enterprises drive for: efficient organizations. Typically there will be a communication layer that has minimal understanding of technical details in general and almost no understanding of the technical details of the enterprise. This layer then communicates with the technical teams, which have almost no knowledge of the business details of the enterprise. The technical teams are actively discouraged from having contact with anyone in the enterprise aside from their designated contacts in the communication layer.
An extreme case is the production team. Everyone on that team has a very precise job description, and come what may they do not want to deviate from that job description. Each member of the team knows only what is necessary to do their job, and quite intentionally has no knowledge beyond that.
An efficient team like this is almost completely ineffective at doing anything new. To start with, they have no time. Moreover, no one on an efficient team has the kind of overview necessary to tackle a project that is at all innovative. Depressingly, "something new" can be "getting the data right". If a field may or may not have correct data, then it requires a great deal of research to verify the problem, identify the true problem, and fix the problem. If the data problem does not prevent any of the efficient team's scripts from running, then the efficient team won't care very much one way or another about the data being right or wrong. They can do their job the way they were told, and by the design of the team that is all they care about. More than once and at more than one company I've had a production team simply refuse to fix data problems in their systems.
An effective team should have members with specialties, but each member can engage with every other person's area. The managers and business contacts should be able to talk effectively about, and in a pinch do, the technical work; the more technical people should have a good grounding in the overall enterprise and should be able to represent the team in meetings.
Effective teams often do not make good efficient teams. People who can be on effective teams are rare and valuable, and the grind of production work can easily wear on them. Enterprises need efficient teams to get the work done, but under budget pressure effective teams can get pushed aside, and that's a mistake.
LTV at LTC: The LTC IT Department
During this same time we were going through a long series of meetings with the LTC IT department. The motto of the LTC-IT was “we will give you anything you want, just tell us what columns you need in your flat file”. For instance, the LTC-IT project documentation had extensive sections for listing data elements extracted and the databases they were extracted from – but only a minimal project memo section to describe what to do with those data elements. A project that didn't involve extracting data into a file was almost impossible to describe using the IT project documentation.
The LTC-IT department and my group had a very contentious relationship from the start. For instance, the LTC-IT was maintaining a marketing data warehouse, but they refused to allow marketing employees to access the data warehouse. We had to fill out data requests that a small group of data pullers would fulfill. Quite quickly my group found a back door into our own data warehouse simply to do our jobs.
The organizational interface between LTC-IT and the rest of the company was a Project Management Office – the PMO. In theory the PMO was supposed to have neither an understanding of the business nor an understanding of the technical details; it was simply supposed to facilitate communication between the business and technical sides. In practice, because the PMO wrote the project documents before technical IT got involved, it ended up making critical technical design decisions and setting business objectives.
The LTV project was going to be unique for LTC. Employees in the marketing department were going to be developing formulas and code for the IT department to implement, instead of giving the IT department high-level business concepts for design and implementation. The project manager (PM) and I spent a number of months working out the details of the interaction between our departments. When the project got to upper PMO management it was soundly rejected. According to the PMO, LTC-IT did not have the technical competence to support model implementation, and the Marketing Department would have to supply the technical expertise to implement the LTV models.
I was overjoyed at this news. It meant minimal contact with the LTC-IT and PMO and that I could have direct oversight over the most critical matters.
The next issue we had to resolve with the LTC-IT department was where to run the LTV system. If we were going to be putting scores into the data warehouse, LTC-IT insisted that the code be run on a server (not a problem). They also pointed out that instead of getting our own server it made a lot more sense to share a server with another department (again, not a problem). LTC-IT found a server for us – with 275MB of available disk space. Now we had a problem. Considering the potential impact of the LTV project, 275MB was fairly ridiculous. Fortunately, we were able to design a trimmed-down process that fit in 275MB.
This was where wearing multiple hats on the project became very handy. A design group separate from implementation would have made sure that the design was complete enough and robust enough to cover all contingencies, and it would have been a lot larger than 275MB. Because design and implementation were the same people, we knew exactly where to cut corners.
Labels:
data mining,
information engineering,
lifetime value,
LTV,
projects
Friday, April 11, 2008
LTV at LTC: Difficult Allies
A particularly long and difficult set of meetings that we had was with the LTC Finance Group (FG).
These meetings were, of course, difficult. LTV projects by their nature are focused on customers and their value, which puts them in the Marketing Department's area; but LTV also involves financial impact and financial data, which makes it part of the Finance Department's area as well. I suspect that if there is an LTV project where Marketing and Finance are not arguing about the details, then the project isn't being taken seriously by either department.
The FG was currently managing an LTV-like project and had been for a number of years. What FG did was to look at revenue and cost data and then give profitability data by rate plan. Profitability by rate plan really wasn't that useful to LTC. It gave no understanding of the 'why' behind customer value or how to treat individual customers.
What we needed from the FG was an understanding of the cost metrics associated with customer activity. In particular, our executive sponsor insisted that we have FG's approval for our LTV project. We went to the FG with the question “What is the right formula to calculate customer-related costs?” -- and they refused.
Well, they didn't refuse, exactly. What happened after that was a long series of meetings with various financial people, getting one small piece of data from each. Then came time to put all the pieces together and life started getting difficult.
The FG refused to either accept or reject our meeting requests, meaning we could never be sure if a meeting was actually on or not until we made the call. They would also invite themselves to other meetings, so we had to be ready to talk about LTV issues at any time. Our questions got answered obliquely. For instance, when we asked about the best way to handle network minutes the reply was “What would happen if all of our customers leave?”
As it developed, the FG and our group did develop a substantial difference of opinion on how to value customers. It revolved around how to handle capital expenses. The FG was adamant that any customer valuation include capital expense; I felt strongly that customer LTV should not. I had two reasons. First, no future customer activity could affect capital projects that had already been purchased. LTV should be about customer impacts on LTC, not things that individual customers had little influence over. Second, including capital expenses would mean that 25% of the customers would have negative value; without capital expenses, 7% of the customers would have negative value.
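To make the disagreement concrete, here is a minimal sketch with purely illustrative numbers (none of these figures are LTC's): allocating a share of already-spent capital expense to each customer can flip a customer from positive to negative value even though nothing about that customer's future behavior changes.

```python
# Illustrative only: how an allocated capital-expense charge changes the
# sign of a customer's value. None of these figures are LTC's actual numbers.

def customer_value(revenue, cost_to_serve, allocated_capex=0.0):
    return revenue - cost_to_serve - allocated_capex

revenue, cost_to_serve = 600.0, 480.0
allocated_capex = 200.0   # per-customer share of capital already spent

print("without capex:", customer_value(revenue, cost_to_serve))                   # 120.0
print("with capex:   ", customer_value(revenue, cost_to_serve, allocated_capex))  # -80.0
```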
Let me digress here on negative LTV. By and large, in any company there will be some customers that cost more in company resources than they bring in in revenue. One of the best goals of any LTV project is correctly identifying customers of negative value so they can be understood, targeted, addressed, and if necessary 'fired'. It is absolutely natural to take customers with negative LTV out of customer retention programs.
Cutting 7% of the base out of LTC's retention-oriented marketing programs would have had minimal impact on the overall attrition results. Taking 25% of the customer base out of retention marketing programs would have had a definite impact on the corporate retention efforts.
LTC lived and died by retention and attrition. If LTV hurt retention then LTV would be quickly and quietly abandoned.
We spent months in rounds of inconclusive meetings with FG, asking again and again about the correct formulation. Then, suddenly, they agreed with us and we could go forward. As we found out later, their agreement was an accident: the FG had simply misread the formula we were laying out and thought it included capital expenses. By the time the FG realized they had made a mistake, the LTV system was already in production and moving forward.
There is a bit of an irony here. If the FG had simply worked with us and given us their cost formula we would have taken it uncritically and not done the research to discover the issues around capital expenses.
Despite our differences the two groups did come to an understanding and became allies on many issues. Both groups wanted to move LTC from gross revenue to sustainable profit. We both realized that two areas that LTC had ignored, off-net expenses and bad debt expenses, were critical to profitability.
Wednesday, April 9, 2008
Blog in your native Indic Script!
Blogger somehow wants me to start blogging in an Indic script. It somehow thinks I'm a native Hindi speaker.
I'm still trying to figure out how it came to that conclusion. Really bad data mining, I guess :-). Or maybe it's just asking everybody to blog in Indic scripts.
LTV at LTC: The New Economy Consulting Company (NECC)
One lengthy set of meetings that did work out very well was with the New Economy Consulting Company (NECC), a company that started out specializing in Internet marketing but by 2002 had branched out into general customer relationship management consulting.
NECC was leading their own LTV project which ultimately got nowhere but our project was able to use many of their insights.
NECC had realized that LTV comes in four flavors:
1. Expected Future Value
2. Total Past Value
3. Potential Future Value: what would the customer be worth if there were no churn?
4. Expected Life Value: (1) + (2)
Usually, when we think of LTV we think of just (1). At LTC all four metrics were very useful.
LTC had a very large customer acquisition cost; it often took a year for a customer to pay off their acquisition cost. By tracking a customer's past and future values separately we were able to see the full impact of different acquisition strategies.
LTC's direct marketing concentrated on retention efforts, and the difference between Expected Future Value and Potential Future Value became the natural metric to run attrition campaigns against (i.e., if a customer's EFV was $250 and PFV was $475, then $475 - $250 = $225; if we plan on recovering 10% of that difference, then we shouldn't spend more than $22.50 on that customer to do so).
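As a minimal sketch of how these four metrics hang together and how the retention-spend cap in the example above falls out of them (the field names are hypothetical and the past-value figure is made up; the other numbers are the ones from the example):

```python
# The four value flavors and the retention-spend cap derived from them.
# Field names are hypothetical; the past-value figure is illustrative.

from dataclasses import dataclass

@dataclass
class CustomerValue:
    expected_future_value: float   # (1)
    total_past_value: float        # (2)
    potential_future_value: float  # (3) value if the customer never churned

    @property
    def expected_life_value(self) -> float:
        # (4) = (1) + (2)
        return self.expected_future_value + self.total_past_value

    def retention_spend_cap(self, recovery_rate: float = 0.10) -> float:
        """Max worth spending to recover part of the gap between (3) and (1)."""
        return recovery_rate * (self.potential_future_value - self.expected_future_value)

c = CustomerValue(expected_future_value=250.0,
                  total_past_value=400.0,       # illustrative
                  potential_future_value=475.0)
print(c.expected_life_value)        # 650.0
print(c.retention_spend_cap(0.10))  # 22.5
```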
The NECC project never got beyond PowerPoint because they had made no plans for implementation. Once the report was complete the project faded away.
Moreover, NECC never really dove into the financials of LTC. What they did was to go around asking people what they thought was important and put together the subjective, narrow opinions. A project like LTV is a rare chance to look at a problem holistically and completely and the chance to bring new understanding to the enterprise as a whole should not be missed.
Tuesday, April 8, 2008
LTV at LTC: First, Meetings
Of course once we got approval we didn't start building the system. We started having meetings about building the system.
The first set of planned meetings didn't actually happen, which was a very good thing. Our manager wanted us to hold bi-weekly meetings with managers from across the marketing organization. These meetings would have been a disaster.
We didn't know enough about LTV in general, or about customer behavior at LTC in particular, to be able to lead these meetings. We would have had a group of senior managers talking about a project that got at the heart of how LTC did business, with no real agenda for those meetings. As I found out in the course of the project, LTC was an information-starved company and very few people had a good idea of the real internal financials of the company. The most likely result of these planned meetings would have been tangential suggestions and demands that would have misdirected the project.
One of the lessons we learned from this project was how important it is to manage the meetings around a project: hold early meetings with the people necessary to get the project done, but avoid large meetings with the merely interested until the project leaders can bring real understanding and direction to them.
Saturday, April 5, 2008
LTV at LTC: Project Approval
The first step of the LTV project was to get IT approval and budgeting. In order to get the project used we needed to get the results loaded into the company data warehouse; in order to get that load we needed IT support. At the time we were also planning on having the LTV production system managed by the IT department; fortunately we wound up running the production system ourselves.
LTC had just instituted a strict resource allocation process for IT projects. Each project had to be justified in terms of return on investment based on financial analysis, and passed by a committee of representatives from the various branches of the business. On the face of it this is a very straightforward process, but LTV was almost wrecked here.
The first issue was that customer value was a substantial change from the way LTC thought about customers. LTC had been committed to retaining all of its customers and fighting attrition (customers leaving the company) across the board. The idea that some customers were more valuable than others, and in fact that some customers cost LTC more than they were worth, was a foreign concept. Because LTV represented a new way of thinking about the business, the LTV project could not be valued using the attrition-based results metrics that were approved for use.
The second issue was that the IT project approval committee was composed of representatives from a broad spectrum of departments in LTC. In theory this was to ensure that the projects that were approved would be useful to the entire company. In practice, projects were decided on by committee members who had no IT experience, no experience in the processes of other departments, and not enough time to truly research the issues they were being asked to decide on. What happened was that projects got decided by corporate politics: the personal reputation of the executive champion.
It was in getting our initial approval that our executive champion (EC) shone. Our EC had a considerable reputation within the company. In terms of making the ROI cutoff, what we did was figure out the attrition gain necessary to make the cutoff, and the EC promised to deliver that gain. We knew that the LTV system wasn't targeted at reducing attrition per se, but we also knew that if the project was at all successful, getting approval after the fact would not be an issue.
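Backing out the required gain is simple arithmetic. Here is a rough sketch with entirely made-up numbers (the actual costs and thresholds aren't something I'm going to publish): given a project cost, an ROI threshold, and a value per retained customer, solve for the retention lift the champion has to promise.

```python
# Entirely illustrative: given a project cost, an ROI threshold, and a value
# per retained customer, back out the retention lift the champion must promise.

def required_retained_customers(project_cost, roi_threshold, value_per_customer):
    # roi_threshold expressed as required return divided by cost,
    # e.g. 1.5 means the project must return 150% of its cost.
    required_return = project_cost * roi_threshold
    return required_return / value_per_customer

needed = required_retained_customers(project_cost=500_000,
                                     roi_threshold=1.5,
                                     value_per_customer=300)
print(f"about {needed:.0f} additional customers must be retained")  # about 2500
```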
The long approval process did have a substantial benefit: the series of meetings made most of the company aware of the LTV project.
Friday, April 4, 2008
LTV at LTC: An Unusual Start
My involvement in the project started in January 2002. I was managing a modeling / statistical analysis group in the marketing department of LTC. We had a consultant do an initial proof-of-concept and it became my job to fully flesh out the approach and put LTV into production.
Already, the project was off to an unusual start. I was simultaneously
- The primary business owner/ representative.
- The project manager.
- The chief analytic designer.
- The head of implementation.
Usually, these are four different people. I believe the project's success was due in no small part to all four roles being condensed into one person. Whenever issues came up I could simply make a decision, instead of having to a) document the issue, b) have meetings on the issue, c) discuss possible solutions, d) document the final solution, e) get written agreement on the change from all parties, and f) finally implement the solution.
For larger projects it may not be possible to be this concentrated, but I do think there needs to be one vision behind the project: someone who understands both the technical aspects and the business aspects of the project. Without one person who has a deep understanding of the different aspects of the project and can share that understanding with the rest of the team, none of the parts of the project will fit together.
Labels:
business intelligence,
LTV,
projects,
statistics
Monday, March 31, 2008
The Secret Laws of Analytic Projects
The First Certainty Principle: C ~ 1/K ; Certainty is inversely proportional to knowledge.
A person who really understands data and analysis will understand all the pitfalls and limitations, and hence be constantly caveating what they say. Somebody who is simple, straightforward, and 100% certain usually has no idea what they are talking about.
The Second Certainty Principle: A ~ C ; The attractiveness of results is directly proportional to the certainty of the presenters.
Decision-makers are attracted to certainty. Decision-makers usually have no understanding of the intricacies of data mining. What they often need is simply someone to tell them what they should do.
Note that #1 and #2 together cause a lot of problems.
The Time-Value Law: V ~ 1/P ; The value of analysis is inversely proportional to the time-pressure to produce it.
If somebody wants something right away, that means they want it on a whim, not out of real need. The request that comes in at 4:00 for a meeting at 5:00 will be forgotten by 6:00. The analysis that can really affect a business has been identified through careful thought, and people are willing to wait for it. (A cheery thought for those late-night fire drills.)
The First Bad Analysis Law: Bad analysis drives out good analysis.
Bad analysis invariably conforms to people's preconceived notions, so they like hearing it. It's also 100% certain in its results, with no caveats and nothing hard to understand, and it usually gets produced first. This means the good analysis always has an uphill fight.
The Second Bad Analysis Law: Bad Analysis is worse than no analysis.
If there is no analysis, people muddle along by common sense which usually works out OK. To really mess things up requires a common direction which requires persuasive analysis pointing in that direction. If that direction happens to be into a swamp, it doesn't help much.
Labels:
analysis,
april fools,
ha-ha only serious,
projects
Sunday, March 30, 2008
LTV at LTC
I've written quite a bit about unsuccessful Information Engineering projects; now I want to write about a successful one.
How can you change a company? Give people the information they need to make decisions they never thought they could and that changes how they think about the enterprise. The trouble is, any organization will put up a lot of resistance to change.
In 2002 I managed a Lifetime Value (LTV) project at a Large Telecommunications Company (LTC) that did change the enterprise. LTV is an attempt to measure the overall economic impact of each customer on the enterprise over their expected life. Ideally this is concrete numeric data, so we can ask, “Is this customer worth $300 in new equipment if they will stay with us for two more years?”
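As a rough illustration of the kind of calculation behind that question (not LTC's actual formula; the margin, retention probability, horizon, and discount rate below are made-up assumptions), expected future value can be sketched as discounted expected monthly margin over the customer's expected remaining life, compared against the cost of the retention offer.

```python
# Toy LTV sketch: discounted expected monthly margin over an expected
# remaining lifetime. All parameters are illustrative assumptions.

def expected_future_value(monthly_margin, monthly_retention_prob,
                          months, annual_discount_rate=0.10):
    monthly_discount = (1 + annual_discount_rate) ** (1 / 12) - 1
    value, survival = 0.0, 1.0
    for m in range(1, months + 1):
        survival *= monthly_retention_prob          # chance the customer is still here
        value += survival * monthly_margin / (1 + monthly_discount) ** m
    return value

efv = expected_future_value(monthly_margin=25.0,
                            monthly_retention_prob=0.97,
                            months=24)
offer_cost = 300.0
print(f"expected future value over two years: ${efv:.2f}")
print("offer pays for itself" if efv > offer_cost else "offer loses money")
```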
The LTV project allowed people to think about the business in new ways, the project was embraced by the Chief Marketing Officer, and the project saved $15 million each year in direct marketing costs while adding to the revenue from marketing programs simply by not spending money to retain customers that LTC was losing money on.
There are a lot of articles about how to do LTV calculations. This time I want to talk about the corporate politics involved in shepherding the LTV project to success.
Wednesday, March 26, 2008
Data-Driven Organizations are a Bad Idea
Consider: it really takes only a few facts to make a decision, but it takes a wealth of insight to know what the relevant facts are for the decision.
In a data-driven company, every single analysis generates facts, and every single one of those facts suggests a possible decision. In a data-driven organization people really have very little guidance to make decisions. Even worse, the uncertainty created by all the possible decisions that could be made drives people to ask for more analysis. More analysis means more facts generated, which means more possible decisions suggested, which means even greater confusion, and the problem gets worse. The end result is that decisions get made for very arbitrary reasons, usually the last fact someone saw before they were forced to decide. I think it's better to rely on intuition and experience than to try to make sense out of a sea of random, contradictory facts.
What works is to have a decision-driven organization. Understand what kind of decisions the organization needs to make, understand the basis on which these decisions should be made and be explicit about it, and then once that blueprint for decision-making has been made then build the information needed for the decision.
Tuesday, March 25, 2008
I don't like books
I'm not that big a fan of data mining books. Every article I've read, or book I've read, or class I've taken, has been about what works. About the only way to find out what doesn't work is to have a project blow up on you and be sweating blood at 2 a.m. trying to figure out why all the nice algorithms didn't work out the way they were supposed to.
Friday, March 21, 2008
Good Data, Bad Decisions
Barnaby S. Donlon in the BI Review (http://www.bireview.com/bnews/10000989-1.html) gives a good description of how data goes to information, to knowledge, and then to decisions. He's saying all the right things, and all the things I've been hearing for years, but you know -- I don't think it works anything like that.
When we start with the data, it's all too much. It's too easy to generate endless ideas, endless leads, endless stories. I've seen it happen when an organization suddenly gets analytic capability.
Before, the organization was very limited in its ability to make decisions because it had limited information. The organizational leaders have ideas, and because of the lack of information they have no way of deciding what is a good idea or a bad idea. After the organization starts an analytic department, suddenly every idea the leadership gets can be investigated. The paradoxical result is that the leadership still can't make informed decisions. Every idea generates an analysis, and virtually every analysis can generate some kind of result. Without data, the result is inertia; with too much data the result is tail-chasing.
The right way to do this is to begin with the end. Think about the decisions that need to be made. Then think about how to make those decisions in the best possible way. Starting with the end means the beginning -- the data, the analysis, the information -- is focused and effective.
Labels:
business intelligence,
data,
decisions,
information
Information Design: What does it take to be successful?
All of the examples that I have given are of poor information design. Some of them have had more or less success, but they all had substantial flaws. There's a reason I'm saying that information design is a missing profession.
Why is it so hard? First off, true information design projects are fairly rare. BI is usually about straightforward reporting and ad-hoc analysis. People don't get much of a chance to practice the discipline.
Information design requires a lot of other disciplines. It takes statistics but isn't limited to statistics. Data mining can help but can easily bog down a project in complicated solutions. It requires being able to think about information in very sophisticated ways and then turn around and think about information very naively.
It requires knowing the nuances of an organization. Who are the clients? The users? What is the organizational culture? What does the organization know about itself? What does the organization strongly believe that just isn't so? It's not impossible for an outside consultant to come in and do information design, but it is impossible for a company to come in with a one-size-fits-all solution. When it comes to information design, one size fits one.
Because the profession of information design hasn't been developed yet, it isn't included in project plans and proposals. For two of the projects above, information design wasn't even thought of, and for the third it wasn't done well because the client's true needs weren't uncovered.
Labels:
data mining,
design,
engineering,
information,
statistics
Thursday, March 20, 2008
Daily Churn: The Project was a Complete Success and the Client Hated Us
The story eventually had a less than desirable ending. After producing accurate daily forecasts for months, our work was replaced by another group's work, with predictions that were much higher than ours. It turned out that having attrition come in sometimes higher than the predictions and sometimes lower was very stressful to upper management, and what they really wanted to be told wasn't an accurate prediction of attrition but that they were beating the forecast.
Ultimately the problem was a large difference between what management wanted and what they said they wanted. What management said they wanted was an attrition forecast at a daily level that was very accurate. To this end my group was constantly refining and testing models using the most recent data we could get. What this meant was that all the most recent attrition programs were already baked into the forecasts.
What management really wanted to be told was the effect of their attrition programs, and by the design of the forecasts there was no way they could see any effect. It must have been very disheartening to look at the attrition forecasts month after month and be told, in essence, that your programs were having no effect.
What my group should have done is to go back roughly a year, before all of the new attrition programs started, and to build our forecasts using older data. Then we could make the comparison between actual and forecasts and hopefully see an effect of programs.
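Just to make that concrete, here is a rough sketch of what that comparison could look like. The data, the cutoff date, and the naive pre-program baseline are illustrative assumptions, not the models my group actually ran.

import pandas as pd

# Hypothetical daily attrition counts; real ones would come from the warehouse.
history = pd.DataFrame({"date": pd.date_range("2001-07-01", "2002-06-30", freq="D")})
history["attrition"] = 100  # placeholder values for the sketch

# Build the forecast only from data that predates the new attrition programs.
program_start = pd.Timestamp("2002-01-01")
baseline = history.loc[history["date"] < program_start, "attrition"].mean()

# The gap between actuals and that frozen baseline is the estimate of program effect.
post = history[history["date"] >= program_start].copy()
post["program_effect"] = post["attrition"] - baseline
print(post.groupby(post["date"].dt.to_period("M"))["program_effect"].mean())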
Surprisingly, I've met other forecasters that found themselves with this same problem: their forecasts were accurate and they got the project taken away and given to a group that just made sure management was beating the forecast.
Wednesday, March 19, 2008
Daily Churn Prediction
The next project gone off I want to talk about is when my group created daily attrition forecasts for a company.
Attrition is when a customer leaves a company. I was charged with producing daily attrition forecasts that had to be within 5% of the actual values over a month. The forecast vs. actual numbers would be fed up to upper management to understand the attrition issues of the company and the effect new company programs were having on attrition.
Because my group had been working at the company for a few years, we were able to break the attrition down by line of business and into voluntary and involuntary (when customers don't pay their bills), and we were able to build day-of-week factors (more people call to leave the company on a Monday) and system-processing factors (delays from the time a person calls to have their service canceled to when the service is actually canceled). Our forecasts performed within 3% of actual attrition. Often we were asked to explain individual days' deviations from predictions, which we were always able to do – invariably major deviations were the result of processing issues, such as the person who processed a certain type of attrition taking a vacation and doubling up their processing the next week.
We were able to break down the problem like this because we knew the structure of the information that the company data contained and we were able to build a system that respected that information.
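As a rough sketch of that kind of decomposition (segment baselines times day-of-week factors), here is what it might look like in Python. The segment names, the numbers, and the simple factor math are assumptions for illustration, not the system we actually built.

import pandas as pd

# Hypothetical daily disconnect history, already split the way described above:
# by segment (line of business plus voluntary vs. involuntary).
hist = pd.DataFrame({
    "date": pd.date_range("2002-01-01", periods=90, freq="D").repeat(2),
    "segment": ["consumer-voluntary", "consumer-involuntary"] * 90,
    "disconnects": [120, 40] * 90,
})
hist["dow"] = hist["date"].dt.dayofweek

# Day-of-week factors per segment: how each weekday compares to the segment's average day.
segment_mean = hist.groupby("segment")["disconnects"].transform("mean")
hist["ratio"] = hist["disconnects"] / segment_mean
dow_factor = hist.groupby(["segment", "dow"])["ratio"].mean()

def forecast(segment, date):
    """Segment baseline times the day-of-week factor for the requested date."""
    base = hist.loc[hist["segment"] == segment, "disconnects"].mean()
    return base * dow_factor[(segment, pd.Timestamp(date).dayofweek)]

print(forecast("consumer-voluntary", "2002-04-01"))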
The analysis was a complete success but the project died. Why? That's tomorrow.
Tuesday, March 18, 2008
Premiums from Credit Data II
A new team, including myself, was brought in to take a second pass at the project. What we did was to 1) look at the data to make sure we had a valid data set, validated with the client, 2) make sure we had standards to meet that were appropriate to the project, and 3) start with a simple solution and then build more complex solutions. What approach 3) meant was that very quickly we had some solution in hand, and then we could proceed to improve that solution through project iterations.
The project didn't work out in the end. The relationship with the client had been irrevocably poisoned by the previous failure.
But we were able to do the project the right way the second time.
Monday, March 17, 2008
Premiums from Credit Data: Going Wrong
The modeling effort ran into trouble. The models were drastically underperforming what was anticipated. The team tried every modeling approach they could think of, with little success. Eventually the whole project budget was used up in this first unsuccessful phase with little to show for it. I was brought in at the end but couldn't help much.
There's a long list of things that went wrong.
The team forgot the project they were on. They were using approaches appropriate to marketing response models, but they were working in a different world. Doing 40% better than random doesn't work well for a marketing response model, but here it meant we could improve the insurance company's rate models by 40%, which is fairly impressive. Before the project started, the team needed to put serious thought into what success would look like.
The team let an initial step in the project take over the project. At the least, that initial step should have been ruthlessly time-boxed. Since that initial step wasn't directly on the path towards the outcome it should not have been in the project.
The team didn't do any data exploration. When I was brought onto the project near the end, one of the first things that I did was to look closely at the data. What I found was that over 10% of the file had under $10 in six-month premiums, and many other records had extremely low six-month premiums. In other words, a large chunk of the data we were working with wasn't what we think of as insurance policies.
This goes to an earlier point: DBAs often know the structure of their data very well but have very little idea of the distribution and informational content of their data. Averages, minimums, maximums – most of what we can get easily through SQL – don't tell the story. One has to look closely at all the values, and usually this means using specialized software packages to analyze data.
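Here is a small sketch of the kind of profiling I mean, using pandas rather than raw SQL aggregates. The column name, the sample values, and the $10 threshold are illustrative assumptions, not the client's actual file.

import pandas as pd

# Hypothetical policy-level extract; the real project worked from the client's files.
policies = pd.DataFrame(
    {"six_month_premium": [3.50, 8.00, 9.90, 250.00, 480.00, 620.00, 75.00, 2.00]}
)

# Mean, min, and max hide the shape of the distribution; quantiles and
# threshold counts expose it.
print(policies["six_month_premium"].describe())
print(policies["six_month_premium"].quantile([0.01, 0.05, 0.10, 0.25, 0.50]))

under_10 = (policies["six_month_premium"] < 10).mean()
print(f"Share of records under $10 in six-month premium: {under_10:.0%}")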
We got a second chance later, fortunately.
Labels:
credit scores,
crm,
models,
prediction,
premiums,
projects
Wednesday, March 12, 2008
The Next Fiasco - Premiums from Credit Data
A company I was with was building a modeling system to look at individual credit history, compare it with insurance premiums and losses, and identify customers where the insurance premium was either too high or too low. I was only peripherally involved with the project and only brought in at the end. What we were asked to predict was the overpayment or underpayment ratio so the insurance companies could adjust their premiums.
The project started by receiving large files from the client and starting the model building process. The team decided to start out with a simpler problem by predicting if there was a claim or not, and once that problem was solved using the understanding gained to move on to the larger problem.
Things didn't work out so well.
Labels:
credit scores,
crm,
data mining,
statistics
Monday, March 10, 2008
Righting the Wrong-Sizer
In order to fix this problem the company has to do some hard thinking about what kind of company they want to be and what kind of customers they want to have. Other things being equal, companies want customers to pay more for goods and services and customers want to pay less; on the other hand, companies want to attract customers, and customers are willing to pay for goods and services they want. This means that in order to maximize the total return there is a real tension between maximizing the price (to get as much as possible from each customer) and minimizing the price (to attract customers and make sure they stay). How to resolve that tension is by no means trivial. One option is to assume that “our customers are stupid people and won't care that their bill just went up,” but I don't think that's a good long-term strategy.
Ideally we want to find services that are cheap for the company but that customers like a lot. Standard customer surveys will just give us average tendencies, when what we care about is the preferences of each individual customer. Fortunately we have an excellent source of each customer's preferences: the rate plan they are on. Let's assume that customers are in fact decently smart and are using roughly the best rate plan for them, but they might need some help fine-tuning their plan.
Take the customer rate plans and divide them up into families. When a customer calls up, look at their actual usage and calculate their monthly bill under the different rate plans in their family. If a customer can save money by switching rate plans, move them, but keep them in their rate plan family. This method makes sure the customer is getting a good deal while sticking within their known preferences, and the company is still maintaining a profitable relationship with the customer.
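A minimal sketch of that re-rating logic, in Python. The plan names, prices, and the flat per-minute overage pricing are made-up assumptions for illustration.

# Hypothetical rate plans grouped into families, each with a monthly fee,
# included minutes, and a per-minute overage rate.
PLANS = {
    "national": [
        {"name": "National 300",  "fee": 35.0, "minutes": 300,  "overage": 0.40},
        {"name": "National 600",  "fee": 50.0, "minutes": 600,  "overage": 0.35},
        {"name": "National 1000", "fee": 70.0, "minutes": 1000, "overage": 0.30},
    ],
}

def monthly_bill(plan, minutes_used):
    """Re-rate actual usage under a given plan."""
    overage_minutes = max(0, minutes_used - plan["minutes"])
    return plan["fee"] + overage_minutes * plan["overage"]

def best_plan_in_family(family, minutes_used):
    """Cheapest plan for the customer, searching only within their own plan family."""
    return min(PLANS[family], key=lambda plan: monthly_bill(plan, minutes_used))

# A customer in the "national" family who actually used 520 minutes last month:
suggestion = best_plan_in_family("national", minutes_used=520)
print(suggestion["name"], monthly_bill(suggestion, 520))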
Saturday, March 8, 2008
What's Wrong with the Wrong-Sizer?
Let's start with the customer usage profile. To take a project that's intended to give individual recommendations to customers and start it by assuming that all customers act the same is amazingly dense. The Rate Plan Optimizer project manager explained that they had a study done several years ago saying that most customers were fit pretty well by their profile.
First off, a study done a few years ago doesn't mean that much in a constantly changing world, not when data can be updated easily. Second, even if most customers are pretty well fit by the profile that means that some customers are badly fit by the profile and will be negatively impacted by the system's recommendations.
The reason that the IT department went with using a one-size-fits-all usage pattern was that the customer data warehouse did not actually have customer usage data in it, only how the customer was billed. The IT department should have taken this project as an excuse to get the usage data into the data warehouse. The customer recommendations could then have been done at an actual customer level.
The next major problem with the Rate Plan Optimization project was choosing the rate plan that was most profitable to the company and then suggesting the customer adopt that plan. In other words, the Rate Plan Optimizer had the goal of making the customer's bills as large as possible and making sure the customer got the worst possible plan from the customer's standpoint.
How to fix it? That's tomorrow.
Thursday, March 6, 2008
Real Examples: The Rate Plan Wrong-Sizer
That's a hypothetical example of information design; let's talk about some examples where I don't think that design was done so well, and how I think it could have been made better.
The Rate Plan Wrong-Sizer
I was working for a telecommunications company when my group was introduced to the Rate Plan Optimizer Project. IT had just spent one million dollars in development budget and they needed a group to take over the product.
The goal of the Rate Plan Optimizer was to help customer service reps suggest rate plan improvements to customers. The product did this by:
- Assuming every customer had exactly the same usage pattern, with the only difference being their minutes of use, and then
- Looking at a series of rate plans and suggesting to the customer the plan that would be most profitable to the company.
The product had a number of parameters that could be managed, and IT wanted our group to do the managing.
I can't tell you that much about the parameters because my group got as far away from the project as quickly as we could. The project was broken enough that no amount of parameter tweaking could fix it and we didn't want to take the blame for generating bad customer experiences.
What's wrong with the Rate Plan Optimization Project and how should it have been designed? More tomorrow.
Wednesday, March 5, 2008
Building an Attrition System
We're talking about setting up an attrition intervention system.
This is all about information: how to get customer care reps the exact information they need to help out our customers.
The first big step is getting commitment to build a system and do it right. A well-done simple policy is a lot better than a badly done sophisticated policy. The next step is getting commitment to test the system at every level. Customers are fickle creatures and we don't understand how they will react to our best efforts. I'll have to say something about how to measure campaigns soon, but right now let's just say that we need to do it.
Let's start with the intervention. The obvious thing is to try to throw money at customers, but buying customers can get very expensive quickly. What will often work better is to talk with them and just solve their problems. But here you need a good understanding of what their problems are. We can do this through a combination of data analysis, focus groups, surveys, and talking to customer reps. There are a couple of dangers here. 1) Trying to do this by simply building an attrition model. Attrition models will typically tell us the symptoms of attrition, but not the root causes. 2) Relying on the intuitions of executive management. Executives often have some ideas about attrition but rarely have a comprehensive understanding of why customers actually leave.
The next step is trying to get an understanding of the finances involved. What are the financial implications of, say, reversing a charge the customer didn't understand? It's going to be different for one customer that has done this once and another customer that habitually tries to take advantage of the system.
Everything, everything, everything needs to be checked against hard numbers. We have experiences and form opinions on those experiences, but until we check we don't know what's really going on.
The last step is what people usually start with: building an attrition model to tell when customers are likely to leave. A standard attrition model won't really give us the information we need. We don't just need the chance someone is going to leave. We need to match customer with intervention; that's a much more specific type of information.
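Here is a toy sketch of what matching customer with intervention might look like once the root causes and the finances are understood. The drivers, the offers, the dollar figures, and the simple expected-value check are all invented for illustration; they are not the actual system.

# Each intervention addresses a root cause uncovered through analysis, focus
# groups, surveys, and rep interviews, and carries a cost that has to be
# justified by the margin at risk.
INTERVENTIONS = {
    "billing_confusion": {"action": "Walk through the bill and reverse the disputed charge", "cost": 20.0},
    "network_quality":   {"action": "Open a trouble ticket and call back within 48 hours",   "cost": 5.0},
    "price_sensitivity": {"action": "Move to the cheapest plan in the customer's plan family", "cost": 0.0},
}

def recommend(primary_driver, attrition_risk, twelve_month_margin):
    """Return a specific recommendation for the rep, or a default when none is justified."""
    option = INTERVENTIONS.get(primary_driver)
    if option is None or attrition_risk < 0.2:
        return "No special action; handle the call normally."
    margin_at_risk = attrition_risk * twelve_month_margin
    if margin_at_risk <= option["cost"]:
        return "No special action; the intervention costs more than the margin at risk."
    return option["action"]

print(recommend("billing_confusion", attrition_risk=0.45, twelve_month_margin=300.0))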
Monday, March 3, 2008
An Example of Information Design
Let's say we're designing an attrition system (attrition is when a customer leaves a company). When a customer calls customer care, we give the representative a recommendation. We have a lot of options:
- We can't ever know exactly who is going to leave, so let's not address the problem.
- Have an overall policy that treats all customers exactly the same.
- Present an attrition score to the customer representative.
- Present an attrition threat flag to the customer representative.
- Give a graded response with reasons and some specific recommendations to the representative.
If a company is going through a period of low attrition, ignoring attrition may be the best response. There can easily be more important problems for an organization to worry about. I have seen this happen in companies where attrition has been a critical focus: the incremental effect of a new attrition-focused system is small. However, if attrition is a problem in the company, option (1) can be a foolish approach. It is usually impossible to tell exactly who is going to leave, but good analytic design can tell you how to make bets and get a good return on your efforts.
Solution (2) is what companies usually do, and if the policy is well thought out this can be sufficient.
Solutions (3) and (4), while apparently more sophisticated than solutions (1) and (2), are asking for trouble. How are service representatives supposed to interpret the data they are given? If we give customer service representatives a raw score without guidance, then good representatives will worry about their interpretations and the attrition score will become a source of stress. If we give an “Attrition Threat: Yes / No” flag, then we've lost the ability to distinguish between a slight risk and a substantial risk; we'll be giving the representatives clear guidance, but that guidance will probably not be appropriate and the company will be worse off than if it had no policy.
What we want to do is solution (5): break the base down into segments with guidance and insight in each segment, making sure that our intervention is appropriate and effective in every case.
Still, there are lots of right and wrong ways of doing this -- more tomorrow.
Friday, February 29, 2008
A Missing Profession: Information Engineering
In business intelligence it seems to me like there is a missing profession: information engineering.
Business Intelligence (BI) solutions ultimately aren't about the data that an organization has: they are about the information that the data carries. This information has to be uncovered, it has to be validated, and it has to be refined in a way that is usable.
What Information Engineering Isn't
Information is different from data. For instance, imagine a bank; we've got checking balances, transactions, and monthly trends. That's data. What does this data mean about the chance that the customer will leave the bank? What new accounts and services they might want? The chance that there are fraudulent activities associated with the account? That's information. Data is something that is clear and unambiguous; information needs to be inferred from the data available. Information ultimately is meaning and that makes it both messy and rewarding.
Information design relies on database design but isn't database design.
Paradoxically, data can be wrong, or noisy, or incomplete, and still carry a lot of information. For instance, I was working on customer purchasing behavior and I found a segmentation code that carried a lot of information about purchase patterns. I asked about this segmentation and found it had been done over a decade ago and was considered obsolete simply because of its age – even though when I investigated it was very useful. Whoever had done the segmentation in the first place had clearly done a damn good job.
Information engineering isn't software engineering. Computer programs like a web browser function by presenting data in a certain form, regardless of the content. If a web page properly follows the HTML protocols, then a browser can show the page, regardless of whether the page is the IBM main page or a blog about a cat. This means that there are clearly right and wrong software solutions. Either the pages display or they don't, and if some pages don't display that's a bug that needs to be fixed. Information engineering doesn't have clear right and wrong, but it does have better and worse answers. An information engineering answer can work – produce a number where a number needs to go – but not be very good.
What is Information Engineering?
Information, like data, like hardware, needs to be crafted, extracted, built. The end use needs to be understood and the end user accounted for. Information Engineering is usually invisible. Somebody wants a number, somebody gets a number, and whether that number is any good is left to the person putting the system together.
Next up: some examples. I'm getting tired of this abstract pablum, and I'm one of the most abstract guys on the planet.
Thursday, February 28, 2008
Information Engineering and Design
Why am I doing this?
We design hardware. We've been doing electrical engineering for years.
We design data streams. Database engineers have been around forever.
What this is about is designing information, engineering information. People have been doing that for a while, but there isn't any name for it, no discipline, no set of best practices.
It's not databases, it's not statistics, it's not data mining, it's something that doesn't really exist yet.
Information is to data as data is to hardware.
I should have fun trying to figure this out.