Last week the Nobel prize in Economics (or more correctly “the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel”) was awarded to David Card, Joshua Angrist, and Guido Imbens. At Predictive Insights we’re very excited by the award since these three economists have been responsible for developing, applying and popularising a set of analytical tools we use in our day-to-day work.
Figuring out the link between cause and effect is fundamental in many business and research settings. In some cases, the researcher can set up an experiment: randomly assign people to different groups – one of which gets the ‘treatment’ – and then observe the differences in outcomes. This is what generally happens in settings like medical trials. If the two groups are large enough and similar enough (similarity being achieved through the random allocation), the experimenter can very accurately determine the effectiveness of a ‘treatment’. Over the past few decades these approaches have become much more common in social science, with researchers testing different policy interventions through ‘randomised control trials’ (RCTs). The 2019 Nobel Prize in Economics was awarded to Esther Duflo, Abhijit Banerjee, and Michael Kremer for their work in this area. These approaches are also now common in the tech space through A/B testing.
However, in many settings the person interested in determining the causal relationship is a passive observer – they cannot alter the conditions or allocate people to different groups. In these contexts ‘natural experiments’ are required to determine the impact of a change or ‘treatment’ on the outcome of interest. These natural experiments rely on some sort of external or random variation to induce experiment-like conditions. The 2021 Nobel prize recognises the work that Card, Angrist, and Imbens have done in developing and applying a set of tools for estimating causality through these natural experiments.
To those not familiar with this approach, natural experiments can be confusing, so a few examples should make things clearer.
Alex Tabarrok has a good write-up of how Angrist (with Alan Krueger, 1991) uses a natural experiment – the quarter of a person’s birth – to determine the impact of schooling on earnings. One of the reasons people stay in school is that they may have more ability and may be doing better. Thus, finding a positive relationship between the length of time in school and post-school wages may just be picking up the effect of ability. To get the true ‘impact’ of schooling, we need to somehow control for this. Angrist and Krueger exploit when in the year a person was born – which is essentially random – to create a natural experiment that does this.
In the US, children born in December can start school almost a year earlier than those born in January, who are just a little bit younger. However, all students can quit school at 16 regardless of when they started. Two individuals born only days or weeks apart can thus be in school for quite different lengths of time. The effect of this longer period in school on earnings is clear from the picture below – those born in quarter 1 have less education and lower earnings than those born in quarter 4. This relationship implies an increase in income of about 10% for the additional year of education.
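The logic of this instrumental-variables approach can be sketched on simulated data (all numbers below are made up for illustration; this is not the Angrist–Krueger data). Unobserved ability raises both schooling and wages, so a naive regression overstates the return to schooling; quarter of birth shifts schooling but is unrelated to ability, so comparing the two birth groups recovers the causal effect:

```python
import random

random.seed(0)

# Simulated data: ability raises both schooling and wages (confounding),
# while being born late in the year shifts schooling only (the instrument).
n = 200_000
data = []
for _ in range(n):
    ability = random.gauss(0, 1)
    born_late = random.random() < 0.5          # Q4 vs Q1 birth (instrument)
    schooling = 11 + 0.5 * born_late + ability + random.gauss(0, 1)
    log_wage = 0.10 * schooling + 0.5 * ability + random.gauss(0, 1)  # true return: 10%
    data.append((born_late, schooling, log_wage))

def mean(xs):
    return sum(xs) / len(xs)

# Naive estimate: regress wages on schooling, ignoring ability.
s_bar = mean([s for _, s, _ in data])
w_bar = mean([w for _, _, w in data])
naive = sum((s - s_bar) * (w - w_bar) for _, s, w in data) / \
        sum((s - s_bar) ** 2 for _, s, _ in data)

# IV (Wald) estimate: difference in mean wages between the two birth
# groups, divided by the difference in mean schooling.
w1 = mean([w for z, _, w in data if z])
w0 = mean([w for z, _, w in data if not z])
s1 = mean([s for z, s, _ in data if z])
s0 = mean([s for z, s, _ in data if not z])
iv = (w1 - w0) / (s1 - s0)

print(f"naive estimate: {naive:.3f}  (biased upward by ability)")
print(f"IV estimate:    {iv:.3f}  (true value in this simulation: 0.10)")
```

The naive estimate comes out well above 10% because it partly reflects ability, while the instrument-based estimate lands near the true value – the same intuition that drives the quarter-of-birth result.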
Another example of exploiting outside variation to create the conditions for a natural experiment is the work of Angrist with Victor Lavy examining the relationship between class size and educational outcomes. Here they use Maimonides’ rule, a rule which limited class sizes in Israel to 40 students, to create a threshold around which to compare outcomes. This allows them to compare the standardised test outcomes of students in a school with a single class of, say, 38 students to those of an observationally similar school with 42 students in the year group, and thus two classes. They show that educational outcomes are better in the smaller classes.
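The threshold comparison can also be sketched on simulated data (again, hypothetical numbers, not the Angrist–Lavy data). A 40-pupil cap means a year group of 41 is split into two classes of about 20, while a year group of 40 stays in one class of 40, so schools just either side of the cutoff face very different class sizes:

```python
import math
import random

random.seed(1)

# Maimonides' rule: classes are capped at 40 pupils, so the average class
# size drops sharply once enrolment crosses a multiple of 40.
def class_size(enrolment):
    n_classes = math.ceil(enrolment / 40)
    return enrolment / n_classes

schools = []
for _ in range(20_000):
    enrolment = random.randint(20, 60)
    size = class_size(enrolment)
    # Assumed data-generating process: bigger classes lower test scores.
    score = 70 - 0.2 * size + random.gauss(0, 5)
    schools.append((enrolment, score))

def mean(xs):
    return sum(xs) / len(xs)

just_below = [s for e, s in schools if 36 <= e <= 40]  # one class of 36-40
just_above = [s for e, s in schools if 41 <= e <= 45]  # two classes of ~20
print(f"avg score, enrolment 36-40 (one large class):   {mean(just_below):.1f}")
print(f"avg score, enrolment 41-45 (two small classes): {mean(just_above):.1f}")
```

Because schools with 40 and 41 pupils are otherwise very similar, the jump in average scores at the cutoff can be attributed to class size rather than to differences between the schools.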
So how do we use these approaches at Predictive Insights? We use these methodologies primarily in two areas. The first is in the explicit causal analytics work we do for clients. This is in work like determining the impact of loyalty programmes on customer acquisition, where we exploit external variation like changes in eligibility criteria to create natural experiments. The second is to use these approaches to identify the impacts of past promotions, price changes and other offers, which can then be used by our clients for strategic planning but is also incorporated in our demand forecasting technologies.
Natural experiments are not the only area where we’re combining award-winning academic work with practical applications. We’re pretty certain that the ground-breaking work being done by Susan Athey on causal inference in machine learning frameworks, which we use extensively, will be acknowledged by the Nobel committee in the next few years.