Sheila's Fails? The statistics of biological risk
Why the ECJ insurance judgment might not be the right road after all
Just the first worm from a pretty big can
There are many ways in which men and women are observably statistically different – from work absenteeism rates to job commitment – and it would make as much, or as little, sense to pre-vet individuals for employment based on their gender as it does to use gender to measure insurance risk.
Where the court may have scored something of an own goal is in its attempt to relegate statistical evidence to a special, less valid category than other forms of evidence.
If we go back to the original view of Advocate General Juliane Kokott last September, she is of the opinion that "the exception in question [insurance] does not relate to any clear biological differences between insured persons. On the contrary, it concerns cases in which different insurance risks can at most be associated statistically with gender."
This has been distilled, since, into the view that "statistical" differences are not the same as "biological" ones. That is a peculiar view – and one that is also at odds with the way in which equality law works.
Direct discrimination is, quite simply, discrimination on the grounds of a particular "protected characteristic".
"No women, blacks or gays" would be direct discrimination. Indirect discrimination involves the application of a condition that, although applied equally, tends to hit one group disproportionately by comparison with another. "No one under six foot" is indirect discrimination because it tends to affect women more than men. It is a statistical fact that women tend to be shorter than men, so such a condition, unless required for clear operational reasons, would be unlawful.
So the law permits statistical facts. In fact, a reading of judgments in this area suggests active encouragement of statistics used in this way. As one English court declared not that long ago: where recruitment outcomes, in terms of relative frequency, can be shown to be statistically disproportionate, it is likely that a discriminatory policy is being applied, even if unintended. In such a case, an employer would be guilty of discrimination and would need to change their recruitment practices – or face penalties.
What did the Advocate General mean? In her earlier opinion, she appears to draw a distinction between biological factors, such as the costs associated with pregnancy, and statistical factors that do not represent any clear biological differences. It’s a hard distinction to maintain – and certainly one that is not otherwise held to in law.
If this raises issues for the European Court, it also opens an entire can of worms for the insurance industry.
When it comes to forecasting future outcomes, the statistician’s task boils down to identifying the degree of variance at play in possible outcomes, and apportioning that variance to underlying factors. Random variability is excluded.
What’s left tends to be mostly due to three or four main factors, distributed in geometric fashion: analysis of most human behaviours often gives rise to a series of explanatory components, with around 50 per cent of the variance taken up by the first, 25 per cent by the second, and so on.
If, as seems likely, gender is a high-ranking component of human variability, the ability of the insurance industry to predict outcomes has just been significantly reduced.
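The variance-apportioning idea above can be sketched in a few lines. This is a toy simulation, not real actuarial data: the factor names, coefficients and noise level are all illustrative assumptions, chosen so the shares come out roughly in the geometric 50/25/25 pattern the article describes. The decomposition itself is the standard between-groups calculation from one-way ANOVA.

```python
import random
import statistics

random.seed(0)

# Hypothetical toy model: annual claim cost driven by two underlying
# factors plus random noise. Factor names and coefficients are
# illustrative assumptions, not real rating variables.
def claim_cost(mileage_band, vehicle_class):
    return 100 * mileage_band + 158 * vehicle_class + random.gauss(0, 79)

population = [(random.randint(0, 3), random.randint(0, 1))
              for _ in range(10_000)]
costs = [claim_cost(m, v) for m, v in population]
total_var = statistics.pvariance(costs)

def explained(factor_index):
    """Share of total variance attributable to one factor: the variance
    of the per-group mean cost (a between-groups decomposition)."""
    groups = {}
    for person, cost in zip(population, costs):
        groups.setdefault(person[factor_index], []).append(cost)
    grand_mean = statistics.fmean(costs)
    between = sum(len(g) * (statistics.fmean(g) - grand_mean) ** 2
                  for g in groups.values()) / len(costs)
    return between / total_var

print(f"first factor explains  {explained(0):.0%} of variance")
print(f"second factor explains {explained(1):.0%} of variance")
```

With these assumed coefficients the first factor accounts for roughly half the variance and the second roughly a quarter, with the remainder being noise; strike out a high-ranking factor and that slice of predictability is simply gone.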
What then of other factors? Make and model of car, for instance?
Here comes a problem. Insofar as make and model are factors independent of gender, they can still be used as risk predictors. But where they overlap – where the risk due to type of car driven links directly to gender – then this, too, has just been removed from the equation. Dual-driver policies are cheaper because they are inherently less risky – but if they tend to be linked to one gender more than the other, they could also now be at risk.
Ultimately, that is the real issue for insurers. They can’t simply exclude inner city areas from insurance, because in today’s UK that would almost certainly result in a degree of indirect racial discrimination.
From December 2012, they cannot use gender explicitly as a factor in setting premiums. In the long run, though, as they pore over their detailed charts and risk calculations, the picture may be far worse, because many of the other factors they might instinctively put in place of gender could well correlate with gender. Those factors could soon be outlawed too.
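The proxy problem is easy to demonstrate. In the sketch below, gender is dropped from the rating model entirely and premiums are set only on a vehicle-class feature; but because that feature is (by assumption here) correlated with gender, average premiums still split along gender lines. The 70/30 split and the premium figures are invented for illustration.

```python
import random
import statistics

random.seed(1)

# Hypothetical sketch: gender never enters the pricing model, but a
# correlated proxy feature does. All numbers are illustrative.
def simulate_driver():
    gender = random.choice(["F", "M"])
    # Assumed correlation: 70% of one group drives the proxy vehicle
    # class, 30% of the other.
    drives_hatchback = random.random() < (0.7 if gender == "F" else 0.3)
    return gender, drives_hatchback

drivers = [simulate_driver() for _ in range(100_000)]

def premium(drives_hatchback):
    # Premium set only on the proxy feature, never on gender.
    return 400 if drives_hatchback else 550

avg = {g: statistics.fmean(premium(h) for gg, h in drivers if gg == g)
       for g in ("F", "M")}
print(avg)  # average premiums still differ between the two groups
```

A regulator looking only at outcomes would see gender-differentiated prices even though gender appears nowhere in the rating formula – exactly the indirect-discrimination pattern the court's logic would catch next.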
Whatever happens, this is not the last we will hear of this case. Implicit in the interaction of statistical fact and statistical discrimination is a view of society that calls for a much greater evening out of difference than we are used to at present. If that is what we want, this ruling is a positive step forward: if not, it is a door opening into a world that some will find increasingly difficult to bear. ®