No. 13: Unconsciously Biased and Consciously Unbiased

This blog might be considered an epilogue to my previous blog entitled “Implicit and Explicit Models.” It is based on a recent article in Science entitled “Dissecting racial bias in an algorithm used to manage the health of populations” [1]. The article details and quantifies health disparities between Whites and Blacks arising from a widely used commercial (and proprietary) algorithm that predicts patient risk scores. With a higher risk score, a patient may be given access to more intensive health care management programs, presumably leading to better health outcomes. It turns out that Whites receive higher risk scores from the algorithm than equally sick Blacks and therefore have greater access to better care. The root cause identified by the authors is that the algorithm used health care costs as a proxy for health needs; because less money is spent on Black patients at a given level of need, the algorithm concluded they were healthier than they actually were. The article also cites other examples of how algorithms encode racial or gender biases in the results they produce.

I will also mention a very interesting book that I read, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil [2]. It is a good read (be wary of some political views that creep into the scientific arguments) covering her direct experiences as well as other findings. She reports on how bias creeps into sentencing guidelines in our judicial system, the rating of teachers in our educational system, etc. In my opinion it is essential reading and should serve as a cautionary guide for data scientists and statisticians alike.

I bring this to your attention because I believe there is also a connection to Bayesian thinking. I believe the biases encoded into algorithms are in no way an intentional or conscious effort by the really smart people who build them (though I admit there are some nefarious people who create algorithms that intentionally try to manipulate our opinions/beliefs/political views). Nonetheless, the biases are real and reflect the fact that we have implicit models (one might say “priors”) that govern our thinking. We cannot help it; that’s how the human brain works.

Of course, when making an algorithm, someone has to decide on its ingredients – what data/variables are included, the form of the predictive function, the objective function to optimize – as well as how to make it operational – what constitutes a well-fitting model, what data are used to train the algorithm. While these may be conscious decisions involving discussions among teams of quantitative scientists and their subject matter expert colleagues, unconscious biases seep into the thinking and the process. As noted in the Science article, making such ingredients more transparent would help in their independent evaluation and in the reduction of bias.

There is an analogy here with Bayesian thinking that I have long argued. Many ask me about “the prior.” Where does it come from? How do you quantify it? Who decides? These are all legitimate questions, and the development of a prior is hard work. The first step is to admit that a prior exists in our minds (i.e. an implicit model). [Side Note: There is an organization called Frequentists Anonymous that has a 10-step program in which the first step is to admit openly and publicly, “I have a prior.” From there, the other nine steps of therapy can begin. … OK, this is a joke, but you get the point.] Then one can figure out how to make that prior explicit and begin to describe it mathematically or statistically. That description can be simple – e.g. a point estimate of the probability that the null hypothesis is true, like pr(H0 is true) = 0.40. Alternatively, it can be a quite complex description of the probability distribution governing possible values of a parameter in a larger model. In general, we know a lot more than “the parameter lies in the interval (-1000, 1000)”; rather, we might more intelligently say the parameter lies in the interval (-1, 4) with the most likely value being 0. That information, garnered from experience or understanding of the model/context, can be converted, for example, into a suitably shifted gamma distribution or even something as simple as a triangular distribution. Everyone knows something about the experiment, the model, the hypothesis. Write it down!
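To make that last step concrete, here is a minimal sketch – in Python with scipy.stats, my own choice for illustration, not anything prescribed above – of converting “the parameter lies in (-1, 4) with the most likely value being 0” into an explicit triangular prior:

```python
# A minimal sketch: encode "the parameter lies in (-1, 4) with the most
# likely value being 0" as an explicit triangular prior. The numbers come
# from the example in the text; scipy.stats is used purely for illustration.
from scipy import stats

lo, hi, mode = -1.0, 4.0, 0.0

# scipy parameterizes the triangular distribution by c = (mode - lo) / (hi - lo),
# with loc = lo and scale = hi - lo.
prior = stats.triang(c=(mode - lo) / (hi - lo), loc=lo, scale=hi - lo)

print(prior.mean())           # prior mean: (lo + mode + hi) / 3 = 1.0
print(prior.cdf(0.0))         # prior probability that the parameter is <= 0: 0.2
print(prior.interval(0.95))   # central 95% prior interval
```

Once the prior is written down like this, it can be examined, criticized, and revised by others – which is exactly the point.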

In a very recent e-mail exchange with Dr. Jonathan Jarow (formerly of FDA) about Bayesian stats, my blogs and priors (which, he jokes, should be called “anterior probabilities” in the medical community to complement “posterior probabilities”), Jonathan made an interesting comment/insight that was new – at least to me. He noted that Bayesians prefer to do all the hard work up-front, in quantifying the prior. Then, when the experiment is done and the prior is combined with the data in a statistically suitable way, the answer comes out pretty easily in the form of a posterior estimate or distribution. Frequentists do all the hard work on the back end. Sure, they prespecify a hypothesis, test statistic, etc. (note that this happens for Bayesians as well), but the hard work comes afterward, in trying to put the p-value in context and understand how much “evidence” a p-value is worth (see Blog 7: What Does p<0.05 Mean, Anyway). I think that is one reasonable characterization of the distinction between Bayesian and frequentist thinking. It is also another reason why I would support a Bayesian approach – it requires more deliberate and explicit thinking in advance of an experiment; it is a higher-level, more rigorous form of pre-specification, a concept that statisticians advocate as important, if not essential, for drawing reliable conclusions from experimental data.
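To illustrate Jonathan’s point, here is a minimal sketch of the “easy” Bayesian back end once the up-front work is done, using a conjugate beta prior on a response rate and hypothetical binomial data (the numbers are mine, purely for illustration):

```python
# A minimal sketch of "hard work up-front, easy answer afterward."
# Hypothetical example: a Beta(4, 6) prior on a response rate (prior mean
# 0.40, roughly as informative as 10 prior observations), updated with
# made-up trial data of 14 responders out of 30 patients.
from scipy import stats

a, b = 4, 6                          # the up-front work: an explicit, quantified prior
responders, nonresponders = 14, 16   # hypothetical experimental data

# With a beta prior and binomial data, the posterior is again a beta:
posterior = stats.beta(a + responders, b + nonresponders)

print(posterior.mean())           # posterior mean: 18/40 = 0.45
print(posterior.interval(0.95))   # 95% posterior (credible) interval
```

The posterior interval reads directly as a probability statement about the parameter – no back-end agonizing over what the number is worth.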

We can never eliminate our biases. But we can move from being unconsciously biased to consciously unbiased (or at least less biased) by making our prior knowledge more explicit in quantifiable ways.

Epilogue to this Epilogue:

See this article in the Wall Street Journal that came out shortly after the Science publication.

“New York Regulator Probes UnitedHealth Algorithm for Racial Bias: Financial Services Department is investigating whether algorithm violates state anti-discrimination law”

References

[1] Obermeyer, Z., et al. Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 447–453 (25 October 2019).

[2] O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Broadway Books, 2016.
