Can Big Data Be Racist?

Big Data powers the predictive models that can tell Target you’re pregnant before your physician even knows. It enables fine-grained segmentation of huge datasets, like the algorithm behind the 76,897 micro-genres keeping us hooked on Netflix, and it has given us the ability to crowdsource real-time insights, like the groundbreaking revelation that Americans are as keen to #deportbieber as Canadians are for us to #keepbieber.

In other words, Big Data is currently the best method we have for making sense of an increasingly complex world. But it’s also imperfect, because the data and the decisions we make with it are never completely objective. Consider the case of St. George’s Hospital Medical School, which built an algorithm to automate its admissions process. The idea was to reduce variability and increase objectivity, but instead the school inadvertently institutionalized bias against women and minorities. The model had been trained on historical admissions data, which unduly favored white male candidates. It did exactly what it was built to do – and the opposite of what was intended.
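The mechanism behind the St. George’s failure is easy to reproduce. Here is a minimal, hypothetical sketch (all names and numbers invented for illustration): a "model" that simply summarizes past admissions decisions will faithfully reproduce any bias those decisions contained, even though the rule itself is neutral.

```python
# Hypothetical sketch: a model fit to biased historical decisions
# reproduces the bias. Groups, rates, and records are invented.
from collections import defaultdict

# Synthetic historical records: (group, qualified, admitted).
# Past reviewers admitted qualified group-A applicants 90% of the time,
# but equally qualified group-B applicants only 40% of the time.
history = (
    [("A", True, True)] * 90 + [("A", True, False)] * 10 +
    [("B", True, True)] * 40 + [("B", True, False)] * 60
)

# "Training": estimate P(admit | group, qualified) from the data --
# a faithful summary of past decisions, bias included.
counts = defaultdict(lambda: [0, 0])  # (group, qualified) -> [admits, total]
for group, qualified, admitted in history:
    counts[(group, qualified)][1] += 1
    if admitted:
        counts[(group, qualified)][0] += 1

def predict_admit_rate(group, qualified):
    admits, total = counts[(group, qualified)]
    return admits / total

# Two equally qualified applicants receive very different scores:
print(predict_admit_rate("A", True))  # 0.9
print(predict_admit_rate("B", True))  # 0.4
```

Nothing in the code mentions race or gender as a rule; the disparity enters entirely through the training data, which is exactly how a system designed for objectivity can automate a prejudice.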

When we translate cultural clichés and stereotypes into empirically verifiable datasets we introduce subjectivity into a discipline that strives for objectivity. When we imbue our Big Data insights with our race-based biases we project our prejudices onto subsequent observations. It’s inevitable. So is Big Data racist? The answer is complicated.
