Obscuritus.ca - A nerd making a nerdy blog

The Healthcare Panopticon

I hear a lot about Big Data these days - and it's becoming pretty cliché to worry about it and its effects on freedom, privacy, or the other things that hackers like to rattle on about. However - I think one of the most important things about the modern Panopticon is that it's usually built on earnest belief. I fully believe that the people trying to gather all of our data into a single bundle - to track, collate, and control people - are earnest, and believe they're making the world a better place.

In case you want to know the end of this quickly - the people who believe in the Big Data Utopia have never examined what that means.

This is primarily a response to a talk at TEDxU (the talk by Dana Fox - starting around 2:35:00 - about Cognitive Human Services Delivery). It seems like a thoroughly earnest talk - its goal is making sure that everyone is happy. But happiness doesn't come through service delivery alone. Society is larger than the people who deliver services, and sometimes the people who deliver services are not the people you want to share with.

One of the pillars of information security is PII (Personally Identifiable Information) - information that can identify a person. It's often emails, credit card numbers, addresses, names, birth dates, location data, and the like. This is typically treated as the holy grail of data - and leaking PII without consent is what gives an incident the status of an information leak. When someone is leaking information - the question is whether they are leaking PII. Collecting PII is treated as basically a crime so heinous that even the NSA claims not to engage in it. The typical argument is about whether or not it's OK to collect and correlate enough metadata to personally identify someone (a surprisingly small amount suffices). And it can be very hard to truly deidentify data when you release it (a good example being the "deidentified" NYC taxicab database). However - the TEDx talk goes well beyond anything else I've seen. This suggestion deals with something greater than PII.
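The NYC taxicab case is instructive: the released data "anonymized" medallion numbers with unsalted MD5, but the space of valid medallions is so small that every hash can be reversed by exhaustive enumeration. Here is a minimal sketch of that style of attack - the medallion format shown is one of the real NYC formats, while the specific "leaked" hash is fabricated here for illustration:

```python
import hashlib
import string
from itertools import product

def md5_hex(s: str) -> str:
    """Unsalted MD5, as used in the released taxi dataset."""
    return hashlib.md5(s.encode()).hexdigest()

# One real NYC medallion format: digit, letter, two digits (e.g. "5X55").
def all_medallions():
    two_digits = [f"{i:02d}" for i in range(100)]
    for d, letter, dd in product(string.digits, string.ascii_uppercase, two_digits):
        yield f"{d}{letter}{dd}"

# Precompute a reverse-lookup table for the whole format:
# only 10 * 26 * 100 = 26,000 entries - trivial to enumerate.
rainbow = {md5_hex(m): m for m in all_medallions()}

# A hash taken from the "anonymized" dataset (fabricated example value).
leaked_hash = md5_hex("5X55")
print(rainbow[leaked_hash])  # recovers the original medallion instantly
```

Because hashing is deterministic and the input space is tiny, the "anonymization" adds essentially no protection - which is the general lesson: de-identification fails whenever the space of possible identities is small enough to enumerate or correlate.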

Beyond PII is something called PHI - Protected Health Information. This is not email addresses or street addresses - this is information far more ingrained in people's lives. This is someone's medical history - the diseases they have, their sexual orientation, their gender identity, their interactions with their therapist. These are far more private parts of a person's life. And this is what's being suggested for sharing. Someone suggesting that we share PHI across databases can't simply say we should put it all in a Watson database and then provide it to people for interactions. The process as a whole needs far more consideration.

Most importantly - we need to look at how this data will be stored and collected. Who controls the connection? Who controls what is collected? Who controls the storage? Is the storage part of a national database? Is it part of a not-for-profit? Who controls the not-for-profit? Worse - is it controlled by a for-profit company? And if it is - is our health information now part of a scheme to make money? The largest question about this database is who controls it.

Any time you build a giant database of people's information - you need to ask how a police state would use it. If we have a database of everyone's medical information, that information can be released in order to harass, embarrass, or endanger. It can also be used for reasons that are not in the interest of the people it describes - for example, the Canadian woman who was denied entry to the US due to a medical condition. A person's own interests are not the same as the state's. That is why we have a concept called privacy - in order not to be treated as an extension of the state by the state, people need to have privacy. If people are being tracked - then what's to stop the border patrol from denying you a vacation? What's to stop the police from approaching you as a violent person? What's to stop a for-profit company from trying to manipulate people with mental illnesses into buying things they don't need?

Now - one thing I often hear in response to this is that we can simply define who is able to access the information. Limit it to a specific group of people - make it so that only a specific board of doctors can access it, for example. There are three major problems with this.

First - you can't just make this choice for yourself. You are making this choice for every single person in the database.

Second - we live in a society where there is prejudice. People believe things - you hear of people who believe in a racial hierarchy, people who believe their religion calls for holy war, people who believe that homosexuals are evil and should be killed. Prejudice and discrimination exist. And if you cannot hide your gender identity from the police once you've discussed it with your therapist - then you have no safe way to get help for gender dysphoria. People won't get help for their mental health issues if they're worried that doing so will make their interactions with police or professors embarrassing or deadly - worried that the people who have access to this database will change the way they are treated. If, as a teenager, someone briefly enters an inpatient program for suicidal ideation - will the border patrol be aware of it twenty years later? Will they be allowed to take a vacation?

Third - stigma around mental health already exists, and people with mental health issues already worry about that information getting out. Increasing that risk by handing the details to untrained laymen (the police, schools, or even a GP reading notes a patient was only comfortable sharing with their psychiatrist) is not a service - it's a tool to harass, harm, and oppress. In effect - this database would be the world's largest force multiplier of people's prejudice. There would no longer be "passing privilege"; people would never be able to make choices about their own disclosures, and would never be able to choose how they enter an interaction with authority.

A lot of the ideas I'm seeing around Big Data assume that we live in a Utopia. That there are no stigmas. That algorithms are perfect. That doctors, lawyers, and police are inhumanly perfect - able to act completely without prejudice. I'm inclined toward a utopian mindset myself. I want to believe that all people are perfect. But, unfortunately, they're not - and some tools of utopia, applied in an imperfect world, instead create a dystopia.