(Image by BioMed Central)
Last week we looked at a study by Belsky claiming that genetic data may predict people's educational and career success throughout their lives. Today I want to look at a few ethical considerations when using health data. Specifically, I want to discuss two theoretical concepts related to the ethics of health data – progressive caution and a privacy framework – and see how these theories hold up in real life.
In the book Ethics, Computing and Medicine, Goodman states that a standard for public health informatics includes determining a standard for health practice, a standard for health research, and a standard for the use of information technology. He defines “progressive caution” as a response to the tension between the need for scientific progress and the demands of a robust set of public health data ethics. Ethics functions as both a floor and a ceiling – minimal standards practitioners may not fall below without being found negligent, and aspirational goals to strive toward. Ethics thrives on new technology and science, but the desire for scientific progress must be balanced against its ethical costs.
Progressive caution does not provide a straightforward answer to the tension between ethics and progress. There is a balance that seeks to minimize risk to individuals and to society’s public health, but not to the point of unreasonably ‘restricting liberty’ (198). The path to scientific progress has to be tempered by carefully examining and implementing ethics standards appropriately within context. Another way to look at “progressive caution” is as a yield sign on the road: proceed forward, but exercise caution when doing so. Research involving human subjects – and the data collected and used along the way – needs certain restrictions to protect the individual. Goodman sums up the concept by asking, “How should we arrange things so that we enjoy the benefits of new technology while reducing, minimizing or mitigating the (potential) harms?” (199). As Richards also states, “Big data will be revolutionary, but we should ensure that it is a revolution that we want, and one that is consistent with values we have long cherished like privacy, identity and individual power” (46).
Now that we’ve talked a little about progressive caution, I want to move to the theory of a privacy framework for managing personal data. Gopalakrishnan and colleagues proposed such a framework in 2012. Individuals increasingly have input into what is done with data about themselves, yet it is not easy for them to get a clear, transparent view of how their data is actually being used. The suggested privacy framework is one where individuals can express the boundaries around which of their data is private, and technologies could be created to monitor those privacy boundaries on individuals’ behalf.
I would probably use this type of “opt-in” privacy framework for any of my health-related data, since this is highly personal data that I feel shouldn’t be used for marketing purposes. I would grant health data access to research organizations if I felt they would appropriately manage its privacy and if I thought my data would benefit others’ health. I probably would not participate in Gopalakrishnan’s framework for other types of my data, because the benefit I would get does not outweigh the cost of the time spent monitoring potential privacy violations. Practically speaking, I think this framework is a great idea, but sorting through the many layers of personal data owned by many entities will take a while and will be costly to implement. As with progressive caution, I think more research needs to be done on incentive models to make this framework attractive to both corporations and individuals.