The Ethics of Artificial Intelligence, Part 1: Transparency

August 3, 2017

I’ve blogged about questions of ethics before, in a February post on using genetic data and in a September 2016 post on how big data can exclude. Today I’m beginning a four-part series on ethical issues raised in data science, and in particular on ways artificial intelligence needs to include ethics as part of the entire process.

It’s a question on everyone’s mind: how can I trust decisions made by computers? Cathy O’Neil’s book Weapons of Math Destruction argues that big data increases inequality and threatens democracy. Researchers such as Bostrom, Anderson, and many more have made this topic their life’s work. Corporate partners, non-profits, and others in the field are beginning to emphasize rigorous ways to defend the algorithms that power artificial intelligence. And many data science programs, including mine at Indiana University, are including ethics in their curricula.

Because I feel you can’t have trust without transparency, we will begin the ethics discussion here. Devney Hamilton and others have made the great point that we cannot accurately make predictions about the future using biased historical data in ‘social realms that we’re simultaneously trying to change’. For example, if we know there is bias in data about people, as exists in the criminal justice system, then using that biased data to train machine learning models will just give us biased models. Making the math behind a decision transparent to all is a great starting point for ensuring ethical machine behavior.
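As a rough illustration, here’s a minimal sketch in Python on entirely synthetic data (the loan-approval framing, variable names, and numbers are hypothetical, not from any real system) of how a model trained on biased historical decisions reproduces that bias:

```python
# Minimal sketch: a model trained on biased historical labels learns the bias.
# All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # a protected attribute: group 0 or group 1
merit = rng.normal(size=n)         # the signal the decision *should* rest on

# Historical decisions: equally qualified people in group 1 were approved less.
historic_bias = np.where(group == 1, -1.0, 0.0)
approved = merit + historic_bias + rng.normal(scale=0.5, size=n) > 0

X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, approved)

# Same merit, different group: the learned model scores them differently,
# because it absorbed the historical bias as if it were signal.
probe = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(probe)[:, 1])
```

Transparency about which features a model uses, and about how its training labels were produced, is what lets outsiders spot this kind of feedback loop.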

Communicating the assumptions behind the features and algorithms used in artificial intelligence is important. If a company describes its loan-making process while maintaining privacy and security standards, the public can hold the company accountable and help prevent bias. The decisions being made with artificial intelligence that involve people are simply too important to lack this transparency. And yet, the big data community has a long way to go in building ethics requirements into its products and services.

Furthermore, ethical artificial intelligence needs to include automated ways to detect bias, such as the approach described by Hardt et al. Their research addresses avoiding discrimination on what they call ‘protected attributes’ in the data, such as race, color, religion, gender, disability, or family status. Their framework requires intentional, explicit choices about how a trained model’s predictions are adjusted so that its error rates are balanced across protected groups. The more subjectivity we can take out of encoding and scoring the data, the better we can minimize and hopefully eliminate bias.
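To make that concrete, here is a minimal sketch of an equalized-odds-style audit in the spirit of Hardt et al.’s work: compare a classifier’s true positive and false positive rates across protected groups, and treat large gaps as a red flag. The function names and toy arrays are illustrative assumptions, not code from their paper:

```python
# Minimal sketch of an equalized-odds-style bias check across two groups.
import numpy as np

def conditional_rate(pred, truth, group, g, truth_value):
    """Rate at which members of group g with the given true label are predicted positive."""
    mask = (group == g) & (truth == truth_value)
    return pred[mask].mean()

def equalized_odds_gaps(pred, truth, group):
    """Gaps in true/false positive rates between groups 0 and 1."""
    tpr_gap = abs(conditional_rate(pred, truth, group, 0, True)
                  - conditional_rate(pred, truth, group, 1, True))
    fpr_gap = abs(conditional_rate(pred, truth, group, 0, False)
                  - conditional_rate(pred, truth, group, 1, False))
    return tpr_gap, fpr_gap

# Toy example; a real audit would use a held-out evaluation set.
pred  = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=bool)
truth = np.array([1, 0, 1, 0, 1, 0, 1, 1], dtype=bool)
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

tpr_gap, fpr_gap = equalized_odds_gaps(pred, truth, group)
print(f"TPR gap: {tpr_gap:.2f}, FPR gap: {fpr_gap:.2f}")  # large gaps flag potential bias
```

A check like this is easy to automate as part of a model’s release process, which is exactly the kind of transparency this series is arguing for.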

Have you seen great examples of transparency in artificial intelligence? If so, I’d love to hear about them and how they could be scaled across different domains. Next week I’ll be talking about how trust is a key component of the ethics of artificial intelligence.
