Last week I began a four-part series on the ethics of artificial intelligence (AI) and in particular talked about transparency. Today I want to continue this discussion by looking at some research by Dr. Guruduth Banavar at IBM Research. How does the issue of trust factor into decisions that are made by computers after they have been given a little or a lot of input from humans? If we are to reap the benefits of AI, we need to first trust it.
Banavar says that two things are needed in order to trust AI: 1) AI needs to behave as we expect it to, and 2) AI needs a system of best practices that aligns with social norms and values. Let’s discuss his first requirement in a bit more detail. Banavar argues that humans tend to ‘trust things that behave as we expect them to.’ In 2017, as we continue to live in an era of increasing mistrust, data scientists and technologists need to think about how this mistrust extends to computers. In a Tech Pro survey of 534 people, 56% said they were afraid of AI or thought AI would be harmful to society. In this one study, were people afraid because they don’t understand AI, or because of personal negative experiences with it? Is there more research out there that can help us answer why people are, or are not, afraid?
Banavar further argues that trust depends on establishing a system of best ethical practices. I agree with Banavar that trust should, in theory, build over time as industry standards that include ethics are implemented, documented, and made transparent. But of course, my personal opinion is biased toward technology for good by my age, gender, and background. I don’t think a system of industry standards alone will solve the trust issue. Even if the future of artificial intelligence includes more human ethics and morals, such as trustworthiness and altruism, and even if those ethics could be widely communicated and understood by everyone, there would probably always be some mistrust. The debate over whether to trust machines has raged among experts for at least 20 years, so I don’t expect to resolve it today in this blog.
There has been a great deal of negative media attention on unconscious bias in algorithms, but is the negative press warranted? How can researchers and practitioners get the word out about the positive applications of artificial intelligence, such as using natural language processing to predict court case outcomes? And even if we get the word out, will it be enough to turn the tide of skepticism in a skeptical age? I’m left pondering the data scientist’s role in building ethical standards into their algorithms and in weighing the positive and negative implications of artificial intelligence.
(Image by Cosmoso)