Can AI be ethical? Part one: 101 – What is ethical AI?

In this series we discuss the pros and cons of AI, and the ethical issues that arise from its use.

Today’s subject is: 101 – What is ethical AI?

AI is, in essence, human-like decision making by computers, which aims to develop informed and practical solutions to complex problems. Its use is expanding but can be controversial: there have been instances where AI has produced biased results, or solutions that were clearly not fit for purpose. In this series of blogs, we will look at the rapidly developing field of ethical AI. We will find that social harms often result from an incomplete understanding of a project’s scale, or of the groups it will affect, and that the proposed outcomes must be tested at every stage of the development lifecycle, to ensure that a solution has no unexpected consequences.

It is a common feature of new or unfamiliar concepts that we can focus on one aspect without realising that it is either a very generic idea or, alternatively, a subsection of a larger area of thought. For example, we often use the term Artificial Intelligence, but this high-level concept includes many subsets, such as machine learning, robotics, and speech recognition, so to communicate on an equal footing with another person we need to specify which one we are discussing. In the same way, we tend to treat data protection as a stand-alone methodology for protecting people when their data is collected, analysed, and stored, whereas it is really a way of thinking about the rights people have in relation to their data, ideas of privacy, and the consequences that data misuse could have.

Seen in this wider context, data protection is a subset of ethics, the moral principles that govern behaviour or activities. Ethical AI, then, is a way of balancing the potential good that can come from the use of AI, for example improved healthcare, education, and energy management, against the potential harms that could occur, which include bias and discrimination, invasions of privacy, and unexplainable outcomes.

When we consider ethical AI, what we are really looking at are the people involved in its creation. Consequently, the primary questions should concern the drivers and motives of those financing an AI solution: what problem is the development proposing to solve? Is it for financial gain, and if so, what mechanisms does it use? Or, if it is meant to help people, are there aspects of its design that might work against its intended use? It is worth remembering that even good intentions, whatever their original driver, can have negative consequences.

The second set of questions concerns the development team: does it have a culture of responsibility, and does it use ethically sound practices at all stages of design and development? And even if we assume good intentions, might we inadvertently create something that is not fit for purpose, or that disadvantages part of the population it is designed to help?

Consequently, ethical AI is not a theoretical discussion point; it is an intrinsic part of the development lifecycle, and increasingly one that public and private bodies are seeking ways to integrate, efficiently and comprehensively, into their project management toolset.

Next time: H is for Harm – Potential Trouble in AI Development