H is for Harm – Potential Trouble in AI Development

Can AI be ethical?

In this series we are discussing the pros and cons of AI, and the ethical issues that arise from its use.

Today’s subject is: H is for Harm – Potential Trouble in AI Development

The use of AI can be a force for good; it can complement human judgement and intuition with cold, hard logic based on aggregate decisions drawn from millions of data elements. However, the law of unintended consequences can easily come into play when designing something with so much processing power. It is here that ethical AI comes into its own: actively looking for situations that were not originally considered, and for potential issues in the data used to train AI systems.

The simplest issue, and the one that has been most prominent in the news, concerns bias and discrimination. It occurs when the data used to train an AI solution is biased towards or against a certain group or groups of people. For instance, Amazon found that an AI algorithm it had designed to analyse applicants' resumes, with the aim of finding talented hires while removing human bias, was itself biased because of the resumes it used in its learning stage. These were historical resumes from employees Amazon had already hired. Because most hires up to that point had been male, the system learned a bias against women, and even against an all-female college from which no previous applicant had been hired.
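To make the mechanism concrete, here is a minimal, purely hypothetical sketch of how a resume-scoring model trained on skewed historical data picks up a proxy bias. All words, data, and function names are invented for illustration; this is not Amazon's system, just a toy word-scoring model.

```python
from collections import Counter

# Hypothetical historical hiring data: (resume text, was_hired).
# Most past hires were male, so a word correlated with female
# applicants ("womens") appears only among rejected examples.
history = [
    ("python java leadership", True),
    ("java cloud leadership", True),
    ("python cloud mentoring", True),
    ("womens chess captain python", False),
    ("womens coding club java", False),
]

def train(examples):
    """Learn a per-word score: smoothed ratio of hired vs rejected counts."""
    hired, rejected = Counter(), Counter()
    for text, was_hired in examples:
        (hired if was_hired else rejected).update(text.split())
    vocab = set(hired) | set(rejected)
    return {w: (hired[w] + 1) / (rejected[w] + 1) for w in vocab}

def score(weights, resume):
    """Average word score; unseen words are neutral (1.0)."""
    words = resume.split()
    return sum(weights.get(w, 1.0) for w in words) / len(words)

weights = train(history)
# "womens" never appears in hired resumes, so any resume containing it
# is penalised, regardless of the applicant's actual skills.
print(score(weights, "python java leadership"))
print(score(weights, "womens chess captain python java leadership"))
```

The model never sees gender as a feature; it learns the bias entirely from words that act as proxies in the historical outcomes, which is exactly why the source of the bias is hard to pin down.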

Amazon detected the bias and tried to correct it, but ultimately realised it could not eliminate all AI-based bias, because it was not easy to establish where the bias was occurring; indeed, in correcting some biases, others were inadvertently created. This might seem to make the use of AI untenable, given that some learned biases are always likely to go undetected. The counter-argument is that if you have dealt with the blatant issues that might impact certain groups, and you continually look for trends that might indicate previously undetected bias, then the good the AI delivers may well outweigh the negatives.
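One simple way to "continually look for trends" is to audit outcomes by group. The sketch below, with invented data and group labels, computes per-group selection rates and the ratio between them, a rough check inspired by the common four-fifths rule of thumb for flagging possible disparate impact.

```python
# Hypothetical audit data: (group label, was_selected) per decision.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(decisions):
    """Fraction of positive decisions per group."""
    totals, selected = {}, {}
    for group, chosen in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
# Ratio of the lowest to the highest selection rate; values below
# roughly 0.8 are a conventional signal to investigate further.
impact_ratio = min(rates.values()) / max(rates.values())
print(rates)
print(impact_ratio)
```

A check like this cannot say where the bias comes from, only that outcomes differ enough to warrant investigation, which matches the ongoing-monitoring approach described above.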

The question we then need to ask is whether, in this type of example, we are prepared to accept some bias in order to benefit the most people, or whether we expect our decision-making algorithms to be entirely bias-free.

Next time: A is for Aggregation – the ethical sensitivities of deanonymising personal data