Ethics in Technology - AI for the Common Good

AI for the Common Good


In 2010 the Chinese Academy of Sciences and the PLA General Hospital in Beijing set up a research project to reassess whether doctors had been correct in diagnosing certain hospital patients as having ‘no hope’ of regaining consciousness. In 2018 the researchers reported that at least seven such ‘no hope’ patients in Beijing had been assessed by an artificial intelligence system, which predicted they would wake within a year; and they did.

In one case neurologists had conducted four rounds of tests to assess the potential for recovery and had given the patient a score of seven out of twenty-three on a coma recovery scale, a value that gave his family a legal right to unplug life support. After reviewing the brain scans, the computer gave the patient more than twenty points, almost a full score, and the man, along with six other patients, woke within 12 months, as predicted. It is worth noting that the machine made some mistakes; in one case a 36-year-old was given low scores by both the AI and the doctors, but recovered. Despite this, the machine has achieved 90 per cent accuracy on prognostic assessments.

The system works by reviewing patient brain scans taken using functional magnetic resonance imaging, which detects brain activity by measuring tiny changes in blood flow. The AI algorithms scrutinise these changing signals and uncover previously unknown patterns from past cases. Further, the algorithm can look at different brain functions and see connections between them. Doctors cannot do this, as there is too much data and much of it involves changes so small they are invisible to the human eye. The system has helped diagnose more than 300 patients to date, but it does not replace doctors; it helps the doctors and families make better decisions.
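As a rough illustration of the pattern-recognition idea described above, the sketch below uses synthetic data in place of real fMRI recordings: it computes pairwise correlations between simulated regional time series as "functional connectivity" features and fits a standard classifier to predict recovery from past cases. The region count, feature construction, and model choice are illustrative assumptions only, not a description of the Beijing team's actual system.

```python
# Hypothetical sketch: predicting recovery from functional-connectivity features.
# Synthetic data stands in for real fMRI recordings; the features and model are
# illustrative assumptions, not the system described in the text.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_regions, n_timepoints = 120, 10, 200

def connectivity_features(time_series):
    """Flatten the upper triangle of the region-by-region correlation matrix."""
    corr = np.corrcoef(time_series)           # shape: (n_regions, n_regions)
    iu = np.triu_indices_from(corr, k=1)      # ignore self-correlations
    return corr[iu]

# Simulate one multi-region time series per past patient.
scans = rng.standard_normal((n_patients, n_regions, n_timepoints))
X = np.array([connectivity_features(s) for s in scans])

# Binary label: 1 = regained consciousness within a year, 0 = did not.
# Random here; in practice the labels would come from follow-up records.
y = rng.integers(0, 2, size=n_patients)

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)
print(f"Cross-validated accuracy on synthetic data: {scores.mean():.2f}")
```

On real data such a model would also need to produce a calibrated prognostic score rather than a bare label, so that, as the article notes, it supports rather than replaces the judgement of doctors and families.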

On the surface, this is an example of a use of AI with positive benefits, and there are few reasons to try to bias the AI to produce an incorrect result. It is possible, if unlikely, that a family might want to turn off life support, for financial or other reasons, and might try to influence those managing the system to produce a different result; in practice this is improbable, since the underlying data is hard to manipulate. However, if we look at other scenarios, for example the use of AI in determining the risks associated with fracking for gas or oil in a particular location, the financial implications are large and the ability to manipulate the outcome, whilst still difficult, becomes more attractive. Once again, we reach a situation where the use of AI, the control of the data inputs, and the financial rewards can be in conflict, and we need to be asking questions rather than relying on an unbiased outcome.


Read more about ethics in technology: AI and Bias.