How AI crashed the car...


By Steven Orpwood

January 2024


AI is a hot topic of conversation at the moment, and one aspect that generates much discussion is driverless cars. Over the last 20 years the concept has been developed and implemented, but recent trends show that many car manufacturers and corporate early adopters are halting their research and development after a series of accidents that have killed or seriously injured people. To understand why, let us compare a human driver with an AI-driven car controlled by a neural network.


A person driving along a road sees ‘something’ in the middle of the road. In the seconds before they reach the ‘something’, they ask themselves a series of questions to establish the best course of action. “Should I stop?”, or “Can I drive round it or slow down?”. “No, there is traffic on my right, a cyclist on my left and cars behind me”. “Can I drive over it?” “No, it looks like a rock rather than a crisp packet”, or “it’s a crisp packet, but it might have something hard in it”, or “it’s a bottle, and it looks to be made of glass, not plastic”. They process information, compare the situation to things they have seen before or heard about (crucially, not necessarily in a similar scenario), and use this to form a decision. Based on the situation and the information they have, they decide to drive over the ‘something’ or around it. This sequence of events happens quickly and draws on information assimilated over many years, in numerous different situations, to inform the decision.


AI, on the other hand, uses learned data to make decisions. It assesses the ‘something’, compares it to images it has already seen, and, using its other inputs, be they camera images, radar or lidar, decides how to proceed. Its analysis is limited to situations it has encountered in training or simulation, and since it learns from experience it needs to encounter many different situations to become a better decision maker. The AI is most likely a neural network, which has inputs (what it detects) and outputs (what it does). Between these two interfaces sit hidden layers of weighted connections; the weights are adjusted as the AI learns, changing the outputs it produces for a given set of inputs. The learning mechanism of a neural network is loosely modelled on our own, and comparing an ‘experienced’ AI with a newly trained one is analogous to comparing the decision making of an adult with that of a child.
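To make that picture concrete, here is a minimal sketch in Python of the idea described above: a tiny feed-forward network with an input layer, one hidden layer of weights and an output layer, trained on a handful of invented examples. The feature names, data and numbers are entirely hypothetical and chosen only for illustration; no real driving system is anywhere near this simple.

# A minimal sketch (not any vendor's actual driving stack) of a neural network
# with inputs, a hidden layer of weights, and an output. All features and
# training data below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: [apparent size, apparent hardness] of the 'something',
# each scaled 0..1. Output: 1 means "safe to drive over", 0 means "avoid".
X = np.array([
    [0.05, 0.1],   # small, soft  -> e.g. an empty crisp packet
    [0.10, 0.9],   # small, hard  -> e.g. a glass bottle
    [0.60, 0.8],   # large, hard  -> e.g. a rock
    [0.30, 0.2],   # medium, soft -> e.g. a flattened cardboard box
])
y = np.array([[1.0], [0.0], [0.0], [1.0]])

# Weights between the input and hidden layer, and the hidden and output layer.
W1 = rng.normal(scale=0.5, size=(2, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training loop: each pass nudges ("amends") the weights so the network's
# outputs move closer to the known answers -- this is the learning.
for _ in range(5000):
    hidden = sigmoid(X @ W1)       # hidden layer activations
    out = sigmoid(hidden @ W2)     # network's current decision
    error = out - y
    # Backpropagation: pass the error back through the weights.
    grad_out = error * out * (1 - out)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    W1 -= 0.5 * X.T @ grad_hidden

# A new, unseen object: smallish and fairly hard. The network can only
# interpolate from the examples it has been shown; it has no intuition.
print(sigmoid(sigmoid(np.array([[0.2, 0.7]]) @ W1) @ W2))

The last line is the point of the sketch: when the network meets something it has never seen, all it can do is produce a number from its learned weights, which is why the breadth of its training experience matters so much.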


However, if a person relied only on what they had learned through trial and error, they would not be able to deal with unknown situations as they occur. Like our AI model, they would only be able to assimilate them, amend their mental weightings and handle similar situations in future. What a human driver has that the AI does not is intuition: a combination of imagination, insight and emotional intelligence that lets them interpret an unusual situation and make ‘reasonable’ decisions. Intuition is experience and insight rolled into one useful package.


This is not to say that humans don’t have issues too; accidents happen all the time. We do, though, have an established framework for holding people to account, whereas technology is held to a higher standard of accountability, and it is not currently clear where the blame lies when an AI-driven car has an accident.


Is it impossible for AI to drive cars safely? This author feels that AI will one day be better than people, but that day is not tomorrow; it is some years away. And how will this be achieved? Again, it is only speculation, but it will require new advances in technology, better algorithms, different ways of approaching the problem, and a greater understanding from the public of what AI can do, what it can do well, and what it cannot do at all.


At Aim we are continually looking for innovative ways to use technology to help people and organisations achieve results that would be difficult to reach in more traditional ways, and this includes the judicious and appropriate use of AI. Far from being something to be frightened of, AI is to be embraced, understood, and used to enhance our lives, whilst keeping a close eye on the ethics of its development.


If you would like to understand how Aim uses these technologies, please contact us for further information.