AI – An Inconsistent Application of the Law

By Steven Orpwood, Data Protection Expert

May 2023

 

A recent legal action brought against an AI model has highlighted the different expectations placed on human and machine decision making. This blog looks at the dilemma and how it might be resolved.

 

A quick search on Google for ‘inconsistent application of the law’ brings up the following headlines: “marriage law 'confusing and inconsistent', says law body”; “laws for teenagers are 'dangerously inconsistent'”; “confusing and inconsistent Covid rules aren't working”; and “report finds US hate crimes laws inconsistent”. Not all of these are new, but they show that the word ‘inconsistent’ occurs frequently in relation to activities that we assume are, and could reasonably expect to be, ‘consistent’.

 

In the US, a law firm is suing the first ‘robot’ lawyer for practising law without a licence. In fact, the robot was only designed to complete legal documents, but the lawsuit suggests that the client complained about poor results. Meanwhile, in the UK, the Competition and Markets Authority has started to investigate the impact of AI on consumers and businesses.

 

So how does all this fit together? Let’s consider ten AI models, all given the same inputs and base data. If they’re asked a question, will they all give the same answer? The assumption has to be that they won’t, because they will all be built in different ways and their machine learning algorithms will differ. Does this mean they can’t do what we require them to do? Not necessarily, but it does mean we can’t guarantee consistent results that give confidence to the user. And this is where we come back to the robot lawyer.
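The point about ten models disagreeing can be illustrated with a toy sketch. This is not how any real AI product works; it simply shows that ten "models" trained on the same base data, but each with its own random initialisation, need not give the same answer to the same question. All names here (`train_model`, the data, the seeds) are invented for illustration.

```python
import random

def train_model(seed):
    """Return a trivial 'model': a yes/no threshold learned with seeded randomness."""
    rng = random.Random(seed)
    # Every model sees the same base data...
    data = [0.2, 0.4, 0.6, 0.8]
    # ...but each model's learning process introduces its own variation,
    # standing in for different architectures and training runs.
    noise = rng.uniform(-0.15, 0.15)
    threshold = sum(data) / len(data) + noise
    return lambda x: "yes" if x > threshold else "no"

# Ten models, identical inputs and base data, different builds.
models = [train_model(seed) for seed in range(10)]
answers = [m(0.5) for m in models]  # ask all ten the same question
print(answers)  # the ten answers are not guaranteed to match
```

Even in this deliberately simple setting the answers split between "yes" and "no", which is the consistency problem the paragraph above describes.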

 

Given the inconsistent application of laws and rules mentioned above, why do we consider it acceptable for several lawyers to argue a case in court in several different ways, or for Governmental groups to make laws that are inconsistent, but not okay for ten different AI models to produce ten different answers to a problem? Perhaps the answer lies in the codes of practice that people have put in place to govern their actions in certain fields, for example law, medicine and most sports. These rules do not mean that the same inputs will result in the same outputs, but they provide a framework to guide the user. Every practitioner’s own experience will shape how they apply those rules, using them as appropriate, but still within the framework originally set down.

 

At the moment, AI models do not function according to these frameworks, or if they do, it’s not necessarily simple to determine how their decisions were reached. The global trend towards investigating how AI models work, and towards producing frameworks for them to follow, is therefore not a condemnation of AI or its use in the coming years. Rather, it is a good starting point for giving users of these systems confidence that the results are based on firm foundations and arrived at in a consistent way. In fact, we might even reach a point where headlines including the word ‘inconsistent’ are absent from Google searches.

 

Aim Ltd is a consultancy providing data indexing and searching software using the latest AI and machine learning technology, and we are committed to developing tools that are both functional and accountable. Contact us for more information.