Would you trust an AI for legal work?

6 years ago, comments: 4, votes: 4, reward: $3.80

View the original post on Musing.io

I would not immediately trust AI for legal work. What makes AI powerful is how well it is programmed, and the limit here, just as in human legal interpretation, is that the programmers' biases would find their way into the code. So you would be trading one kind of human error for another.

If it is a learning AI, it necessarily has to make mistakes in order to build the data set that would allow it higher accuracy. Like humans, AI learns from mistakes; it is simply able to make them, and learn from them, far faster than we can. That is little consolation, however, for the person whose life is ruined because the AI lacks experience in the particulars of the case. Assuming the AI has exhausted all possible permutations of legal learning, it could be extremely reliable.

One thing AI would not help with is the whims of the people involved. People enter into contracts, lawsuits, witness stands, and legal matters for all sorts of messy personal and business reasons. They settle out of court, drop lawsuits unexpectedly, sign contracts under duress or while drunk, are intimidated out of testifying, give false testimony, and otherwise display all manner of human foibles.

So the problem is still a human problem, not one of cold calculation. Would the AI question a person’s motivation? Would it come up with a creative solution that lets two parties agree without going to court? Although legal work appears on the surface to be a matter of logic and reason, nuance and personal judgement go into it as well. Prosecutors, for example, have discretion over whether to charge someone with a crime, and there is room for negotiation and plea deals. Even ironclad contracts are always subject to renegotiation if both parties can agree.

To recap, any AI legal work would be limited and suspect at the outset. Over time, the AI may improve as its data set grows, so early legal AI would need to be supervised closely and tutored incessantly. Even once it has some degree of mastery, it may still need humans to capture the little nuances that so often override a strict legal interpretation.