Such AI tools are not ‘thinking’ beings, and the approach to problem-solving that they rely upon is very different to how humans think. They are undoubtedly effective in some specific contexts, but while we might tolerate an AI recommender that suggests a streaming show that we don’t enjoy, we should be much more cautious about an algorithm that guides a judge to deny an individual their liberty. We should prohibit such systems altogether if it transpires that their recommendations simply repeat historical patterns of bias against particular ethnic or social groups. It is also important to remember that experiments have shown that when the same set of rules is given to different teams of software developers to automate, the resulting systems will differ from one another in ways that matter.
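To see how that divergence arises, consider a purely hypothetical sketch (the rule, the figures, and the code are invented for illustration, not taken from those experiments): two teams automate the same written rule, ‘a claimant qualifies if their income is below €30,000 and they have been resident for five years’, and resolve its ambiguities differently.

    def qualifies_team_a(income, years_resident):
        # Team A reads "below 30,000" as strictly less than,
        # and "five years" as at least five.
        return income < 30_000 and years_resident >= 5

    def qualifies_team_b(income, years_resident):
        # Team B treats 30,000 itself as qualifying,
        # and "five years" as more than five.
        return income <= 30_000 and years_resident > 5

    # The same claimants, different outcomes:
    print(qualifies_team_a(25_000, 5), qualifies_team_b(25_000, 5))  # True False
    print(qualifies_team_a(30_000, 6), qualifies_team_b(30_000, 6))  # False True

Both readings are defensible, and neither team has made a ‘mistake’ in the ordinary sense; multiplied across thousands of rules, small divergences of this kind are how two faithful automations of the same law can reach different decisions about the same person.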
If lawyers, and particularly judges, rely more and more on software, and particularly AI, to assist them in their daily work, power in decision-making shifts (often invisibly) away from the legal profession and towards the programmer and the systems architect. New professions are developing and may need regulation to ensure that ethical standards are met. The public should be confident that the website chatbot which advises them on routine matters such as planning permission, income tax, or family law has been developed by individuals with appropriate education and training, proper liability insurance in place, and an understanding of how their services might cause harm. They should also be able to trust that their lawyer is not relying solely on computer advice (which may have significant blind spots), particularly if a failure to argue a particular line of defence could result in their losing money, going to jail, or being deported.
The European Union is developing an ‘AI Regulation’, which will limit the use of certain ‘high-risk’ AI systems, but we may also need to impose training, certification, and ongoing membership requirements on those who design and build AI systems, as we already do for architects and engineers, for example. However, regulation should not prevent innovation where it can bring clear benefits to consumers and businesses, and justified concerns about unscrupulous or naïve entrepreneurs should not be an excuse for lawyers to stifle competition. Nor should the very real difficulties in developing AI that does not reproduce existing social problems halt experiments by courts in online dispute resolution, particularly for small claims.
Discussion of lawtech often predicts ‘robot lawyers’ and ‘robot judges’. (Sometimes it seems that if lawyers were all replaced by machines, we would not be missed.) These predictions are mistaken in at least two ways. First, judges and lawyers do a great deal more than simply give legal advice or hand down judgments: they also manage the courts, provide commercial guidance, and support their clients in many ways. Second, and more important, AI tools are not ‘general intelligence’ and probably never will be. They can be very effective and efficient in certain limited domains, but they are not sentient, often fail in ways that are very different to human mistakes, and do not recover from failure as humans can.
The lawyer of the future will probably rely on these tools to analyse and draft documents, locate relevant legal authorities, and advise clients on the likelihood of a positive or negative outcome if they go to court. A good lawyer, however, will have a keen understanding of the strengths and weaknesses of AI (in the same way that they know the abilities and limits of the people on their team) and will know how to marshal these tools so that they work well and make sense in a particular context, rather than applying them blindly in a ‘one size fits all’ manner.
The future of the law is not automation but augmentation: using machines to supplement and assist human thinking rather than replace it, for example with software tools that can search vast databases of text, or predict the outcomes of litigation from past experience more accurately than a person can. This is in many ways no different to how lawyers have used technologies to expand the limits of the mind in the past: writing down legal texts rather than relying on memory, printing them so that they can be quickly distributed, indexing them so that relevant material can be easily located. Properly managing this transition will require adding critical digital literacy to the essential skills we expect of a law student. NUI Galway is including this and other modern requirements in ground-breaking modules such as Understanding the Law and Law and Innovation, but there is still a great deal more to do in order to fully engage with the new potential and dangers of so-called AI.
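As a toy illustration of that continuity (the cases and texts below are invented), the index at the back of a printed law report and a search tool over a digital database rest on the same idea: map each term to the places where it appears, so that retrieval becomes a simple lookup rather than a feat of memory.

    from collections import defaultdict

    # A miniature, invented corpus standing in for a database of judgments.
    judgments = {
        "Case A": "the tenant claimed the deposit was withheld unfairly",
        "Case B": "planning permission was refused on heritage grounds",
        "Case C": "the deposit dispute turned on the condition of the property",
    }

    # Build an inverted index: each word points to the cases that contain it.
    index = defaultdict(set)
    for name, text in judgments.items():
        for word in text.split():
            index[word].add(name)

    # Finding every case that mentions a term is then a single lookup,
    # however large the database grows.
    print(sorted(index["deposit"]))  # ['Case A', 'Case C']

Real legal search engines add ranking, synonyms, and natural-language queries on top of this idea, but the augmentation is the same in kind: the machine extends the lawyer’s recall, while the lawyer still decides what the authorities mean.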
This publication has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number 19/PSF/7665. It is based on Oireachtas Library & Research Service, 2021, L&RS Spotlight: Algorithms, Big Data and Artificial Intelligence in the Irish Legal Services Market.