Technology at the service of justice

The court system is often criticized for being slow and expensive. Litigants are increasingly seeking alternatives to state justice to resolve their disputes, turning to Alternative Dispute Resolution (ADR) mechanisms such as conciliation, mediation, or arbitration. The explosion of small-value e-commerce disputes has led to the integration of digital technologies into ADR in order to resolve such disputes effectively. Known as Online Dispute Resolution (ODR), these mechanisms are widely used – especially but not exclusively – in e-commerce (e.g. the eBay Resolution Center). States are also gradually adopting digital technologies in their justice systems, for example in China [1], where the proceedings of some courts are held entirely online. In Switzerland, the draft revision of the Code of Civil Procedure (CPC) provides for the possibility of hearing witnesses, experts and parties by videoconference [2]. Around the world, the COVID-19 crisis has accelerated the use of digital technologies in state courts, for example in Canada, where hearings have been held on Zoom.

New technologies are increasingly being used in the legal professions. Artificial intelligence (AI) assists lawyers in their work, for example by facilitating and accelerating legal research [3]. So-called “predictive” justice, which aims to predict the outcome of a case based in particular on the analysis of existing case law, has also appeared both in law firms and in courts, for example in France with the software Predictice [4].

From judge to robot judge

The effectiveness of ODR and AI encourages their use at the state level to administer justice more efficiently, while ensuring that states keep their central place in this task and allowing them, where appropriate, to put safeguards in place. This idea has led to the concept of a robot judge. To materialize this concept, a blend of Machine Learning and Natural Language Processing is required. The algorithm serving as a judge would have to be trained on existing case law and be able to emulate a human judge: presiding over the proceedings, analyzing the documents produced, carrying out a legal syllogism, and rendering a reasoned decision, all while being entirely independent. To our knowledge, no such algorithm is in operation today. The main obstacles are, on the one hand, the difficulty of accessing the large amount of data necessary for the learning phase (not all case law is accessible, at least digitally) and, on the other hand, a technology not yet capable of reproducing a legal syllogism in the same way as a human (see Van der Branden, Les robots à l’assaut de la justice, Brussels 2019, p. 61 ff). This last element seems the most difficult to overcome because, in addition to legal reasoning, moral values would have to be instilled in the algorithm, and it might even be necessary to digitalize feelings and emotions.
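To make the learning-phase idea concrete, the following is a deliberately crude sketch of how an outcome predictor trained on past decisions might work. The corpus, outcomes and scoring method are entirely invented for illustration; a real robot judge would need vast digitized case law and far more sophisticated NLP, and – as the paragraph above notes – counting word overlaps is nothing like a legal syllogism.

```python
from collections import Counter

# Toy "case law" corpus: (summary of the facts, outcome).
# Entirely hypothetical examples, invented for this sketch.
CASE_LAW = [
    ("tenant failed to pay rent for three months", "claim upheld"),
    ("landlord refused to repair heating despite notice", "claim dismissed"),
    ("buyer did not receive the goods ordered online", "claim upheld"),
    ("seller delivered goods conforming to the contract", "claim dismissed"),
]

def train(corpus):
    """Count word frequencies per outcome (a crude bag-of-words model)."""
    counts = {}
    for facts, outcome in corpus:
        counts.setdefault(outcome, Counter()).update(facts.split())
    return counts

def predict(counts, facts):
    """Score each outcome by how often the new facts' words appeared
    in past decisions with that outcome, and return the best match."""
    words = facts.split()
    scores = {
        outcome: sum(counter[w] for w in words)
        for outcome, counter in counts.items()
    }
    return max(scores, key=scores.get)

model = train(CASE_LAW)
print(predict(model, "tenant did not pay rent"))  # → claim upheld
```

The sketch makes the data-access obstacle tangible: with only four "precedents", the model is trivially biased toward whatever happens to be in its corpus.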

However, several software applications incorporating a form of AI already approach the concept of a robot judge. In the United States, for example, the COMPAS software assesses the probability of recidivism to help the judge decide whether or not to release a detainee on bail pending trial [5]. In some Chinese courts, AIs register disputes, manage interactions with the parties and gather relevant information for the judge [6]. Estonia is considering setting up an AI-only court for cases with a litigation value of less than €7,000 [7]. There are also examples of AI being used by state courts in France [8] and Colombia [9]. For now, however, there are no plans to introduce AI into Swiss legal proceedings.

This new phenomenon obviously raises many questions, in particular ethical questions relating to fundamental rights. In 2018, the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe adopted a European Ethical Charter on the use of artificial intelligence in judicial systems and their environment [10], which lists the guiding principles for the implementation of AI in legal proceedings. These five principles are:

  • The principle of respect of fundamental rights;
  • The principle of non-discrimination;
  • The principle of quality and security;
  • The principle of transparency, impartiality and fairness;
  • The principle “under user control”.

Recognition of a robot judge’s decisions

In view of these developments in the administration of state justice, one may legitimately ask whether a foreign judgment rendered by an autonomous AI could be recognized in Switzerland, on the assumption that this fiction becomes reality.

The legal scope of a decision is limited to the territory of the state where it was rendered. For a decision to have legal effects in another state, that state must recognize it, usually through a recognition procedure whose objective is to check the conformity of the foreign decision with certain pre-established conditions. Swiss private international law provides five conditions for recognition, including the conformity of the decision with Swiss public order and, more precisely, with formal public order (Art. 25 let. c cum Art. 27 para. 2 PILA). It is through this condition that the Swiss judge could determine whether a decision rendered by a robot judge complies with the procedural requirements anchored in Swiss public order, which correspond to the minimum requirements of Art. 6 para. 1 ECHR (see Othenin-Girard, La réserve d’ordre public en droit international privé suisse, Zurich 1999, p. 107 ff).

The conformity of a robot judge’s decisions with Swiss public order

To determine whether a foreign judgment rendered by a robot judge complies with formal Swiss public order, the authority could draw inspiration from the five principles of the Council of Europe’s Ethical Charter. When the use of AI in the decision-making process complies with these principles, recognition of the decision in Switzerland should not be an issue. One of the principles of the Ethical Charter prescribes that the user (i.e., the judge) must maintain control over the decision, meaning that the final decision must come from the judge and not from the AI. If the decision emanates from a fully autonomous robot judge, it must be determined whether human intervention in the decision-making process is an essential procedural aspect forming an integral part of Swiss public order. If so, the decision could not be recognized in Switzerland. Other elements would also come into play in the process of recognizing a foreign judgment rendered by a robot judge.

The requirement that the decision must emanate from a court established by law seems to be met when the robot judge is appointed as a member of the judicial power, endorsed by the public law of the foreign country. In contrast, the independence and impartiality of an AI raise questions. What influences may have shaped the design of the software? It is necessary to analyze which persons (natural or legal) participated in the programming of the robot judge and to assess the degree of control that the state was able to exercise over the stages of its development. If, as with COMPAS in the United States, the company that developed the software refuses to disclose its internal architecture in order to preserve trade secrets, how can we ensure that the product is impartial and serves the public interest rather than the private interests of the company? If the state in question controlled the learning phase through its political institutions, what about the principle of separation of powers and the independence of the judge? And even if de facto independence were demonstrated, what about the appearance of independence? These elements show how important the principle of transparency (required by the European Ethical Charter) is for assessing the absence of bias in decisions rendered by an AI. They can, however, be difficult to assess in the process of recognizing a decision rendered by a robot judge.

Other procedural rights could also be affected, for example if the litigant does not really know which elements were decisive in the eyes of the AI in its decision-making. Indeed, such software can turn out to be completely opaque (the “black box” effect), even to experts. This could lead to a violation of the right to adversarial proceedings, as it may prove difficult to identify the elements against which to argue.
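By way of contrast with the black box, a minimal sketch of what a transparent model could offer: for a simple linear scoring model, the score can be decomposed into per-feature contributions, so a litigant could see which elements weighed most. The weights and features below are invented for illustration only; opaque models do not admit this kind of decomposition, which is precisely the problem for adversarial proceedings.

```python
# Hypothetical linear scoring model. Weights and case features are
# invented for this sketch; a real system's internals may be
# inaccessible (the "black box" effect discussed above).
WEIGHTS = {"prior_convictions": 0.8, "stable_employment": -0.5, "age": -0.02}

def explain(features):
    """Split a linear score into per-feature contributions, ranked by
    absolute impact, so the decisive elements become visible."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return total, ranked

score, ranked = explain({"prior_convictions": 2, "stable_employment": 1, "age": 30})
for name, contribution in ranked:
    print(f"{name}: {contribution:+.2f}")
print(f"total score: {score:+.2f}")
```

Here the litigant could immediately see that `prior_convictions` dominates the score and direct their arguments against that element, something impossible when only the final output of an opaque model is disclosed.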

The generalizing and abstract “reasoning” followed by a machine learning algorithm also raises the question of its compatibility with the individual and concrete function of a judicial decision, especially in a legal system of civil law tradition such as ours. A priori, common law systems, based on case law, seem better suited to the reasoning followed by an AI. The duty to give reasons for a decision could also be a problem. However, some software is beginning to be able to justify a decision, and case law does not easily admit an infringement of public order for lack of reasoning (see Bucher, CoRo PILA/LC, Art. 27 PILA, N 51 and ref.).

Alongside these purely procedural questions, certain flaws in decision-making algorithms have already provoked public outrage. The documentary film Coded Bias [11] revealed that several algorithms used on a large scale are discriminatory, in particular against people of certain ethnicities or against women. If the same flaws were found in a robot judge, its decisions would be genuinely incompatible with the prohibition of discrimination, a central component of Swiss public order (see Bucher, CoRo PILA/LC, Art. 90 PILA, N 18).
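One common way such discrimination is detected is a statistical audit of outcomes across groups. The following is a minimal sketch of a demographic-parity check on invented data: it compares the rate of favourable decisions per group and reports the ratio of the lowest to the highest rate (1.0 would mean parity). The groups and decisions are hypothetical; real audits use richer fairness metrics.

```python
# Hypothetical audit data: (protected group, favourable decision?).
# Invented for illustration only.
DECISIONS = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def favourable_rates(decisions):
    """Share of favourable outcomes per group."""
    totals, favourable = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + int(outcome)
    return {g: favourable[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Lowest favourable rate divided by the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

rates = favourable_rates(DECISIONS)
print(rates)                    # group_a favoured far more often
print(disparity_ratio(rates))
```

A ratio far below 1.0, as in this toy data set, is the kind of red flag that would put a robot judge’s decisions in tension with the prohibition of discrimination discussed above.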

A future in the hands of the (human) judge

In our opinion, the current state of the art does not yet allow the principle to be established that a foreign decision rendered by an autonomous AI must be recognized in Switzerland. The Swiss judge faced with the question should apply the following triad: first analyze the public policy issues raised above, then the degree of autonomy of the algorithm, and finally the type of decision. The more the robot judge’s decision potentially impacts fundamental rights, the more carefully its compliance with procedural public order must be scrutinized. The more autonomous the robot judge, the more caution must be exercised in recognizing its decisions, because it is likely to continue learning in an uncontrolled manner. The more a decision pertains to distributive justice by calling for the appreciation of human values, the more delicate the intervention of the robot judge, because it is all the more likely to have negative effects on litigants, and recognition should be granted with caution. Where to draw the line along these three axes will obviously depend on each individual case. This question reminds us of the difficulty of being a judge.

Author(s) of this contribution:


PhD student and research assistant at University of Neuchâtel | Research focus on legal issues of digitalization (blockchain, platforms, AI, digital integrity)


Master in Law student at the University of Neuchâtel and student-assistant at the LexTech Institute, with a particular interest in the legal implications of the digitization of society and in private international law