When AI Gets It Wrong: Ireland’s Court of Appeal Addresses Hallucinations in Legal Submissions

In a judgment delivered on 26 March 2026 in James Guerin v Gemma O'Doherty [2026] IECA 48, Costello P., for the Court of Appeal, addressed for the first time the appropriate use of artificial intelligence ("AI") in the preparation of legal submissions, and took the opportunity to set out guidelines governing the use of AI tools for that purpose.

The appeal concerned the Defendant's challenge to a High Court decision refusing her application to strike out the defamation proceedings against her. In the course of the appeal, Costello P. noted that the Defendant, Ms O'Doherty, a lay litigant, had used AI to compile her submissions.

The submissions before the Court were found to include references to authorities which, upon scrutiny, "simply did not exist", a phenomenon commonly referred to as "hallucination" and an inherent, well-documented risk of using AI. It was also evident that Ms O'Doherty had not verified the existence of the authorities she cited, nor did the "cases" relied upon support the propositions she advanced.

The Court further noted that Ms O'Doherty had not informed the Plaintiff's solicitors or the Court that she had prepared her submissions with the aid of AI. Costello P. stated that "Parties, whether represented or not, have an obligation not to mislead the court, which includes the obligation not to rely upon, or advance submissions based upon 'fake' authorities or propositions which have no basis in law."

In light of these concerns, Costello P. observed that parties, including lay litigants, should use AI appropriately and should be given guidance as to how such programmes may be used to assist in litigation. She went on to set out the following principles of general application:

  1. “Parties are entitled to use artificial intelligence to assist in carrying out research in respect of their case provided that they do so responsibly and do not, even inadvertently, mislead the court by advancing propositions or relying upon supposed authorities which in fact have no foundation at all and are simply hallucinations.
  2. In all cases where they do so, they should expressly inform both the other parties and the court of their use of artificial intelligence in this regard.
  3. A self-represented party is responsible for the ultimate written or oral work in their case just as much as the lawyers representing parties are.
  4. It is important therefore that any party who uses artificial intelligence as part of their research independently verifies the accuracy of their submissions and the authorities cited as supposedly establishing the propositions advanced.
  5. No authority should be cited by a party who has not actually verified that it is a genuine judgment of the court and that it is - or at least arguably is - authority for the proposition contended for.”

Costello P. concluded the judgment with a warning that irresponsible use of AI risks bringing the administration of justice into disrepute and actively misleading the court.

This judgment represents a timely and significant intervention by the Court of Appeal at a moment when AI tools are becoming increasingly accessible to legal practitioners and lay litigants alike. In setting out clear expectations around the responsibilities that arise where individuals use AI, the Court has emphasised that, while such tools may serve as a valuable aid, they cannot replace the obligation on litigants, represented or otherwise, to ensure that their submissions to the Court are accurate. It is anticipated that this decision will act as a benchmark for future judicial consideration of the role of AI in litigation, as the legal system continues to adapt to transformative technologies.

The judgment can be read in full here.

For further information please contact Gavin Simons (Partner) or your usual AMOSS contact.