AI and IT guidance

Artificial intelligence (AI)

The guidance sets out the key risks of large language model (LLM) systems: anthropomorphism; hallucinations; information disorder; bias in training data; and mistakes and breaches of confidentiality arising from data used in training.

It explores:

  • Verification of outputs – because of possible hallucinations and biases, barristers should verify the output of LLM software and maintain proper procedures for checking generative outputs.
  • ‘Black box syndrome’ – LLMs should not be a substitute for the exercise of professional judgment, quality legal analysis and the expertise that clients, courts and society expect from barristers.
  • Barristers should be extremely vigilant not to share with an LLM system any legally privileged or confidential information.
  • Barristers should critically assess whether content generated by LLMs might infringe intellectual property rights, and take care not to use wording that may breach trade marks.
  • It is important to keep abreast of the relevant Civil Procedure Rules, which may in future introduce rules or practice directions on the use of LLMs – for example, requiring parties to disclose when they have used generative AI in preparing materials, as the Court of King’s Bench of Manitoba has done.


GDPR

GDPR compliance


Cybersecurity

Guidance