
By Dr Darryl

Courts worldwide are navigating uncharted waters with artificial intelligence, and their radically different approaches reveal a governance crisis that demands immediate attention from senior leaders. Across eight major jurisdictions, courts have responded to generative AI with starkly contrasting frameworks: New South Wales has imposed categorical prohibitions on AI-generated witness evidence and mandates sworn declarations that AI was not used,¹ whilst Singapore takes a permissive stance requiring no disclosure unless specifically requested, placing full responsibility on individual practitioners.² This fragmentation is not merely academic. Courts in the United States and Australia have already sanctioned lawyers for filing submissions citing entirely fabricated cases generated by AI 'hallucinations', where systems like ChatGPT created plausible-sounding but completely fictitious legal precedents.³ The consequences extend far beyond professional embarrassment to fundamental questions about evidentiary integrity, access to justice for self-represented litigants, and the preservation of confidential information that may be inadvertently fed into public AI systems and become permanently embedded in their training data.⁴