The Nonlinear Library

EA - Some underrated reasons why the AI safety community should reconsider its embrace of strict liability by Cecil Abungu


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some underrated reasons why the AI safety community should reconsider its embrace of strict liability, published by Cecil Abungu on April 9, 2024 on The Effective Altruism Forum.
Introduction
It is by now well known that existing AI systems are already causing harms like discrimination, and it is widely expected that the advanced AI systems being built by the likes of Meta and OpenAI could cause significant harms in the future. Knowingly or unknowingly, innocent people have to live with the dire impacts of these systems. Today that might mean a lack of equal access to certain opportunities or the distortion of democracy; in the future it might escalate to more concerning security threats. In light of this, it should be uncontroversial to insist that we need fair and practically sensible ways of deciding who should be held liable for AI harms. The good news is that a number of AI safety experts have been making suggestions. The not-so-good news is that the idea of strict liability for highly capable advanced AI systems still has many devotees.
The most common anti-strict liability argument out there is that it discourages innovation. In this piece, we won't discuss that position much because it has already received outsize attention. Instead, we argue that the pro-strict liability position should be reconsidered for the following trifecta of reasons: (i) in the context of highly capable advanced AI, both strict criminal liability and strict civil liability have fatal gaps; (ii) the argument for strict liability often rests on faulty analogies; and (iii) given the interests at play, strict liability will struggle to gain traction.
Finally, we propose that AI safety-oriented researchers working on liability should instead focus on the most inescapably important task: figuring out how to transform good safety ideas into real legal duties.
AI safety researchers have been pushing for strict liability for certain AI harms
The few AI safety researchers who've tackled the question of liability in depth seem to have taken a pro-strict liability position for certain AI harms, especially harms that result from highly capable advanced AI. Let's consider some examples. In a statement to the US Senate, the Legal Priorities Project recommended that AI developers and deployers be held strictly liable if their technology is used in attacks on critical infrastructure or in a range of high-risk weapons that result in harm. LPP also recommended strict liability for malicious use of exfiltrated systems and open-sourced weights. Consider as well the Future of Life Institute's feedback to the European Commission, which calls for a strict liability regime for harms that result from high-risk and general purpose AI systems. Finally, consider Gabriel Weil's research on the promise that tort law holds for regulating highly capable advanced AI (also summarized in his recent EA Forum piece), where he notes the difficulty of proving negligence in AI harm scenarios and argues that strict liability can be a sufficient corrective for especially dangerous AI.
The pro-strict liability argument
In the realm of AI safety, arguments for strict liability generally rest on two broad lines of reasoning. The first is that strict liability has historically been applied to other phenomena that are somewhat similar to highly capable advanced AI, which means it would be appropriate to apply the same regime to highly capable advanced AI. Common examples of these phenomena include new technologies like trains and motor vehicles; activities that may cause significant harm, such as the use of nuclear power and the release of hazardous chemicals into the environment; and so-called 'abnormally dangerous activities' such as blasting with dynamite.
The second line of r...