If self-driving cars crash, who is responsible? Courts and insurers need to know what’s in the ‘black box’

The first serious accident involving a self-driving car in Australia took place in March this year. A pedestrian suffered life-threatening injuries when he was struck by a Tesla Model 3 in “autopilot” mode.

In the US, the road safety regulator is investigating a series of accidents in which Teslas on autopilot crashed into emergency vehicles with flashing lights during traffic stops.

A Tesla Model 3 collides with a stationary US emergency vehicle. NBC / YouTube

The decision-making processes of ‘self-driving’ cars are often opaque and unpredictable (even for their manufacturers), so it can be difficult to determine who should be held accountable for incidents like this. However, the growing field of “explainable AI” may help provide some answers.



Read more: Who (or what) is behind the wheel? The regulatory challenges of self-driving cars


Who is responsible if self-driving cars crash?

While self-driving cars are new, they are still machines made and sold by manufacturers. If they cause damage, we must ask whether the manufacturer (or software developer) has fulfilled their safety responsibilities.

Modern negligence law comes from the famous case of Donoghue v Stevenson, in which a woman discovered a decomposing snail in her bottle of ginger beer. The manufacturer was found negligent, not because it was expected to directly predict or control the behavior of snails, but because its bottling process was unsafe.

By this logic, manufacturers and developers of AI-based systems such as self-driving cars may not be able to foresee and control everything the “autonomous” system does, but they can take steps to mitigate risk. If their risk management, testing, auditing and monitoring practices are not good enough, they must be held accountable.

How much risk management is enough?

The tough question will be, “How much care and how much risk management is enough?” In complex software, it is impossible to pre-test for every bug. How do developers and manufacturers know when to stop?

Fortunately, courts, regulators and technical standardization bodies have experience in setting standards of care and responsibility for risky but beneficial activities.

Standards can be very demanding, such as the European Union’s draft AI regulation, which requires that risks be reduced “as far as possible” without regard to cost. Or they may be more like Australian negligence law, which permits less stringent management of risks that are less likely or less serious, or where risk management would reduce the overall benefit of the risky activity.

Lawsuits are complicated by AI opacity

Once we have a clear standard for risk, we need a way to enforce it. One approach could be to give a regulator the power to impose fines (as the ACCC does in competition cases, for example).

Individuals who have suffered damage from AI systems should also be able to sue. In cases involving self-driving cars, lawsuits against manufacturers will be of particular importance.

However, for such lawsuits to be effective, courts need to understand in detail the processes and technical parameters of the AI systems involved.

Manufacturers often prefer not to reveal such details for commercial reasons. But courts already have procedures in place to balance commercial interests with an appropriate amount of disclosure to facilitate lawsuits.

A greater challenge can arise when AI systems themselves are opaque “black boxes”. For example, Tesla’s autopilot functionality is based on “deep neural networks,” a popular type of AI system in which even the developers can never be quite sure how or why it arrives at a certain result.

‘Explainable AI’ to the rescue?

Opening the black box of modern AI systems is the focus of a new wave of computer science and humanities researchers: the so-called “explainable AI” movement.

The goal is to help developers and end users understand how AI systems make decisions, either by changing the way the systems are built or by generating explanations afterwards.

In a classic example, an AI system incorrectly classifies an image of a husky as a wolf. An “explainable AI” method reveals that the system focused on the snow in the background of the image, rather than on the animal in the foreground.

Explainable AI in action: an AI system incorrectly classifies the husky on the left as a “wolf”, and on the right we see this is because the system focused on the snow in the background of the image. Ribeiro, Singh & Guestrin
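For readers curious what “generating explanations afterwards” looks like in practice, here is a minimal sketch using the open-source lime package (by the authors of the husky/wolf study). The classifier function, the image and the parameter values are illustrative placeholders, not details of any system or case discussed in this article.

```python
# A minimal sketch of post-hoc image explanation with the open-source
# "lime" package. The classifier and image below are placeholders:
# any model that returns class probabilities for a batch of images
# could be plugged in instead.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries


def classifier_fn(images: np.ndarray) -> np.ndarray:
    """Placeholder model returning probabilities for ["husky", "wolf"].

    The "wolf" score simply tracks average image brightness, mimicking
    a model that has latched onto a snowy background rather than the animal.
    """
    wolf_score = images.reshape(len(images), -1).mean(axis=1)
    return np.stack([1 - wolf_score, wolf_score], axis=1)


# Stand-in for the husky photo (values in [0, 1], RGB).
image = np.random.rand(224, 224, 3)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classifier_fn,
    top_labels=1,      # explain only the top predicted class
    num_samples=1000,  # perturbed copies of the image used to probe the model
)

# Highlight the image regions that pushed the model towards its top label.
# In the husky/wolf example, this is where the snowy background shows up
# instead of the animal in the foreground.
label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
highlighted = mark_boundaries(img, mask)  # image with explained regions outlined
```

The key point for a courtroom is that this kind of output can be produced after the fact, by probing the model from outside, which is why explanation methods are attractive when a manufacturer’s system cannot simply be opened up and read.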

How this can be used in a lawsuit depends on several factors, including the specific AI technology and the damage caused. A major concern will be how much access the injured party gets to the AI system.

The Trivago Case

Our new research analyzing a key recent Australian lawsuit offers an encouraging picture of what this could look like.

In April 2022, the Federal Court fined global hotel booking company Trivago $44.7 million for misleading customers about hotel room rates on its website and in TV advertising, following a case brought by competition watchdog the ACCC. A crucial question was how Trivago’s complex ranking algorithm chose the top-ranked hotel room offer.

The Federal Court established rules for evidence discovery, with safeguards to protect Trivago’s intellectual property, and both the ACCC and Trivago called expert witnesses to give evidence explaining how Trivago’s AI system worked.

Even without full access to Trivago’s system, the ACCC’s expert witness was able to provide convincing evidence that the system’s behavior was inconsistent with Trivago’s claim to give customers the “best price.”

This shows how technical experts and lawyers together can overcome AI opacity in lawsuits. However, the process requires close collaboration and in-depth technical expertise, and is likely to be expensive.

Regulators can now take steps to streamline things in the future, such as requiring AI companies to adequately document their systems.

The road ahead

Vehicles with varying degrees of automation are becoming more common, and fully autonomous taxis and buses are being tested both in Australia and overseas.

Keeping our roads as safe as possible requires close collaboration between AI and legal experts, and regulators, manufacturers, insurers and users will all have a role to play.



Read more: ‘Self-driving’ cars are still a long way off. Here are three reasons why

