AI Ethics

Friday, Mar 10

Should AI Companies Be Sued If Their Algorithm Causes Harm?

Despite advances in facial recognition technology, research suggests it remains significantly less accurate at identifying individuals with darker skin, making it an unreliable tool for that purpose.
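To make that disparity concrete, here is a minimal Python sketch of the kind of per-group error audit such research performs. The groups, predictions, and numbers below are entirely invented for illustration; real audits (such as NIST's face recognition vendor tests) use large labeled datasets.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic group, system said "match", truly a match).
results = [
    ("group_a", True,  True), ("group_a", False, False), ("group_a", False, False),
    ("group_b", True,  True), ("group_b", True,  False), ("group_b", True,  False),
]

# Count false matches among pairs that are truly non-matches, per group.
counts = defaultdict(lambda: {"false": 0, "total": 0})
for group, predicted_match, actual_match in results:
    if not actual_match:               # only non-match pairs can yield a false match
        counts[group]["total"] += 1
        if predicted_match:
            counts[group]["false"] += 1

for group, c in sorted(counts.items()):
    print(f"{group}: false match rate = {c['false'] / c['total']:.0%}")
```

An audit like this, run at scale, is how skewed error rates across skin tones are detected; a system whose false match rate differs sharply by group is exactly the failure mode behind cases like the one below.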

In February 2019, Nijeer Parks, a 31-year-old African American man from Paterson, NJ, visited the Woodbridge Police Department to clear his name of charges his grandmother had told him about and which he believed were a misunderstanding. He had a history of legal trouble from drug charges but had left that life behind to pursue carpentry. He brought his Social Security card and ID, hoping they would resolve the issue. The situation, however, proved more complicated than he had anticipated.

Instead, Parks was arrested on a litany of serious allegations: aggravated assault, unlawful weapons possession, using a fraudulent ID, marijuana possession, shoplifting, leaving the scene of a crime, and resisting arrest. Worse still, Parks was accused of nearly hitting a police officer with a car. According to police reports, Parks was detained after a "highly publicized" facial recognition scan of a photograph connected to a fraudulent identification card found at the scene of the crime.

The case was dismissed in November 2019 for insufficient evidence, prompting Parks to sue those responsible for his detention for violating his civil rights, false arrest, and false imprisonment.

The growing use of AI has increased the chances of algorithms causing harm, raising significant legal and ethical questions about who is responsible for the resulting damage.

Parks's case is a comparatively mild illustration, but what happens when AI causes severe harm? Consider the self-driving Tesla Model S that crashed in April 2021, killing two people.

The car failed to negotiate a slight curve in the road and collided with a tree. Matt Dougherty, a reporter for KHOU, the CBS affiliate in Houston, Texas, shared the incident on Twitter, reporting that investigators were certain no one was in the driver's seat at the time of the crash.

https://twitter.com/MattKHOU/status/1383821809053683721

Dougherty also posted a photo of the damaged car on his Twitter page.

And this was just the tip of the iceberg: only a few weeks earlier, a Tesla Model Y had crashed into a police patrol vehicle in Michigan. The incidents raised concerns about the safety of autonomous vehicles and the responsibility of the companies behind them.

As a result, U.S. safety regulators began investigating 30 Tesla crashes associated with the Autopilot driving system.

Another example is robotic surgery. In November 2015, a patient, Mr. Pettitt, died of multiple organ failure following robot-assisted heart surgery at Freeman Hospital in the UK. An inquest was held to determine the cause, and the evidence showed that the robotic assistance played a key role in the surgery's failure. The case raised questions about training policies, negligence, and the use of AI technology in healthcare.

A crucial distinction here is between augmented AI and autonomous AI. Augmented AI amplifies human decision-making and is applied in fields such as healthcare and finance. Autonomous AI, in contrast, operates without human intervention; self-driving vehicles and unmanned aerial drones are the prime examples.

With augmented AI, culpability for harm caused by the technology is shared between the human operator and the AI company. With autonomous AI, accountability rests solely with the AI company.
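To make the distinction concrete, here is a minimal Python sketch of the two modes of operation. The model score, 0.8 threshold, and human sign-off step are all invented for illustration; this is a schematic, not any real vendor's design.

```python
# Toy contrast between augmented and autonomous decision-making.

def model_score(case: dict) -> float:
    """Stand-in for an AI model's confidence score (hypothetical)."""
    return case.get("confidence", 0.0)

def augmented_decision(case: dict, operator_approves: bool) -> str:
    # Augmented AI: the system only recommends; a human operator decides,
    # so responsibility for the outcome is shared.
    recommendation = "act" if model_score(case) > 0.8 else "defer"
    return recommendation if operator_approves else "overridden by operator"

def autonomous_decision(case: dict) -> str:
    # Autonomous AI: the system acts on its own output with no human
    # in the loop, so accountability falls on the company that built it.
    return "act" if model_score(case) > 0.8 else "defer"

case = {"confidence": 0.91}
print(augmented_decision(case, operator_approves=False))  # human shares liability
print(autonomous_decision(case))                          # company bears liability
```

The structural difference is small in code but large in law: the presence or absence of the human sign-off step is what shifts where the liability lands.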

Artnet recently reported that a group of Illinois residents filed a class-action lawsuit against the popular AI art generator Lensa AI, alleging it broke state law by collecting users' biometric information without their permission. If the allegations are proven, it is a grave offense.

So, should AI companies be sued if their algorithm causes harm? The answer is not straightforward. The legal framework for AI liability is still evolving, and different countries take different approaches.

For example, the United States has no federal law on AI liability, but there have been state-level initiatives. California has passed a law that makes manufacturers of autonomous vehicles liable for accidents caused by their cars, and Scientific American has reported that California is now considering curtailing the use of Tesla's autonomous driving features.

In Europe, the General Data Protection Regulation (GDPR) includes provisions on algorithmic decision-making and the right to explanation. In Asia, the situation is more varied, with countries like China and Japan adopting different approaches.
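As a loose sketch of what a "right to explanation" can look like in code, consider a transparent linear score whose per-feature contributions are reported back to the person affected by the decision. The features, weights, and threshold below are invented for illustration, not drawn from any real lender or GDPR guidance.

```python
# Toy "explainable" decision: a linear score whose per-feature
# contributions double as the explanation given to the applicant.

WEIGHTS = {"income": 0.5, "years_employed": 0.3, "existing_debt": -0.6}
THRESHOLD = 1.0

def decide_with_explanation(applicant: dict) -> tuple[bool, dict]:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    approved = sum(contributions.values()) >= THRESHOLD
    return approved, contributions

approved, why = decide_with_explanation(
    {"income": 3.0, "years_employed": 2.0, "existing_debt": 1.5}
)
print("approved:", approved)
for feature, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.2f}")
```

Opaque models make producing this kind of breakdown much harder, which is part of why the GDPR's provisions on algorithmic decision-making matter to AI companies.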

The issue of AI liability also raises questions about the role of regulation and governance. Some argue that AI companies should be self-regulated and follow ethical principles such as transparency, fairness, and accountability. Others believe there should be more stringent regulation and oversight, similar to the rules governing pharmaceuticals or aviation. 

AI companies' liability for algorithmic harm is a complex dilemma. With valid arguments on both sides, accountability and responsibility become crucial as AI pervades every aspect of our lives. Policymakers, industry leaders, and society must navigate AI's ethical and legal implications to balance its benefits against its risks. Collaboration will make possible a future that uses AI judiciously and responsibly to enhance our world and livelihoods.