Digital White Papers

IG19

A publication of the International Legal Technology Association

Issue link: https://epubs.iltanet.org/i/1188906

…person for litigation purposes (Louisville, Cincinnati & Charleston R. Co. v. Letson), but AI has not yet had the misfortune of being legally classified as a person.

Legal Status and Theory

The heart of the issue is how to classify artificial intelligence within the legal system. Is it human in nature, a product, or a service? Legal scholars are debating just that. One stumbling block: artificial neural networks cannot tell us how they do things or how they arrived at a conclusion. Another: whether the actions of AI are foreseeable and whether there is delineated causation.

In one of the first cases in which AI was held accountable and the scope of its liability evaluated, the adultery website Ashley Madison gained prominence (it helps that sex sells). In In re Ashley Madison Customer Data Sec. Breach Litigation, almost 20 men sued Ruby Corp, formerly Avid Life Media, in the Eastern District of Missouri, alleging that they were not aware that some of the website's estimated 38 million members were fake "bot" profiles. However, because the company was also embroiled in a lawsuit for damages attributed to the massive data security breach of its 37 million users (and their irate spouses), the company settled for $11.2 million, preventing an AI legal precedent from being set. In other cases, the meaning of "impressions" was determined to include AI (Go2Net, Inc. v. C I Host, Inc.), and computer code was deemed a service via a breach of warranty (Motorola Mobility, Inc. v. Myriad France). But there are few notable AI cases holding computers or machines accountable. As a result, Karni A. Chagal-Feferkorn, a fellow at the Haifa Center for Law & Technology at the University of Haifa, argues the need to distinguish products and devices from thinking algorithms in product liability cases.

Machine learning inherently lacks transparency, a problem commonly referred to as the "black box." Because of this, machines and computer programs have no obvious intent. Yet when it comes to healthcare, the EU's 2018 General Data Protection Regulation (GDPR) stipulates that if personal data is used to make automated decisions about people, companies must be able to explain the logic behind the decision-making process, which creates quite the legal quandary.

In the 1950s, Alan Turing created the Turing Test to measure a machine's ability to imitate or exhibit human behavior. Around the same time, military strategist John Boyd devised the OODA loop (Observe, Orient, Decide, Act) to clarify the human decision-making process. Some technologists are attempting to apply these concepts to AI to develop a sort of litmus test. Perhaps, in the future, a legal test will be available to indicate AI intent, but we are not there yet.

University of Oklahoma associate professor of law Roger Michalski proposes the creation of a new ad robotam personal jurisdiction doctrine to provide applicable standards under the law. But he also foresees the potential pitfalls of such a move: give robots rights and status under the law, and it is only a matter of time before they are also entitled to the same basic and constitutional rights currently afforded to humans. There is simply no one-size-fits-all solution, but this issue is something that needs to be addressed, and quickly.
Dana Hackley is the Public Relations Specialist for Jackson Kelly PLLC, where she creates, manages, and executes the firm's communications strategy across multiple office locations. In addition, she works as an online Academic Coach through Instructional Connections LLC, assisting with undergraduate and graduate Communications courses. She also works as a freelance writer and editor.
