Digital White Papers | IG19

A publication of the International Legal Technology Association

THE STATE VS. AI

ABB submitted an OSHA report, citing the decedent's employer for removing safety devices and for allowing employees to enter the operational area. The Association for the Advancement of Artificial Intelligence, a nonprofit organization founded in 1979, has established a Code of Ethics and Professional Conduct for all AI professionals, but it is not legally binding. In addition, per the National Conference of State Legislatures, 29 states have passed legislation and 11 governors have issued executive orders pertaining to autonomous vehicles. Yet Arizona prosecutors decided not to criminally charge Uber for the 2018 self-driving car death of Elaine Herzberg. A civil settlement with Herzberg's family was reached; a preliminary report by the NTSB found that Uber had deactivated the car's emergency braking system.

A growing area of litigation surrounds the healthcare industry, where AI is being used both to diagnose patients and to perform surgeries, and manufacturers are being sued for product liability. In O'Brien v. Intuitive Surgical, Inc., Daniel O'Brien alleged that the "da Vinci surgical robot," a medical device manufactured by Intuitive Surgical, Inc., was defectively designed and malfunctioned during O'Brien's pancreatectomy and islet cell transplant surgery, resulting in injuries. After several dismissed and amended complaints, the court held that a strict liability claim requires proof of proximate causation: O'Brien had to show that he was damaged by his reliance on Intuitive's alleged misrepresentations regarding the device, but he could not explain to the court how the malfunction caused his injuries.

Then there is IBM's Watson, made famous by its stint on the game show Jeopardy! and now working in healthcare. IBM Watson for Oncology was trained at Memorial Sloan Kettering Cancer Center to interpret cancer patients' clinical information and identify individualized treatment options. So, what if Watson makes a grave error? The Corporate Practice of Medicine Doctrine says businesses cannot practice medicine, giving the AI manufacturer a legal loophole. Yet in 2018, internal documents obtained by Stat News indicated that during beta testing Watson often gave erroneous cancer treatment advice, and that IBM medical specialists and customers identified "multiple examples of unsafe and incorrect treatment recommendations." Watson was reportedly trained on synthetic data, despite the company telling prospective buyers that it was using real patients to make its recommendations. The malfeasance was so widespread that IBM executives were publicly singing Watson's praises while privately bemoaning its underperformance. At the time the story broke, according to IBM, Watson was in use at 230 hospitals and health organizations and had reached 84,000 patients. Luckily, the health institutions involved were not solely reliant on Watson for treatment options and understood it was a preliminary trial. If Watson had been taken to task in the courtroom, New York University researchers Jason Chung and Amanda Zink assert, it should fall under the same legal classification as a medical student. But under existing law, robots are property, even though they mimic the work of humans.
Common law tort and malpractice claims often center on human concepts of fault, negligence, knowledge, intent, and reasonableness. Corporations have been treated as independent artificial persons under the law.
