Digital White Papers

IG19

A publication of the International Legal Technology Association

Issue link: https://epubs.iltanet.org/i/1188906


In the vein of strange-but-true news, and much like this article's opening visual, robots have not been above the law. For example, a robot was arrested in 2015 in connection with using bitcoin to purchase ecstasy, a Hungarian passport and a baseball cap with a built-in camera. The automated shopping bot had been developed by a Swiss art group to explore the dark web and was given $100 worth of bitcoin to randomly purchase one item a week. Eventually, Swiss police released the robot (not the ecstasy) back into the custody of the artists, and they, along with the robot, escaped any criminal charges. In 2016, a Russian robot was arrested (police even attempted to use handcuffs) while taking part in a political rally, after police intervened to prevent it from interacting with the public. According to media reports, it was the same model of robot that had previously tried to escape twice from its manufacturer. Apparently, the Beverly Hills Police Department is credited with having made the first robot arrest, on August 18, 1982. The DC-2 robot (operated by a few teens via remote control) was handing out business cards without a permit. The robot reportedly yelled, "Help me! They're trying to take me apart!" when police took it into custody. Again, no charges were filed.

Surprisingly, as Andrea Roth of UC Berkeley Law notes, machine sources can not only serve as defendants; with a few caveats, they can also be compelled as "witnesses" under the Sixth Amendment. Courts have ruled that machine assertions are physical evidence rather than hearsay (United States v. Lizarraga-Tirado) and that machines can, in effect, provide testimony through machine evidence, much like the Arkansas case in which an Amazon Echo "witnessed" a murder. But as Roth explains, the legal construct of "machine credibility" remains uncertain and requires continued evaluation.

American Legislation

The bipartisan Artificial Intelligence Caucus for the 115th Congress launched in May 2017 with the goal "to inform policymakers of the technological, economic and social impacts of advances in AI and to ensure that rapid innovation in AI and related fields benefits Americans as fully as possible." Since then, the Caucus has co-hosted, with the Software & Information Industry Association and IEEE-USA, one luncheon discussing ethics and privacy issues related to AI. Additionally, a handful of bills have been introduced in the House but have not become law, including one to establish a Federal Advisory Committee on the Development and Implementation of Artificial Intelligence (FUTURE of AI Act of 2017), one related to autonomous vehicles (SELF DRIVE Act), one to retrain workers displaced by AI (Corps Act of 2017), one to ensure the retraining of workers impacted by AI (AI JOBS Act of 2019), and one addressing the ethical implications of AI (House Resolution 153). The Algorithmic Accountability Act, introduced in April, would direct the Federal Trade Commission to require "entities that use, store, or share personal information to conduct automated decision system impact assessments and data protection impact assessments." The Equal Employment Opportunity Commission has come under fire over the use of AI in discriminatory hiring practices, especially after Senators Kamala Harris, Elizabeth Warren and Patty Murray sent a 2018 letter requesting that the agency address AI's liabilities. Because there has been little movement from the EEOC, Big Law firms have taken
