Digital White Papers

IG19

A publication of the International Legal Technology Association

Issue link: https://epubs.iltanet.org/i/1188906


ILTA WHITE PAPER | INFORMATION GOVERNANCE

THE STATE VS. AI

Intelligence Laboratory in the Department of Computer Science at The University of Texas at Austin, put it this way while sitting on a recent discussion panel: "We're starting to pair our brains with computers, but brains don't understand computers and computers don't understand brains." There is no doubt, though, that the technology is advancing at a rapid rate, moving through evolutionary phases: transforming from reactive devices to intuitive aids and, perhaps one day, self-aware robots. AI is also being widely adopted by almost every sector of industry, with automation, algorithms, and big data becoming enmeshed in everyday life in some subtle and not-so-subtle ways.

According to a recent report by Tractica, which measures emerging technology trends, the artificial intelligence software market will reach $118.6 billion in annual worldwide revenue by 2025. In addition, its report titled "Artificial Intelligence Market Forecasts" asserts that the top ten industries in terms of AI spending will be telecommunications, consumer, automotive, business services, advertising, healthcare, retail, legal, the public sector, and education. We have seen AI manifest itself in autonomous vehicles, oil drilling, stock market investing, bank fraud prevention, medical diagnostics and procedures, home security, legal document review and research, manufacturing, Siri and Alexa, and on and on.

While we know a great deal about the mechanics of AI, we still understand very little about the logic behind it, which is inherently a critical element of due process. It is this basic nature of intelligence, ambiguous and until recently tied only to flawed human beings, that complicates how existing law may be considered applicable. Because with any great advance comes great risk.
AI brings with it regulatory concerns and legal liability or negligence to assert on both sides of a complaint. Elon Musk, Tesla founder and outspoken doomsayer, asserted in 2014 remarks at an MIT symposium that the rise of AI has occurred in a regulatory vacuum. More recently, at the 2017 National Governors Association meeting, he declared, "I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal." While that specific scenario may lie in the distant future, accidents involving AI are certainly a clear and present danger, and regulation as well as the judicial system are playing catch-up.

The World Stage

In 2017, the United Nations opened its Centre for Artificial Intelligence and Robotics in The Hague. The Centre is "committed to advancing understanding of AI, robotics and the broader ecosystem of related technologies, from the perspective of crime, justice and security, and to exploring their use for social good and contributing to a future free of violence and crime." Its specific initiatives since its founding remain unclear.

In May, the Organization for Economic Co-operation and Development (OECD) and its 36 member countries ratified the Principles on Artificial Intelligence, whose mission is to "promote artificial intelligence (AI) that is innovative and trustworthy and that respects human rights and democratic values." Additionally, countries including Argentina, Brazil, Colombia, Costa Rica, Peru, and Romania are reportedly already adhering to the AI Principles.

"Artificial Intelligence is revolutionizing the way we live and work and offering extraordinary benefits for our societies and economies. Yet, it raises new challenges and is also fueling anxieties and ethical concerns.
This puts the onus on governments to ensure that AI systems are designed in a way that respects our values and laws, so people can trust that their safety and privacy will be paramount," explained OECD Secretary-General Angel Gurría following the release of the principles. While not legally binding, existing OECD principles in other policy areas have proved highly influential in setting international standards and helping governments design national legislation. In fact, in June the G20 adopted human-centered AI principles that draw from the OECD's AI Principles. The discussion also reportedly centered on the European Commission's Ethics Guidelines for Trustworthy AI, revised and republished in 2019, which were prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG), an independent group created by the EU in 2018.
