publication of the International Legal Technology Association
Issue link: https://epubs.iltanet.org/i/1519635
ILTA WHITE PAPER | SECURITY & COMPLIANCE

SECURING THE USE OF ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING (AI/ML) IN LEGAL SERVICES

Securing an AI/ML system can be unsettling at first, but it is much like securing any other legal software. The process will vary with the use case, but it typically follows a structure similar to the technical and organizational security measures that defend against more familiar threats and vulnerabilities.

You can prepare by implementing AI governance: modify or establish the policies, processes, and controls that ensure your AI systems are developed, deployed, and used responsibly and ethically, aligned with your organization's expectations and risk tolerance. This includes:

• Defining roles and responsibilities for AI governance.
• Implementing data governance practices to ensure data is used accurately, reliably, and securely.
• Creating guidelines for developing and validating AI models, including testing for bias, fairness, and accuracy.
• Considering ethical and compliance requirements.
• Updating risk management processes and training and awareness programs to address AI needs.

Once your organization has identified a need for an AI/ML system and a governance protocol is in place, it's time to evaluate your risk. Conducting a risk assessment is critical: it allows you to understand the system's business requirements, data types, and access requirements, and then define your security requirements for the system in light of data sensitivity, regulatory obligations, and potential threats.

If the AI/ML system is Software as a Service (SaaS) or Commercial Off-the-Shelf (COTS), you must invoke appropriate third-party risk management processes. Often, this involves:

• Ensuring the proper contractual clauses are in place to protect your organization and its information.
• Determining whether the vendor can comply with your organizational security policies.
• Investigating whether the AI/ML model was created using secure coding practices, validates its inputs, and was tested for vulnerabilities to prevent attacks such as model poisoning or evasion.

If instead you want to develop your own set of AI/ML tools, consider carefully the source of the components you use. Apply model attack prevention as part of the data science work (add noise, make the model smaller, hide parameters). Protect the AI/ML model with secure coding practices, input validation, and vulnerability testing to prevent attacks such as model poisoning or evasion. Implement appropriate throttles and logging to monitor access to your model, and verify that neither your code nor the third-party repositories of AI/ML models it draws on have been compromised by malicious actors. To minimize risk, leverage your third-party risk management and secure software development practices across the supply chain, including data collection, model development, deployment, and maintenance.

Legal entities employing AI/ML systems should also reinforce their broader cybersecurity to protect against threats that may disrupt services or infrastructure: causing downtime, impacting firm operations, deploying ransomware, or launching denial-of-service attacks.
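The input validation recommended above, rejecting malformed data before it ever reaches the model as one defense against evasion and poisoning, can be sketched in Python. The feature count, numeric bounds, and function name below are illustrative assumptions, not details from this paper:

```python
import math

# Hypothetical bounds for a model that expects a fixed-length numeric
# feature vector; the names and limits are illustrative only.
EXPECTED_FEATURES = 4
FEATURE_MIN, FEATURE_MAX = -100.0, 100.0

def validate_input(features):
    """Reject malformed or out-of-range inputs before they reach the model."""
    if not isinstance(features, (list, tuple)):
        raise ValueError("features must be a list or tuple")
    if len(features) != EXPECTED_FEATURES:
        raise ValueError(f"expected {EXPECTED_FEATURES} features, got {len(features)}")
    for i, value in enumerate(features):
        # Booleans are ints in Python, so exclude them explicitly.
        if not isinstance(value, (int, float)) or isinstance(value, bool):
            raise ValueError(f"feature {i} is not numeric")
        if math.isnan(value) or math.isinf(value):
            raise ValueError(f"feature {i} is NaN or infinite")
        if not FEATURE_MIN <= value <= FEATURE_MAX:
            raise ValueError(f"feature {i} out of range")
    return list(features)
```

The same pattern applies whether the model is in-house or COTS: the gate sits in front of the model API, so crafted or corrupted records are refused before inference or retraining can occur.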
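The "add noise" technique listed under model attack prevention can be illustrated with a minimal sketch that perturbs per-class scores before they leave the system, degrading the signal available to a model-extraction attacker while leaving the top prediction largely intact for legitimate users. The function name and noise scale are assumptions for illustration:

```python
import random

def noisy_scores(scores, scale=0.01, rng=None):
    """Add small Gaussian noise to per-class scores before returning them,
    then clamp and renormalize so the result still resembles probabilities."""
    rng = rng or random.Random()
    noisy = [s + rng.gauss(0.0, scale) for s in scores]
    noisy = [max(s, 0.0) for s in noisy]  # clamp negatives introduced by noise
    total = sum(noisy) or 1.0             # avoid division by zero
    return [s / total for s in noisy]
```

The noise scale is a trade-off the data science team would tune: too small and extraction is barely slowed, too large and legitimate consumers of the scores are harmed.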
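The throttles and logging recommended for monitoring model access might take the shape of a sliding-window rate limiter per client; the class name, limits, and logger name here are illustrative assumptions rather than anything prescribed by the paper:

```python
import logging
import time
from collections import defaultdict, deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-access")

class RateLimiter:
    """Sliding-window throttle: allow at most max_calls per client per window."""

    def __init__(self, max_calls=10, window_seconds=60.0):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = defaultdict(deque)  # client_id -> timestamps of recent calls

    def allow(self, client_id, now=None):
        now = time.monotonic() if now is None else now
        q = self.calls[client_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_calls:
            log.warning("throttled %s", client_id)
            return False
        q.append(now)
        log.info("allowed %s (%d/%d in window)", client_id, len(q), self.max_calls)
        return True
```

Beyond blocking abuse, the log entries themselves are the monitoring artifact: a client repeatedly hitting the throttle is exactly the access pattern a model-extraction attempt produces.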