Peer to Peer: ILTA's Quarterly Magazine
Issue link: https://epubs.iltanet.org/i/1542659
Peer to Peer Magazine · Winter 2025

As the legal industry increasingly relies on AI for high-value workflows, cybersecurity risks are rising in step. Malicious cyber actors tend to focus on two targets: new, untested technology and systems that hold valuable data. Legal AI deployments fall into both categories, which makes extra focus on AI cybersecurity mandatory.

[6] Humans Are Still Necessary

While automation accelerates workflows and AI models can draw on far more knowledge than any one person can hold, high-risk contexts require a human in the loop. This is not just a best practice; it is quickly becoming a regulatory requirement. Workflows should be designed to allow human intervention at critical junctures, so that people can review AI outputs, provide feedback, and override them where necessary. For compliance purposes, the frequency of human intervention should also be tracked and used to improve AI performance.

[7] Third-Party AI = Third-Party Risk

Third-party providers are all but essential in any business context, and the legal industry is no exception. With AI, however, new risks arise: the failures of a third-party AI system will propagate to your platform, whether or not they are your fault. Ensure that vendor questionnaires rigorously assess training data, safeguards, and compliance levels, and update all contracts to include AI-specific clauses so that no terms between the parties are left implied. Supply chain risk management is already paramount in the modern ecosystem, and AI extends its scope.

The AI boom of the 2020s has altered almost every aspect of life, including the legal industry. Legacy compliance frameworks provide essential cybersecurity controls, but the introduction of AI has fundamentally changed what it means to be secure and compliant. These issues are no longer peripheral nice-to-haves but essentials, with external pressure coming from laws, regulators, and clients alike.
Building a real AI compliance stack requires legal technology leaders to blend existing technology controls with safeguards tailored to AI's specific risks. There is no time to delay: those who move now will not only avoid regulatory scrutiny but also become industry pioneers in establishing a new level of trust in the AI age.

KARUN MAHADEVAN is a solutions engineer and product manager specializing in cybersecurity, automation, AI implementation and governance, and software development. At NopalCyber, he focuses on client solution development and operations management and oversees projects across North America. Previously, Mahadevan was a software engineer at Epic Systems. Mahadevan graduated summa cum laude with a B.S. in computer science and a minor in Chinese language.

