Peer to Peer: ILTA's Quarterly Magazine

Issue link: https://epubs.iltanet.org/i/1521210


From skewed depictions of ethnicity to gendered assumptions about household roles, the consequences of biased image generation are profound and far-reaching. The very fabric of these Gen AI-powered tools is woven from the depths of the internet, where xenophobia, racism, misogyny, violence, bigotry, and abusive tendencies fester unchecked. Despite efforts to sterilize these datasets, filtering out problematic content proves to be a Sisyphean task. Remnants of cultural bias linger, distorting representations of race, gender, and wealth. From disproportionately representing individuals who appear White, female, and youthful to perpetuating tropes about race, class, and intelligence, the impact of biased Gen AI extends far beyond mere pixels on a screen.

This underscores the critical need for responsible Gen AI deployment: selecting accurate, audited data sources and establishing fully auditable tools and processes to track the effectiveness and impact of Gen AI optimizations. Moreover, privacy protection and rigorous auditing processes are imperative to prevent biases from seeping into Gen AI systems.

New York City issued an ordinance in Summer 2023 addressing bias in Gen AI systems used in hiring. Rather than wait out the typical procedural gridlock in Congress, the city took matters into its own hands: it prohibited employers and employment agencies from using an automated employment decision tool in New York City unless they ensure a bias audit is done and provide the required notices. Under the new law, employers that want to use Gen AI systems in their hiring procedures must publish an annual bias audit report showing how their use of Gen AI in employment practices withstands scrutiny for bias. New York businesses must also inform applicants and employees whenever Gen AI-driven tools contribute to employment-related decisions.

Transformative Potential in Combating Financial Fraud and Money Laundering

While the ethical implications surrounding Gen AI are profound and warrant careful consideration, a compelling case exists for embracing this technology within the financial and legal sectors. From streamlining internal operations to bolstering security measures, its role in detecting and preventing fraud and money laundering is a groundbreaking application with far-reaching implications.

In the payments industry, the initial adoption of Gen AI focused on internal functions, ranging from streamlining IT requests and managing internal expense reporting to informing lending processes, processing payments, and automating corporate employee expense payments for activities such as travel.

Perhaps the most groundbreaking application of Gen AI in the
