Peer to Peer: ILTA's Quarterly Magazine
Spring 2024


Artificial intelligence (AI) technologies have ignited significant interest in the financial payments industry, with their potential to rapidly analyze extensive datasets and mitigate human biases, thereby enhancing the effectiveness of financial crime prevention. However, recent advancements in AI, particularly in large language models (LLMs) and natural language processing, have introduced a novel iteration of technological intelligence: generative AI, or Gen AI. Unlike traditional AI, which relies on historical data to make numeric predictions, Gen AI has the unique ability to create entirely new content, often indistinguishable from human-generated material, rather than simply reproducing preexisting data.

Most financial payment solutions are pre-packaged products that embed existing Gen AI models or APIs within layers of code tailored to specific payment-processing applications. Still, many organizations are integrating Gen AI into their operations and expressing intentions to invest further. According to a 2023 McKinsey poll, a third of the organizations surveyed across industries already use Gen AI, 40% are planning further investment, and 28% have put it on their board's agenda. As these advanced AI-powered tools become more commonplace, it is essential to consider the ethical governance and regulatory frameworks the financial and legal industries should implement.

Outputs Are Only as Good as the Inputs

The legal and financial industries' increasing reliance on Gen AI introduces the risk of discriminatory outcomes and undermines the fairness of decision-making processes. The complexity intensifies when tech developers treat their Gen AI models as black boxes, rendering the decision-making process opaque. Like an exam student submitting an answer without showing their calculations, a system that cannot explain how it arrives at its decisions poses a fundamental threat. In fields like healthcare, finance, and law, where Gen AI plays a critical role, the consequences of errors made by these black boxes can be dire.

If Gen AI systems are trained on data containing biases, those biases can be perpetuated and amplified within the decision-making process, leading to unfair and unjust outcomes. Take predictive policing tools, for instance: constructed upon notoriously flawed and prejudiced crime data, they perpetuate cycles of discrimination. By prioritizing already over-policed neighborhoods, predictive policing algorithms amplify existing biases and exacerbate inequalities. The consequences are dire, as individuals find themselves trapped in a web of suspicion and surveillance simply because of their geographical location or appearance.

The danger lies in Gen AI's ability to cloak biased decisions under the guise of impartial mathematical algorithms. This phenomenon, known as tech-washing, obscures the reality of systemic injustices. As researchers delve deeper into predictive policing, a disturbing trend appears: the software disproportionately targets working-class communities and people of color, particularly Black individuals, in relentless cycles of unnecessary surveillance and undue suspicion.

The evolution of Gen AI imaging software has ushered in a new era of visual creation, but with it comes the shadow of bias and stereotype perpetuation. The latest image generators, such as Stable Diffusion XL, and the web-scale datasets used to train them, such as LAION-400M, boast advancements in bias reduction. However, they remain enmeshed in outdated and harmful clichés. Despite efforts to refine their algorithms, these tools continue producing images that reinforce Western-centric stereotypes and distortions. From caricatured portrayals
