Peer to Peer: ILTA's Quarterly Magazine, Summer 2025

Issue link: https://epubs.iltanet.org/i/1538025

Through a user-friendly interface, the chatbot ensures that actionable insights are available to everyone, not just those with advanced technical expertise. This approach not only fosters a culture of data-driven decision-making but also enhances transparency and accelerates collaboration across departments. Ultimately, it transforms data from a siloed resource into a shared asset that is accessible and useful to all.

Phase Two of the chatbot will introduce more advanced capabilities, including automated workflows. For example, the chatbot could help formulate targeted marketing campaigns based on activity scores and engagement levels, or generate predictive analytics by combining internal and external data to forecast trends. Another example would be work intake and routing to the proper legal resource. Gartner categorizes this use case as a "likely win: medium-to-high feasibility with medium-to-high value" (https://www.gartner.com/document-reader/document/5949439?ref=solrAll&refval=473343270).

RISKS AND GUARDRAILS

What do AI products such as self-driving cars, Amazon Alexa, and Apple's Siri have in common? Initially, none of them fully earned users' trust. Siri and Alexa often struggled to understand natural speech, especially regional accents or dialects, and required users to speak slowly or repeat commands. Similarly, the introduction of self-driving cars has raised uncertainty about their true capabilities, prompting questions about when, how, and to what extent users should rely on them. This gap between user expectations and performance underscores the need for guardrails, including functional transparency, ongoing model refinement, and built-in safety measures, to foster trust.

With PC Chat, when we release new datasets through the chatbot, we communicate clearly about the types of questions it can answer, because end users may not be familiar with the underlying table structures or data schemas available to the chatbot. Additionally, a built-in feedback mechanism enables users to flag incorrect or unhelpful responses, allowing us to identify areas for refinement. As with other AI tools, building trust in PC Chat is an ongoing process, and like any pioneering technology, occasional missteps are part of the journey.

While some inaccuracies may be tolerable during early adoption, security cannot be compromised. One of the key advantages of integrating PC Chat with our data hub is that existing security and governance policies are automatically enforced. As users interact with the chatbot, access is strictly governed by their roles and permissions, ensuring they can view only the tables, columns, or rows they are authorized to see.
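To make that kind of enforcement concrete, the following is a minimal Python sketch of role-based row- and column-level filtering applied at the data layer before any result reaches a chatbot. Every name in it (ROLE_POLICIES, apply_policy, the roles, and the sample columns) is a hypothetical illustration, not PC Chat's actual implementation.

    # A minimal sketch of role-based row- and column-level filtering, assuming
    # a policy map keyed by role. All names here are hypothetical illustrations.
    import pandas as pd

    # Hypothetical policy map: the columns each role may see, plus a row-level
    # predicate applied before any result reaches the chatbot.
    ROLE_POLICIES = {
        "partner": {
            "columns": ["client", "matter", "practice", "hours", "rate"],
            "row_filter": lambda df: df,  # unrestricted rows
        },
        "marketing": {
            "columns": ["client", "matter", "hours"],  # no billing rates
            "row_filter": lambda df: df[df["practice"] == "corporate"],
        },
    }

    def apply_policy(df: pd.DataFrame, role: str) -> pd.DataFrame:
        """Return only the rows and columns the given role may view."""
        policy = ROLE_POLICIES.get(role)
        if policy is None:
            raise PermissionError(f"No data-hub policy defined for role: {role}")
        rows = policy["row_filter"](df)
        cols = [c for c in policy["columns"] if c in rows.columns]
        return rows[cols]

    matters = pd.DataFrame({
        "client":   ["Acme Co", "Globex"],
        "matter":   ["M-101", "M-202"],
        "practice": ["corporate", "litigation"],
        "hours":    [12.5, 40.0],
        "rate":     [450, 525],
    })
    # Marketing sees only corporate matters, with the rate column projected away.
    print(apply_policy(matters, "marketing"))

The design point the article describes is that this filtering lives in the data hub rather than in the chatbot itself, so every query inherits the same governance no matter how the question is phrased.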
