P2P

Summer25

Peer to Peer: ILTA's Quarterly Magazine

Issue link: https://epubs.iltanet.org/i/1538025


REIMAGINING WORK

The age of agentic AI forces every team to decide what to automate, what to augment, and when to bring humans back into the loop. This decision cuts deeper than workflow optimization: it requires rebuilding the fundamental contracts of delegation, oversight, and accountability that enable knowledge work to function.

1. Pick the Right Mode for the Job

The traditional approach maps tasks to strategic value and complexity. But principal-agent research reveals a critical third dimension: AI self-awareness and human receptivity (https://www.researchgate.net/publication/374420256_Task_delegation_from_AI_to_humans_A_principal-agent_perspective).

Start by evaluating:

Strategic value: Does this affect clients, revenue, or legal risk?

Complexity/ambiguity: How many judgment calls are required?

Delegation readiness: Can the AI accurately assess its capabilities for this task, and will the human trust and follow the AI's direction?

This third dimension matters because AI systems often lack "metaknowledge," the ability to recognize when they are uncertain about something. An AI might confidently delegate a complex contract negotiation while failing to acknowledge that it lacks contextual awareness of industry dynamics. Similarly, humans vary in their attitudes toward AI delegation depending on their expertise, role, and prior experience with AI systems.

Practical framework: Tasks that are low-stakes, well-defined, AND where the AI demonstrates reliable self-assessment default to delegate mode. High-stakes work remains human-led regardless of AI confidence. The middle ground, characterized by moderate complexity and uncertain AI self-awareness, requires hybrid approaches in which the AI attempts the task but with mandatory human checkpoint reviews.

2. Design Seamless Hand-Offs

Classical delegation relies on monitoring and incentives, but AI-to-human delegation requires fundamentally new mechanisms for information sharing and trust-building.
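The mode-selection triage in section 1 can be sketched as a small decision function. This is a minimal illustration, not a production rule engine; the `Task` fields and thresholds are assumptions chosen to mirror the three dimensions above.

```python
from dataclasses import dataclass

@dataclass
class Task:
    strategic_value: str      # "low" or "high": client, revenue, or legal-risk impact
    complexity: str           # "low", "moderate", or "high": judgment calls required
    ai_self_assessment: str   # "reliable" or "uncertain": quality of AI metaknowledge

def pick_mode(task: Task) -> str:
    """Map a task to a working mode per the triage above."""
    # High-stakes work stays human-led regardless of AI confidence.
    if task.strategic_value == "high":
        return "human-led"
    # Low-stakes, well-defined, with reliable AI self-assessment: delegate.
    if task.complexity == "low" and task.ai_self_assessment == "reliable":
        return "delegate"
    # The middle ground: AI attempts the task with mandatory human checkpoints.
    return "hybrid"

print(pick_mode(Task("low", "low", "reliable")))       # delegate
print(pick_mode(Task("high", "moderate", "reliable"))) # human-led
print(pick_mode(Task("low", "moderate", "uncertain"))) # hybrid
```

The ordering of the checks encodes the framework's priority: stakes override confidence, and uncertain self-assessment alone is enough to force a hybrid checkpoint process.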
Information asymmetry solutions: AI systems must be designed to surface not just their decisions, but their confidence levels, the information they are missing, and the reasoning gaps they've identified. Rather than simple "escalation triggers," build AI systems that can explain what they need from humans and why. For example, an AI handling contract review should specify: "I have flagged this force majeure clause because it contains industry terminology outside my training data—I need human input on standard practice in biotech licensing."
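One way to make that kind of hand-off concrete is a structured payload the AI emits instead of a bare escalation flag. The field names below (`Handoff`, `format_handoff`) are illustrative assumptions, not a real library API; the point is that the message carries the decision, a self-assessed confidence, the identified gaps, and an explicit request.

```python
from dataclasses import dataclass

@dataclass
class Handoff:
    decision: str                   # what the AI decided or flagged
    confidence: float               # self-assessed confidence, 0.0-1.0
    missing_information: list[str]  # gaps the AI has identified
    request_to_human: str           # what it needs from the human, and why

def format_handoff(h: Handoff) -> str:
    """Render a hand-off as a human-readable escalation message."""
    gaps = "; ".join(h.missing_information) or "none identified"
    return (f"Decision: {h.decision}\n"
            f"Confidence: {h.confidence:.0%}\n"
            f"Missing information: {gaps}\n"
            f"Request: {h.request_to_human}")

msg = format_handoff(Handoff(
    decision="Flag force majeure clause for review",
    confidence=0.55,
    missing_information=["standard practice in biotech licensing"],
    request_to_human="This clause uses industry terminology outside my "
                     "training data; please confirm standard practice "
                     "in biotech licensing.",
))
print(msg)
```

A reviewer receiving this message gets the same information as the contract-review example above: not just a flag, but what the AI needs and why.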
