Humility in AI: Partnering With Technology That Assists, Not Overrides

By Grace Turney | December 19, 2025

Paul Pavlou, PhD, the dean of the Miami Herbert Business School, doesn’t sugarcoat the future of work. While many leaders tiptoe around AI, Pavlou offers a direct assessment: AI will indeed replace many jobs, but that transformation represents only half the equation. The other half, how AI can elevate human potential in ways we’ve barely begun to imagine, demands the same attention.

During a fireside chat at From Day One’s Miami conference, Pavlou shared insights from his extensive research on AI, decision-making, and organizational transformation. The conversation, moderated by Steve Koepp, From Day One co-founder and editor-in-chief, explored how business leaders and educators are grappling with a technology that Pavlou describes as “an order of magnitude” more significant than previous breakthroughs like electricity or the internet.

Redefining What Technology Can Do

Pavlou says that AI, unlike tools that simply automate tasks, represents something fundamentally different: a technology designed to overcome human limitations rather than merely extend or mimic human capabilities. “It thinks like us, or more like us, and better than us,” he said. This distinction shifts the conversation from what AI can do for us to what it reveals about our own abilities.

The implications become stark when examining certain professions. Take radiology: Pavlou points out that machines can analyze scans faster and more accurately than physicians. With that in mind, what is his advice for prospective students? Don’t become a radiologist if your job security depends on regulations requiring a human to perform tasks a machine handles better.

Yet he emphasizes this isn’t necessarily bad news for society. Better, faster diagnostic capabilities mean earlier disease detection and improved patient outcomes, even if it means fewer radiologists.

The Autonomy Paradox

Pavlou’s research on consumer decision-making revealed an intriguing paradox: people usually prefer to make their own choices, even when they know an algorithm would (theoretically) recommend something better. In studies examining how shoppers choose clothing, the researchers found that participants (particularly women) would rather make the final decision themselves than accept the AI’s recommendation.

Paul A. Pavlou, dean & professor at the Miami Herbert Business School, University of Miami, shared his research on AI during the session 

This desire for autonomy extends beyond retail. Whether they are physicians, HR managers, or executives, professionals want to understand why AI recommends specific actions rather than blindly accepting its output. “I want to have the last word,” Pavlou said, describing how people want to remain empowered to make their own decisions.

This insight carries profound implications for how organizations use AI systems. The technology works best not as a replacement for human judgment, but as a tool that enhances it, with humans maintaining ultimate control and accountability.

Preparing Students for an Accelerated Timeline

At Miami Herbert Business School, Pavlou faces a concrete challenge: employers increasingly want candidates with two to four years of experience, yet the school’s primary mission involves preparing entry-level graduates. His solution leverages AI itself. By using technology to personalize education and provide real-world project experience, students can graduate with the equivalent of several years of workplace experience compressed into their undergraduate years, he says. 

The school has launched AI majors and minors while transforming existing programs to incorporate AI across disciplines, from HR to finance to accounting. “It’s not just about teaching students to use AI,” Pavlou said, “but using AI ourselves” to personalize the entire educational experience. The goal: graduates who are “job ready on day one” with capabilities that would have taken years to develop in previous generations.

Beyond Individual Jobs to Lifelong Learning

According to Pavlou, there has to be a shift in how organizations think about workforce development. AI’s rapid advancement means upskilling and reskilling can no longer be confined to early career stages. Companies, whether they have 20, 2,000, or 200,000 workers, increasingly approach Miami Herbert for guidance on what their employees need to know about AI.

This demand has shifted executive education, elevating it from a secondary offering to a strategic priority. Organizations need different training at different levels: foundational skills for entry-level employees, experimental mindsets for middle managers, and strategic frameworks for C-suite executives who must create organizational cultures open to AI adoption while establishing appropriate guardrails.

The Compassionate Machine

Perhaps the most provocative element of Pavlou’s research involves what he calls “compassionate AI.” The premise challenges common assumptions: if human beings often lack empathy and compassion in decision-making, can AI actually serve as a corrective force rather than an amplification of our flaws?

“The baseline is human beings,” Pavlou said. “They’re not very compassionate.” He offers the example of self-driving vehicles: while humans kill tens of thousands of people in car accidents every year, a single death caused by a driverless car provokes widespread outcry and regulatory backlash. This double standard, he suggests, reflects our reluctance to acknowledge our own limited capabilities.

Pavlou expressed skepticism about companies that blame mass layoffs on AI adoption. The real opportunity, he argues, is not eliminating positions but creating better jobs and generating more value. Organizations should focus on how AI allows for better decision-making, reduces errors, and improves outcomes rather than simply trying to cut costs through workforce reduction.

He advocates for comprehensive training as the foundation of responsible AI adoption, implemented at individual, team, and organizational levels. This training should address both effective use of the technology and ethical considerations. Only after organizations understand what the technology can do should they establish guardrails and policies, rather than creating restrictions for capabilities they don’t yet fully grasp.

The conversation concluded with a reminder that reflects Pavlou’s central point: AI doesn’t exist in a vacuum. “We created them to serve us and augment what we actually do,” he said. The question isn’t whether humans or machines are superior, but how we can work together to overcome limitations and elevate capabilities that neither could achieve alone. For business leaders navigating this transformation, that perspective offers a more productive framework than the binary thinking that has dominated much of the AI debate.

Grace Turney is a St. Louis-based writer, artist, and former librarian. See more of her work at graceturney17.wixsite.com/mysite.

(Photos by Josh Larson for From Day One)