As artificial intelligence continues to shape the future of work, education, and human interaction, so too does concern over its limitations, including the rise of so-called “AI hallucinations,” where AI systems confidently present misinformation. With The New York Times and other major outlets highlighting these risks, how should we balance innovation and responsibility in AI?
To help us navigate this complex landscape, we sat with Dr. Ja-Naé Duane, an internationally recognized AI expert, behavioral scientist, and futurist. A faculty member at Brown University and a research fellow at MIT’s Center for Information Systems Research, Dr. Duane has spent over two decades helping governments, corporations, and academic institutions harness emerging technologies to build better, more resilient systems.
Her insights have been featured in Fortune, Reworked, AI Journal, and many others. Her latest book, SuperShifts: Transforming How We Live, Learn, and Work in the Age of Intelligence, explores how we can thrive in an era defined by exponential change.
Let’s dive in.

1. How do you assess the systemic risks of AI hallucinations, particularly in high-stakes domains like healthcare, law, or public policy?
When systems confidently generate false, misleading, or entirely fabricated information, AI hallucinations represent a profound and growing systemic risk, especially in high-stakes environments such as healthcare, law, and public policy. These outputs do not arise from malicious intent, but rather from the limitations of large language models, which rely on statistical associations rather than factual understanding. In healthcare, the consequences can be life-threatening. Misdiagnoses, hallucinated symptoms, and incorrect treatment suggestions jeopardize patient safety, increase liability, and erode trust in clinical AI systems.
In legal settings, hallucinations can distort judicial outcomes, particularly when systems fabricate precedents or misquote legal statutes, thereby undermining the fairness and integrity of decisions. In public policy, inaccurate or fabricated data can mislead government responses, distort public records, and create vulnerabilities that malicious actors might exploit. Unlike traditional misinformation, which often stems from human intent, AI hallucinations are more challenging to detect because they are generated with confidence and plausibility. This makes them more insidious and less likely to be noticed in fast-paced decision-making environments.
The broader implications extend beyond individual errors and impact societal trust in institutions and the legitimacy of data-driven systems. To address these risks, we require rigorous validation, real-time monitoring, precise human oversight mechanisms, and regulatory frameworks specifically designed to handle AI’s unique failure modes. Hallucinations are not merely technical glitches. They are structural vulnerabilities with far-reaching consequences that demand deliberate and coordinated mitigation.
2. Are organizations sufficiently prepared to detect and mitigate AI errors now, or are they moving too quickly without safeguards?
Organizations today are in a precarious transition. Many are rushing to implement AI systems for efficiency, automation, and a competitive advantage. Still, few are adequately prepared to detect and mitigate the errors that can arise. While advances in enterprise AI risk management are emerging, such as using AI to anticipate threats, flag anomalies, or automate compliance, most existing risk frameworks were not built with AI’s complexity in mind. They lag in key areas like data governance, oversight protocols, and real-time monitoring. Many organizations still rely on siloed teams and outdated manual processes that fail to detect subtle or evolving risks inherent in AI models. Compounding the problem is the widespread lack of AI-ready data, which undermines model performance and increases the likelihood of errors going unnoticed.
Security vulnerabilities such as model poisoning and prompt injection attacks require new forms of technical defense that most enterprises have not yet adopted. Moreover, human oversight, the critical last line of defense, is often underdeveloped or under-resourced. While organizations are moving with urgency, this speed usually comes at the expense of safety. Overconfidence in traditional analytics or a failure to understand AI-specific risks can lead to costly mistakes, reputational damage, or regulatory exposure. As AI continues to evolve, so must the systems and mindsets that govern it. Until safeguards are embedded into the core of organizational AI strategies, the current pace of adoption may be outstripping our capacity to use these tools wisely and safely.
3. How do you view the psychological impact of AI-generated misinformation on users who may not fully understand the technology’s limitations?
The psychological impact of AI-generated misinformation is significant and deeply concerning, especially for individuals who lack the technical background to understand how these systems work or how their outputs are generated. When AI presents inaccurate information with the same confidence as factual content, it becomes increasingly complex for users to distinguish between truth and fiction. This ambiguity breeds confusion, fear, and anxiety. It also contributes to cognitive overload, as people are forced to navigate a complex digital environment where even trusted systems may not be reliable. Studies show that exposure to AI-generated fake news is associated with decreased media trust, increased polarization, and antisocial behavior. Users may develop cynicism, helplessness, or apathy toward information systems in this climate. This erosion of trust does not stop at AI. It spills over into institutions, news outlets, and public discourse. We are building trust in AI on uncertain foundations; the consequences are already visible.
Public confidence is being undermined by misinformation and a lack of transparency, inconsistent governance, and the opaque nature of many AI systems. Media coverage that sensationalizes or oversimplifies the risks only adds to the confusion. To restore trust and mitigate psychological harm, we must enhance public understanding of AI’s limitations, invest in media literacy, and establish clear ethical guidelines. Without these measures, misinformation’s emotional and cognitive toll will continue to grow, weakening societal resilience when clarity and trust are more vital than ever.
4. What responsibility do developers and institutions bear in shaping the narrative and governance of AI?
In SuperShifts, we emphasize that developers and institutions are not merely participants in AI’s evolution. They are its architects. As AI becomes increasingly embedded in how we live and work, the choices made by those building and governing these systems will shape the future’s moral, social, and institutional frameworks. Developers are responsible for designing systems that are not only technically robust but also ethically grounded. This means embedding human values such as dignity, equity, and transparency into the very foundations of the technology.
Institutions must also rise to the challenge of developing adaptive governance models that can keep pace with the rapid pace of innovation. That includes fostering cross-sector collaboration, involving diverse stakeholders in decision-making, and ensuring that the narratives surrounding AI are shaped by empathy and foresight rather than fear or hype. As SuperShifts explores through themes like IntelliFusion and SocialQuake, the convergence of human and machine intelligence is as much a cultural transformation as a technological one. If the dominant story becomes one of obsolescence or loss of control, we risk creating resistance, fear, and exclusion. However, if institutions frame AI as a collaborative and transformative tool that empowers humans and strengthens communities, we can build public trust and guide AI toward a more inclusive future. This is not just about regulation or design. It calls for wisdom, imagination, and collective responsibility from innovation’s helm.
5. What practical steps should be prioritized to ensure AI evolves as a tool for collaboration rather than confusion or harm?
To ensure AI matures as a collaborative force rather than a source of confusion or harm, we need a coordinated set of practical actions across policy, education, and industry. On the policy front, governments should prioritize regulatory frameworks that categorize AI applications based on their level of risk. High-impact healthcare, finance, and law enforcement systems must meet stricter safety, transparency, and human oversight standards. Regulation must be both anticipatory and adaptive, keeping pace with rapid technological advancement while grounding its protections in fundamental rights.
Policymakers should also promote international cooperation to prevent fragmented oversight and ensure that global AI systems adhere to consistent and ethical standards. We must begin preparing people to live and work with AI by integrating AI literacy into school curricula. Educators need the tools and training to use AI responsibly, and students should have a voice in shaping the policies that govern its use in their learning environments. Companies must conduct routine audits within the industry to detect bias, validate safety, and ensure compliance with evolving standards. They should also build transparency into their systems, allowing users to understand how AI makes decisions and intervene when necessary.
Most importantly, businesses must engage in ongoing conversations with regulators, researchers, and communities to align their innovation with societal expectations. Without this shared approach, AI may deepen inequality and confusion. However, with care, cooperation, and intentional design, we can build a future where AI enhances human potential and becomes a trusted partner in shaping a more resilient and intelligent world.