What I read this week on AI - week 30
Articles on AI that caught my eye and can be linked back to internal audit
Each week, I dive into five articles that explore how AI is reshaping our world — from psychological risks to regulatory frameworks. For auditors and tech-minded professionals, these shifts raise new questions about oversight, risk, and opportunity.
Here’s what stood out this week:
Overview
Heavy AI use is linked to burnout among full-time employees.
A new AI model, Centaur, predicts human decision-making with remarkable accuracy.
A proposed transparency framework for frontier AI stresses the need for accountability.
Lawyers keep getting in trouble for relying too much on AI — hallucinations included.
The godfather of AI warns of AI misuse and existential risks but still sees a safer path forward.
Heavy AI use is linked to burnout among full-time employees
A new Upwork survey reveals that full-time workers who use AI heavily are 88% more likely to feel burned out and twice as likely to consider quitting. Interestingly, freelancers using the same tools reported mostly positive outcomes. Across the board, 90% of respondents see AI more as a coworker than a tool, and many report trusting AI more than their human colleagues.
Internal Audit Reflection:
This reinforces that AI adoption isn’t just a technical or productivity issue — it’s a people risk. Auditors should assess how AI impacts employee well-being, psychological safety, and attrition. Is your organization monitoring the human cost of automation?
Read more:
Heavy AI use at work has a surprising relationship to burnout, new study finds
A new AI model, Centaur, predicts human decision-making with remarkable accuracy
Researchers at Helmholtz Munich introduced “Centaur,” a language model trained on over 10 million human decisions. It doesn’t just predict what people will choose — it mimics how they decide, including reaction times. The model bridges the gap between psychological theory and predictive AI.
Internal Audit Reflection:
This could redefine how we assess fraud risk, compliance behavior, and user trust. Auditors might one day use models like Centaur to simulate decisions under stress or ambiguity — adding behavioral depth to control testing or scenario analysis.
Read more:
AI That Thinks Like Us: New Model Predicts Human Decisions With Startling Accuracy
A proposed transparency framework for frontier AI stresses the need for accountability
As AI models grow more powerful, Anthropic calls for a minimal yet enforceable transparency framework. It includes public system cards, secure development disclosures, and protections for whistleblowers. The goal: regulate the big players without slowing down innovation.
Internal Audit Reflection:
This sets a benchmark for governance expectations. Internal auditors can review whether their organizations align with these transparency principles, especially around model risk, safety testing, and public accountability. Consider using this framework as a reference in AI audit programs.
Read more:
The Need for Transparency in Frontier AI
Lawyers keep getting in trouble for relying too much on AI — hallucinations included
Despite repeated headlines about “bogus” AI-generated legal filings, lawyers continue to rely on ChatGPT and similar tools. Time pressure and lack of technical understanding are key culprits. While AI is increasingly embedded in legal tools, hallucinations remain a risk — and judges are not amused.
Internal Audit Reflection:
This is a powerful reminder of model limitations and the illusion of reliability. Auditors should examine how generative AI tools are used in sensitive or regulated processes. Are safeguards in place to detect hallucinated content or incorrect outputs?
Read more:
Why do lawyers keep using ChatGPT?
The godfather of AI warns of AI misuse and existential risks but still sees a safer path forward
AI pioneer Geoffrey Hinton lays out his fears: misuse by bad actors, societal disruption, and the existential risk of AI surpassing human control. But he also sees hope in developing AI safely, provided we act now. His message is clear: the stakes are high, but not yet beyond reach.
Internal Audit Reflection:
This is a call for scenario planning. What risks should auditors watch if AI autonomy increases or these systems fall into the wrong hands? Are crisis response plans and AI risk registers robust enough for what's coming?
View the video:
Closing
Thank you for reading my digest on AI in Internal Audit. I hope these reflections spark further dialogue on how emerging technologies intersect with internal audit practices. Feel free to share your thoughts, or pass this along to others via the link below!