What I read this week on AI - week 13
Articles on AI that caught my eye and can be linked back to internal audit
As an experiment, I’ll be sharing a few articles I read this past week on AI that got me thinking about what they might mean for internal audit.
Here’s what caught my attention this week:
Overview
Risk or Revolution: How AI is transforming the legal profession by automating routine work and enabling strategic focus.
Leadership Lag: McKinsey finds employees are ready to embrace AI, but leaders are the ones holding back.
AI Tools & Data Protection: VISCHER’s deep dive into how popular tools handle confidentiality and compliance.
Are AIs Conscious?: Ethical reflections on whether today’s models deserve moral consideration.
Memes & Machines: What a study on meme co-creation reveals about creativity in human-AI collaboration.
Risk or Revolution: Will AI Replace Lawyers?
The Forbes article “Risk or Revolution: Will AI Replace Lawyers?” discusses how artificial intelligence (AI) is transforming the legal profession by automating routine tasks, allowing lawyers to focus on more complex matters. A majority of law firms (65%) acknowledge that effectively leveraging generative AI will be a key differentiator between successful and unsuccessful firms over the next five years. Internal auditing faces a similar shift. By automating data analysis and anomaly detection, AI lets auditors strengthen risk assessment and fraud detection, freeing them to concentrate on strategic decision-making and complex problem-solving and thereby increasing the overall efficiency and effectiveness of the audit function.
Superagency in the workplace: Empowering people to unlock AI’s full potential
The recent McKinsey report “Superagency in the workplace: Empowering people to unlock AI’s full potential” shows that just 1 percent of companies believe their AI adoption has reached maturity. The research finds that the biggest barrier to scaling is not employees, who are ready, but leaders, who are not steering fast enough.
A key insight from the report is that employees are more prepared to embrace AI than leaders often realize. For instance, 13% of employees report using generative AI (gen AI) for at least 30% of their daily tasks, whereas C-suite leaders estimate this figure to be only 4%. Additionally, 90% of employees aged 35 to 44 feel comfortable using gen AI at work, suggesting a generational readiness to integrate AI into daily operations.
Popular AI Tools: What about data protection?
The VISCHER analysis “Popular AI Tools: What about data protection?” examines a range of AI tools for data protection compliance, focusing on their suitability for handling confidential data, personal data, and professional secrets. It also considers whether these tools use user data for their own purposes, such as training or service improvement.
Are AIs People?
In the Asterisk Magazine interview "Are AIs People?", researchers Rob Long and Kathleen Finlinson discuss whether artificial intelligence (AI) systems could possess consciousness and what ethical implications would follow. Long estimates a 5% chance that current AI models are moral patients—entities deserving moral consideration—with this probability rising to 40% by 2030. The conversation references a paper co-authored by Long that evaluates leading theories of consciousness and identifies indicators that could be present in AI systems. The paper concludes that there are "no obvious technical barriers to building AI systems which satisfy these indicators", suggesting that existing technology could, in principle, meet the conditions associated with consciousness.
One Does Not Simply Meme Alone: Evaluating Co-Creativity Between LLMs and Humans in the Generation of Humor
A recent study titled "One Does Not Simply Meme Alone: Evaluating Co-Creativity Between LLMs and Humans in the Generation of Humor" explored the collaborative potential between humans and Large Language Models (LLMs) in meme creation. The researchers divided 150 participants into three groups: one creating memes without AI assistance, one collaborating with an LLM, and one where the LLM generated memes autonomously. AI-generated memes outperformed both human-only and human-AI collaborations in average ratings of creativity, humor, and shareability. Among the top-performing memes, however, human-created ones excelled in humor, while human-AI collaborations stood out in creativity and shareability. These results highlight the nuanced dynamics of human-AI collaboration in creative tasks: AI can boost productivity and generate broadly appealing content, but human creativity remains essential for producing the material that resonates most.