Microsoft says AI tools such as Copilot or ChatGPT are affecting critical thinking at work – staff using the technology encounter 'long-term reliance and diminished independent problem-solving'
AI tools might be convenient for workers, but there's a risk staff will become too reliant on the technology in the future


Using generative AI at work may impact the critical thinking skills of employees — and that's according to Microsoft.
Researchers at Microsoft and Carnegie Mellon University surveyed 319 knowledge workers in an attempt to study the impact of generative AI at work, raising concerns about what the rise of the technology means for our brains.
Concerns about the negative impact are valid, the report noted, with researchers pointing to the “deterioration of cognitive faculties that ought to be preserved”.
That referenced research into the impact of automation on human work — which found that depriving workers of the opportunity to use their judgement left their cognitive function "atrophied and unprepared" to deal with anything beyond the routine.
Similar effects have been observed elsewhere: smartphone use has been linked to reduced memory, and social media use to shorter attention spans.
"Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving," researchers said.
The study noted that users engaged in critical thinking mostly to double-check the quality of the AI's output. It also found that the more confidence a worker had in a given generative AI tool, the less likely they were to apply their own critical thinking to the work.
"When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship," the research found.
Researchers said more work was needed on the subject, especially because generative AI tools are constantly evolving and changing how we interact with them.
They called for developers of generative AI to make use of their own data and telemetry to understand how these tools can "evolve to better support critical thinking in different tasks."
"Knowledge workers face new challenges in critical thinking as they incorporate GenAI into their knowledge workflows," the researchers added. "To that end, our work suggests that GenAI tools need to be designed to support knowledge workers’ critical thinking by addressing their awareness, motivation, and ability barriers."
Reliance on AI tools could become a big problem
All of this is a problem as Microsoft has pushed its AI-powered Copilot tools into its wider software package, a trend across the wider industry — though some workers are sneaking it into their companies without explicit approval, too.
Beyond cutting costs, one of the long-cited assumptions about AI is that it could remove routine tasks from day-to-day work, freeing employees from drudgery so they can shift to more creative work.
Achieving that requires finding the right balance between fully automated tasks, those with a human in the loop, and wholly human work.
Research from Stanford has suggested workers are more effective and productive when working alongside an AI assistant, but also found we easily slip into overreliance on such tools, sparking complacency or excessive trust in the technology.
Freelance journalist Nicole Kobie first started writing for ITPro in 2007, with bylines in New Scientist, Wired, PC Pro and many more.
Nicole is the author of a book about the history of technology, The Long History of the Future.