Sundar Pichai: AI keeps me up at night
The Google chief warned that recent AI developments will have a profound impact on society


Google CEO Sundar Pichai has admitted that potential AI-related harms are a serious cause for concern, conceding that the issue keeps him up at night.
Pichai’s comments came in an interview with CBS’ 60 Minutes program in which the broadcaster delved into the ongoing work to develop AI systems at the tech giant.
Pichai said the rollout of generative AI systems will have a profound impact on society, noting that ‘knowledge workers’ such as software developers, accountants, and writers could be among the professions most at risk from automation.
“This is going to impact every product across every company,” he said. “So that’s why I think it’s a very profound technology. AI will impact everything.”
When asked about the risks AI poses to society, Pichai warned that the current pace of development does raise concerns.
Google has been locked in a battle with Microsoft in recent months, with both tech giants rolling out generative AI systems to support core product offerings.
‘Bard’, Google’s generative AI system, launched in February and has been positioned as a key competitor to OpenAI’s ChatGPT, which Microsoft has been integrating across a number of its offerings, including its Azure cloud platform.
“If deployed wrongly, it could be very harmful,” he said. “The technology is moving fast. Does that keep me up at night? Absolutely.”
Aligning AI regulation
Pichai insisted that the development of new systems must be matched by regulatory safeguards to mitigate adverse effects.
He noted, however, that the future of AI development and regulation should not be placed in the hands of a singular organization or group.
“It’s not for a company to decide,” he told the program. “This is why I think the development of this [AI] needs to include not just engineers, but social scientists, ethicists, philosophers, and so on.”
“I think we have to be very thoughtful and I think these are all things society needs to move along.”
Pichai’s comments on AI regulation follow a controversial open letter published last month calling for an “immediate pause” to AI development.
The letter, signed by an array of tech pioneers, was published amid concerns that adequate regulatory safeguards were not currently in place to ensure systems would not cause societal problems.
It warned that AI development was becoming “out of control” and demanded a six-month pause so that international regulators could respond to the pace of innovation.
Lawmakers on both sides of the Atlantic have been exploring potential approaches to AI regulation in recent months.
Last week, the US National Telecommunications and Information Administration (NTIA) announced plans to explore “accountability measures” for companies developing AI systems.
The NTIA said it will launch a public consultation on AI products and services in a move that could help shape the Biden administration’s approach to federal regulations.

Ross Kelly is ITPro's News & Analysis Editor, responsible for leading the brand's news output and in-depth reporting on the latest stories from across the business technology landscape. Ross was previously a Staff Writer, during which time he developed a keen interest in cyber security, business leadership, and emerging technologies.
He graduated from Edinburgh Napier University in 2016 with a BA (Hons) in Journalism, and joined ITPro in 2022 after four years working in technology conference research.