Tech leaders worry AI innovation is outpacing governance


The rapid growth of AI is outpacing effective governance, researchers have warned, with business leaders desperate for more clarity on regulation.

NTT Data’s Responsibility Gap Survey of C-suite executives concludes there is an urgent need for stronger AI leadership that balances innovation with responsibility.

Eight in ten said a lack of clear policies is preventing them from scaling generative AI initiatives, and that unclear government regulations are hindering AI investment and implementation, leading to delayed adoption.

And while nine in ten executives said they worry about AI security risks, only a quarter of CISOs said they have a robust governance framework.

"The enthusiasm for AI is undeniable, but our findings show that innovation without responsibility is a risk multiplier," said NTT Data CEO Abhijit Dubey.

"Organizations need leadership-driven AI governance strategies to close this gap — before progress stalls and trust erodes."

There's a big split among C-suite executives over the appropriate balance between safety and innovation: one-third believe responsibility matters more than innovation, one-third think the opposite, and one-third rate them equally.

There are also concerns about sustainability, with three-quarters of leaders saying that their AI ambitions conflict with corporate sustainability goals, forcing them to rethink energy-intensive AI solutions.

Additionally, two-thirds of executives say their employees lack the skills to work effectively with AI, while 72% admit they don't have an AI policy in place to guide responsible use.

NTT Data recommends introducing "responsible by design" principles, building AI responsibly from the ground up and integrating security, compliance, and transparency into development from day one.

Leaders need a systematic approach to ethical AI standards, going beyond legal requirements, and organizations should upskill employees to work alongside AI and ensure teams understand AI’s risks and opportunities.


Meanwhile, there needs to be global collaboration on AI policy, with businesses, regulators, and industry leaders coming together to create clearer, actionable AI governance frameworks and establish global AI standards.

"AI’s trajectory is clear — its impact will only grow. But without decisive leadership, we risk a future where innovation outpaces responsibility, creating security gaps, ethical blind spots, and missed opportunities," said Dubey.

"The business community must act now. By embedding responsibility into AI’s foundation—through design, governance, workforce readiness, and ethical frameworks—we unlock AI’s full potential while ensuring it serves businesses, employees, and society at large equally."


Emma Woollacott

Emma Woollacott is a freelance journalist writing for publications including the BBC, Private Eye, Forbes, Raconteur and specialist technology titles.