Facebook and TUM create joint AI ethics research centre
The social network will contribute $7.5 million to the centre over a period of five years
Facebook has teamed up with the Technical University of Munich (TUM) to create an independent research centre focused on the study of AI ethics.
The Institute for Ethics in Artificial Intelligence will draw on the expertise of thought leaders and academics to research potential ethical issues related to the use of AI - such as safety, privacy, fairness, and transparency - as well as identifying possible issues with new use cases.
Facebook will contribute $7.5 million over five years and offer insight into how it's using AI and algorithms in initiatives such as its Fairness Flow, which can detect unintended bias. TUM also plans to consider other funding sources.
"At Facebook, ensuring the responsible and thoughtful use of AI is foundational to everything we do from the data labels we use, to the individual algorithms we build, to the systems they are a part of," Joaquin Quionero Candela, director of Applied Machine Learning at Facebook, wrote in a post announcing the partnership.
"AI poses complex problems which industry alone cannot answer, and the independent academic contributions of the Institute will play a crucial role in furthering ethical research on these topics... The Institute will also benefit from Germany's position at the forefront of the conversation surrounding ethical frameworks for AI - including the creation of government-led ethical guidelines on autonomous driving - and its work with European institutions on these issues."
The Institute for Ethics in Artificial Intelligence will be led by Professor Dr. Christoph Lütge.
"At the TUM Institute for Ethics in Artificial Intelligence, we will explore the ethical issues of AI and develop ethical guidelines for the responsible use of the technology in society and the economy," Dr. Ltge said.
"Our evidence-based research will address issues that lie at the interface of technology and human values. Core questions arise around trust, privacy, fairness or inclusion, for example, when people leave data traces on the internet or receive certain information by way of algorithms.
"We will also deal with transparency and accountability, for example in medical treatment scenarios, or with rights and autonomy in human decision-making in situations of human-AI interaction."