As 2023 begins, it’s clear that AI is playing a larger role in society, as people look to it to address global issues ranging from disease detection to natural disaster prediction. It’s also playing an important role at our company, where advances in AI have made it possible to improve search and photos, and to translate more languages in real time than ever before.
These benefits reflect a powerful and helpful new technology, one that is core to Google products and one that must be developed thoughtfully and responsibly. That’s why, in 2018, we became one of the first companies to issue AI Principles and to build guardrails, stating applications we will not pursue. The AI Principles offer a framework to guide our decisions on research, product design, and development, as well as ways to think about solving the numerous design, engineering, and operational challenges associated with any emerging technology.
But, as we know, issuing principles is one thing; applying them is another. We recently published our fourth annual AI Principles Progress Update, our review of our commitment to responsibly develop emerging technologies like artificial intelligence. This new report is our most comprehensive look yet at how we put the AI Principles into practice. We believe that putting the principles into practice requires a formalized governance structure, along with rigorous testing and ethics reviews.
Without a strong governance structure, it would be impossible to apply principles to an emerging technology. Because AI is still nascent and many of its risks have yet to be discovered and defined, strong governance puts processes in place to identify and mitigate those risks before AI-enabled products launch.
Our report assesses our progress on this front in 2022. By following these principles in our work, we’ve seen clear evidence that building AI with fairness, safety, privacy, and accountability leads to applications that are better at their concrete goal of helping people navigate the world around them. In short, responsibly and ethically developed products become successful products.
We also believe that defining and minimizing AI risks is especially urgent in 2023. As AI plays an increasingly important role in the economy and society, we must continue to advance responsible practices in this space and engage with regulators, civil society, and impacted communities to understand and manage AI’s risks and maximize its benefits.
People will see the most benefit from the development and deployment of AI only if those developing it discover, share, and follow practices for building it ethically. In our report, we share details on our approach, including the three pillars of our AI Principles governance, to help others build a structured approach across research, operations, and product teams.