When it comes to AI, we need both good individual practices and shared industry standards. But society needs something more: sound government policies that promote progress while reducing risks of abuse. And developing good policy takes deep discussions across governments, the private sector, academia and civil society.
As we’ve said for years, AI is too important not to regulate — and too important not to regulate well. The challenge is to do it in a way that mitigates risks and promotes trustworthy applications that live up to AI’s promise of societal benefit.
Here are some core principles that can help guide this work:
Importantly, in developing new frameworks for AI, policymakers will need to reconcile competing policy objectives like competition, content moderation, privacy and security. They will also need to include mechanisms that allow rules to evolve as technology progresses. AI remains a very dynamic, fast-moving field, and we will all learn from new experiences.
With a lot of collaborative, multi-stakeholder efforts already underway around the world, there’s no need to start from scratch when developing AI frameworks and responsible practices.
The U.S. National Institute of Standards and Technology AI Risk Management Framework and the OECD’s AI Principles and AI Policy Observatory are two strong examples. Developed through open and collaborative processes, they provide clear guidelines that can adapt to new AI applications, risks and developments. And we continue to provide feedback on proposals like the European Union’s pending AI Act.
Regulators should look first to applying existing authorities — like rules ensuring product safety and prohibiting unlawful discrimination — and pursue new rules only where they’re needed to manage truly novel challenges.