Beyond the cyber war, tech platforms are also playing a part in helping countries build resilience in the face of an information war.
To that end, Google’s Jigsaw teamed up with leading academics, civil society organizations and disinformation experts to pilot a method of fighting disinformation known as “prebunking” in Poland, Czechia and Slovakia. Based on social science research, prebunking works by helping people build psychological resilience to disinformation before they encounter misleading claims.
In addition, last year YouTube blocked more than 800 channels and over 4 million videos tied to Russian state-funded news media. And YouTube’s Hit Pause media literacy campaign has reached more than a billion people with tips on how to identify manipulation tactics used to spread misinformation.
As the invasion of Ukraine approaches the one-year mark, policymakers are taking a deeper look at the role technology has played in strengthening cyberdefense.
And they are asking how transformational innovations may enhance security resilience in the future.
Dramatic advances in artificial intelligence and machine learning mean that, as we face new challenges, we will be able to draw on new cyber-defense tools, along with innovations that keep us at the forefront of global competitiveness.
AI is transitioning from deep research and one-off breakthroughs into products and applications that allow people to work in completely new ways.
In 2017, Google developed the Transformer architecture, the foundation for modern language models. But that’s only the beginning.
AI is raising the curtain on a new era in science and technology — a new era in nuclear fusion, quantum science, personalized medicine, agricultural productivity, materials science, and more.
On the security and information front, AI will help us detect, analyze, and block anomalous behavior, and combat phishing and malware effectively, at scale, and in real time. We’re already seeing this today: AI applications predict whether emails pose a threat, and machine learning identifies toxic comments and problematic videos. For example, on YouTube, as we expanded our investments in people and machine learning, the rate of views on content that violates our Community Guidelines dropped by 75% from Q3 2017 to Q4 2020 — and continues to shrink.
We’ll soon have automated cyber defenses at an even broader scale, with advanced intelligence stopping attacks before they reach users. AI is just the latest frontier in the ongoing cat-and-mouse game between cyber-attackers and cyber-defenders.
With the compute used to train state-of-the-art AI models doubling roughly every six months, it’s time for government, industry, academia, and the security community to align on common approaches and ensure these technologies are used responsibly.
We need a proactive digital agenda that promotes the benefits of these technologies while addressing their risks.
We’re at a watershed moment that calls for us to be bold, responsible and collaborative.
Technology can help secure our digital future while unlocking incredible opportunities — but only if we work together in a spirit of digital solidarity, ensuring that technology works for everyone.