Today, we’re sharing our second quarterly adversarial threat report, which provides insight into the risks we see worldwide across multiple policy violations. The report marks nearly five years since we began publicly sharing our threat research and analysis into the covert influence operations we tackle under our Coordinated Inauthentic Behavior (CIB) policy. Since 2017, we’ve expanded our threat reporting to cover cyber espionage, mass reporting, inauthentic amplification, brigading and other malicious behaviors.
Here are the key insights in today’s Adversarial Threat Report:
Cyber espionage: Our investigations and malware analysis into advanced persistent threat (APT) groups show a notable trend in which APTs rely on openly available malicious tools, including open-source malware, rather than invest in developing or buying sophisticated offensive capabilities. While some still opt for more advanced malware that often incorporates exploits, we’ve seen a growing number of operations using basic, low-cost tools that require less technical expertise to deploy, yet still yield results for the attackers. This lowers the barrier to entry and democratizes access to hacking and surveillance capabilities. It also allows these groups to hide in the “noise” and gain plausible deniability when scrutinized by security researchers.
Emerging harms: Over the past year and a half, in response to organized groups relying on authentic accounts to break our rules or evade our detection, we’ve developed multiple policy levers to help us take action against entire networks — whether these are centralized adversarial operations or more decentralized groups — as long as they work together to systematically violate our policies. Since we began deploying these levers, we’ve enforced against networks with widely varying aims and behaviors, including groups coordinating harassment against women, decentralized movements working together to call for violence against medical professionals and government officials, an anti-immigrant group inciting hate and harassment, and a cluster of activity focused primarily on coordinating the spread of misinformation. Our report highlights our findings and takedowns in India, Greece, South Africa and Indonesia.
A deep dive into a Russia-based troll farm: We’re also sharing our threat research into a troll farm in St. Petersburg, Russia, which unsuccessfully attempted to create a perception of grassroots online support for Russia’s invasion of Ukraine by using fake accounts to post pro-Russia comments on content posted by influencers and media on Instagram, Facebook, TikTok, Twitter, YouTube, LinkedIn, VKontakte and Odnoklassniki. Our investigation linked this activity to the self-proclaimed entity CyberFront Z and to individuals associated with past activity by the Internet Research Agency (IRA). While some media entities in Russia, including those previously linked to the IRA, portrayed this activity as a popular “patriotic movement,” the available evidence suggests it hasn’t succeeded in rallying substantial authentic support.
We shared our latest findings with our peers at tech companies, security researchers, governments and law enforcement. Where possible, we’re also alerting the people we believe were targeted by these campaigns.
See the full Adversarial Threat Report for more information.