We’re constantly working to find and stop coordinated campaigns that seek to manipulate public debate across our apps.
Over the past three and a half years, we’ve shared our findings about coordinated inauthentic behavior (CIB) that we detect and remove from our platforms. As part of our regular CIB reports, we’re sharing information about all of the networks we take down over the course of a month to make it easier for people to see the progress we’re making in one place.
Our teams continue to focus on finding and removing deceptive campaigns around the world — whether they are foreign or domestic. In March, we removed 14 networks from 11 countries. Five networks — from Albania, Iran, Spain, Argentina, and Egypt — primarily targeted people outside of their countries. The other nine — from Israel, Benin, Comoros, Georgia, and Mexico — focused on domestic audiences in their respective countries. We have shared information about our findings with industry partners, researchers, law enforcement and policymakers.
Here are a few notable highlights:
Early detection and continuous enforcement: The vast majority of the networks we removed in March had limited followings or were in the early stages of building their audiences when we removed them. The small Iranian network is a good example: the threat actor behind it attempted to re-create their presence after we disrupted their operation targeting Israel in October 2020. Late last year and in early 2021, they began creating Pages and accounts, some of which were detected and disabled by our automated systems. About a month after their first Page was created, our teams began investigating the rest of the network. Ongoing enforcement against these threat actors across the internet has made these operations less effective at building a following. With each removal, we set back the actors behind these networks, forcing them to rebuild their operations and slowing them down.
A deep dive into a troll farm: In addition to these newer networks, we also investigated and disrupted a long-running operation from Albania that primarily targeted Iran. While it did not succeed in building a significant audience over several years of operation, this campaign was run by what appears to be a tightly organized troll farm linked to Mojahedin-e Khalq (MEK), an exiled militant opposition group from Iran. To shed light on how such operations manifest on our platform, we’re adding a detailed research and analysis section at the end of this report. We’ve shared our findings with other platforms and researchers to support further discovery of similar activity across the broader internet.
AI-generated images: We removed three CIB networks that used profile photos likely created with generative adversarial networks (GANs), machine-learning models capable of producing realistic-looking faces. Since 2019, we have disrupted seven operations that used such synthetic photos. Notably, although a GAN-generated image can make an account look authentic to an external observer, it doesn’t materially change the deceptive behavior patterns that we look for to identify inauthentic activity.
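Outside researchers have published simple forensic heuristics for spotting these photos. One of them: StyleGAN-family generators tend to render the eyes at nearly fixed pixel coordinates, so eye-landmark positions that barely vary across many supposedly unrelated profile photos are a red flag. The sketch below illustrates that heuristic only (it is not a description of our detection systems) and assumes the open-source face_recognition library; photo_paths is a hypothetical list of equally sized profile images.

```python
# A minimal sketch of a published forensic heuristic (assumed setup, not
# production tooling): StyleGAN-family generators tend to place the eyes at
# nearly fixed pixel coordinates, so eye-landmark positions that barely vary
# across many same-sized profile photos are a red flag.
import statistics

import face_recognition  # open-source library built on dlib


def eye_center(points):
    """Average the landmark points of one eye into a single (x, y) center."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))


def left_eye_spread(photo_paths):
    """Total variance of left-eye centers across a batch of photos.

    `photo_paths` is a hypothetical list of equally sized face crops.
    A near-zero spread across many unrelated accounts is suspicious.
    """
    centers = []
    for path in photo_paths:
        image = face_recognition.load_image_file(path)
        faces = face_recognition.face_landmarks(image)
        if faces:  # use the first detected face, if any
            centers.append(eye_center(faces[0]["left_eye"]))
    if len(centers) < 2:
        return None  # not enough faces to compare
    var_x = statistics.pvariance([c[0] for c in centers])
    var_y = statistics.pvariance([c[1] for c in centers])
    return var_x + var_y
```

Because signals like these only describe how an account looks, behavior-based analysis remains the core of the investigation.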
We know that influence operations will keep evolving in response to our enforcement, and new deceptive behaviors will emerge. We will continue to refine our enforcement and share our findings publicly. We are making progress rooting out this abuse, but as we’ve said before, it’s an ongoing effort. We’re committed to continually improving to stay ahead. That means building better technology, hiring more people and working closely with law enforcement, security experts and other companies.
Here are the numbers for the 14 new CIB networks we removed in March:
Networks removed in March 2021:
We view influence operations as coordinated efforts to manipulate public debate for a strategic goal, where fake accounts are central to the operation. There are two tiers of these activities that we work to stop: 1) coordinated inauthentic behavior in the context of domestic, non-government campaigns (CIB) and 2) coordinated inauthentic behavior on behalf of a foreign or government actor (FGI).
Coordinated Inauthentic Behavior (CIB)
When we find domestic, non-government campaigns that include groups of accounts and Pages seeking to mislead people about who they are and what they are doing while relying on fake accounts, we remove both inauthentic and authentic accounts, Pages and Groups directly involved in this activity.
Foreign or Government Interference (FGI)
If we find any instances of CIB conducted on behalf of a government entity or by a foreign actor, we apply the broadest enforcement measures including the removal of every on-platform property connected to the operation itself and the people and organizations behind it.
Continuous Enforcement
We monitor for efforts to re-establish a presence on Facebook by networks we previously removed. Using both automated and manual detection, we continuously remove accounts and Pages connected to networks we took down in the past.
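As a toy illustration of that recidivism problem, consider keeping hashed signals from past takedowns and comparing them against new accounts. This is an assumed design for explanation only, not how our systems work; the signal names and threshold below are hypothetical.

```python
# A toy recidivism matcher (assumed design, not a production system):
# remember hashed signals from removed networks, then flag new accounts
# that share enough of those signals with any past takedown.
import hashlib

# Hypothetical store of hashed signals, keyed by removed-network id.
REMOVED_NETWORK_SIGNALS: dict[str, set[str]] = {}


def fingerprint(signal: str) -> str:
    """Hash a raw signal (e.g. a device or infrastructure identifier)."""
    return hashlib.sha256(signal.encode("utf-8")).hexdigest()


def record_takedown(network_id: str, raw_signals: list[str]) -> None:
    """Store the hashed signals associated with a removed network."""
    REMOVED_NETWORK_SIGNALS.setdefault(network_id, set()).update(
        fingerprint(s) for s in raw_signals
    )


def match_new_account(raw_signals: list[str], threshold: int = 2) -> list[str]:
    """Return ids of removed networks sharing at least `threshold` signals."""
    hashes = {fingerprint(s) for s in raw_signals}
    return [
        network_id
        for network_id, known in REMOVED_NETWORK_SIGNALS.items()
        if len(hashes & known) >= threshold
    ]
```

In practice, an automated match like this would only queue an account for the kind of manual review described above; overlap in a couple of signals is a lead, not proof.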
See the full report for more information.