FOREIGN TERRORIST ORGANIZATIONS & ROGUE NATIONS: SOCIAL MEDIA DISINFORMATION CAMPAIGNS

By Michael Prassad

As part of a standard SWOT analysis, Foreign Terrorist Organizations (FTOs) represent an important set of threats that create risks for any country's emergency management practitioners. Emergency managers, not just law enforcement, need to keep in mind their organization's disaster readiness (resiliency) along the standard path of Protect/Prevent/Prepare, Respond, Recover, and Mitigate, including the adverse impacts these threats can generate. Tools and techniques, along with collaboration, coordination, cooperation, and communication to and from the military and civilian intelligence agencies, can assist emergency management practitioners at all levels of government.

It is crucial for emergency managers to understand the risks of any threat, and the possibility of adverse impacts not only to the communities they serve, but to their own workforce (inclusive of all incident command and control structures) and those of allied partners. The training, indoctrination, methodologies, and tradecraft of domestic violent extremists (DVEs) can come from many of these FTOs, whether the DVEs are directly influenced and/or sponsored (as with homegrown violent extremists, or HVEs), or whether they indirectly study and research FTO methods. Experience and knowledge from historical warfare activities can also help prepare emergency managers for the DVE threat. This is applicable to Incident Action Planning, through Unified Command and the use of the Intelligence branch.

The concept of disinformation (as well as propaganda, misinformation, malinformation, etc.) is not new; what has changed is that its use by foreign state and non-state actors to undermine and influence the "policies, security, or stability of the United States, its allies, and partner nations" has accelerated exponentially in the internet age. The United States has already seen disinformation impacts to its elections, its COVID-19 response, and of course reputational impacts to individuals and organizations. Social media disinformation can be very powerful and very quickly distributed (think "going viral"), and as Jonathan Swift noted back in 1710, "Falsehood flies, and the truth comes limping after it."

Social media disinformation exploits a number of key logical fallacies when it targets groups and individuals:

  • Mob Appeal: By appealing to a crowd, the hope is that emotion will override reason and mask the fallacy. Phrases such as "everybody knows" typify this substitution of opinion for fact.

  • Weak Analogy: By comparing two or more disconnected items (for example, COVID-19 and the seasonal flu), the reader is easily manipulated into making the connection on their own.

  • Suppressed Evidence: Failing to share the differences in the analogies made, or omitting transparency information/data. Reposts of disinformation with additional unfounded claims only amplify the disinformation.

  • Appeal to Authority: Each presentation (or repost) of disinformation lends it greater apparent authority, even when the original source is false and may misappropriate real officials' names and personas.

The U.S. federal government divides its disaster readiness (and national defense) Intelligence activities (associated with Prevention and Protection) into two distinct jurisdictions: external threats and internal threats.

  • Foreign states and non-states (FTOs): The monitoring, reporting, alerting, and data collection activities on these groups are performed by the U.S. State Department's Global Engagement Center (GEC). The GEC currently focuses on Russia, China, and Iran as the top state actors involved in disinformation campaigns. There are partnerships between government and academia for the research and monitoring of disinformation, especially what occurs via public social media accounts and on the web.

  • One of those partnerships is with the German Marshall Fund of the United States' Alliance for Securing Democracy. Their Hamilton 2.0 Dashboard "provides a summary analysis of the narratives and topics promoted by Russian, Chinese, and Iranian government officials and state-funded media on Twitter, YouTube, state-sponsored news websites, and via official diplomatic statements at the United Nations" (Alliance for Securing Democracy, 2021, p. 1).

  • The GEC has also partnered with Park Capital Investment Group LLC to create Disinfo Cloud, an open-source platform that identifies U.S. companies with tested tools and technology platforms for identifying and thwarting foreign-sponsored disinformation.

  • The U.S. federal government, through the Federal Bureau of Investigation (FBI) and the U.S. Department of Commerce’s Bureau of Industry and Security, can seize websites linked to foreign nationals and nation-states (based on U.S. law) because of a disinformation threat.

  • U.S. nationals and U.S.-based groups: The monitoring, reporting, alerting, and data collection activities on these groups are performed by the U.S. Department of Homeland Security's Cybersecurity & Infrastructure Security Agency (CISA), a fairly new agency formed in 2018.

  • CISA provides alerts to other U.S. federal departments and agencies, as well as in-depth education on both the various tradecraft threat elements used by DVEs (and potentially by FTOs operating through U.S. groups) and the backgrounds/attack histories of the groups themselves.

  • The FBI and DHS both investigate disinformation campaigns in the homeland from both FTOs and DVEs. Among the strategic goals outlined in DHS's 2019 Department of Homeland Security Strategic Framework for Countering Terrorism and Targeted Violence are bolstering information sharing about foreign disinformation campaigns and bolstering communication and coordination with state, local, tribal, and territorial government entities. This local emphasis is critical: trusted voices within communities can quickly counter disinformation campaigns at the grassroots level.


DVES USE THE SAME PLAYBOOK AS FTOS WHEN IT COMES TO SOCIAL MEDIA DISINFORMATION CAMPAIGNS

The FBI notes that the terrorism threats impacting the United States (and therefore U.S. emergency management) have two key factors of recent impact:

  • Lone offenders: Terrorist threats have evolved from large-group conspiracies toward lone-offender attacks. These individuals often radicalize online and mobilize to violence quickly. Without a clear group affiliation or guidance, lone offenders are challenging to identify, investigate, and disrupt. The FBI relies on partnerships and tips from the public to identify and thwart these attacks.

  • The Internet and social media: International and domestic violent extremists have developed an extensive presence on the Internet through messaging platforms and online images, videos, and publications. These facilitate the groups' ability to radicalize and recruit individuals who are receptive to extremist messaging. Social media has also allowed both international and domestic terrorists to gain unprecedented, virtual access to people living in the United States in an effort to enable homeland attacks. The Islamic State of Iraq and ash-Sham (ISIS), in particular, encourages sympathizers to carry out simple attacks wherever they are located, or to travel to ISIS-held territory in Iraq and Syria and join its ranks as foreign fighters. This message has resonated with supporters in the United States and abroad (FBI, 2021). Artificial intelligence and machine learning are technological advances being used maliciously by FTOs and DVEs to increase the reach and distribution of social media disinformation. These same tools can be utilized by "good actors" (government and the private sector, especially the social media corporate giants) to detect and counter disinformation campaigns and protect the public, as noted previously.
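As a toy illustration of the machine-learning idea above, the sketch below trains a minimal naive Bayes text classifier to flag posts resembling previously labeled disinformation. This is a from-scratch teaching example, not any agency's or platform's actual tooling; the class name, labels, and training phrases are all invented for illustration, and real systems are vastly more sophisticated.

```python
import math
from collections import Counter

def tokenize(text):
    """Lowercase, whitespace-split tokenizer (deliberately simplistic)."""
    return text.lower().split()

class NaiveBayesFlagger:
    """Tiny multinomial naive Bayes over two labels: 'disinfo' and 'benign'."""

    def __init__(self):
        self.word_counts = {"disinfo": Counter(), "benign": Counter()}
        self.doc_counts = {"disinfo": 0, "benign": 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def classify(self, text):
        vocab = set(self.word_counts["disinfo"]) | set(self.word_counts["benign"])
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label, counts in self.word_counts.items():
            # Log prior plus log likelihood with Laplace (add-one) smoothing.
            score = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(counts.values())
            for word in tokenize(text):
                score += math.log((counts[word] + 1) / (total_words + len(vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

flagger = NaiveBayesFlagger()
# Invented training snippets for illustration only.
flagger.train("secret cure suppressed share before they delete this", "disinfo")
flagger.train("officials hiding the truth forward to everyone now", "disinfo")
flagger.train("county shelter opens at noon bring identification", "benign")
flagger.train("boil water advisory lifted for downtown residents", "benign")

print(flagger.classify("share this before they delete the suppressed truth"))  # prints "disinfo"
```

The same few lines of probability arithmetic scale in principle to millions of posts, which is why both malicious actors and defenders invest in it; the difference lies in the labels, the training data, and the intent.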

THREATS CAN MOVE FROM THE WEB TO THE REAL WORLD VERY QUICKLY

The QAnon network, designated a domestic violent extremist threat in 2019, had a "PizzaGate" disinformation campaign that resulted in actual violent incidents. West Point's Combating Terrorism Center has a detailed analysis of how QAnon disinformation campaigns have generated lone-offender participation in real-world criminal activity. The analysis of and investigations into the January 6, 2021 attack on the U.S. Capitol, and its nexus to social media disinformation campaigns, are still in progress. At the very least, FTOs have been amplifying and capitalizing on these events to further spread their own disinformation.

An October 2020 U.S. Department of Homeland Security Homeland Threat Assessment Report noted that “Russian influence actors also posed [online] as U.S. persons and discouraged African Americans, Native Americans, and other minority voters from participating in the 2016 election” (DHS, 2020, pp. 12-13).

That same report noted that foreign disinformation is not limited to national level impacts:

  • China views a state or locality’s economic challenges—including healthcare challenges due to COVID-19—as a key opportunity to create a dependency, thereby gaining influence. Beijing uses Chinese think tanks to research which U.S. states and counties might be most receptive to China’s overtures.

  • During the beginning of the COVID-19 outbreak, Beijing leveraged sister city relationships with U.S. localities to acquire public health resources. In February [2020], Pittsburgh shipped its sister city, Wuhan, 450,000 surgical masks and 1,350 coverall protective suits. Pittsburgh also established a GoFundMe account that raised over $58,000 to support Wuhan response efforts by providing medical supplies.

  • In Chicago, Chinese officials leveraged local and state official relationships to push pro-Chinese narratives. Also, a Chinese official emailed a Midwestern state legislator to ask that the legislative body of which he was a member pass a resolution recognizing that China has taken heroic steps to fight the virus. (DHS, 2020, p. 13)

WHAT CAN EMERGENCY MANAGERS DO TO INCREASE THEIR READINESS TO SOCIAL MEDIA DISINFORMATION?

Actions may speak louder than words, but those words can incite violence and generate threats and risks. Emergency managers already know the power of social media as it relates to public information alerts and warnings. They themselves (and through their governmental leaders) must be the trusted source for accurate and timely information needed to maintain life safety, incident stabilization, and property/asset protection before, during and after a disaster. Many times, the communications (both to and from the public) are expedited and amplified by social media. In some cases, social media may be the preferred (or only) way for members of the public to communicate with emergency management during a disaster. Disinformation campaigns can hinder or even threaten this method of communication – and can impact operations, finance/administration, planning and logistics.

  • Emergency managers should be connected to Federal resources for intelligence on FTO and DVE disinformation campaigns on a steady-state basis. This information should not be siloed within law enforcement only.

  • If possible, connect with CISA and other resources directly. Utilize governmental collaboration systems such as the Homeland Security Information Network (HSIN), and maintain a constant connection between law enforcement and emergency management. At the state level, utilize Fusion Centers for this type of threat, in addition to the others they cover.

  • Maintain your own cyber-monitoring capabilities. Connect with academic researchers and other private sector partners who also monitor for cyber threats.

  • Do both of the above. One example is the State of New Jersey. Its Fusion Center is staffed by both the State Police (which also runs the state's Office of Emergency Management; New Jersey is one of only two states in the nation, Michigan being the other, to operate this way) and the Office of Homeland Security and Preparedness (OHS&P), which reports directly to the Governor's Office. In addition to generating its own threat analysis, the NJ OHS&P also operates a robust Cybersecurity and Communications Integration Cell, which provides public/private information alerts and sharing. In many ways, there is too much data out there for social media monitoring (especially open-source data), including what is available on disinformation campaigns. Organizations may need to utilize aggregator and filtration software to help focus the view on the areas important to them specifically. One example is Swan Island Technologies' TX360 product, used by Allied Universal Security among others, to help "Mitigate Risk and Improve Response and Recovery."

  • Countering disinformation campaigns requires the coordination of the organizations impacted with local, state, tribal and territorial governments. Emergency management can utilize their own public information capabilities, through their crisis communications team. This is true for private sector organizations as well as public ones.

  • Consider building communications templates in advance for disinformation campaigns, along the same lines as those pre-scripted for fictitious (exercise) disasters.

  • Exercise these templates (and the team which will implement/activate them) on a regular, continual basis. Consider current examples in the media impacting other organizations (or even other countries) and exercise the “what if this had happened to us?” aspects. Evaluate those exercises and make needed improvements to the planning, organization, equipment, and training of the crisis communications team.

  • Countering disinformation campaigns should not be limited to "fighting back" via social media. The public may learn about a disinformation campaign from other sources, and some constituents may not get their information via social media at all. And do not forget the various languages your constituents may use (including American Sign Language), as well as making sure your counter-messaging is accessible to people with disabilities and access/functional needs.

  • Finally, emergency managers are consequence management planners. The possibility that a disinformation campaign is connected to another threat or hazard, or even that groups may be working in concert to mount complex coordinated attacks, needs to be part of the planning for both steady-state and disaster operations. Reducing the "Pink Slice" (what one does not know they do not know) about a threat or hazard is part of the continuous vigilance needed for intelligence and situational awareness.
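The aggregator-and-filtration approach recommended above can be sketched in a few lines. This is a hypothetical illustration only: the `Post` structure, feed entries, and watchlist terms are invented, and the sketch stands in for commercial products (such as TX360) without reflecting how they actually work.

```python
import re
from dataclasses import dataclass

@dataclass
class Post:
    source: str  # e.g., platform or account name
    text: str

def filter_feed(posts, watchlist):
    """Return only posts matching at least one watchlist pattern (case-insensitive)."""
    compiled = [re.compile(pattern, re.IGNORECASE) for pattern in watchlist]
    return [post for post in posts if any(rx.search(post.text) for rx in compiled)]

# Invented example feed and watchlist terms.
feed = [
    Post("platform_a", "Evacuation route 9 is closed, use route 12 instead"),
    Post("platform_b", "SHARE NOW: shelters are secretly turning people away!"),
    Post("platform_a", "Farmers market reopens Saturday"),
]
watchlist = [r"shelters?", r"evacuation", r"boil.water"]

for post in filter_feed(feed, watchlist):
    print(post.source, "|", post.text)
```

The design point is simply that a small, agency-maintained watchlist can narrow an overwhelming open-source stream down to the posts worth a human analyst's attention; commercial tools add scoring, geolocation, and alerting on top of this basic pattern.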

[Pop-Out Box 1]

Extremist vs. Terrorist: Should it matter to Emergency Management Practitioners?

There are obstacles to information sharing between the U.S. Intelligence Community and state/local law enforcement agencies, most stemming from the USA PATRIOT Act. Designation as terrorism may or may not bring additional benefits to threat Protection and Prevention (two elements of Disaster Readiness for which Emergency Management practitioners are responsible) that outweigh the impacts to U.S. civil liberties.

https://www.rand.org/blog/2021/03/implications-of-domestic-terrorist-group-designations.html



[Pop-Out Box 2]

Terrorist or Patriot: It depends on who’s keeping score

Are "left-wing" groups such as Black Lives Matter and Antifa voicing political (and free speech) opinions and expressions, or are they terrorist organizations? Can the same be said on the "right" for the Three-Percenters and those groups that wave the Gadsden Flag (which also includes the National Rifle Association)?

 

https://www.newsweek.com/antifa-activists-vow-keep-fighting-even-terrorists-1584622

 

https://komonews.com/news/local/washington-three-percenters-say-defense-department-is-wrong-to-label-them-extremists


https://www.newyorker.com/news/news-desk/the-shifting-symbolism-of-the-gadsden-flag