Harmful Information (Misinformation and Harassment)

** Summary **

The purpose of this module is to provide a conceptual framework that small civil society organizations can use to address harmful information threats, including misinformation and online abuse such as harassment. These attacks spread hate and sway popular opinion using botnets, armies of trolls, and divisive fabricated content. They can target organizations, especially those engaged in political advocacy, through many tactics: activists and journalists are harassed, the reputations of advocacy organizations are tarnished, and public support for social causes is shifted. Complicating this picture, an organization’s ability to protect itself from harmful online information attacks can be impeded by the lack of a shared understanding of harm reduction across its own security, communications, human resources, and management functions.

** Learning Objectives **

  • Understand the nature and the challenges of harmful information, such as misinformation and online abuse.

  • Understand the harms and risks of harmful information in order to prioritize controls.

  • Learn the major approaches to improving defenses against harmful information from the perspective of a leader within a single organization.

** Pre-Readings **

  • See Course Readings for "Harmful Information (Misinformation and Harassment)"

** Resources **

* Mitigation Framework

* Harmful Information Case Studies

** Activities **

In small groups, consider: How might Twitter be used to harm an organization... even when the site is used as designed (i.e., not being “hacked”)?

Discuss as a class.

** Discussion **

There will probably be some vocabulary and classification questions around these threats:

To which category of acceptability (Usually Acceptable, Sometimes/Borderline Acceptable, Always Unacceptable) do the following threats belong?

Propaganda – Disinformation – Misinformation – Malinformation – Internet Shutdowns – Harassment – Trolls – Bots – Doxxing – Mobbing – Swatting – Leaks – Sockpuppets – Astroturfing – Clickfarms – Deceptive Advertising – Exclusionary Advertising – Dog Whistles – Subtweeting – Parody News – Clickbait

** Input **

We define “harmful information” as threats of harm that stem from the use or abuse of information systems as they are designed or intended to be used. For example, a system may be designed in a way that allows its users to spread hate speech to any other user, even though that behavior violates a community standards policy. The major categories of abuse fall into “misinformation” and “harassment.”

Why “Misinformation”?

We use the term “misinformation” to classify harmful information spread about an organization or individual, regardless of the accuracy of the information or the intent to cause harm.

The terms “disinformation,” “malinformation,” and “misinformation,” among others, may be used by experts to more precisely describe the intent behind and factual accuracy of the information being spread. However, it is important to stress that information in each of these categories can harm an organization or individual regardless of its intent or accuracy. An organization’s focus on the harm caused by a threat should matter more than its concerns about the threat’s proper technical classification.

When possible, understand the intent behind an attack and the accuracy of its elements.

  • Knowing the intent of an adversary is useful for anticipating how an incident might escalate in severity, persist over time, and evolve into future attacks.

  • Factual accuracy can be leveraged in most information attacks, since “kernels of truth” lend an attack more credibility; however, complete falsehoods can still spread dangerously even when they are easy for careful observers to disprove.

When might the organization not care about the accuracy or intent of the harmful information?

Understanding the Types of Threats

Most concerns will fall into the following interactions between an organization and the harmful information:

  • Direct Targeting: Harmful information is sent directly to the organization and its members.

  • Indirect Threats: Harmful information is spread about the organization to those outside of the organization.

  • Ingestion: An individual or organization unwittingly incorporates and uses harmful information in its decision-making processes.

  • Generation: The organization unwittingly creates or spreads harmful information. Insiders may also harass or spread lies about fellow staff members or organization outsiders.

Harassment is not the same as misinformation

  • A harm or violation has occurred, usually to an individual

  • The information may or may not be false; the attack may not even involve content

  • Different communities of action and approaches to mitigation

Why consider them together?

  • Both are “trust and safety” problems that don’t fall into the traditional digital security domain.

  • Attacks and tactics can be similar or intertwined.

  • Mitigations for harassment are an important subset of the actions that mitigate the harms of larger misinformation problems.

Practical “Solutions” for Civil Society:

  1. Increase understanding / practices around holistic security

    1. Physical Security: Inadequate protection for our people, our devices, and our workplaces allows online threats of physical violence to cause greater psychological harm as the risk and perception of physical harm increase.

    2. Digital Security: The security of data and information systems is important because confidential information is often used in misinformation attacks, and threats to the integrity and availability of our information can damage our credibility and hurt our ability to respond.

    3. Psychosocial Wellbeing: Misinformation and harassment can damage our psychological well-being and mental health, and the harms caused also undermine our ability to protect against and respond to threats in both the physical and digital domains.

  2. Integrate risk mitigation into existing systems and processes

    1. While some mitigations implemented by individuals may be adopted as organization-wide practices or policies, nearly all of the protective measures fit into processes that should already exist in most healthy, sustainable non-profit organizations. At the same time, these existing practices, processes, and policies are generally “necessary but not sufficient” for protecting an organization from harmful information threats. For example, if an organization does not have practices for security incident response or policies to ensure the inclusion and equity of its staff members, those will need to be created first.

    2. One example of integration into an existing security program would be to match and nest additional mitigations within a previously selected framework. The NIST Cybersecurity Framework is a useful example, given that its functions parallel several activities in countering harmful information (a rough illustration follows after this list).

  3. Strengthen external relationships and collaboration

    1. Media outlets and tech platforms play an outsized role as vehicles for both the spread and the prevention of misinformation and online abuse, while governmental actors can act both as purveyors of harmful information and as avenues for legal action and criminal justice. Given the limited formal systems for efficient incident resolution available to human rights defense organizations, relationships with those entities will ultimately be only as strong as one’s personal relationships with their employees in influential operations, security, trust & safety, legal, and policy positions.

    2. These relationships can be personal, formal, backchannel, and collective.
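
As a rough illustration of nesting additional mitigations within an existing framework (point 2 above), the sketch below groups example mitigations under the five NIST Cybersecurity Framework functions. This is a minimal sketch in Python; the specific mitigations listed are illustrative assumptions for discussion, not an official NIST mapping or a list prescribed by this course.

```python
# Illustrative sketch: nesting harmful-information mitigations under the five
# NIST Cybersecurity Framework functions. The mitigations listed here are
# example assumptions for discussion, not an official or prescribed mapping.
MITIGATIONS_BY_NIST_FUNCTION = {
    "Identify": [
        "Inventory public-facing accounts, websites, and spokespeople",
        "Run the harmful information risk identification steps in this module",
    ],
    "Protect": [
        "Enable multi-factor authentication on organizational accounts",
        "Limit publicly available personal data about staff to reduce doxxing exposure",
    ],
    "Detect": [
        "Monitor for impersonation accounts, hijacked hashtags, and coordinated abuse",
    ],
    "Respond": [
        "Follow the documentation plan and platform escalation contacts",
        "Activate psychosocial support for targeted staff",
    ],
    "Recover": [
        "Publish corrections through trusted channels and partners",
        "Review the incident and update policies and training",
    ],
}

# Print the mapping as a simple checklist for planning discussions.
for function, mitigations in MITIGATIONS_BY_NIST_FUNCTION.items():
    print(function)
    for mitigation in mitigations:
        print(f"  - {mitigation}")
```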

Prioritization by Risk

What are the differences between the threats that matter and the ones that don’t?

Threats may have greater impact depending on the context, content, audience, motivation, medium, and the capabilities of an attacker to gain legitimacy, impersonate, link, amplify, collect, and suppress information. Adversaries frequently combine the following operations or strategies to make information attacks more effective and sustainable:

Gaining Legitimacy: By establishing or co-opting seemingly authoritative sources of information, adversaries can threaten the authority of your own organization or of the sources of information that support your mission. Legitimate news organizations may give adversaries a platform; adversaries may create enduring organizations, media outlets, websites, and social media accounts; and celebrity endorsements or algorithmic decision-making may lend an appearance of legitimacy to the adversaries or make their message seem more acceptable (legally, socially, or otherwise justified by norms).

Collecting Sensitive Information: Adversaries may gather confidential or sensitive information to gain an advantage in controlling the perception, content, and flow of information. Beyond extorting or blackmailing an entity with threats of disclosure, adversaries can publish embarrassing information not intended for outside audiences, share out-of-context information that supports conspiracy theories, or discourage information-sharing by creating fear about the organization’s ability to maintain confidentiality.

Impersonation: Adversaries may imitate trusted sources to build or undermine confidence in the information being shared. This may include creating numerous individual accounts that parrot talking points from supporting or opposing groups, building an organizational presence that mimics legitimate organizations in the field, or even spoofing real organizations and their staff members with impostor websites and social media accounts. The goal of these tactics is to create doubt about the authenticity of information from your organization or to trick others into trusting the information coming from the impostor accounts.

Linking Together Targets: Some adversaries may try to capitalize on negative opinions or damaging information about associated organizations and individuals. Such “guilt by association” strategies are conducted by identifying and reporting on financial, personal/familial, geographic, and other connections among individuals, groups, and organizations. Given the transparency of financing and accounting required of most civil society organizations, the autonomy of a single organization can be threatened when disinformation campaigns target unrelated organizations that share common funding sources.

Amplifying the Message: Using progressions of dissemination channels, techniques, and resources, adversaries can spread information and even create a perception of consensus. Amplification can begin with delivery to niche online forums and groups and progress to widely used social media platforms and, eventually, more traditional media and celebrity endorsements. This amplification is usually intended to increase the signal strength or reach of the message and to increase the likelihood that opposing information is “drowned out.” Additionally, tactics such as creating inauthentic accounts to spread content or hijacking hashtags can be used to build more awareness of the harmful content on a single platform.

Suppressing Opposing Perspectives: Suppression includes intentional efforts to restrict access to and the flow of alternative information. Adversaries can target the availability of an organization’s messaging by shutting down internet access, censoring online content and users, or blocking access to outside perspectives.

Identify Harmful Information Risks

  1. Identify Potential Threats

    • Consider threats to individuals, groups, or the organization

    • Consider direct targeting, indirect attacks, ingestion, and generation

  2. Connect Threats to Potential Harms

    • Identify the impact of potential threats to individuals, groups, and the organization

    • Consider physical, reputational, financial harms

  3. Create and Prioritize Threat Scenarios

    • Describe threat scenarios in detail

    • Evaluate and prioritize scenarios based on likelihood and impact (see the sketch following this list)
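
A minimal sketch of step 3 appears below, assuming a simple likelihood × impact scoring scheme. The 1–5 scales, the example scenarios, and the scoring formula are illustrative assumptions; teams may prefer the qualitative risk matrix or scales from their existing risk management process.

```python
# Minimal sketch of step 3: scoring and ranking threat scenarios.
# The 1-5 scales and the example scenarios below are illustrative assumptions,
# not values prescribed by the Harmful Information Mitigation Framework.
from dataclasses import dataclass


@dataclass
class ThreatScenario:
    description: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def risk_score(self) -> int:
        # Simple likelihood x impact product; a qualitative matrix works too.
        return self.likelihood * self.impact


scenarios = [
    ThreatScenario("Impostor social media account spreads false statements", 4, 3),
    ThreatScenario("Doxxing of a staff member leads to threats of violence", 2, 5),
    ThreatScenario("Coordinated trolling floods the organization's inbox", 5, 2),
]

# Highest-risk scenarios first; these drive which mitigations to prioritize.
for scenario in sorted(scenarios, key=lambda s: s.risk_score, reverse=True):
    print(f"{scenario.risk_score:>2}  {scenario.description}")
```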

Identify Controls

Top 4:

  • Physical Security (‘Get out of Dodge’ plan)

  • Digital Security (Lock down accounts)

  • Mental Wellbeing (Preventing psychological harms)

  • DOCUMENTATION PLAN

** Deepening **

Have teams step through the Harmful Information Mitigation Framework using Case Study 1 or 2 in the Harmful Information Case Studies.

  • What are the harms or risks you find most important to address? (top 3)

  • Which mitigations would you prioritize for implementation? (top 3)

** Synthesis **

Summarize the module with an additional framing: ask “is the juice worth the squeeze?” or “how much effort, time, and money is worth how much protection from misinformation and online abuse?” Are there some protections that should be in place regardless of resource constraints?

Provoke: If resources are currently unavailable to address online abuse, how might you convince your boss or your client that this is a problem that requires a reallocation of resources?
