In the evolving landscape of technology-based information-gathering, the distinctions between tasking an individual source and asking a crowd have become more pronounced. Why does this matter? At a time when indications and warning of global developments are critical to great power competition, the U.S. Government has not developed the appropriate policy and oversight mechanisms to leverage this capability, which is all too often conflated with human intelligence. What has changed? Today, the crowd is bigger, more aggregated, and therefore more anonymized than ever before. The physical and technical processes inherent in crowdsourcing platforms fundamentally distinguish asking a crowd from tasking a HUMINT asset. The crowd, rather than the individual, is the source of the information, reducing the risks inherent in individualized HUMINT collection and demanding a change in oversight and policy considerations.
In HUMINT collection, the tasking is specific to the individual. In crowdsourced information-gathering, the crowd is specific to the ask.
Distinguishing Crowdsourcing from HUMINT: Aggregation and Technology
The differences between tasking an individual and asking a crowd are dispositive in distinguishing crowdsourcing from HUMINT and demonstrate the transformative role that crowdsourced data providers play in contemporary information collection.
Tasking within Human Intelligence (HUMINT) operations means particular instructions and missions are assigned to individual human sources by a person who is specifically trained and certified for the collection of information from individuals. This approach relies on tasking through interpersonal relationships, personalized instructions, and direct human-to-human interactions. In other words, the “task” is influenced by the individual.
Conversely, when data providers ask a crowd, the asking is not influenced by the source (the crowd); rather, the “ask” determines which crowd will answer. For example, changes to data requests or contract requirements are not instructions particularized for an individual human source. Unlike HUMINT tasking, these asks are agnostic as to the source. When the data request or contract requirements change, the sourced crowd automatically changes with the ask. This does not require direct human engagement. The crowd adjusts to the ask.
Asking the crowd elicits information from a diverse group of individuals through technology-driven methods, eliminating the need for the relationships with specific human sources and the personalized tasking that have been the hallmarks of HUMINT collection. Utilizing crowdsourced data platforms makes the resultant information Publicly Available Information rather than the product of sensitive contacts with individual sources.
In sum, a pivotal factor distinguishing HUMINT from crowdsourced data is the role that technology plays in the interactions with the humans in the crowd. Crowdsourced data platforms utilize advanced technologies to aggregate and analyze information, creating a decentralized and efficient intelligence gathering process that does not rely on cultivated relationships with individual sources.
Enhanced Safeguard Framework
As with all data, responsible and effective deployment of crowdsourced data, especially for sensitive data types or for data from specific regions on certain topics, requires the implementation of an enhanced safeguard framework.
Providers and users of crowdsourced data should execute the following:
Elevated Approval Authorities: Subject data from geopolitically significant regions or areas with privacy concerns to a more rigorous and specialized approval process.
Specific Safeguards for Certain Data Types or Regions: Implementing tailored safeguards for specific data types or regions ensures a nuanced approach to data collection. Safeguards may include:
- Heightened scrutiny;
- Encryption measures; and/or,
- Restrictions on the collection and sharing of certain types of information to address the unique challenges associated with those data sets.
Heightened Safeguards for Contributor Safety, Security, and Enhanced Consent: Providers and users should ensure the security of individual data contributors and clearly inform contributors of the terms, conditions, and use of the data they collect and provide by implementing the following types of safeguards:
- Standard risk review processes that require advance screening of requests for information to assess risk to contributors, along with processes to assess and reduce risk where safety concerns are identified;
- Real-time response to any indications of security issues with locations that may be implicated in a request for information, utilizing immediate feedback from contributors and other sources;
- Ensure that the acquisition of data for the U.S. Government assesses, in line with law and policy, provider compliance with sanctions, and that the acquisition process takes into account other national security-specific considerations, including provider information security regimes, in order to guard against unknown actor or adversary influence over the provider; and/or,
- Ensure the terms and conditions are clear to contributors so that they can provide truly informed consent with respect to the data they collect, as well as to any user data provided by or gleaned from contributor devices.
As technology continues to reshape categories of intelligence and information gathering, understanding the nuances between tasking one human and asking a crowd becomes pivotal. Crowdsourced data providers offer a dynamic alternative to traditional HUMINT collection practices, ushering in a new era of intelligence and information collection that harnesses the collective power of the crowd. Enhanced safeguards, together with a clear understanding that the individual source shapes HUMINT tasking while the ask determines the crowd, are essential to ensuring that the appropriate standards of oversight are applied, standards calibrated to the reduced privacy and reliability risks of this aggregated, anonymized environment.
Danielle Duff contributed to this article.