GAO Testimony Before Congress Regarding Emerging Opportunities, Challenges, and Implications for Policy and Research with Artificial Intelligence
Wednesday, July 11, 2018

Timothy M. Persons, GAO Chief Scientist, Applied Research and Methods, recently provided testimony on artificial intelligence (“AI”) before the House of Representatives’ Subcommittees on Research and Technology and Energy, Committee on Science, Space, and Technology.  Specifically, his testimony summarized a prior GAO technological assessment on AI from March 2018.  Persons’ statement addressed three areas:  (1) how AI has evolved over time; (2) the opportunities and future promise of AI, as well as its principal challenges and risks; and (3) the policy implications and research priorities resulting from advances in AI.  This statement by a GAO official is instructive as to how the government is thinking about the future of AI, and how government contractors can think about it as well.

The Evolution and Characteristics of AI

Persons stated that AI can be defined as either “narrow,” meaning “applications that provide domain-specific expertise or task completion,” or “general,” meaning an “application that exhibits intelligence comparable to a human, or beyond.”  Although AI has evolved since the 1950s, Persons cited “increased data availability, storage, and processing power” as explanations for why AI occupies such a central role in today’s discourse.  And while we see many instances of narrow AI, general AI is still in its formative stages.

Persons described “three waves” of AI.  The first wave is characterized by “expert knowledge or criteria developed in law or other authoritative sources and encoded into a computer algorithm,” such as tax preparation services.  The second wave is characterized by machine learning and perception, and includes many technologies recognizable today such as voice-activated digital assistants and self-driving cars.  The third wave is characterized by “the strengths of first- and second-wave AI . . . capable of contextual sophistication, abstraction, and explanation”; an example cited in his testimony was a ship navigating the seas without human intervention.  This third wave is just in its beginning stages.

Benefits of Artificial Intelligence and Challenges to Its Development

In his testimony, Persons summarized a number of benefits from the increased prevalence of AI, including “improved economic outcomes and increased levels of productivity” for workers and companies, “improved or augmented human decision making” through AI’s faster processing of greater quantities of data, and even providing “insights into complex and pressing problems.”  However, he also identified a number of challenges to further developing AI technology, such as the “barriers to collecting and sharing data” that researchers and manufacturers face, the “lack of access to adequate computing resources and requisite human capital” for AI researchers, the inadequacy of current laws and regulations to address AI, and the need for an “ethical framework for and explainability and acceptance of AI.”

In its report, GAO identified “four high-consequence sectors” for the further development of AI:  cybersecurity, automated vehicles, criminal justice, and financial services.  In each of these sectors, AI may serve as a valuable tool that could enhance that industry’s capabilities, but AI also raises concerns within each industry, including risks to safety, fairness, and civil rights, among other areas.

Policy Considerations for AI and Areas Requiring More Research

Relying on the GAO report and the views of subject-matter experts, Persons’ testimony highlighted a number of policy considerations and areas that require more research to improve AI.  One area is how to “incentiviz[e] data sharing.”  Persons highlighted that private actors need to better share data while still finding ways to safeguard intellectual property and proprietary information.  Similarly, federal agencies could share data that would otherwise not be accessible to researchers.  Another area was “improving safety and security,” as the costs from cybersecurity breaches are not necessarily borne equally between manufacturers and users.

One of the more significant policy considerations that will accompany increased usage of AI is “updating the regulatory approach.”  As an example, “the manufacturer of the automated vehicle bears all responsibility for crashes” under the regulatory structure as currently formulated.  Persons noted that regulators may need “to be proactive” in areas like this to “improve overall public safety.”  Relatedly, laws may have to adapt or evolve to allocate liability more appropriately, as “humans may not always be behind decisions that are made by automated systems.”  Without appropriate regulatory guidance, who bears responsibility for problems caused by AI remains unclear.  There is also the possibility of “establishing regulatory sandboxes,” which would enable regulators “to begin experimenting on a small scale and empirically test[] new ideas.”

Finally, Persons highlighted the importance of understanding “AI’s effects on employment and reimagining training and education.”  The data on this subject are currently incomplete, but Persons stated that job losses and gains are expected to be sector-specific.  With the increased prevalence of AI will also come the need to “reevaluate and reimagine training and education” to offset any possible job losses.
