The Future of AI Regulation: The Government as Regulator and Research & Development Participant | Part 1 of 2
Tuesday, March 3, 2020

INTRODUCTION  

Artificial intelligence (AI) systems have raised concerns in the public—some speculative and some based in contemporary experience. Some of these concerns overlap with concerns about privacy of data, some relate to the effectiveness of AI systems and some relate to the possibility of the misapplication of the technology. These concerns are heightened by the relative lack of a specific legal and regulatory framework that creates guardrails for the development and deployment of AI systems. Indeed, the potential use cases of this new technology are startling—self-driving cars, highly accurate medical diagnosis and screenplay writing are all tasks that AI systems have proven themselves capable of performing. The “black box” nature of some of these systems, where there is an inability to fully understand how or why an AI system performs as it does, adds to the anxiety about how they are developed and deployed.

At the same time, many nations view the development of AI technologies as a matter of national concern. Economic and academic competitiveness in the field is growing, and some governments are concerned that commercial enterprise alone will be insufficient to remain competitive in AI. It is not surprising, then, that governments around the world, including the US government, are beginning to develop national strategies for the support of AI development, while at the same time struggling—preliminarily, conceptually and directly—with the issue of regulation.

The role of the government in every industry can be significant, even in a market-driven economy like the US. This is particularly true for those industries that are susceptible to innovation through AI technologies and also highly regulated, controlled or supplied by governments, such as healthcare. Accordingly, the healthcare and life science industries should pay particular attention to governmental pronouncements on policy related to AI.

On January 13, 2020, the Office of Management and Budget (OMB) published a request for comments on a “Draft Memorandum to the Heads of Executive Departments and Agencies, ‘Guidance for Regulation of Artificial Intelligence Applications’” (the “Draft Memo”).1 OMB produced the Draft Memo in accordance with the requirements of Executive Order 13,859, “Maintaining American Leadership in Artificial Intelligence” (the “Executive Order”).2 The Executive Order called on OMB, in coordination with the Office of Science and Technology Policy Director, the Director of the Domestic Policy Council and the Director of the National Economic Council, to issue a memorandum that will:

(i) Inform the development of regulatory and non-regulatory approaches by such agencies regarding technologies and industrial sectors that are either empowered or enabled by AI, and that advance American innovation while upholding civil liberties, privacy and American values; and

(ii) Consider ways to reduce barriers to the use of AI technologies in order to promote their innovative application while protecting civil liberties, privacy, American values, and United States economic and national security.3

The Executive Order also required OMB to issue a draft version for public comment to “help ensure public trust in the development and implementation of AI applications.”4 Public comments on the Draft Memo are due March 13, 2020.5 Although the Draft Memo, like the Executive Order, speaks in general terms, it does provide more focus than the Executive Order in many ways. For example, the Executive Order requires implementing agencies to “review their authorities relevant to applications of AI” and submit plans to OMB to ensure consistency with the final OMB memorandum.6 The Draft Memo provides additional specificity regarding the information that the agencies must incorporate in their respective plans.7

This special report is the first of two that will review the five guiding principles and six strategic objectives articulated in the Executive Order and the specific provisions of the Draft Memo. While these two reports will provide a high-level review of these documents, they will also highlight certain aspects and other recent developments that may be related to the Executive Order and the Draft Memo. These articles will not, however, address national defense matters.

EXECUTIVE ORDER STRATEGIC OBJECTIVES

The Executive Order makes very clear that maintaining American leadership in AI is a paramount concern of the administration because of its importance to the economy and national security. In addition, the Executive Order recognizes the important role the Federal Government plays:  

“[I]n facilitating AI R&D, promoting the trust of the American people in the development and deployment of AI-related technologies, training a workforce capable of using AI in their occupations, and protecting the American AI technology base from attempted acquisition by strategic competitors and adversarial nations.”8

The Executive Order identifies objectives that executive departments and agencies should pursue, which primarily address how the federal government can participate in developing the US AI industry. These objectives are as follows:  

1. PROMOTE AI R&D INVESTMENT: Promote sustained investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-Federal entities to generate technological breakthroughs in AI and related technologies and to rapidly transition those breakthroughs into capabilities that contribute to our economic and national security.9

The first objective has a few interesting components. First, the reference to “collaboration” includes “international partners and allies.” This implies that the current administration considers the US AI industry to be both international and also, perhaps, governmental in character. In particular, the reference to “allies” implies that foreign governments may be partners in the development of the US AI technology industry, presumably, at least, with respect to national security matters. Second, this objective specifically references “investment,” implying that the administration anticipates financial investment from the identified collaboration partners, including non-US industry and governments. How agencies achieve this objective will be fascinating to discover, particularly in light of US government restrictions on foreign investment in sensitive US industries and the recently enacted regulations implementing the Foreign Investment Risk Review Modernization Act of 2018.10

Federal policy on investment in AI is the subject of the National Artificial Intelligence Research and Development Strategic Plan (the “AI R&D Plan”),11 a product of the work of the National Science & Technology Council’s Select Committee on Artificial Intelligence. The AI R&D Plan is broadly consistent with the Executive Order, but its objectives and goals pre-date the Executive Order and were not revised after its issuance. Other Federal agencies, including healthcare-related agencies, have also begun actively engaging in efforts to support AI development. The Centers for Medicare and Medicaid Services (CMS), citing the Executive Order, announced an AI Health Outcomes Challenge that will include a financial award to selected participants.12 The organizations CMS has selected to participate span a number of industry sectors, and include large consulting firms, academic medical centers, universities, health systems, large and small technology companies, and life sciences companies.13 In addition, a recent report on roundtable discussions co-hosted by the Office of the Chief Technology Officer of the Department of Health and Human Services and the Center for Open Data Enterprise (the “Code Report”) has identified a number of recommendations for Federal investment within its own infrastructure to support R&D efforts both within and outside the Federal Government.14

2. OPEN GOVERNMENT DATA:

Enhance access to high-quality and fully traceable Federal data, models, and computing resources to increase the value of such resources for AI R&D, while maintaining safety, security, privacy and confidentiality protections consistent with applicable laws and policies.15

This objective should resonate with those developers who believe the Federal Government holds valuable data for purposes of AI R&D. The Code Report has already identified potentially valuable healthcare-related data within the Federal Government (and elsewhere) and presented a series of recommendations consistent with the Executive Order objectives. The AI R&D Plan likewise calls for the sharing of public data.

3. REDUCE BARRIERS:

Reduce barriers to the use of AI technologies to promote their innovative application while protecting American technology, economic and national security, civil liberties, privacy, and values.16

Reducing barriers to use of AI technologies is an objective that implicates the existing regulatory landscape, as well as the potential regulatory landscape for AI technologies. Clearly, this objective is a call for agencies and departments to carefully balance the impact of regulations on development and deployment against what can only be described as an amorphous set of values. It remains to be seen whether more definition will emerge here, although it should be noted that recent legislative efforts and regulations reflect certain values. For example, pending legislation in the State of Washington would require facial recognition services to be susceptible to independent tests for accuracy and “unfair performance differences across distinct subpopulations,” which can be defined by race, skin tone, ethnicity and other factors.17 The law would also require “meaningful human review” of all facial recognition services that are used to make decisions that “produce legal effects on consumers or similarly significant effects on consumers.”18
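
To make the testing concept concrete, the minimal sketch below illustrates one way an independent tester might quantify “unfair performance differences across distinct subpopulations” for a face-matching service. The subpopulation labels, records and metrics are hypothetical assumptions chosen for illustration; they are not drawn from the Washington legislation or from any regulatory guidance.

```python
# Hypothetical sketch: comparing accuracy and false-positive rates across
# subpopulations for a face-matching service. All data below is invented.
from collections import defaultdict

# Each record: (subpopulation label, ground-truth match?, service's prediction)
results = [
    ("group_a", True, True), ("group_a", False, False), ("group_a", False, True),
    ("group_b", True, False), ("group_b", False, False), ("group_b", True, True),
]

stats = defaultdict(lambda: {"correct": 0, "total": 0, "fp": 0, "negatives": 0})
for group, truth, pred in results:
    s = stats[group]
    s["total"] += 1
    s["correct"] += int(truth == pred)
    if not truth:
        s["negatives"] += 1
        s["fp"] += int(pred)  # false positive: a match reported where none exists

for group, s in sorted(stats.items()):
    accuracy = s["correct"] / s["total"]
    fp_rate = s["fp"] / s["negatives"] if s["negatives"] else float("nan")
    print(f"{group}: accuracy={accuracy:.2f}, false-positive rate={fp_rate:.2f}")

# Large gaps between groups on these metrics are the kind of "unfair
# performance differences" that independent testing would be meant to surface.
```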

4. TECHNICAL STANDARDS:

Ensure that technical standards minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities.19

This objective covers considerable ground, and seems to imply a significant role for the Federal Government in setting the direction of technical standards for AI. In the summer of 2019, the National Institute of Standards and Technology (NIST) of the US Department of Commerce released a plan for Federal engagement in developing technical standards for AI in response to the Executive Order (the “NIST Plan”).20 The NIST Plan also clearly articulates the Federal Government’s perspective on how standards should be set in the US, including a recognition of the impact of other government approaches:

The standards development approaches followed in the United States rely largely on the private sector to develop voluntary consensus standards, with Federal agencies contributing to and using these standards. Typically, the Federal role includes contributing agency requirements to standards projects, providing technical expertise to standards development, incorporating voluntary standards into policies and regulations, and citing standards in agency procurements. This use of voluntary consensus standards that are open to contributions from multiple parties, especially the private sector, is consistent with the US market-driven economy and has been endorsed in Federal statute and policy. Some governments play a more centrally managed role in standards development-related activities—and they use standards to support domestic industrial and innovation policy, sometimes at the expense of a competitive, open marketplace. This merits special attention to ensure that US standards-related priorities and interests, including those related to advancing reliable, robust, and trustworthy AI systems, are not impeded.21 

The development of industry standards is already happening, evidenced, for example, by the publication of AI-related standards, including in healthcare, by the Consumer Technology Association.22 Another interesting aspect of this objective is the requirement that standards reflect Federal priorities related to public trust and confidence in AI systems. An exploration of the issue of public trust is well beyond the scope of this short article, but even the most casual observer of this industry will note the very real lack of confidence in AI systems and fear associated with how they are being, or may in the future be, deployed.23

5. NEXT GENERATION RESEARCHERS:

Train the next generation of American AI researchers and users through apprenticeships; skills programs; and education in science, technology, engineering, and mathematics (STEM), with an emphasis on computer science, to ensure that American workers, including Federal workers, are capable of taking full advantage of the opportunities of AI.24

The need for education related to the advances in technology is obvious, and is reflected in both the Code Report and the AI R&D Plan. It will be interesting to see how the Federal Government achieves this objective, particularly given the many cross-disciplinary applications available. Already, some are reconsidering medical education in light of the advancement of AI systems.25

6. ACTION PLAN:

Develop and implement an action plan, in accordance with the National Security Presidential Memorandum of February 11, 2019 (Protecting the United States Advantage in Artificial Intelligence and Related Critical Technologies) (the NSPM) to protect the advantage of the United States in AI and technology critical to United States economic and national security interests against strategic competitors and foreign adversaries.26

At the same time the White House issued the Executive Order, the Department of Defense launched its AI strategy. This subject is beyond the scope of these articles.

EXECUTIVE ORDER GUIDING PRINCIPLES

The guiding principles articulated in the Executive Order are, in some instances, little more than restatements of aspects of the objectives. Given the general nature of the objectives, this is not surprising. Nonetheless, some of the guiding principles highlight critical issues.

1. COLLABORATION: The United States must drive technological breakthroughs in AI across the Federal Government, industry, and academia in order to promote scientific discovery, economic competitiveness, and national security.27

Collaboration, as we have seen, is a theme that permeates many of the strategic objectives. Collaboration across industry sectors and government can be a challenge, but public-private partnerships have a long history in the United States and elsewhere.  

2. DEVELOP TECHNICAL STANDARDS AND REDUCE BARRIERS: The United States must drive development of appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies in order to enable the creation of new AI-related industries and the adoption of AI by today’s industries.28

Developing and deploying new technology within sensitive sectors, such as healthcare, requires balancing issues of safety with issues of overly burdensome regulation. The Food and Drug Administration (FDA) has been wrestling with this challenge for some time with respect to the treatment of clinical decision support tools covered in the 21st Century Cures Act, as well as digital health more broadly. Recently, the FDA published a discussion paper, which offers suggested approaches to the FDA clearance process that are designed to ensure efficacy while streamlining the review process.29

3. WORKFORCE: The United States must train current and future generations of American workers with the skills to develop and apply AI technologies to prepare them for today’s economy and jobs of the future.30

This guiding principle reads more like an objective, and is very closely aligned with the fifth objective of the Executive Order. As noted already, we are seeing the need for cross-disciplinary training in areas where AI systems are likely to have application, and furthering the preparation of our workforce for these systems will be critical.

4. TRUST: The United States must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.31

The need for trust and confidence in AI systems for us to take full advantage of the benefit they promise is universally understood. This is a subject that will be explored in other articles within this series.

5. INTERNATIONALIZATION: The United States must promote an international environment that supports American AI research and innovation and opens markets for American AI industries, while protecting our technological advantage in AI and protecting our critical AI technologies from acquisition by strategic competitors and adversarial nations.32

This guiding principle is a reflection of many longstanding US policy goals of opening markets for US industry participants while protecting their valuable intellectual property. In addition, the protection of vital US industries from foreign ownership or control has been of interest to the US government for many years, and, as noted above, the tools at the government’s disposal to protect this interest have been strengthened.

CONCLUSION

Even taken together, the objectives and guiding principles set forth in the Executive Order provide only a general sense of focus and direction, but it would be surprising if the Executive Order had been more specific. The goals of the Federal Government are broad, cut across multiple government agencies and functions, include the collaboration of industry and foreign interests, and address the government as both regulator and participant in the development of the AI industry. Since the issuance of the Executive Order, Federal agencies have been moving forward and have begun to address its goals. Greater specificity is coming.

Regardless, a few themes can certainly be pulled from the Executive Order. First, it is clear that this administration views the Federal Government as an active participant in the development of the US AI industry. While not without some downside risk, this generally bodes well for the industry in terms of investment, workforce training, access to data and other Federal resources and, potentially, the availability of the Federal Government as a convener of resources.

Second, this administration recognizes the importance of international collaboration, but is also acutely aware of potential dangers and risk. The extent to which, and the ways in which, this and future administrations balance the risk and reward of international collaboration in AI are yet to be defined. Third, standards need to be established. This is perhaps the most obvious of the objectives set forth, but it is also the most fraught. The link between trust and standards, and the degree and type of regulation applied to the AI industry, are all yet to be developed. Here, every agency and organization must contemplate the market, public perception, effective testing criteria, and the appropriate role for government and self-regulation.

The final theme, and key takeaway, perhaps, is that we are not there yet. The Executive Order is a call to action for the executive departments and agencies to start the process of coalescing around a central set of general objectives. We are far from seeing what this might look like, although many agencies have been addressing AI issues for years. A key development, and a key next step, will be the finalization of the Draft Memo and the development of executive department and agency work plans.

1 85 Fed. Reg. 1731, 1825 (Jan. 13, 2020). The full text of the Draft Memo is available on the White House website at https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf.

2 Exec. Order No. 13,859, Maintaining American Leadership in Artificial Intelligence, 84 Fed. Reg. 3967 (Feb. 11, 2019), available at https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/ (hereinafter “Exec. Order”).

3 Id. § 6(a).

4 Id. § 6(b).

5 85 Fed. Reg. at 1825. 

6 Exec. Order § 6(c).

7 Draft Memo, p. 10.

8 Exec. Order § 1.

9 Exec. Order § 2(a).

10 David J. Levine et al., Final Rules Issued on Reviews of Foreign Investments in the United States – CFIUS (Jan. 23, 2020), available at https://www.mwe.com/insights/final-rules-issued-on-reviews-of-foreign-investments-in-the-united-states-cfius/.

11 NAT’L SCI. & TECH. COUNCIL, SELECT COMM. ON ARTIFICIAL INTELLIGENCE, THE NATIONAL ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT STRATEGIC PLAN: 2019 UPDATE, available at https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf.

12 CMS Newsroom, CMS Launches Artificial Intelligence Health Outcomes Challenge (Mar. 2019), available at https://www.cms.gov/newsroom/press-releases/cms-launches-artificial-intelligence-health-outcomes-challenge.

13 AI Health Outcomes Challenge, available at https://ai.cms.gov/.

14 THE CENTER FOR OPEN DATA ENTERPRISE AND THE OFFICE OF THE CHIEF TECHNOLOGY OFFICER AT THE U.S. DEP’T OF HEALTH & HUM. SERVS., SHARING AND UTILIZING HEALTH DATA FOR AI APPLICATIONS: ROUNDTABLE REPORTS (2019), p. 15, available at https://www.hhs.gov/sites/default/files/sharing-and-utilizing-health-data-for-ai-applications.pdf (hereinafter “Code Report”).

15 Exec. Order § 2(b).

16 Exec. Order § 2(c).

17 Washington Privacy Act, S.B. 6281, § 17(1) (2020) (hereinafter “Washington Privacy Act”).

18 Id. § 17(7). The notion of human intervention between an AI system and an individual is not limited to this legislation. The notion is widely discussed as a core ethical concern related to AI systems, and has been adopted in some corporate policies (see, e.g., https://www.bosch.com/stories/ethical-guidelines-for-artificial-intelligence/).

19 Exec. Order § 2(d).

20 NIST, U.S. LEADERSHIP IN AI: A PLAN FOR FEDERAL ENGAGEMENT IN DEVELOPING TECHNICAL STANDARDS AND RELATED TOOLS (Aug. 2019), available at https://www.nist.gov/system/files/documents/2019/08/10/ai_standards_fedengagement_plan_9aug2019.pdf (hereinafter “NIST Plan”).

21 NIST Plan, p. 9.

22 See https://shop.cta.tech/collections/standards/artificial-intelligence.

23 The issues surrounding trust in AI systems will be explored in a future article in this series.

24 Exec. Order § 2(e). 

25 See, e.g., Steven A. Wartman & C. Donald Combs, Reimagining Medical Education in the Age of AI, 21 AMA J. ETHICS 146 (Feb. 2019), available at https://journalofethics.ama-assn.org/sites/journalofethics.ama-assn.org/files/2019-01/medu11902_1.pdf.

26 Exec. Order § 2(f).

27 Id. § 1(a). 

28 Exec. Order § 1(b).

29 FOOD & DRUG ADMIN., PROPOSED REGULATORY FRAMEWORK FOR MODIFICATIONS TO ARTIFICIAL INTELLIGENCE/MACHINE LEARNING (AI/ML)-BASED SOFTWARE AS A MEDICAL DEVICE (SaMD), available at https://www.fda.gov/files/medical%20devices/published/US-FDA-Artificial-Intelligence-and-Machine-Learning-Discussion-Paper.pdf.

30 Exec. Order § 1(c). 

31 Id. § 1(d). 

32 Exec. Order § 1(e).
