In the latest installment of its Transformers series, the Washington Post hosted key industry and thought leaders to discuss the current and future implications of artificial intelligence (“AI”).
A number of themes emerged from the two-hour discussion. First, all panelists agreed that AI will be a useful tool to amplify and extend human skills, though most also noted that humans remain a necessary part of the equation. Second, panelists acknowledged the need to combat bias in AI, and some described their ongoing work to mitigate it. Many stated their goal was ensuring that individuals with diverse experiences and backgrounds are involved at all stages of AI development; others similarly emphasized the need to test and validate systems not only in laboratory settings, but in the environments and communities in which the AI will be deployed. Third, the question of whether and how to regulate this field remains open, as some panelists favor self-regulation while others favor government-imposed transparency and oversight.
Regulation of AI
The first panel discussed Congress’s role in addressing potential regulatory issues as AI technologies develop. As members of the Senate Committee on Commerce, Science, and Transportation, Senators Maria Cantwell (D-Washington) and Todd Young (R-Indiana) focus on AI and other technological developments. The Senators highlighted the “Future of AI Act,” which would establish a federal advisory committee within the Department of Commerce. The Act would help shape legislative and regulatory policy for an AI-driven economy, focusing specifically on fostering competitiveness and innovation, protecting privacy rights, supporting unbiased AI, and addressing workforce implications.
The Senators also compared the current debate and potential concerns over AI to those surrounding other recent technologies. Just five years ago, for instance, drones were a hotly debated topic, but today drones have become widely accepted and are being applied in all manner of contexts. Finally, both Senators stressed the need to address potential workforce changes from AI through education. However, they emphasized different education solutions: increased federal investment versus more streamlined and modernized approaches to education.
Partnering with AI
The next panel featured a discussion about how technology companies can develop AI for beneficial uses. Peggy Johnson, Microsoft Executive Vice President for Business Development, stressed the need for companies to establish sets of governing principles as they develop new technologies, as Microsoft is doing with AI. If a company keeps empathy and dignity at the core of its engineering and development plans, she argued, it will expand the possibilities for beneficial uses of AI technologies. Additionally, engineering teams should better reflect the diversity of the population, to ensure that the code they write and the AI systems they build benefit the whole population, not just subsets of it. Looking ahead, Johnson said she believes AI will prove most beneficial in data-rich areas of the economy, such as healthcare, financial services, and climate change. In such sectors, AI can augment human intelligence by helping people sift through large data sets.
The Future of Work: AI and Automation
In the third panel, representatives from Salesforce, NASA, and Thomson Reuters explored how AI is likely to affect the job market and how key sectors will evolve to meet the new realities of these technological advances. The panelists discussed changing models of thinking about AI: from the old perception that AI could map the human brain onto a microchip to the recognition now that AI will be most useful for streamlining time spent on tedious tasks. The panelists agreed that AI is not a mechanism for replacing humans in the workplace, but instead serves to augment intelligence and extend human capacity. They pointed to past technological advancements to support this point, citing the Industrial Revolution, which, while shrinking some industries, opened the door to new job opportunities in other areas. The same, they argued, will be the case with AI, producing net gains in the economy. Finally, the panel discussed that an important part of this advancement will be rethinking job training. The panelists suggested that our education system should be remodeled to continuously build new skills, amplify learning, and rethink the way we approach jobs and tasks as technology transforms day-to-day life.
Maximizing the Benefits of AI
In another panel, Victoria Espinel, President and CEO of BSA, discussed how artificial intelligence can be used as a force for good. She addressed both the importance of training data in reducing bias and the possibility of employing artificial intelligence in ways that themselves reduce bias and broaden inclusion. Along with echoing previous points about diversifying the engineering workforce, she suggested introducing coding and computer science as early as possible into our education systems. In addition, software could be used to bridge geographic divides between those seeking technological training and those seeking such highly trained individuals.
Even once checks against bias are built into the system, our uses of AI can also be tailored to reduce societal bias and broaden inclusion in the industry. For example, companies have begun using AI to aid individuals on the autism spectrum in recognizing facial cues. By using AI, these individuals can improve their engagement with individuals and society and thus increase their access to careers and opportunities.
Cracking the Code: Using AI Responsibly
The next panel featured Dario Gil, Vice President of AI for IBM, in a discussion of the role businesses can play in ensuring that AI technology is fair, ethical, transparent, and unbiased. Gil proposed approaching the development of AI technologies the way scientists approach their research. Engineers should identify the purpose of their work from the outset, build trust by creating reliable procedures and explaining how they use data to build in internal accountability, and validate claims about their products through demonstrable, published results. Through these mechanisms, businesses in the industry will self-regulate, as businesses that fail to engender this trust will likewise fail to foster consumer adoption of AI.
AI and Ethics: People, Robots and Society
The final panel, consisting of representatives from non-profit institutions engaged in AI research and policy, discussed building transparency and accountability into systems to ensure that the benefits of AI accrue to society writ large. The panelists shared a general concern that, without intervention, private industry will be allowed to define the rules that impact society. They agreed that the AI community needs to play a bigger role in governance and defining norms now.
One aspect of this, raised by Meredith Whittaker of NYU’s AI Now Institute, was building transparency. She expressed concern that the dearth of information about so-called “black box systems” creates situations where AI makes determinations about access and opportunity without people even knowing a system played a role. These systems are built into the back end of core decision-making, and there is currently no way of accounting for where they have been implemented or at what scale. To be truly conscious of the impact these technologies have, she believes the AI community needs to establish a clear understanding that these products must be tested and validated before being rolled out in high-stakes domains.
Melanie Ramey also contributed to this post.