Product Liability Prevention for AI Product Designers—and Their Lawyers: Strategies to Unlock AI’s Potential in Health Care, Part 5
by: Yalonda T. Howze, Consumer Product Safety at Mintz Levin
Tuesday, November 13, 2018

As our use of AI technology becomes more frequent, interconnected, and integral to daily life, the liability exposure of AI product designers and manufacturers continues to escalate. There are more potential liability risks, including product liability risks, in our current environment than ever before. With AI technology embedded in interconnected software and hardware products, gone are the days when we could neatly separate data security and privacy from product liability exposure. The more we enable our “smart” products and devices to “do” stuff and interact with other products and devices that “do” stuff, the greater the likelihood that this new class of “actors,” so to speak, will do things that result in injury or harm to people and property.

While the application of product liability law in this area is in some ways untested, that untested state will not last long. We can expect that a product may be deemed “defective” if its safety features are deemed inadequate, and designers and manufacturers will continue to have a duty to, at a minimum, take reasonable precautions against known and knowable risks. While we cannot eliminate risks altogether, engineers and AI product designers can design more collaboratively and with a greater safety focus. Lawyers have a role to play here from a risk-minimization perspective: they can and should work closely with product design teams to make sure appropriate product safety troubleshooting is undertaken. The need for such proactive oversight is being discussed increasingly in board rooms across the country. Not surprisingly, motivated GCs are turning their attention to assessing the nature and scope of potential product exposure and exploring greater prevention in industries such as Tech, Social Media, Financial Services (such as FinTech), Healthcare, and beyond.

A Best Practice: Collaborative Team to Assess and Manage Risks in the AI Design Phase

From experience working with legal teams and product developers, and from conducting robust risk assessments with these players, a best practice for identifying and managing product liability risk is the implementation of a cross-functional risk assessment team to brainstorm and troubleshoot the risks posed by novel AI technology. Designers, engineers, and lawyers are unlikely, but highly effective, allies in a strategy to troubleshoot, identify, and minimize AI product liability exposure.

It is general practice to troubleshoot and address all known risks of breaches, failures, malfunctions, and vulnerabilities. In addition to such standard testing, however, designers of AI products should also try to uncover risks that are “knowable unknowns.” This class of risks includes problems that have not yet surfaced but that can nonetheless be discerned by the designer or manufacturer through ongoing, creative diligence. Because a given AI product may interconnect with another AI product and create opportunities for malicious interference and even physical harm, it is important that risk strategists think beyond the intended uses of a given device and consider creatively how the device can be used. Lawyers should engage in this scenario testing as well.

As a distinct step in the design diligence protocol, there should be actual collaboration between the legal and design functions, with an ultimate focus on end-user safety. To be effective in this process, lawyers must be nimble, creative, and focused on helping to create a framework for assessing the risks posed by novel technology and on helping to manage those risks (from a legal perspective) once they are identified. Given the nature of AI technology, there is no such thing as total risk avoidance. The goal is to surface, triage, and help manage the legal exposure posed by both the inherent and the unintended risks of AI technology.

Product designers and engineers involved in the process must be forthcoming, deliberative, and patient with the notion of exploring risk scenarios that go beyond performance and functionality testing. They should perceive risk scenario testing not as stifling creativity, but as a way to enable safer implementation of the creative, next-generation ideas they bring to the table.

From my experience working with outside counsel, in-house counsel, designers, and engineers, it has become apparent that safer product design and the minimization of product liability exposure in the AI space require a collaborative, systematic, and iterative protocol. Ultimately, this approach helps to better protect the user, the brand, and the company.

Read Part 1, Part 2, Part 3, and Part 4.
