Introducing the New SmartExpert: Self-driving Car "Drivers"
Friday, April 15, 2016

The National Highway Traffic Safety Administration has deemed the artificial intelligence that controls Google’s self-driving car a qualified “driver” under federal regulations. So, if a computer can drive, must a computer testify as to whether this new “driver” was negligent? It sounds laughable: “Do you, computer, swear to tell the truth?” But with so many new potential avenues of litigation opening up as a result of “machines at the wheel,” it made us wonder just how smart the new expert will have to be.

With its heart beating in Silicon Valley and its position well established as a proponent of computer invention and progress, California was, surprisingly, the first state to suggest we need a human looking over the computer’s shoulder. That is essentially what the California Department of Motor Vehicles’ draft regulations for self-driving vehicles propose: that self-driving cars have a specially licensed driver prepared to take the wheel at all times. After years spent developing and testing self-driving cars in its hometown of Mountain View, California, Google may now be looking elsewhere for testing and production. The rule proposed by the California DMV would make Google’s car impossible in the state. Why? Because humans cannot drive the Google self-driving car. It has no steering wheel and no pedals; it could not let a human take over even if it wanted to. Does that thought make you pause?

It apparently didn’t give the National Highway Traffic Safety Administration any cause for concern: the agency approved Google’s self-driving software, finding the artificial intelligence program could be considered a bona fide “driver” under federal regulations. In essence, Google’s driving and you are simply a passenger. If you would hesitate to get in, consider the words of Chris Urmson, lead engineer on Google’s self-driving car program: “We need to be careful about the assumption that having a person behind the wheel will make the technology safer.” Urmson is essentially saying that computers are safer drivers than humans. When you consider the number of automobile accident-related deaths in the United States alone, he may be right. And if he is right, wouldn’t artificial intelligences sophisticated enough to drive a car more safely than humans be able to learn to do other things better as well? Couldn’t they drive a forklift, perform surgery, manage a billion-dollar hedge fund? If that is where things are heading, who will testify to the applicable standards of behavior for these machines? In the hedge fund example, will it be a former hedge fund manager with years of experience handling large, bundled securities, or a software developer with years of experience programming artificial intelligence?

Who do you think will be able to testify in cases where an artificially intelligent machine plays a role? Liability at the hands of a machine is bound to emerge, and someone will have to speak to the standard of judgment, discretion, and care applicable to machines. Maybe Google will be allowed to text while driving. Who’s to say?
