Machines With Moral Compasses - The Ethics Of 'Driverless' Cars
Thursday, November 30, 2017

Artificial intelligence (AI) is increasingly permeating our everyday lives, from voice-controlled assistants (such as Amazon Echo, Siri and Google Home) to the machine-learning recommendations we receive when shopping online or watching Netflix. As AI becomes increasingly autonomous and accessible, leaders in technology are calling for greater scrutiny and regulatory oversight to protect society from its potential harms. Regulatory oversight of AI will need to embed ethical, moral and legal values both in the design process and in the algorithms these systems use. Tech giants are becoming increasingly aware of the need to incorporate ethical principles in the development of AI. Recently, Amazon, Facebook, McKinsey, Google’s DeepMind division, IBM and Microsoft founded a new organisation, the Partnership on Artificial Intelligence to Benefit People and Society, to establish best practices in ethical AI.

The ‘driverless car’ is often at the centre of the debate about machines needing a ‘moral compass’. It has been predicted that by 2030, almost every car on Australian roads will be driverless. Although driverless cars may reduce accidents caused by human error (which currently accounts for 90% of traffic accidents), there will undoubtedly be scenarios in which a driverless car must make a decision in the event of an unavoidable accident. These scenarios may require the car to be programmed to act as a ‘moral agent’ and choose between two bad outcomes. For example, suppose two children run onto the road in front of the vehicle, but swerving would send it into a tree, killing the driver and passenger. What would be the moral decision for the vehicle in that circumstance? Should it be based on the preservation of human life, minimising the number of deaths? Should it take the victims’ ages into account and favour preserving the life of a child?
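To make the dilemma concrete, it can be framed as a toy outcome-scoring rule, as in the illustrative Python sketch below. Every name and weight in it (`Outcome`, `choose_action`, `child_weight`) is a hypothetical assumption introduced purely for exposition; it does not describe how any real autonomous vehicle is programmed.

```python
# Illustrative only: a toy utilitarian decision rule for an unavoidable
# accident. All names and weights are hypothetical assumptions, not a
# description of any real autonomous-vehicle software.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str          # e.g. "continue" or "swerve"
    deaths: int          # predicted fatalities for this action
    child_deaths: int    # how many of those fatalities are children

def choose_action(outcomes, child_weight=1.0):
    """Pick the action with the lowest weighted death toll.

    child_weight > 1.0 encodes the (contested) view that a child's
    life should count for more than an adult's in the tally.
    """
    def cost(o):
        adult_deaths = o.deaths - o.child_deaths
        return adult_deaths + child_weight * o.child_deaths
    return min(outcomes, key=cost)

# The scenario from the text: continuing hits the two children;
# swerving kills the driver and passenger (two adults).
scenario = [
    Outcome(action="continue", deaths=2, child_deaths=2),
    Outcome(action="swerve", deaths=2, child_deaths=0),
]

print(choose_action(scenario).action)                    # equal weights tie -> "continue" (first minimum)
print(choose_action(scenario, child_weight=2.0).action)  # weighting children -> "swerve"
```

Even this toy version shows why the question is hard: with equal weights the two outcomes tie, and the ‘answer’ is determined entirely by whichever values are baked into the weighting.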

Given that ethics depend on socio-cultural context, the choice between preserving the lives of the driver and passengers and preserving the life of a pedestrian would depend on whose moral compass the driverless car was modelled on. Recently, research led by Professor Jean-Francois Bonnefon at the Toulouse School of Economics posed similar scenarios to workers on Amazon Mechanical Turk (a crowdsourcing internet marketplace) to gauge public opinion on what would be ethical and moral in such situations.

The results of the study showed that participants generally believe the moral decision in an unavoidable road accident is the one that minimises the death toll. However, the study also controversially (but perhaps predictably) showed that participants were far less willing to accept that outcome when it was their own life being sacrificed. This led Bonnefon and his team to note that if driverless cars were programmed with ‘moral algorithms’ that preserved the lives of pedestrians at the expense of the driver, consumers would be less likely to buy an autonomous vehicle. The report also notes that some decisions may be too morally ambiguous for the software to resolve.
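The tension Bonnefon’s team identifies can be shown in the same toy framing: a single hypothetical occupant-protection parameter changes which action the rule selects. As before, the names and weights below are illustrative assumptions, not anything proposed in the study.

```python
# Illustrative only: the same toy dilemma with a hypothetical
# occupant-protection bias, showing the conflict the study highlights.
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    pedestrian_deaths: int
    occupant_deaths: int

def choose_action(outcomes, occupant_weight=1.0):
    """Pick the action with the lowest weighted toll.

    occupant_weight > 1.0 makes occupant deaths count for more -- the
    self-protective setting survey respondents preferred for their own
    cars, even while endorsing impartial minimisation in the abstract.
    """
    def cost(o):
        return o.pedestrian_deaths + occupant_weight * o.occupant_deaths
    return min(outcomes, key=cost)

# Two pedestrians ahead; swerving kills the car's single occupant.
scenario = [
    Outcome("continue", pedestrian_deaths=2, occupant_deaths=0),
    Outcome("swerve", pedestrian_deaths=0, occupant_deaths=1),
]

print(choose_action(scenario).action)                       # impartial: "swerve" (minimise toll)
print(choose_action(scenario, occupant_weight=3.0).action)  # self-protective: "continue"
```

A manufacturer tuning such a parameter upward would be selling exactly the self-protective car that respondents said they would prefer to buy, while shifting the risk onto pedestrians.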

For the time being, moral decision-making in autonomous vehicles must lie with the person ‘in control’ of the autonomous vehicle at the relevant time.  As we continue to advance toward fully autonomous driving, we expect the issue of moral programming to play an increasingly critical role.
