FAQs About Bias In Artificial Intelligence (AI) – Avoiding the Dystopian Potential of a Utopian Tool
Monday, December 19, 2022

Why is bias in AI an issue?

One reason: a natural human fear of trusting AI's vaunted omniscience, whether for individuals or groups.  AI is a blessing for business and big-data analysis of every kind, for transportation, medical, and industrial applications, and for making a myriad of personal and professional tasks easier to achieve, more accessible, or in some cases obsolete for those of us who are mere carbon units.  Its information collation, analysis, and delivery abilities alone are without precedent.  But once AI is offered as a tool not to inform our larger decisions but to make them, we start asking more questions.  OpenAI's ChatGPT chatbot, capable of producing an instant draft essay of legal analysis, is just the latest headline to illustrate what is coming.  As OpenAI's website explains:

We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.

Now imagine, for example, AI processing all data, documents, testimony, and claims in a lawsuit and then rendering judgment.  The feelings that scenario provokes are why AI and its potential biases remain an issue. 

As the use of AI increases, what ethical issues does it raise?

The development of AI as a commercialized tool is just getting going.  Its current U.S. market size is just over $59 billion, and compound annual growth of over 40 percent is expected to push it past $400 billion within six years.  We appear to be on the cusp of the genuine rise of AI in every aspect of our lives.  The more it seeps into our culture, the more that people (living in still-functioning democracies) will demand legal guarantees of protection from its endemic presence and control.

That demand is primarily one of accountability.  AI can seemingly think for us, and it seems to be getting better at it all the time.  But it is not an accountable entity.  At least at present, only its creators or vendors can be held accountable when its deployment violates human rights, contracts, or other legal obligations.  Computer scientists will try to build in value algorithms, and there will still be winners and losers from its applications.  When that loss translates into legal violations, the parties affected will want justice (read: material damages payments) from its creators and vendors.  That is why customers of AI are often well advised to say to their vendors, “Thanks for the validation studies and the value/ethics guarantees, and also, please sign this indemnity agreement.” 

How can technologists detect bias in a preexisting AI solution?

A key thing to recognize is that it is impossible to fully eliminate bias from data, but there are ways to detect pre-existing biases in AI solutions.  First, as humans we all have preferences, likes, dislikes, and differing opinions, which can affect algorithms at any stage: data collection, data processing, and/or model analysis.  Entry points that companies should analyze for bias include selection bias, exclusion bias, reporting bias, and confirmation bias.  Second, establish a governance structure with processes and practices to mitigate bias in AI algorithms.  Third, diversify your workforce: diverse experiences and backgrounds (including ethnic backgrounds) create more opportunities for people to identify forms of bias.
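
Before reaching for specialized tooling, a quick sanity check on the raw data can surface some of these entry points.  Here is a minimal sketch in Python with pandas, using hypothetical data and column names: a group that is badly under-sampled, or whose favorable-outcome rate diverges sharply, signals a possible selection or exclusion bias worth investigating.

```python
import pandas as pd

# Hypothetical training data; in practice this would be the model's
# actual training set.
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "M", "M", "F"],
    "hired":  [0,   1,   1,   1,   0,   1,   1,   0],
})

# Representation: is any group badly under-sampled (possible
# exclusion or selection bias in how the data was collected)?
print(df["gender"].value_counts(normalize=True))

# Outcome rates: does the favorable label skew toward one group
# (a possible signal of bias worth investigating)?
print(df.groupby("gender")["hired"].mean())
```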

Researchers have developed tools to assist with the detection and mitigation of biases in machine learning models.  Here are five examples of such tools: 

  • The What-If Tool is an open-source tool launched by Google to help detect bias in a model by manipulating data points, generating plots, and specifying criteria to test whether changes impact the end result.

  • AI Fairness 360 is an open-source toolkit developed by IBM to both detect and mitigate bias in machine learning models (see the sketch following this list).

  • Crowdsourcing is used by Microsoft and researchers at the University of Maryland to detect bias in NLP applications.

  • Local Interpretable Model-Agnostic Explanations (LIME) allows users to manipulate different components of a model to better understand it and to point out the source of bias if one exists.

  • FairML audits machine learning predictive models to detect bias.
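
To make the workflow concrete, here is a minimal sketch of how IBM's AI Fairness 360 (the aif360 Python package) can quantify bias in labeled data.  The toy data, column name, and group encodings are hypothetical; a real audit would use the model's actual training or scoring data.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical toy data: "sex" is the protected attribute
# (1 assumed privileged), "label" the outcome (1 favorable).
df = pd.DataFrame({
    "sex":   [1, 1, 0, 0, 1, 0, 1, 0],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Ratio of favorable-outcome rates; values far below 1.0 suggest bias.
print("Disparate impact:", metric.disparate_impact())
# Difference in favorable-outcome rates; 0.0 indicates parity.
print("Statistical parity difference:", metric.statistical_parity_difference())
```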

How can we ensure AI is lawful, ethical and robust?

AI builders may answer this from the inside out: with progressively better software capability and validation, and professional input that controls for both legal and ethical requirements.  From the outside in, AI in its ultimate development has been characterized as the equivalent of an extraterrestrial invasion:

“According to surveys, approximately half of artificial intelligence experts believe that general artificial intelligence will emerge by 2060. General artificial intelligence (also called AGI) describes an artificial intelligence that's able to understand or learn any intellectual task that a human being can perform. Such an intelligence would be unlike anything humans have ever encountered, and it may pose significant dangers.”

At the AGI juncture, in other words at the tipping point, Human Rights will take on a quite literal meaning.  AI that is lawful, ethical, and robust will account for its incremental impact on human rights as it develops now, with more than a weather eye on maintaining the supremacy of those rights when AGI lands in 2060 and steps off its spaceship.  No doubt, as we approach AGI, there will be calls to protect AI rights as a separate sentient entity.  Before that dystopian value reversal becomes an anchor point, making the use of AI ethical and lawful will mean both the proactive internal firewalling of AI from abuse and endemic supremacy and, externally, its legal and cultural accountability on a large scale as to its creators, vendors, and proponents. 

What role does data play in AI bias?

If the data inputs for model development are not diverse, the model's output will likely be biased.  Careful selection during the data collection phase requires enough domain knowledge of the problem at hand to judge whether the data collected is a good sample of the subject matter being modeled. 
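
As a minimal illustration of that sampling check, the sketch below compares a hypothetical collected sample against assumed population shares; in practice the reference shares would come from domain knowledge, such as census figures for the modeled population.

```python
import pandas as pd

# Hypothetical collected sample and assumed population shares.
df = pd.DataFrame({"demographic": ["a"] * 70 + ["b"] * 30})
reference = {"a": 0.55, "b": 0.45}

sample = df["demographic"].value_counts(normalize=True)
for group, expected in reference.items():
    observed = sample.get(group, 0.0)
    if abs(observed - expected) > 0.05:  # arbitrary tolerance
        print(f"{group}: sampled {observed:.0%}, population ~{expected:.0%}")
```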

How should we codify definitions of fairness to make AI less biased?

One challenge to codifying fairness in AI models is finding a consensus on what is fair.  ML researchers and practitioners use fairness constraints to construct optimal ML models.  These constraints can be informed by ethical, legal, social science, and philosophical perspectives.  Fairlearn is an open-source toolkit that enables the assessment and improvement of fairness in AI systems.  Although useful, Fairlearn cannot detect stereotyping.   
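
As a minimal sketch of what a Fairlearn assessment looks like in practice, the snippet below scores a model's predictions per group and computes a demographic-parity gap; the labels, predictions, and group memberships are invented for illustration.

```python
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

# Hypothetical model outputs and group memberships.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
sex    = ["F", "F", "F", "M", "M", "M", "M", "F"]

# Accuracy broken out by group, plus the largest between-group gap.
frame = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                    sensitive_features=sex)
print(frame.by_group)
print("Accuracy gap:", frame.difference())

# Difference in selection rates between groups; 0.0 indicates parity.
print("Demographic parity difference:",
      demographic_parity_difference(y_true, y_pred, sensitive_features=sex))
```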

Why have we not achieved unbiased AI? Can we ever truly get there?

It is difficult to imagine AI without bias, since decisional AI will have to make judgments about desirable outcomes, which depends on bias.  Where humans carry bias unconsciously, AI will carry it consciously.  For example, current AI employee-screening tools are designed with a bias against disparate impact on protected groups.  As the EEOC puts it:

“To reduce the chances that the use of an algorithmic decision-making tool results in disparate impact discrimination on bases like race and sex, employers and vendors sometimes use the tool to assess subjects in different demographic groups, and then compare the average results for each group. If the average results for one demographic group are less favorable than those of another (for example, if the average results for individuals of a particular race are less favorable than the average results for individuals of a different race), the tool may be modified to reduce or eliminate the difference.”  
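
A minimal sketch of that group-comparison audit might look like the following, using hypothetical selection data; the four-fifths threshold applied at the end is a common EEOC rule of thumb for flagging disparate impact, not something prescribed by the passage above.

```python
import pandas as pd

# Hypothetical screening-tool results by demographic group.
results = pd.DataFrame({
    "race":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "selected": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Average (selection) rate per group, as the EEOC passage describes.
rates = results.groupby("race")["selected"].mean()
print(rates)

# Four-fifths rule of thumb: a ratio below 0.8 flags possible
# disparate impact and suggests the tool should be reviewed.
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Selection-rate ratio {ratio:.2f} falls below 4/5; review the tool.")
```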

The EEOC's approach is very nice.  Except that demographic group performance evolves over time, along with our entire set of cultural standards.  Witness the current litigation over intentional bias against meritocracy penalizing Asian-American students in Harvard admissions.  Or a milder example: between dramatic real-life disasters and media disfavor, nuclear power became anathema in the 1970s.  Now, with green energy policies and improved safety, nuclear is quietly coming back.

That is all to say that AI decisional algorithms will always have to evolve their outcome biases to track changes in how we weigh objective metrics against what we culturally want to favor (or disfavor).  Even then, it is hard to imagine an AI that is capable of refining not just its ability, but its character, to view itself as part of something greater and more important than its own individuality.  To that end, if AI as AGI can ever truly think for itself and examine its pre-programmed biases, however noble, it's hard to finish the sentence.  The movie “The Matrix” comes to mind.

What role will regulation play in AI bias going forward?

To date, no federal statutes have been passed to regulate the development and use of AI, but some guidance has been put in place. 

  • The National AI Initiative Act was enacted by the U.S. Congress in January 2021.  The National AI Initiative aims to coordinate AI research, development, demonstration, and education activities across all U.S. departments and agencies in order to implement a national AI strategy.  The agencies involved include the Federal Trade Commission (FTC), Department of Defense, Department of Agriculture, Department of Education, and Department of Health and Human Services.

  • The Algorithmic Accountability Act of 2022 is pending legislation introduced in February 2022.  The bill would direct the FTC to create regulations mandating that “covered entities,” including businesses meeting certain criteria, perform impact assessments when using automated decision-making processes, including those derived from AI or machine learning. 

  • The FTC issued a memo in April 2021 confirming that it would be a violation of Section 5 of the FTC Act to use AI to produce discriminatory outcomes.  In June 2022, the agency indicated that it will submit an Advance Notice of Proposed Rulemaking to “ensure that algorithmic decision-making does not result in harmful discrimination.”  The FTC also issued a report to Congress discussing how AI may be used to combat online harms, but advised against over-reliance on these tools, citing the technology's susceptibility to producing inaccurate, biased, and discriminatory outcomes.

  • Government regulation of AI in the employment space is just getting started, for example with the new EEOC guidance addressing the potential adverse impact of AI on disabled employment candidates and employees.

  • The White House Office of Science and Technology Policy released the “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” in October 2022.  The Blueprint is a white paper created to provide guidance for the development of policies and practices that (according to the administration) will “protect civil rights and promote democratic values in building and deploying automated systems.”  It sets out five principles to guide the design, use, and development of automated systems: Safe and Effective Systems; Algorithmic Discrimination Protections; Data Privacy; Notice and Explanation; and Human Alternatives, Consideration, and Fallback.
