The rapid rise of AI in advertising, marketing, and other consumer-facing applications has caused the FTC to continue to take notice and issue guidance. For example, the FTC is concerned about false or unsubstantiated claims about an AI product’s efficacy. It has issued AI-related guidance in the past. The following is some recent FTC guidance to consider when referencing AI in your advertising. This guidance is not necessarily new, but the fact that it is being reiterated should be a signal that the FTC continues to focus on this area and that enforcement actions may be forthcoming. In fact, the recent guidance states: “AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.”
One of the key takeaways is not to overpromise what your algorithm or AI-based tool can deliver. Some of the finer points include the following.
Are you exaggerating what your AI product can do? Or even claiming it can do something beyond the current capability of any AI or automated technology? For example, we’re not yet living in the realm of science fiction, where computers can generally make trustworthy predictions of human behavior. Your performance claims would be deceptive if they lack scientific support or if they apply only to certain types of users or under certain conditions.
Are you promising that your AI product does something better than a non-AI product? It’s not uncommon for advertisers to say that some new-fangled technology makes their product better – perhaps to justify a higher price or influence labor decisions. You need adequate proof for that kind of comparative claim, too, and if such proof is impossible to get, then don’t make the claim.
Are you aware of the risks? You need to know about the reasonably foreseeable risks and impact of your AI product before putting it on the market. If something goes wrong – maybe it fails or yields biased results – you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a “black box” you can’t understand or didn’t know how to test.
Does the product actually use AI at all? If you think you can get away with baseless claims that your product is AI-enabled, think again. In an investigation, FTC technologists and others can look under the hood and analyze other materials to see if what’s inside matches up with your claims. Before labeling your product as AI-powered, note also that merely using an AI tool in the development process is not the same as a product having AI in it.
As a reminder, in some of its previous AI guidance, the FTC has indicated that it has decades of experience enforcing three laws important to developers and users of AI:
- Section 5 of the FTC Act. The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of – for example – racially biased algorithms.
- Fair Credit Reporting Act. The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits.
- Equal Credit Opportunity Act. The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.
The FTC has cautioned that as companies launch into the new world of artificial intelligence, they should keep their practices grounded in established FTC consumer protection principles. This includes using AI truthfully, fairly, and equitably. Among other things, it has said:
Start with the right foundation. With its mysterious jargon (think: “machine learning,” “neural networks,” and “deep learning”) and enormous data-crunching power, AI can seem almost magical. But there’s nothing mystical about the right starting point for AI: a solid foundation. If a data set is missing information from particular populations, using that data to build an AI model may yield results that are unfair or inequitable to legally protected groups. From the start, think about ways to improve your data set, design your model to account for data gaps, and – in light of any shortcomings – limit where or how you use the model.
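To make the “data gaps” point concrete, the following is a minimal sketch of one way a team might screen a training set for underrepresented groups before building a model. The column names, reference shares, and tolerance threshold are hypothetical placeholders for illustration; they are not an FTC-prescribed method, and any real review should be tailored to the model and the population it will affect.

```python
# Illustrative sketch only: "race" as the demographic column, the reference
# shares, and the 50% tolerance are hypothetical assumptions, not FTC guidance.
import pandas as pd

def flag_representation_gaps(df: pd.DataFrame, column: str,
                             reference_shares: dict, tolerance: float = 0.5):
    """Flag groups whose share of the training data falls well below a
    reference population share (e.g., figures for the market the model serves)."""
    observed = df[column].value_counts(normalize=True)
    gaps = {}
    for group, expected in reference_shares.items():
        actual = observed.get(group, 0.0)
        # Flag a group that appears at less than `tolerance` of its expected
        # share (by default, under half), or that is missing entirely.
        if actual < expected * tolerance:
            gaps[group] = {"expected": expected, "observed": actual}
    return gaps

# Hypothetical training data in which group "B" is badly underrepresented.
training_data = pd.DataFrame({"race": ["A", "A", "A", "A", "B", "A", "A", "A"]})
print(flag_representation_gaps(training_data, "race",
                               reference_shares={"A": 0.7, "B": 0.3}))
```

A check like this does not make a data set adequate on its own, but it illustrates the kind of documented, repeatable review that supports the “solid foundation” the guidance describes.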
Watch out for discriminatory outcomes. Every year, the FTC holds PrivacyCon, a showcase for cutting-edge developments in privacy, data security, and artificial intelligence. During PrivacyCon 2020, researchers presented work showing that algorithms developed for benign purposes like healthcare resource allocation and advertising actually resulted in racial bias. How can you reduce the risk of your company becoming the example of a business whose well-intentioned algorithm perpetuates racial inequity? It’s essential to test your algorithm – both before you use it and periodically after that – to make sure that it doesn’t discriminate on the basis of race, gender, or other protected class.
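As one concrete form such testing might take, here is a minimal sketch that compares selection rates across groups and computes an adverse-impact ratio, using the “four-fifths” rule of thumb as one common heuristic. The field names and data are hypothetical, and the 0.8 threshold is a conventional screening signal rather than an FTC standard; periodic re-testing on live outcomes would use the same logic on production data.

```python
# Illustrative sketch only: pre- and post-deployment check of selection rates
# by protected group. Group labels, outcome column, and the ~0.8 "four-fifths"
# threshold are assumptions for illustration, not a legal test.
import pandas as pd

def adverse_impact_ratios(outcomes: pd.DataFrame, group_col: str,
                          selected_col: str) -> pd.Series:
    """Return each group's selection rate divided by the highest group's rate.
    Ratios below roughly 0.8 are commonly treated as a signal to investigate."""
    rates = outcomes.groupby(group_col)[selected_col].mean()
    return rates / rates.max()

# Hypothetical model decisions (1 = favorable outcome, 0 = unfavorable).
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})
ratios = adverse_impact_ratios(decisions, "group", "selected")
print(ratios)  # group B's ratio of ~0.33 would warrant further investigation
```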
Embrace transparency and independence. Who discovered the racial bias in the healthcare algorithm described at PrivacyCon 2020 and later published in Science? Independent researchers spotted it by examining data provided by a large academic hospital. In other words, it was due to the transparency of that hospital and the independence of the researchers that the bias came to light. As your company develops and uses AI, think about ways to embrace transparency and independence – for example, by using transparency frameworks and independent standards, by conducting and publishing the results of independent audits, and by opening your data or source code to outside inspection.
Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence. In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver. For example, let’s say an AI developer tells clients that its product will provide “100% unbiased hiring decisions,” but the algorithm was built with data that lacked racial or gender diversity. The result may be deception, discrimination – and an FTC law enforcement action.
Tell the truth about how you use data. In our guidance on AI last year, we advised businesses to be careful about how they get the data that powers their model. We noted the FTC’s complaint against a social media company, which alleged that the social media company misled consumers by telling them they could opt in to the company’s facial recognition algorithm, when in fact it was using their photos by default. The FTC’s recent action against an app developer reinforces that point. According to the complaint, the app developer used photos uploaded by app users to train its facial recognition algorithm. The FTC alleged that the company deceived users about their ability to control the app’s facial recognition feature and made misrepresentations about users’ ability to delete their photos and videos upon account deactivation. To deter future violations, the proposed order requires the company to delete not only the ill-gotten data, but also the facial recognition models or algorithms developed with users’ photos or videos.
Do more good than harm. To put it in the simplest terms, under the FTC Act, a practice is unfair if it causes more harm than good. Let’s say your algorithm will allow a company to target consumers most interested in buying their product. Seems like a straightforward benefit, right? But let’s say the model pinpoints those consumers by considering race, color, religion, and sex – and the result is digital redlining (similar to the Department of Housing and Urban Development’s case in 2019). If your model causes more harm than good – that is, in Section 5 parlance, if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition – the FTC can challenge the use of that model as unfair.
Hold yourself accountable – or be ready for the FTC to do it for you. As we’ve noted, it’s important to hold yourself accountable for your algorithm’s performance. Our recommendations for transparency and independence can help you do just that. But keep in mind that if you don’t hold yourself accountable, the FTC may do it for you. For example, if your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and ECOA. Whether caused by a biased algorithm or by human misconduct of the more prosaic variety, the FTC takes allegations of credit discrimination very seriously, as its recent action against an auto dealer demonstrates.