On July 9, 2024, the Federal Trade Commission issued a proposed order that would ban NGL Labs, LLC, and two of its co-founders from offering an anonymous messaging app called “NGL: ask me anything” (“NGL App”) to children under the age of 18. The Commission voted 5-0 to authorize the staff to file the complaint and proposed order in the U.S. District Court for the Central District of California.
Launched in 2021, the NGL App allows users to post a link on their social media pages inviting followers to click the link and send them messages anonymously. The link leads to the NGL App, where followers can answer prompts such as, “If you could change anything about me, what would it be?” The answers are then sent back to the user anonymously, with a premium subscription option available to reveal the identity of the sender.
The FTC, together with the Los Angeles District Attorney’s Office, claimed that the NGL defendants violated numerous laws. The complaint included various allegations, such as:
- Direct Marketing to Minors: Despite being aware of the potential for cyberbullying and other harms from similar services, the NGL defendants actively marketed the NGL App to children by instructing employees to contact high school students directly through social media platforms such as Instagram.
- False Claims About AI Moderation: The NGL defendants falsely claimed that they used AI technology to filter out harmful content such as cyberbullying; nevertheless, consumers submitted numerous complaints of harmful conduct, and one person attempted suicide because of the NGL App.
- Deceptive Practices: The NGL defendants sent fake, computer-generated messages that appeared to be from real people, tricking users into believing that the NGL App was being used by their social media contacts. When users purchased the premium subscription to find out the identity of a sender, they did not learn who sent the message; instead, they received only peripheral information, such as the time the message was sent, whether the sender used an Android or iPhone device, and the sender’s general location.
- Violation of COPPA: The Children’s Online Privacy Protection Act (“COPPA”) requires parental consent prior to collecting personal information from children under age 13. The NGL defendants, despite knowing that numerous children under age 13 used the app, did not obtain parental consent, did not honor parents’ requests to delete their children’s personal data, and retained children’s data longer than reasonably necessary to fulfill the purpose for which the data was collected.
- Violation of ROSCA: The NGL defendants violated the Restore Online Shoppers’ Confidence Act (“ROSCA”) by failing to disclose and obtain consumers’ consent for recurring charges. Registered users were unaware that the premium subscription was a recurring weekly charge.
FTC Chair Lina M. Khan denounced “NGL’s reckless disregard for kids’ safety,” and Los Angeles District Attorney George Gascón stated that, “We cannot tolerate . . . companies . . . profit[ing] at the expense of our children’s safety and well-being.”
The FTC’s proposed order would impose a number of requirements on the NGL defendants, including requiring them to:
- pay $4.5 million, which would be used to provide redress to consumers, and a $500,000 civil penalty;
- implement a system that would prevent users under age 18 from accessing the NGL App, and delete a user’s personal information unless the user is over age 13 or parental consent was obtained to retain the data;
- obtain express informed consumer consent before charging for a negative option feature (i.e., a provision of a contract under which the consumer’s silence or failure to take affirmative action to reject a good or service or to cancel the agreement is interpreted by the negative option seller or provider as acceptance, or continuing acceptance, of the offer), provide a simple mechanism to cancel any negative option feature, and send consumers reminders about negative option charges; and
- not send fake messages to users, not misrepresent the capabilities of any AI technology or its ability to filter out cyberbullying, and not misrepresent information related to negative option features.