Earlier this week, frequent CIPAWorld participant Google lost a motion to dismiss based on the use of its Google Cloud Contact Center AI (“GCCCAI”) product. And this case (Ambriz v. Google, LLC, Case No. 23-cv-05437-RFL (N.D. Cal. Feb. 10, 2025)) raises some fascinating questions about the use of AI in contact centers and more generally.
The GCCCAI product (a prior motion to dismiss in this case was discussed on TCPAWorld) “offers a virtual agent for callers to interact with, and it can also support a human agent, including by: (i) sending the human agent the transcript of the initial interaction [with] the GCCCAI virtual agent, (ii) acting as a ‘session manager’ who provides the human agent with a real-time transcript, makes article suggestions, and provides step-by-step guidance and ‘smart replies’.” It does all of this without informing consumers that the call is being transcribed and analyzed.
Plaintiffs sued Google under Section 631(a) of the California Penal Code. This provision has three main prohibitions: (i) “intentional wiretapping”, (ii) “willfully attempting to learn the contents or meaning of a communication in transit”, and (iii) “attempting to use or communicate information obtained as a result of engaging in either of the two previous activities”. Plaintiffs claim Google violated prongs (i) and (ii).
Google’s best argument in this case is that it is not a third party to the communications, because only “unauthorized third-party listeners” can violate Section 631(a). Google argues that it is not a third party but merely a software provider, like a tape recorder.
The Court disagreed. Recognizing that there are essentially two distinct branches of cases on how to treat software-as-a-service providers, the Court proceeded to analyze whether the GCCCAI product is an “extension” of the parties or instead has the “capability” to use the data for its own purposes.
If software has “merely captured the user data and hosted it on its own servers where [one of the parties] could then use data by analyzing”, the software is generally considered an extension of the parties. It is therefore not a third party and would not violate CIPA. This is similar to the “tape recorder” example preferred by Google.
Alas, the Court viewed GCCCAI as “a third-party based on its capability to use user data to its benefit, regardless of whether or not it actually did so.” Applying this capability test, the Court found that the Plaintiffs had “adequately alleged that Google ‘has the capability to use the wiretapped data it collects…to improve its AI/ML models.’” Because Google’s own terms of use state that it may do so if its customer allows it to, the Court inferred that Google had the capability to do just that.
Google argued that it was contractually prohibited from doing so, but the Court found that those prohibitions do not change the fact that Google has the ability to do so. And that ability is the determining factor. Therefore, the motion to dismiss was denied.
A couple of interesting takeaways from this case:
- In a world where every company is throwing AI into its products, it is vital to understand not only WHAT a vendor is doing with your data, but also what it COULD do with it. The capability to improve its models may be enough, under this line of cases, to require additional consumer disclosures.
- We are all so used to “AI notetakers” on calls, whether Zoom, Teams, or, heaven forbid, Google Meet. What are those notetakers doing with your data? Should you be getting affirmative consent? Potentially. I think it’s only a matter of time before someone tests the waters on those notetakers under CIPA.
Spoiler alert: I have reviewed the Terms of Service of some major players in that space. Their Terms say they are going to use your data to train their models. Proceed with caution.