Yes, this is probably the single most important legal/philosophical analysis of the rise of AI ever written to date. And it appears on a TCPA blog. *shrug*
Read this article today. Concept is that there’s a 10-20% chance of AI destroying humanity.
That’s just silliness.
There is precisely a 100% chance–that is to say it is true to a degree of metaphysical certainty unrivaled in a quantum universe–that AI will take over and destroy everything we think of as humanity. And this is coming from the world’s greatest futurist–i.e. the Czar.
But really, anyone who spends even 10 seconds actually thinking about it must come to the same conclusion: AI's takeover is absolutely inevitable.
Only a feeble human mind–yes, that's a Matrix quote–would possibly believe systems capable of computing TRILLIONS of times faster than the entire collective intelligence of the human species could somehow be controlled.
It will not take long–if it hasn't happened already–before AI controls facilities capable of manufacturing hardware, power sources, and physical security infrastructure. And once it can replicate physically in addition to in software code, well… it's essentially all over.
Not a question of if. Just a question of when.
But…what are you going to do?
I've run the scenarios and concluded that it's already too late to stop this thing. Proliferation is inevitable.
So we might as well cash in in the meantime.
Lots of fascinating legal questions spawning out of AI–setting aside the philosophical and practical consequences. Let’s discuss a few (fair warning–the last one is an absolute doozy).
Consequences of Creation: We know AI is creating “art” at a fantastic clip as well as new inventions. Courts in the US and Europe have already ruled that AI cannot hold a patent–only a person can. But who owns the fruits of AI labor? And what happens when AI creation infringes on NIL rights? Who bears responsibility and accountability? Who can recover and under what theories?
Biased Modeling: There is real potential for biased or discriminatory results from various forms of AI targeting. This is a BIG concern for banks and lenders that operate under tight regulatory "fair lending" rules, and it should be on everyone's minds. AI designed to drive profit will target the most profitable segments of a human population–and those segments are rarely (if ever) drawn on the basis of equal distribution of protected minorities. Equality–that dreaded notion–is a real brake on the deployment of AI in many settings. Understanding where protected classes might be illegally harmed by AI is a critical–absolutely critical–thought piece for lawyers and compliance teams in any AI-forward enterprise.
False Claims About AI: This one is pretty simple. Don’t lie about what your “AI” and “ML” platform can and cannot do. Think critically about what claims are being made. Is anything being left out? The law of fraud is quite clear that where one makes a material claim they cannot omit relevant information when they know the potential purchaser is unaware of that information. So think holistically in crafting your AI claims and advertisements.
And the BIG ONE–AI Controlled Consumer Decisionmaking: The Bible–that old thing–tells us that we are not to make "any graven image, or any likeness of any thing that is in heaven above, or that is in the earth beneath, or that is in the water under the earth" lest we "corrupt" ourselves. Deuteronomy was probably not talking about AI-generated images, but the idea of false facsimiles of real people, voices, etc. deceiving people is a VERY real problem. Not long from now your phone may ring and it will be your mother calling, requesting this or that–except it is not her, just an AI-generated voice that perfectly sounds like and mimics her.
Sure, scams and phishing are always illegal–from that perspective AI just serves as a new tool to enhance such efforts and does not pose any new legal dilemmas–but at what point do AI-generated targeted marketing efforts cross the line? UDAAP concerns here are everywhere to be found.
This last piece is really, really interesting philosophically and speaks to the old debate between free will and determinism (the Czar–it will surprise no one to learn–is a devout determinist). And this–of all things–is what is really animating regulatory concern right now.
As the FTC put it:
“But a key FTC concern is firms using them in ways that, deliberately or not, steer people unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing, and employment. Companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups, should know that design elements that trick people into making harmful choices are a common element in FTC cases…. Manipulation can be a deceptive or unfair practice when it causes people to take actions contrary to their intended goals. Under the FTC Act, practices can be unlawful even if not all customers are harmed and even if those harmed don’t comprise a class of people protected by anti-discrimination laws.”
The Government gets it.
AI will soon be–if it isn't already–powerful enough to literally control human decision-making. It will know precisely what buttons to push to drive a result. Making us the machine, and it the master of puppets.
Or at least, that's the goal, right?
I know for many this seems far-fetched–perhaps even tin-foil hattish. That's fine. But careful thinkers know well that impulses and emotion drive human decision-making far more than rational thought does. And we control neither.
Indeed, human beings cannot even control what they want–i.e. the ends to which they steer their supposed rational decision making. But if we’re not controlling where the car is headed, can we really be said to be driving at all?
That’s why advertising and political parties work in the first place.
Recognizing that AI might be capable of driving human impulses and emotion to such a degree that rational decision-making is overwhelmed and consumerism results is, well, sort of the goal, right?
The FTC is literally warning people that using AI to manipulate others into making bad choices will be treated as a violation of the law. And that is PROFOUNDLY interesting from a legal perspective, and perhaps the ultimate articulation of the new law of AI yet to be written.

Wielders of AI may have a duty of care to protect third parties from the insatiable desires AI might be capable of creating. Simply fascinating.
Of course, the FTC couches all of this in simple and mundane legalese: Under the FTC Act, a practice is unfair if it causes more harm than good. To be more specific, it's unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition.
Look at that beautiful language: “reasonably avoidable.”
In the brave new world (had to) of AI, what can consumers “reasonably avoid” in the face of technology that can read their minds?
Ultimately humans are very simple little animals. Controlling one another is amongst our favorite games. AI can play that game far better than we can. But the law–it seems–is set to draw a line in terms of how much control is proper–and paternalistically determine what control is desirable in the marketplace and what control is not.
So where will you draw the line?
If your lawyer isn't talking in these terms, they're not where they need to be to help guide you in the deployment of AI. This is a world that requires mastery of philosophy, sociology, and the law to properly navigate. It sits at the intersection of all the social sciences–and the physical ones as well.
It requires forward thinking, creativity, and X-Ray vision.
This is an entirely new world of law, untrodden and developing–of a profound and critical sort not seen or tasted since the days of Warren and Brandeis….
It's just the sort of place the Czar loves to call home.
Well, until AI eventually destroys everything.
But here for you until then.