Valve has reportedly adopted a policy of rejecting games that use AI-generated content over infringement concerns. A developer posted on the “aigamedev” subreddit that, after submitting a game with some assets that were obviously AI-generated, he received a rejection notice from Valve stating:
While we strive to ship most titles submitted to us, we cannot ship games for which the developer does not have all of the necessary rights. After reviewing, we have identified intellectual property in [Game] which appears to belong to one or more third parties. In particular, [Game] contains art assets generated by artificial intelligence that appears to be relying on copyrighted material owned by third parties. As the legal ownership of such AI-generated art is unclear, we cannot ship your game while it contains these AI-generated assets, unless you can affirmatively confirm that you own the rights to all of the IP used in the data set that trained the AI to create the assets in your game. We are failing your build and will give you one (1) opportunity to remove all content that you do not have the rights to from your build. If you fail to remove all such content, we will not be able to ship your game on Steam, and this app will be banned.
According to the developer, even after he edited those assets “so there were no longer any obvious signs of AI,” his app was still rejected.
In a statement, a Valve spokesperson clarified that the company does not wish to discourage the use of AI in game development, and indeed sees great potential in it. However, it has concerns about the legal status of AI-generated art assets, because the AI that made them may have been trained on data, including copyrighted artworks, that does not belong to the creator of the game. “Stated plainly, our review process is a reflection of current copyright law and policies, not an added layer of our opinion,” Valve said. “As these laws and policies evolve over time, so will our process.”
Whether using copyrighted content to train AI models constitutes infringement is a question pending in several lawsuits, and it is just one of several unanswered legal questions surrounding AI. As a result, many companies are developing AI-related policies to address the array of legal issues that can arise with generative AI. I addressed some of these issues in How Generative AI Generates Legal Issues in the Games Industry. Among other things, these policies address training AI models, using AI tools to create content, and/or using AI code generators.
For companies training AI models, it is important to develop policies and procedures to ensure responsible use of AI and to mitigate potential liabilities. These include, among other things, policies and procedures on the collection and use of data to train the AI models, the assessment of risk and safety issues before releasing a new model or a product based on it, and the prevention of personal information from improperly being used in the training data, as well as safeguards against outputting personal information or false or disparaging information about a person. The FTC’s investigation of OpenAI, with its 17-page list of questions regarding OpenAI’s policies and procedures for training AI models, among other things, is a road map for companies that want to develop such policies and procedures. For more information on this, see The Need For Generative AI Development Policies and the FTC’s Investigative Demand to OpenAI. One of the important issues to consider when training AI is whether you may actually use the data you want to use. One aspect of this that surprises some companies is that even if you “own” certain data, you may not have the right to use it to train AI models. For more on this, see our prior article on Training AI Models – Just Because It’s “Your” Data Doesn’t Mean You Can Use It.
Companies using third-party AI tools also need to develop policies on employee use of such tools. In the game industry, and in others where generating and protecting creative content is important, companies need to assess whether they want to permit employees to use generative AI to create such content. Two factors to consider are whether there are infringement issues and the fact that AI-generated content is likely not protectable by copyright. If you use third-party contractors to develop the game, whether to write code or generate art, your policies need to cover their use as well. For more information on employee and contractor use policies, see AI Technology – Governance and Risk Management: Why Your Employee Policies and Third-Party Contracts Should be Updated.
If your developers are using AI code generators to assist in coding the game, there are several additional issues to consider, primarily because open source code is used in training such models. For more information on legal issues with AI code generators, see Solving Open Source Problems with AI Code Generators.
The power of generative AI is alluring, and many companies are experimenting with its use. Given the array of legal issues, it is imperative for companies to develop policies that set guardrails for such use. The best way to get started with such policies is to have a qualified attorney prepare a presentation on the relevant legal issues and then develop policies for the identified use cases based on company-specific criteria.