The US Patent & Trademark Office (PTO) issued new guidance on the use of artificial intelligence (AI) tools in practice before the PTO. The guidance is designed to promote responsible use of AI tools and to offer suggestions for protecting practitioners and clients from misuse or harm arising from that use. It comes on the heels of a recent memorandum to the Trademark Trial and Appeal Board and the Patent Trial and Appeal Board concerning the applicability of existing regulations to potential misuse of AI, as well as recent guidance addressing AI in the context of inventorship.
Patent practitioners are increasingly using AI-based systems and tools to research prior art, automate patent application review, assist with claim charting and document review, and gain insight into examiner behavior. The PTO's support for AI use is reflected in patent examiners' own reliance on several AI-enabled tools for conducting prior art searches. However, because AI tools are imperfect, their use exposes patent practitioners to potential errors, misuse, or misconduct. The PTO's new guidance therefore discusses the legal and ethical implications of AI use in the patent system and provides guidelines for mitigating the risks these tools present.
The guidance discusses the PTO's existing rules and policies that practitioners should consider when using AI tools, including the duty of candor, the signature requirement and corresponding certifications, confidentiality of information, foreign filing licenses and export regulations, electronic systems policies, and duties owed to clients. The guidance also addresses how these rules and policies apply to the use of AI tools in drafting documents, submissions, and correspondence with the PTO; filing documents with the PTO; accessing PTO IT systems; maintaining confidentiality and national security; and avoiding fraud and intentional misconduct.
AI tools have been developed for the intellectual property industry to facilitate drafting technical specifications, generating responses to PTO office actions, writing and responding to briefs, and drafting patent claims. While the use of these tools is not prohibited, nor is there any obligation to disclose their use unless specifically requested, the guidance emphasizes the need for patent practitioners to carefully review any AI outputs generated before signing off on any documents or statements made to the PTO. For example, when using AI tools, practitioners should make a reasonable inquiry to confirm that all facts presented have evidentiary support, that all citations to case law and other references are accurately presented, and that all arguments are legally warranted. Any errors or omissions generated by AI in the document must be corrected. Likewise, trademark and Board submissions generated or assisted by AI must be reviewed to ensure that all facts and statements are accurate and have evidentiary support.
While AI tools can be used to assist with or automate the preparation and filing of documents with the PTO, care must be taken to ensure that no PTO rules or policies are violated and that documents are reviewed and signed by a natural person, not by an AI tool or other non-natural person. AI systems and tools are not considered "users" for purposes of filing or accessing documents through the PTO's electronic filing system. Accordingly, AI systems or tools may not obtain a PTO.gov account, nor may practitioners sponsor AI tools as support staff to obtain an account for filing purposes.
Under the duty of candor and good faith, practitioners must carefully review and disclose all material information obtained through AI, including prior art references submitted in Information Disclosure Statements, and must disclose the use of AI tools itself where it is material to patentability as defined in 37 CFR 1.56(b). Practitioners also should confirm inventorship and ensure that every patent claim reflects a significant contribution by a human inventor and is not generated purely by AI.
The guidance further cautions that the use of AI for certain tasks (such as performing prior art searches and generating specification drafts, claims or arguments) can result in the inadvertent dissemination of confidential client information, including highly sensitive technical information, to third parties. Therefore, it is imperative to prevent training of AI models with confidential information or provision of such information to third parties in breach of the practitioner’s obligation to clients. The guidance notes that “practitioners must be mindful of the possibility that AI tools may utilize servers located outside the United States, raising the likelihood that any data entered into such tools may be exported outside of the United States, potentially in violation of existing export administration and national security regulations or secrecy orders.”