The use of artificial intelligence (“AI”) notetakers, while beneficial for productivity, raises significant concerns around privacy, security, and compliance. These risks are at the center of Brewer v. Otter.ai, a lawsuit filed on August 15, 2025, which claims Otter.ai unlawfully recorded conversations on popular video conferencing platforms without all participants’ consent and used the data to train its machine-learning models. This case reinforces the need for registered investment advisers (“RIAs”) to implement robust safeguards when using AI notetaking tools to transcribe and summarize internal discussions and client meetings.
Currently, no regulations specifically address the use of AI or AI notetaking tools. However, RIAs have a fiduciary duty to act in the best interests of their clients and to comply with all regulations that the use of AI may implicate. RIAs should perform thorough vendor due diligence when selecting and implementing AI tools (particularly with respect to the AI company’s cybersecurity policies and practices), take measures to safeguard client privacy and confidentiality, and ensure compliance with applicable recordkeeping requirements.
Due Diligence
Before incorporating an AI notetaking tool into their practice, RIAs are strongly encouraged to ask AI companies how data is stored and retained and whether data or meeting content will be used to train AI models. An RIA’s fiduciary duty to safeguard information extends beyond the end of the client meeting. RIAs should take measures to help ensure that client information will not be repurposed or used to train the company’s models.
Client Privacy
RIAs should remember that while they may choose to use AI notetaking tools, it is the client who must ultimately consent to being recorded. Some jurisdictions require all parties to consent before a conversation is recorded, while others require the consent of only one party. Regardless of what a jurisdiction’s rules require, recording a conversation to be transcribed or summarized by an AI tool carries both legal and professional implications, and RIAs should obtain client consent before using AI notetaking tools during client conversations.
Recordkeeping and Accuracy
RIAs must maintain true, accurate, and current books and records. When AI-generated transcripts and summaries are converted into written materials, regulators will likely consider those materials to be communications related to investment advice. Transcripts should be retained in their original form, along with any notes identifying inaccuracies in the AI tool’s output. RIAs should exercise caution and rely only on information they reasonably believe to be accurate, safeguarding both their compliance and the integrity of their client communications. After confirming the accuracy of the transcript and summary, RIAs should retain both as part of their required books and records.
AI Policies and Training
If an RIA chooses to utilize an AI tool, it is strongly recommended that the firm establish a formal AI policy. The policy should identify the approved AI platforms and outline the permitted uses of AI, ensuring that staff understand AI is to be employed only for the specific tasks designated by the firm. Additionally, RIAs should train their staff on AI best practices and require employees to review and acknowledge the AI usage policy.