AI in Family Offices: A Closer Look at Privacy, Confidentiality, and Fiduciary Responsibility
Monday, June 24, 2024
Artificial intelligence (AI) has revolutionized various industries, offering efficiency, accuracy, and convenience. In estate planning and family offices, AI technologies likewise promise greater efficiency and precision. However, AI comes with its own unique risks and challenges.

This is the first installment in a three-part series on AI for the ArentFox Schiff Family Office Newsletter. Here, we consider the risks associated with the use of AI in estate planning and family offices, focusing on concerns surrounding privacy, confidentiality, and fiduciary responsibility. Subsequent installments will consider the use of AI in wealth management and how AI may impact future generations of family office professionals.

Using AI in the Family Office Context

First, it is worth acknowledging why a family office might use AI in its practice. AI systems and large language models are advanced technologies capable of understanding and generating human-like text. They operate by processing vast amounts of data to identify patterns and make predictions. In the family office context, AI can streamline processes and enhance decision-making.[1] On the investment management side, AI can analyze financial records, asset values, and tax implications to identify patterns, facilitating better-informed asset allocation and distribution strategies. Its predictive analytics can forecast market trends and potential risks, helping family offices optimize investment strategies for long-term wealth preservation and succession planning.

AI may also be helpful in preparing estate planning documents. Given a set of information, AI can function as a quasi-search engine or prepare summaries of documents, and it can draft communications synthesizing complex topics. Overall, AI offers the potential to enhance efficiency, accuracy, and foresight in estate planning and family office services. That said, its use raises significant concerns.

Privacy and Confidentiality

Family offices deal with highly sensitive information, including financial data, investment strategies, family dynamics, and personal preferences. Sensitive client information can include intimate insight into a client’s estate plan (e.g., inconsistent treatment of various family members), succession plans, or trade secrets of a family business. The use of AI in managing and processing this information introduces a new dimension of risk to privacy and confidentiality.

AI systems, by their nature, require vast amounts of data to function effectively and to train their models. In a public AI model, information provided by one user may be used to generate responses to other users. For example, if a family office employee asked to summarize the 110-page trust instrument of John Smith, founder of ABC Corporation, uploads his estate plan to an AI tool, a subsequent user who asks about the future of ABC Corporation may be told that the company will be sold after John Smith’s death.

Inadequate data anonymization practices also exacerbate the privacy risks associated with AI. Even anonymized data can be de-anonymized through sophisticated techniques, potentially exposing individuals to identity theft, extortion, or other malicious activities. The indiscriminate collection and use of personal data by AI systems without robust anonymization protocols thus pose serious threats to client confidentiality.

Even if a client’s data is sufficiently anonymized, data used by AI is often stored in cloud-based systems, which are not impervious to breaches. Cybersecurity threats, such as hacking and data theft, pose a significant risk to the privacy of clients. The centralized storage of data in AI platforms increases the likelihood of large-scale data breaches. A breach could lead to the exposure of sensitive information, causing reputational damage and potential legal repercussions.

The best practice for family offices looking to adopt AI is to ensure that any tool under consideration has been vetted for security and confidentiality. As the AI landscape continues to evolve, family offices should work with trusted providers whose AI models carry reliable privacy policies.

Fiduciary Responsibility

Fiduciary responsibility is a cornerstone of estate planning and family offices. Professionals in these fields are obligated to act in the best interests of their clients (or beneficiaries) and to do so with care, diligence, and loyalty, duties that the use of AI could compromise. AI systems are designed to make decisions based on patterns and correlations in data, but they currently lack the human ability to understand context, exercise judgment, and consider ethical implications. Fundamentally, they lack empathy. This limitation could lead to decisions that, while ostensibly consistent with the data, are not in the best interests of the client (or beneficiaries).

Reliance on AI-driven algorithms for decision-making may compromise the fiduciary duty of care. While AI systems excel at processing vast datasets and identifying patterns, they are not immune to errors or biases inherent in the data they analyze. In addition, AI is designed to please the user and has infamously made up (or “hallucinated”) case law when asked legal research questions. In the financial context, inaccurate or biased algorithms could lead to suboptimal recommendations or decisions, undermining the fiduciary’s obligation to prudently manage assets. For instance, an AI system might recommend a particular investment based on historical data while failing to consider the client’s risk tolerance, ethical preferences, or long-term goals, all of which a human advisor would weigh.

AI is also prone to inaccuracy, oversimplification, and a lack of contextual understanding. AI is often recommended for summarizing difficult concepts and drafting client communications, and a classic summary question, such as “explain the rule against perpetuities in a simple manner,”[2] demonstrates these issues. When given that prompt, ChatGPT summarized the time when perpetuities periods usually expire as “around 21 years after the person who set up the arrangement has died.” As estate planners know, that is an oversimplification to the point of being inaccurate in most circumstances. When corrected, ChatGPT generated an improved explanation: “within a reasonable amount of time after certain people who were alive when the arrangement was made have passed away.” Even this summary would be inaccurate in certain contexts. The exchange highlights the limitations of AI and the importance of human review.

Given AI’s propensity for error, delegating decision-making authority to AI systems presumably would not absolve a fiduciary from legal responsibility in the case of errors or misconduct. As reliance on AI expands throughout professional life, fiduciaries may become more likely to use it to perform their duties, and unchecked reliance on AI could lead to errors for which clients and beneficiaries would seek to hold the fiduciary liable.

Lastly, the nature of AI’s algorithms can undermine fiduciary transparency and disclosure. Clients entrust fiduciaries with their financial affairs with the expectation of full transparency and informed decision-making. AI systems, however, often operate as “black boxes”: unlike traditional software, whose logic is transparent and auditable, AI operates through complex algorithms that are often proprietary and inscrutable. This black-box quality obscures the rationale behind recommendations or decisions, making it difficult to assess their validity or challenge their outcomes. It could also undermine the fiduciary’s duty to communicate openly and honestly with clients or beneficiaries, eroding trust and confidence in the fiduciary relationship.

Conclusion

While AI offers many potential benefits, its use in estate planning and family offices is not without risk. Privacy and confidentiality concerns, coupled with the impact on fiduciary responsibility, highlight the need for careful consideration and regulation.

It is crucial that professionals in these fields understand these risks and take steps to mitigate them. This could include implementing robust cybersecurity measures, counteracting the lack of transparency in AI decision-making processes, and above all, maintaining a human element in decision-making that involves the exercise of judgment.


[1] Our colleagues Angelica F. Russell-Johnson, Esq., and Sarah Kerr Severson, Esq., recently explored potential uses for AI and the importance of the human touch in the family office context in “The Reality of Artificial Intelligence in the Family Office Realm.”

[2] Per ChatGPT: “The rule against perpetuities is like a time limit for how long someone can control property or money after they pass it on. It’s a rule that says you can’t set up a situation where someone might control stuff forever and ever. Instead, there has to be a clear end point, usually around 21 years after the person who set up the arrangement has died. This way, property and money can’t be tied up indefinitely, and everyone gets a fair chance to use or own it eventually.”
