Fiduciary Duties in the Age of AI: Broad Business Implications and Financial Sector Specifics

Artificial Intelligence (AI) is rapidly reshaping industries and business practices, necessitating a reassessment of how organizations govern themselves, make decisions, and manage risk. For fiduciaries, who are legally obligated to act in the best interests of their principals or beneficiaries, AI presents a double-edged sword. On one hand, AI can enhance decision-making, optimize operations, and provide deeper insights through advanced data analytics. On the other hand, it introduces new challenges related to transparency, accountability, and ethical considerations. This article examines the broad business implications of AI for fiduciaries, first across the general business world and then with a closer look at the financial sector.

According to Kamalnath (2020), AI holds significant promise in enhancing the intellectual and psychological independence of board members from management and prevailing majority opinions. For example, board members might hesitate to challenge their friends on the board or propose alternative courses of action due to groupthink. However, AI, when aiding in decision-making, is not influenced by such friendships or group dynamics. By providing directors with advanced tools to process vast amounts of data, AI helps address information overload and cognitive bias, enabling more informed decision-making. Strategically leveraging AI allows boards to substantially increase their efficiency and effectiveness, ultimately enhancing their capacity to fulfill fiduciary duties.

Cowger (2023) echoes Kamalnath (2020) in suggesting that AI can positively impact fiduciaries by providing tools for quickly analyzing large volumes of data, offering insights, and mitigating the influence of groupthink. Additionally, well-designed AI tools can help fiduciaries detect misconduct or legal violations. However, Cowger (2023) also highlights the challenges that AI brings to fiduciaries. For example, both neglecting to utilize AI and blindly deferring to AI decisions can constitute breaches of the duty of due care, which requires fiduciaries to gather all available material information and to consider all reasonably obtainable advice and counsel before making decisions. The paper further suggests that as AI continues to advance, the liability of fiduciaries who leverage AI in decision-making is likely to increase. One key concern is AI's limitations in handling conflicts of interest, which may expose fiduciaries to claims of breaching their duties, especially in scenarios involving divergent shareholder interests. Additionally, the divergence of AI decision-making from human values may pose risks of societal harm, prompting regulators to broaden fiduciary duties to cover not only shareholders but also other stakeholders. This expansion of duties could lead to increased liabilities for corporate fiduciaries.

In Cowger (2023) and other studies examining AI in the ethics and legal domains, the black-box nature of AI algorithms is discussed as a central factor contributing to the complexities of responsible AI use. The black-box nature of AI algorithms refers to the inherent opacity surrounding their decision-making processes, making it difficult to discern how they arrive at their outputs. Unlike traditional algorithms, which provide clear insight into the steps taken to reach a decision, AI algorithms often operate through complex layers of computations and patterns that are not easily interpretable by humans. This opacity poses significant challenges in understanding the factors influencing AI decisions, leading to concerns about transparency and accountability.

Beyond transparency and accountability, other commonly cited challenges include upholding human-centered values and fairness; ensuring robustness, security, and safety; and addressing bias. Each of these challenges has the potential to affect fiduciaries.


The Financial Sector

AI offers both benefits and challenges for fiduciaries in the financial sector. Among the benefits are the ability to analyze massive datasets, improved efficiency, and enhanced decision-making capabilities. The challenges are evident in the proposed rules addressing conflicts of interest arising from the use of AI by investment advisers and broker-dealers, which appeared on the SEC's rulemaking agenda released on June 13, 2023. These challenges can stem from various factors, some of which affect the business world broadly, while others specifically impact the financial sector. Here are some examples of how AI can affect fiduciaries in the financial sector:

Lack of Transparency and Accountability: Fiduciaries are required to act prudently and transparently in managing client assets. The lack of transparency in AI decision-making processes can make it challenging for fiduciaries to fulfill their duty of disclosure and provide clear explanations of investment strategies and outcomes to clients. Furthermore, AI systems can create significant accountability challenges, as their complex and opaque nature often makes it difficult to trace decision-making processes and assign responsibility for errors or biases.

Data Privacy and Security Concerns: Fiduciaries are obligated to uphold the confidentiality and security of client information. However, utilizing AI systems that depend on sensitive data raises concerns about data privacy and the risk of breaches. These breaches could compromise client confidentiality, directly impacting the fiduciary duty to safeguard client interests.

Overreliance on AI: Fiduciaries are obligated to exercise independent judgment and act in the best interests of their clients. However, overreliance on AI systems without sufficient human oversight can result in users performing worse on tasks than either the user or the AI would perform independently (Passi & Vorvoreanu, 2022), compromising the ability to make informed decisions that align with client needs and objectives. Furthermore, the susceptibility of AI systems to adversarial attacks amplifies the risks associated with excessive dependence on these technologies (Truby et al., 2020).

AI Bias: AI algorithms may exhibit biases for various reasons, leading to recommendations or decisions that are not in the best interests of clients. According to McGovern et al. (2024), these biases include human bias, data bias, statistical and computational bias, and systemic and structural bias. Human bias (e.g., confirmation bias, recency bias, and selection bias) involves cognitive biases introduced by individuals. Data bias stems from training datasets that reflect historical biases; in a lending context, for example, a dataset may favor applicants from certain socioeconomic backgrounds or geographic regions over equally qualified candidates from other groups. Statistical and computational bias arises from the statistical and computational processes during model training, including those introduced by the choice of algorithms, model architectures, or evaluation metrics. Systemic and structural bias refers to inherent biases present in the systems and structures within which AI operates.
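To make the data-bias category concrete, the minimal sketch below computes group-level approval rates and the gap between them (a simple statistical parity check) on a tiny, entirely hypothetical lending table. The column names, values, and choice of metric are illustrative assumptions, not anything drawn from McGovern et al. (2024).

```python
# Minimal sketch: quantifying one simple form of data bias in a
# hypothetical historical lending dataset (1 = approved, 0 = denied).
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group in the historical data.
rates = df.groupby("group")["approved"].mean()
print(rates)

# Statistical parity difference: the gap in approval rates between groups.
# A large gap in training data can be learned and reproduced by a model.
parity_gap = rates.max() - rates.min()
print(f"Statistical parity difference: {parity_gap:.2f}")
```

A check this simple will not catch every form of bias described above, but it illustrates why fiduciaries should ask what the training data looked like before relying on a model's recommendations.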

Effects Specific to the Financial Sector

Systemic Risk: The interconnected nature of financial markets means that the use of “black box” algorithms by multiple market participants can amplify systemic risks. One of our previous articles provides a detailed explanation of this systemic risk effect. Therefore, the use of AI carries risk implications not only for the users themselves but also for others in the financial market, potentially affecting every fiduciary’s ability to fulfill their duty.

Market Manipulation Risks: AI algorithms can be exploited to create sophisticated manipulation techniques that generate artificial price movements, false market perceptions, or distorted trading activities. The opaque nature of ‘black box’ algorithms makes detecting and preventing such manipulation more challenging (Azzutti et al., 2022). Moreover, a study by Mizuta (2020) found that an artificial intelligence agent, developed using a genetic algorithm in an artificial market simulation, identified market manipulation as an optimal investment strategy; the agent’s trades met the criteria for market manipulation, demonstrating that such systems can learn and execute manipulative strategies on their own. When AI-driven market manipulation occurs, fiduciaries may face challenges in fulfilling their duties: manipulative strategies executed by autonomous AI systems can distort market prices, mislead investors, and create artificial volatility, leading to potential financial losses for clients. To mitigate these risks, fiduciaries need to implement robust risk management practices, conduct thorough due diligence on AI algorithms and trading strategies, and stay informed about regulatory developments and best practices. Addressing the challenges posed by AI-driven market manipulation is essential for fiduciaries to uphold their duties and protect the interests of their clients and beneficiaries.



Empowering Fiduciaries: The Importance of Continuous Learning

In the age of AI, it is essential for fiduciaries to engage in continuous learning to effectively manage the rapid advancements in technology and address new challenges. As AI technologies continue to evolve, fiduciaries must stay abreast of developments to adapt their practices accordingly. Continuous learning enables fiduciaries to gain insights into recent scientific discussions, innovative solutions, and best practices, empowering them to make informed decisions and fulfill their duties effectively.

For example, understanding the sources of AI bias and potential solutions is crucial. Numerous research papers focus on this topic. Ziosi et al. (2024) discuss the use of Explainable Artificial Intelligence (XAI) feature attribution methods and counterfactual approaches, which are instrumental in understanding algorithmic bias. Feature attribution assigns a value to each feature, representing its contribution to a model’s outcome, using methods such as Shapley values. This helps identify which features are most influential in determining an outcome and can highlight how protected characteristics such as race and gender contribute to it. In contrast, counterfactual approaches focus on causal relationships by examining hypothetical scenarios in which certain input features are altered to see how the outcome changes; this reveals what would have to change for the outcome to be different. The paper critically examines both approaches, highlighting their strengths and limitations in uncovering and addressing algorithmic bias. Used with an awareness of those strengths and limitations, these methods can provide a comprehensive understanding of algorithmic bias, identify the factors driving biased outcomes, and inform targeted interventions to mitigate them.
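As a rough illustration of the two ideas above, the sketch below computes exact Shapley values for a tiny, hypothetical scoring model against a fixed baseline, and then runs a brute-force counterfactual search. The model, feature names, and target score are invented for the example; real workflows would typically use a dedicated XAI library and approximate Shapley values, since the exact computation grows exponentially with the number of features.

```python
# Minimal sketch: exact Shapley values and a brute-force counterfactual
# for a toy scoring model. Everything here is hypothetical.
from itertools import combinations
from math import factorial

FEATURES = ["income", "debt", "age"]                     # hypothetical features
BASELINE = {"income": 40_000, "debt": 20_000, "age": 35} # reference point

def model(x):
    """Toy linear score standing in for a black-box predictor."""
    return 0.5 * (x["income"] / 10_000) - 0.8 * (x["debt"] / 10_000) + 0.1 * x["age"]

def value(instance, subset):
    """Score with features in `subset` taken from the instance, others from the baseline."""
    x = {f: (instance[f] if f in subset else BASELINE[f]) for f in FEATURES}
    return model(x)

def shapley(instance, feature):
    """Exact Shapley value of one feature (feasible only for a handful of features)."""
    others = [f for f in FEATURES if f != feature]
    n, total = len(FEATURES), 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(instance, set(subset) | {feature}) - value(instance, set(subset)))
    return total

applicant = {"income": 55_000, "debt": 45_000, "age": 30}
for f in FEATURES:
    print(f"Shapley contribution of {f}: {shapley(applicant, f):+.2f}")

# Counterfactual-style question: how far must debt fall for the score to reach a target?
TARGET = 3.0
for debt in range(45_000, 0, -5_000):
    if model({**applicant, "debt": debt}) >= TARGET:
        print(f"Reducing debt to {debt} would reach the target score.")
        break
```

Because the toy model is linear, each Shapley value reduces to that feature's marginal contribution relative to the baseline, which makes the output easy to verify by hand; real models rarely offer that shortcut, which is precisely why attribution and counterfactual tools matter.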

Research aimed at mitigating overreliance on AI is also a crucial area to stay informed about. Vasconcelos et al. (2022) investigate how explanations from AI systems can alleviate this overreliance. These explanations aim to clarify the rationale behind the AI’s predictions or recommendations, demonstrating how they logically follow from given conditions or premises. Two types of explanations are discussed: highlight explanations, which visually depict the reasoning process from premise to conclusion, and written explanations, which spell out the logical arguments connecting the premise to the conclusion. Through a series of five experiments involving simulated AI in tasks of varying difficulty, the researchers find that explanations can reduce overreliance, particularly in more challenging tasks. Moreover, the understandability of an explanation plays a crucial role in determining its effectiveness: as explanations become easier to understand, overreliance on AI decreases.

Discussions of data privacy and security issues are prevalent in scientific research, providing valuable resources for continuous learning. For example, Alrubayyi et al. (2024) discuss several promising solutions to security threats at the intersection of AI and IoT in IoMT and IoET applications, such as differential privacy, homomorphic encryption, and federated learning. Differential privacy is a data privacy concept that aims to protect the privacy of individuals in a dataset while still allowing useful information to be extracted; by introducing controlled randomness into the computation, it ensures that the output of an AI algorithm does not reveal specific information about any individual data point. Homomorphic encryption allows computations to be performed on encrypted data without decrypting it first, so sensitive financial data can be processed in untrusted environments, such as cloud services, without risking data leakage. Federated learning enables the training of machine learning models across multiple decentralized devices or servers holding local data samples; the data itself is never exchanged, and only model updates are shared and aggregated. We encourage readers to refer to the original article for explanations of other solutions.
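For a sense of how one of these techniques works in practice, here is a minimal sketch of the Laplace mechanism for differential privacy applied to a hypothetical query over client account balances. The values, clipping bound, and privacy parameter epsilon are illustrative assumptions; a production system would rely on a vetted differential privacy library rather than hand-rolled noise.

```python
# Minimal sketch: releasing a mean with the Laplace mechanism.
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical client account balances.
balances = np.array([12_000, 48_500, 7_300, 95_000, 23_400], dtype=float)

# Clip each contribution so that one individual's influence on the query is bounded.
CLIP = 100_000.0
clipped = np.clip(balances, 0, CLIP)

def private_mean(values, epsilon):
    """Mean with Laplace noise calibrated to the query's sensitivity."""
    sensitivity = CLIP / len(values)  # max change in the mean from altering one record
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return values.mean() + noise

print("True mean:              ", clipped.mean())
print("DP mean (epsilon = 1.0):", private_mean(clipped, epsilon=1.0))
print("DP mean (epsilon = 0.1):", private_mean(clipped, epsilon=0.1))
```

The smaller the epsilon, the stronger the privacy guarantee and the noisier the released statistic, which is the central trade-off fiduciaries would need to weigh when such tools touch client data.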

While the examples above highlight important areas for fiduciaries to stay informed about, it’s essential to recognize that as AI technologies progress, these areas will also evolve. Continuous advancements in AI will introduce new challenges and opportunities, necessitating ongoing education and adaptation for fiduciaries. Staying abreast of the latest developments in AI ethics, regulatory changes, and emerging security threats is crucial. By doing so, fiduciaries can ensure they are equipped to manage client assets prudently, uphold transparency, and maintain accountability in an increasingly complex technological landscape.


References:

Alrubayyi, H., Alshareef, M. S., Nadeem, Z., Abdelmoniem, A. M., & Jaber, M. (2024). Security Threats and Promising Solutions Arising from the Intersection of AI and IoT: A Study of IoMT and IoET Applications. Future Internet, 16(3), 85. https://doi.org/10.3390/fi16030085

Azzutti, A., Ringe, W.-G., & Stiehl, H. S. (2022). Machine Learning, Market Manipulation and Collusion on Capital Markets: Why the “Black Box” Matters. University of Pennsylvania Journal of International Law, 43(1). European Banking Institute Working Paper Series 2021, No. 84. https://ssrn.com/abstract=3788872 or http://dx.doi.org/10.2139/ssrn.3788872

Cowger, A. R. Jr. (2023). Corporate Fiduciary Duty in the Age of Algorithms. Case Western Reserve Journal of Law, Technology & the Internet, 14, 138. Retrieved from https://scholarlycommons.law.case.edu/jolti/vol14/iss2/1/

Kamalnath, A. (2020). The Perennial Quest for Board Independence: Artificial Intelligence to the Rescue? Albany Law Review, 83(1), 43–60.

McGovern, A., Bostrom, A., McGraw, M., Chase, R. J., Gagne II, D. J., Ebert-Uphoff, I., Musgrave, K. D., & Schumacher, A. (2024). Identifying and Categorizing Bias in AI/ML for Earth Sciences. Bulletin of the American Meteorological Society, 105(3).

Mizuta, T. (2020). Can an AI perform market manipulation at its own discretion? A genetic algorithm learns in an artificial market simulation. 2020 IEEE Symposium Series on Computational Intelligence (SSCI), Canberra, ACT, Australia, 407–412. https://doi.org/10.1109/SSCI47803.2020.9308349

Truby, J., Brown, R., & Dahdal, A. (2020). Banking on AI: Mandating a proactive approach to AI regulation in the financial sector. Law and Financial Markets Review, 14(2), 110–120. https://doi.org/10.1080/17521440.2020.1760454

Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M., & Krishna, R. (2022). Explanations can reduce overreliance on AI systems during decision-making. arXiv. https://arxiv.org/abs/2212.06823

Ziosi, M., Watson, D., & Floridi, L. (2024). A genealogical approach to algorithmic bias. Minds and Machines, 34, 9. https://doi.org/10.1007/s11023-024-09672-2
