Explainable AI in Finance: Part 1 (with Infographics)

As artificial intelligence (AI) increasingly permeates various sectors, the importance of explainability has become a focal point in discussions around AI ethics. One major concern is that many AI models function as “black boxes.” These models, though capable of producing results, do not reveal their inner workings—similar to receiving an answer from a magic box without understanding how it was derived. Researchers are striving to develop explainable AI models, which are like transparent boxes where you can see the gears and mechanisms, making it possible to understand how the machine arrives at its conclusions. While black-box models are often highly effective, explainable AI is especially critical in fields like medicine and finance, where understanding the decision-making process is essential.

In finance, where AI is reshaping everything from investment strategies to risk assessments, the ability to demystify AI’s decision-making processes is essential for maintaining ethical standards, fostering trust among stakeholders, ensuring fair and justifiable financial outcomes, and upholding accountability.

Explainability of AI is particularly crucial in the context of AI-driven trading algorithms, as these models have the potential to amplify systemic risk in financial markets. Without a clear understanding of how these algorithms make decisions, their actions can lead to unintended consequences, such as exacerbating market volatility or triggering cascading failures during periods of stress. By enhancing the transparency of AI models, stakeholders can better assess and mitigate the risks associated with automated trading, ensuring that the deployment of AI technologies does not compromise market stability.

Drawing on examples from financial studies can clarify what explainable AI means in this context and show how such models strengthen the decision-making process. This article explores several of these examples.

Ohana et al. (2021) use 150 features grouped into risk aversion metrics, price indicators, financial metrics, macroeconomic indicators, technical indicators, and rates. These features are fed into an explainable AI model, a gradient-boosted decision tree (GBDT), and into less explainable alternatives such as deep LSTM and Deep Fully Connected (Deep FC) networks, to predict stock crashes. The explainable model outperforms its less explainable counterparts on metrics such as accuracy, precision, recall, F1-score, average precision, AUC, and AUC-PR.
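To make the comparison concrete, here is a minimal sketch of such a setup in scikit-learn. The synthetic feature matrix, the roughly 5% crash rate, and the hyperparameters are illustrative assumptions, not the authors' data or configuration.

```python
# Minimal sketch of a GBDT crash classifier and the metrics listed above.
# The synthetic data (150 features, ~5% crash labels) and hyperparameters
# are illustrative assumptions, not the paper's dataset or configuration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, average_precision_score, roc_auc_score)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 150))            # 150 features per observation
y = (rng.random(2000) < 0.05).astype(int)   # rare "crash" label, ~5% of days

# Note: a real study would split chronologically to avoid look-ahead bias.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = GradientBoostingClassifier(
    n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]   # predicted crash probability
pred = (proba >= 0.5).astype(int)
print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred, zero_division=0))
print("recall   :", recall_score(y_test, pred))
print("F1-score :", f1_score(y_test, pred))
print("AUC      :", roc_auc_score(y_test, proba))
print("AUC-PR   :", average_precision_score(y_test, proba))
```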

The paper identifies a range of important predictors of stock crashes, including the 250-day percent change in the S&P 500 price/earnings (P/E) ratio, US 2-year and 10-year yields, the 20-day percent change in the Nasdaq 100 price, and put/call ratios.
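As a sketch of how predictors like these can be computed from daily data with pandas: the column names below ("spx_pe", "ndx_close", and so on) are hypothetical placeholders, not the paper's actual data fields.

```python
# Sketch of computing a few of the predictors named above with pandas.
# Column names ("spx_pe", "ndx_close", ...) are hypothetical placeholders.
import pandas as pd

def build_features(df: pd.DataFrame) -> pd.DataFrame:
    """df: daily data indexed by date, with the hypothetical columns below."""
    out = pd.DataFrame(index=df.index)
    # 250-day percent change in the S&P 500 price/earnings ratio
    out["spx_pe_pct_250d"] = df["spx_pe"].pct_change(250)
    # 20-day percent change in the Nasdaq 100 price
    out["ndx_pct_20d"] = df["ndx_close"].pct_change(20)
    # Yield levels can enter directly; changes are another common choice
    out["us_2y_yield"] = df["us_2y_yield"]
    out["us_10y_yield"] = df["us_10y_yield"]
    out["put_call_ratio"] = df["put_call_ratio"]
    return out.dropna()
```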

What Makes the GBDT Model Explainable in This Context?

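In short, a GBDT is an ensemble of shallow decision trees, so each prediction decomposes into contributions from individual features: the model exposes global feature importances directly, and tree ensembles admit efficient, exact Shapley-value attributions showing which features pushed a given prediction toward or away from a crash signal. This is what allows a paper like this one to rank predictors such as those listed above. The sketch below uses the shap library on the hypothetical `model` and `X_test` objects from the first code example; it illustrates the general technique, not the authors' exact pipeline.

```python
# Sketch of two common ways to interrogate a fitted GBDT. `model` and
# `X_test` are the hypothetical objects from the first code example.
import shap  # pip install shap

# 1) Global importances: how much each feature contributes across all trees
importances = model.feature_importances_
top10 = importances.argsort()[::-1][:10]
print("top-10 feature indices by importance:", top10)

# 2) Per-prediction Shapley attributions (computed exactly for tree ensembles)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)  # which features drove crash risk, and in which direction
```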

To be continued in Part 2


References:

Ohana, J.-J., Ohana, S., Benhamou, E., Saltiel, D., & Guez, B. (2021). Explainable AI Models of Stock Crashes: A Machine-Learning Explanation of the Covid March 2020 Equity Meltdown. Université Paris-Dauphine Research Paper No. 3809308. Available at SSRN: https://ssrn.com/abstract=3809308 or http://dx.doi.org/10.2139/ssrn.3809308
