Local Interpretable Model-agnostic Explanations (LIME) is another technique commonly used in financial studies to improve the explainability of AI-based decision-making. LIME approximates the predictions of a black-box AI model with a simpler, interpretable model, but only locally: it explains the complex model's behavior around a specific prediction (i.e., a single data point) rather than across the entire dataset.
For example, if a deep neural network predicts that a loan application is risky, LIME attempts to explain why it made that prediction. To do this, LIME perturbs the input data (creating many slightly modified copies of it) and observes how the complex model's predictions change across these samples. It then fits an interpretable model (such as a linear model or decision tree) to approximate the behavior of the complex model in the local neighborhood of the data point, weighting each perturbed sample by its proximity to the original input. This simpler model highlights which features of the input had the most influence on the original prediction, allowing users to understand why the complex model made a specific decision.
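To make this concrete, the following is a minimal sketch of that workflow, assuming the open-source `lime` package is installed; the loan-risk classifier, training data, and feature names are synthetic placeholders rather than a real credit model.

```python
# Minimal sketch of the loan-risk example, assuming the open-source
# `lime` package (pip install lime). The model, features, and data are
# synthetic placeholders, not a real credit model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))                  # hypothetical applicants
y_train = (X_train[:, 1] - 0.5 * X_train[:, 2] > 0).astype(int)  # 1 = risky

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["income", "credit_score", "loan_amount"],
    class_names=["safe", "risky"],
    mode="classification",
)

# LIME perturbs this one application, queries the black-box model on the
# perturbed copies, and fits a locally weighted linear surrogate to them.
explanation = explainer.explain_instance(
    X_train[0], black_box.predict_proba, num_features=3
)
print(explanation.as_list())  # per-feature contributions near this point
```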
Teplova et al. (2023) used LIME to enhance the interpretability of complex machine learning models that analyze the impact of various sentiment indices on NFT sales. Applying LIME, the authors identified several sentiment indices as significant predictors of NFT sales volume, and noted that the influence of these indices varied across time periods. The paper underscores the advantage of using LIME with models that rely on features that are not easily interpretable on their own.
Take, for instance, raw sentiment scores derived from social media posts. These scores, often represented as numerical values ranging from -1 (very negative sentiment) to +1 (very positive sentiment), provide minimal context on their own. For example, a score of 0.8 might suggest positive sentiment, but without additional information, it’s unclear how this sentiment correlates with actual sales volume. Does a score of 0.8 correspond to a substantial increase in sales, or is the relationship weak? LIME can help answer such questions.
Suppose we want to interpret the prediction for Day 2, where the sentiment score is -0.5. LIME generates a set of perturbed samples by creating variations around Day 2's original features, including the sentiment score (e.g., -0.6, -0.45, and -0.4). The complex model then predicts sales volume for these perturbed samples. LIME subsequently fits a simple, interpretable model (such as linear regression) to these predictions. This surrogate model approximates the behavior of the complex model in the local vicinity of Day 2, allowing us to understand how changes in sentiment influence predictions.
The surrogate model might reveal, for example, that the negative sentiment score of -0.5 contributes to a predicted decrease in sales volume. It could show that for every 0.1 decrease in sentiment score, sales volume decreases by $500. In this way, LIME helps translate complex model predictions into understandable insights, linking sentiment scores more directly to sales outcomes.
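The mechanics behind such a statement can be sketched in a few lines of Python. The black-box model below is invented purely for illustration; only the perturb, predict, weight, and fit steps mirror what LIME actually does.

```python
# From-scratch sketch of the Day 2 example. The "black box" below stands in
# for the complex sales-volume model; its exact form is invented so the
# surrogate has something nonlinear to approximate locally.
import numpy as np
from sklearn.linear_model import LinearRegression

def black_box(sentiment):
    # Hypothetical nonlinear mapping from sentiment score to sales volume ($)
    return 10_000 + 5_000 * sentiment + 1_500 * np.sin(3 * sentiment)

x0 = -0.5                                         # Day 2's sentiment score
rng = np.random.default_rng(42)
perturbed = x0 + rng.normal(scale=0.1, size=200)  # e.g. -0.6, -0.45, -0.4, ...
preds = black_box(perturbed)                      # query the complex model

# Weight samples by proximity to Day 2, as LIME does (RBF kernel)
weights = np.exp(-((perturbed - x0) ** 2) / (2 * 0.1**2))

surrogate = LinearRegression().fit(perturbed.reshape(-1, 1), preds,
                                   sample_weight=weights)

slope = surrogate.coef_[0]
# Local effect on the order of $500 per 0.1 change, as in the text above
print(f"Sales change per 0.1 increase in sentiment: ${slope * 0.1:,.0f}")
```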
Case-Based Reasoning (CBR) is an AI approach that solves new problems based on solutions to similar past problems. Instead of trying to apply general rules, CBR works by recalling and adapting previous cases that are relevant to the new situation.
A simplified example of CBR involves the following steps. When faced with a new problem, the system first searches its database, or "case library," for past cases that closely resemble the current situation (retrieve). Once one or more similar cases are identified, their solutions are reused, either as is or with slight modifications to address the specifics of the new problem (reuse). If necessary, the proposed solution is tested or evaluated, and adjustments are made to resolve any issues that arise (revise). Finally, after successfully solving the problem, the system adds the new case to its library for future reference (retain).
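A minimal sketch of this cycle might look as follows; the cases, attributes, and similarity measure are all hypothetical.

```python
# Minimal sketch of the retrieve-reuse-revise-retain cycle. The cases,
# attributes, and similarity measure are all hypothetical.
from dataclasses import dataclass, field

@dataclass
class Case:
    features: dict   # e.g. {"income": 30_000, "credit_score": 550}
    solution: str    # e.g. "reject" or "approve"

@dataclass
class CaseLibrary:
    cases: list = field(default_factory=list)

    def retrieve(self, query: dict) -> Case:
        # Retrieve: most similar stored case (negative squared distance
        # over the attributes present in the query)
        def similarity(case: Case) -> float:
            return -sum((case.features[k] - query[k]) ** 2 for k in query)
        return max(self.cases, key=similarity)

    def solve(self, query: dict) -> str:
        best = self.retrieve(query)
        solution = best.solution                   # Reuse (revise if needed)
        self.cases.append(Case(query, solution))   # Retain the solved case
        return solution

library = CaseLibrary([
    Case({"income": 30_000, "credit_score": 550}, "reject"),
    Case({"income": 90_000, "credit_score": 760}, "approve"),
])
print(library.solve({"income": 85_000, "credit_score": 700}))  # approve
```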
Li et al. (2022) demonstrate the application of CBR in financial risk detection. In this context, each case represents a unique situation characterized by various features (or attributes) that describe its properties. For example, a case might represent a credit card default, bank churn, or financial distress, with attributes such as income, credit score, loan amount, and repayment history.
The CBR approach uses these cases to identify similarities and make predictions or decisions based on past experiences. When a new query case is presented, the system retrieves similar cases from its database to inform its decision-making process, leveraging the outcomes of previous cases to provide insights or recommendations for the current situation. This reliance on historical cases is a fundamental aspect of the CBR methodology, enabling a more interpretable, experience-based approach to problem-solving in financial risk management.
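The retrieval step might be sketched like this, assuming standardized numeric attributes and a nearest-neighbour search; the cases and outcomes are made up for illustration and are not from the paper's data.

```python
# Sketch of the retrieval step for a new query case, with hypothetical
# attributes. Features are standardized so income does not dominate the
# distance, and the k most similar past cases are returned.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

# Past cases: [income, credit_score, loan_amount], with observed outcomes
past_cases = np.array([
    [30_000, 550, 12_000],
    [90_000, 760, 20_000],
    [42_000, 610, 15_000],
    [75_000, 700, 10_000],
])
outcomes = np.array(["default", "non-default", "default", "non-default"])

scaler = StandardScaler().fit(past_cases)
index = NearestNeighbors(n_neighbors=2).fit(scaler.transform(past_cases))

query = np.array([[45_000, 620, 14_000]])        # new loan application
_, idx = index.kneighbors(scaler.transform(query))
print(outcomes[idx[0]])                          # ['default' 'default']
```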
The paper outlines three types of explanations. Firstly, results explanation based on similar cases involves presenting the most analogous past cases to the current query, allowing users to understand the rationale behind a decision by comparing it to previously resolved instances. Secondly, results explanation based on probability provides a quantitative assessment of the likelihood that the current case falls into a particular category, such as default or non-default, by weighting the outcomes of the most similar cases by their similarity to the query. Lastly, results explanation based on feature relevance highlights the importance of various attributes in the decision-making process, identifying which features significantly influenced the prediction outcome. Together, these explanations facilitate a clearer understanding of the model's predictions and bolster user confidence in the system's recommendations.
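As a rough illustration of the second type, the probability can be computed as a similarity-weighted vote over the retrieved cases; the neighbours and similarity scores below are invented, and Li et al. (2022) define their own similarity measure.

```python
# Sketch of the probability-based explanation: a similarity-weighted vote
# over the retrieved cases. Neighbours and similarity scores are made up;
# Li et al. (2022) define their own similarity measure.
neighbours = [(0.95, "default"), (0.90, "default"), (0.80, "non-default")]

total = sum(sim for sim, _ in neighbours)
p_default = sum(sim for sim, out in neighbours if out == "default") / total
print(f"P(default) ≈ {p_default:.2f}")  # 1.85 / 2.65 ≈ 0.70
```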
References:
Li, W., Paraschiv, F., & Sermpinis, G. (2022). A data-driven explainable case-based reasoning approach for financial risk detection. Quantitative Finance, 22(12), 2257–2274. https://doi.org/10.1080/14697688.2022.2118071
Teplova, T., Kurkin, A., & Baklanova, V. (2023). Investor sentiment and the NFT market: prediction and interpretation of daily NFT sales volume. Annals of Operations Research. https://doi.org/10.1007/s10479-023-05693-9