Academy of Marketing Studies Journal (Print ISSN: 1095-6298; Online ISSN: 1528-2678)

Research Article: 2025 Vol: 29 Issue: 5

Explainable AI as a Strategic Asset: Enhancing Accountability in Financial Disputes for Regulatory and Market Gains

Ilavarasan Rajendran, Rushford Business School, Switzerland

Citation Information: Rajendran, I. (2025). Explainable AI as a strategic asset: Enhancing accountability in financial disputes for regulatory and market gains. Academy of Marketing Studies Journal, 29(5), 1-10.

Abstract

As the financial services industry incorporates artificial intelligence (AI), the technology is becoming an intrinsic component of everyday operations. However, the 'black box' nature of AI decision processes raises critical issues of accountability, fairness, and consumer trust. This opacity creates serious legal, ethical, and economic questions around responsibility, neutrality, fairness, and user trust. This research explores the potential of explainable AI (XAI), combined with marketing tools, as an accountability mechanism across multiple financial use cases. Drawing on recent regulation (e.g., the EU's GDPR and US financial legislation), it highlights the limitations of applying traditional regulatory approaches to AI systems. The paper examines how the design of a regulatory mandate (mandatory versus optional XAI) affects market welfare and shows that, in particular cases, nearly the same outcomes can be achieved with optional XAI, reducing compliance costs. Based on these results, the study suggests that integrating XAI into both compliance and marketing may enable financial institutions to gain competitive advantage in an era of greater transparency driven by increased use of AI. The study also examines the boundaries of existing regulation of AI in the marketplace and supports a multi-layered regulatory model to ensure AI accountability and reduce the risk of regulatory arbitrage. Such a model is important for maintaining human oversight and data quality. Ultimately, the study urges flexible approaches to analyzing how AI can harm markets and consumers.

Keywords

Explainable AI (XAI), Financial Regulation, Algorithmic Accountability, AI Transparency, Financial Disputes, Responsible AI, AI Governance, Regulatory Compliance, Black Box Problem, Trust in AI, Market Differentiation, Financial Technology.

Introduction

In the last decade, the deployment of artificial intelligence (AI) in decision-making systems has revolutionized numerous sectors, among which finance has been one of the major adopters. Because AI can process vast amounts of data and find patterns, financial institutions have found it invaluable for staying ahead in terms of efficiency, speed, and precision. AI also plays a significant role in regulatory compliance processes such as Know-Your-Customer (KYC) checks and transaction monitoring (Boyd et al., 2024).

Today, AI is increasingly deployed for tasks such as risk assessment, fraud detection, credit scoring, algorithmic trading, portfolio management, and customer service automation. The global AI-in-fintech market was valued at $38.36 billion in 2024 and is expected to reach $190.33 billion by 2030, a CAGR of 30.6% (Report TC9214, 2025).
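The cited growth figures are internally consistent; as a quick check, compounding the 2024 valuation at the reported 30.6% CAGR over the six years to 2030 gives:

```latex
\[
  \$38.36\ \text{billion} \times (1 + 0.306)^{6}
  \approx 38.36 \times 4.96
  \approx \$190.3\ \text{billion},
\]
```

which matches the reported 2030 estimate.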

However, as AI in financial markets becomes increasingly ubiquitous and sophisticated, it poses new challenges, especially regarding regulation, monitoring, and responsibility. Standard financial regulation, which is predicated on human decision-making and clear lines of responsibility for legal compliance, is not always well-equipped to keep pace with the speed and opacity of AI systems. Many advanced deep learning models are “black boxes,” generating high-performance results while providing few clear explanations of how decisions were reached. This lack of interpretability is a real issue in high-stakes settings such as dispute resolution, fraud accusations, and compliance checks, where interpretability is not only desirable but necessary (Yeo et al., 2025).

Regulators and lawmakers face a difficult challenge in ensuring responsible AI without hindering innovation or relying on outdated regulatory frameworks. This fast-evolving environment is driving increasing demand for Explainable AI (XAI), a body of methods intended to make AI models transparent, interpretable, and accountable. Practices such as prudential supervision, government-mandated disclosures, and retrospective enforcement proceedings commonly prove inadequate because the technology does not fit customary modes of classification and decision-making (Boyd et al., 2024). This regulatory lag leaves financial markets more vulnerable to biased outcomes, systemic error, and loss of trust, and it hampers the ability to maintain market integrity and protect consumers.

AI technologies offer real-time transaction monitoring, which is crucial for detecting fraud: AI systems can detect irregularities in transaction data, lowering the response time to probable fraud. AI-enabled chatbots and virtual agents have improved consumer satisfaction by delivering quick assistance, precise instructions, and a better experience (Aros et al., 2024). XAI, in turn, bridges the gap between human understanding and machine computation, rather than merely satisfying regulatory and compliance criteria.
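As an illustration of the kind of anomaly-based monitoring described above, the following minimal Python sketch flags irregular transactions with an unsupervised detector. The features, synthetic data, and contamination rate are hypothetical placeholders, not a description of any institution's production pipeline.

```python
# Hedged illustration (not a production system): flagging irregular
# transactions with an unsupervised anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated transactions: amount (USD) and hour of day.
normal = np.column_stack([rng.lognormal(3.5, 0.6, 5000),
                          rng.integers(8, 22, 5000)])
odd = np.array([[9500.0, 3.0], [12000.0, 4.0]])  # two large night-time transfers
transactions = np.vstack([normal, odd])

detector = IsolationForest(contamination=0.001, random_state=42).fit(transactions)
flags = detector.predict(transactions)  # -1 marks an anomaly for review

print("Flagged for review:")
print(transactions[flags == -1][:5])
```

In practice, such flags would feed a human review queue, with explanations attached so analysts and customers can see why a transaction was held.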

Problem Statement

The rapid uptake of artificial intelligence (AI) in financial services has driven new efficiencies in credit rating and scoring, fraud detection, algorithmic trading, and investment strategy. However, with banks and other financial institutions increasingly using opaque and complex machine learning (ML) models, the sector is also wrestling with mounting problems of transparency, interpretability, and accountability, especially where financial disputes and regulatory scrutiny are concerned (Ahmad & Joseph, 2025).

The black-box nature of many AI systems makes it hard for stakeholders (customers, auditors, or regulators) to grasp the reasoning underlying important decisions. This opacity calls into question the fairness and robustness of algorithmic outcomes, especially when they are challenged in legal or policy contexts. Regulatory models built around direct, transparent lines of responsibility and disclosure are not equipped to deal with the pace or complexity of modern AI innovation (Lakshmanan, 2024).

Generative AI could drive U.S. fraud losses from US$12.3 billion in 2023 to US$40 billion by 2027, a CAGR of 32%, according to Deloitte's Center for Financial Services (Lalchand et al., 2024). This concerning trend points to an urgent need for accountable, transparent AI systems that can defend and explain their conclusions. With such systems, banks can reduce risk and settle disputes more effectively while restoring trust in financial systems increasingly driven by algorithms.

Despite increasing awareness of these issues, the strategic role of Explainable AI (XAI) in addressing them remains under-examined. It is necessary to consider how XAI can serve as both a compliance enabler and a competitive mechanism, supporting accountability in financial disputes and regulatory oversight, and helping the market achieve overall integrity and confidence (Boyd et al., 2024).

Literature Review

The Rise of Explainable AI in Financial Services Marketing

Modern financial services marketing relies heavily on artificial intelligence to personalize customer experiences, offer dynamic pricing, analyse sentiment, and automate processes. In financial services, AI can add value by enhancing segmentation, optimizing advertising, and managing interactions in real time. But as firms become more reliant on complex AI systems, they face a key issue: the inability to peer inside the decision-making process, particularly in customer-facing activities such as credit scoring, automated investment advice, and fraud detection (Ridzuan et al., 2024).

Now, as companies automate decisions that can have a life-altering impact on consumers, marketing departments need to ask a crucial question: how do we explain to consumers why, say, their application was denied or a particular financial recommendation was made? That is the heart of explainability: the ability to understand, audit, and explain the basis for AI-driven decisions.

As the banking industry continues to evolve in a digital-first world, explainable AI (XAI) is proving to be a strategic marketing asset. It allows brands to build trust, ensure fair treatment, and deliver on their promises. A growing body of research and industry experience suggests that customers are more willing to use an AI-based service if they gain insight into the decision process (Ridzuan et al., 2024). By contrast, black-box models can harm a business's reputation and expose it to regulatory risk when they are perceived as unfair or discriminatory.

Moreover, there is increasing regulatory pressure for AI explainability. For example, some U.S. state insurance regulators now require clear explanations when automated decisions have a direct adverse effect on policyholders. Regulatory changes like these strengthen the case for treating explainability as a marketable attribute of AI.

To fully achieve AI's marketing potential, organisations must implement a complete explainability plan that blends technical design with human understanding. This involves:

1. Empowering customer-facing teams (e.g., sales, support, advisors) to interpret and relay AI decisions clearly and confidently.

2. Leveraging tools like SHAP and LIME to provide meaningful, instance-level insights into model behaviour (Yeo et al., 2025); a brief illustrative sketch follows this list.

3. Integrating explainability into the customer journey—ensuring that an intuitive, user-friendly rationale accompanies automated financial decisions.
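The following minimal Python sketch illustrates the second point: using SHAP to attribute a single applicant's score to individual features. The model, feature names, and synthetic data are illustrative assumptions, not drawn from this paper or any real lending system; LIME's LimeTabularExplainer could be substituted for a model-agnostic local view.

```python
# Minimal sketch (illustrative only): instance-level explanation of a
# hypothetical credit-scoring model using SHAP. Feature names and data
# are placeholders, not a real lending dataset.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "monthly_income": rng.normal(4500, 1200, 1000),
    "debt_to_income": rng.uniform(0.05, 0.6, 1000),
    "credit_history_months": rng.integers(6, 240, 1000),
    "recent_delinquencies": rng.poisson(0.3, 1000),
})
# Synthetic approval labels for illustration only.
y = ((X["debt_to_income"] < 0.35) & (X["recent_delinquencies"] == 0)).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# SHAP attributes one applicant's score to individual features,
# i.e. the "instance-level insight" referred to above.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])
for name, contribution in zip(X.columns, shap_values[0]):
    print(f"{name:>24}: {contribution:+.3f}")
```

Outputs like these would still need to be translated into plain-language reasons before reaching customer-facing teams, which is the point of the third item below.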

Explainability is no longer a back-end compliance metric but a frontline marketing enabler. It reassures customers that their data is being handled ethically, enables them to challenge or understand decisions, and helps organizations maintain transparency and credibility in a competitive digital economy.

According to McKinsey, organizations that incorporate explainability into the core of their AI strategy are far more likely to see substantial EBIT (earnings before interest and taxes) and revenue growth. These firms not only monitor AI outputs but also communicate the 'why' to customers, which is key to bridging the gap between data science and consumer confidence (Giovine & Roberts, 2024). For instance, Chime's financial wellness app uses AI to provide financial advice and nudges, but it ensures the nudges come with short explanations to avoid appearing manipulative.

Uses of AI in the Financial Markets

Expanding Credit Access and Creating Marketing Risks: Artificial intelligence has transformed how consumer credit is marketed, underwritten, and serviced over the past decade. Traditional lending models largely depended on a finite set of structured data (such as credit scores, income, and payment history). By contrast, contemporary AI systems ingest thousands of data points, from education level and geolocation to rental history, browsing behaviour, and even social media habits. This transformation has increased credit availability, most notably for “credit invisibles”: people with little or no credit history, many of whom are in underserved or minority communities (Fletcher & Le, 2020).

To a marketer, that is potent. AI allows financial institutions to segment consumers more accurately and offer customised loan products in markets that were previously unreachable. Upstart, for instance, an AI-powered lending platform, considers more than 1,500 variables in its proprietary credit-decisioning model to individualize loan offers while adhering to legal mandates such as the Equal Credit Opportunity Act (Montagnani et al., 2024). Its system even automatically composes Adverse Action Notices, providing additional compliance support and improving customer communication.

To meet that disruptive threat, albeit belatedly, traditional credit bureaus such as Equifax and Experian have started integrating AI into their credit-scoring models, creating hybrid systems that retain a traditional score but are augmented with behaviour-based models. These strategies enable marketers to develop ever more micro-targeted campaigns, tailor product benefits to a consumer's profile, and present real-time pre-approval offers based on dynamic risk estimates (Anang et al., 2024).

However, as immense as the business potential is, so are the ethical and reputational hazards. AI models frequently operate as black boxes and can produce "proxy discrimination," in which formally neutral inputs correlate with protected characteristics such as race or gender. Without explainability, banks risk digital redlining, with historically disempowered communities once again being excluded, however inadvertently. This lack of transparency invites regulatory scrutiny, erodes brand trust, and fuels damaging PR cycles in a world where people expect fairness and transparency (Fletcher & Le, 2020).

AI's Double-Edged Sword in Market Dynamics: Artificial intelligence (AI) and machine learning have revolutionized trading through high-speed, algorithm-driven investment strategies. With more than 60% of trades now executed via these systems, firms enjoy enhanced liquidity, cost-effectiveness, and instant responsiveness (Fletcher & Le, 2020).

However, such rapid advances can also create instability, as evidenced by the 2010 Flash Crash. Algorithms trained on outdated or homogeneous data can amplify price volatility, herd behaviour, and systemic risk. Shocks to investor confidence, in turn, damage brand perception and customer retention. Therefore, to preserve credibility, differentiate themselves ethically, and retain long-term investor relationships, companies promoting AI-driven trading systems must couple innovation with an explainable AI framework (Montagnani et al., 2024).

Risk Management: AI is changing the game in risk management, enabling financial firms to detect, assess, and predict risks, from liquidity risk to volatility surges, more quickly and accurately. Regulators also use these tools to calibrate capital charges and assess levels of systemic risk, particularly for Global Systemically Important Banks (G-SIBs) (Fletcher & Le, 2020).

However, relying on AI comes with model risk, for instance when algorithms misconstrue real-world data or fail to adjust to nuanced inputs. Modelling errors can also increase compliance costs, mischaracterize systemic exposure, or even push a bank into a higher regulatory tier that it does not warrant. From a marketing perspective, such failures can erode brand trust, invite regulatory scrutiny, and damage consumer perceptions.

The Marketing Risk of Black-Box AI

AI's predictive powers are revolutionizing financial marketing, but not without risk. At the heart of this tension is the black-box problem: AI systems make decisions that are not transparent even to their creators and, in many cases, are simply indecipherable. This is perhaps most damaging in a marketing context. If customers are denied loans, labelled as fraudsters, or deemed a "risk" without explanation, brand trust and loyalty can be damaged irreparably (Fletcher & Le, 2020).

According to Deloitte (2024), AI-enabled fraud losses in the United States might reach $40 billion by 2027, underscoring the real financial and reputational threats that opaque systems pose (Lalchand et al., 2024). Without transparency, companies risk frustrating users who feel they have been unfairly judged or mistreated. Moreover, in a heavily regulated environment, such systems are particularly difficult to steer through compliance review, especially regarding accountability and liability.

AI's heavy reliance on data only makes this more challenging. Low-quality training data or ingrained biases can distort results, reinforcing discrimination or overlooking important edge cases. Worse, in capital markets many AI systems react in the same way to the same market signals and can become caught in feedback loops, exacerbating volatility and putting systemic stability at risk, as flash crashes have shown (Fletcher & Le, 2020).

For marketers, this poses urgent questions: How can we balance AI's efficiency and fairness? Can we defend our choices to regulators and consumers? Furthermore, if something goes wrong, who is at fault? The algorithm itself, the financial institution, or the developer?

Promoting explainability, ethical design, and regulatory preparedness will be crucial as AI continues to grow, not only for compliance but also to maintain public confidence and corporate efficacy in AI-led advances.

XAI and Regulatory Marketing Strategy

As firms adopt AI-driven decision-making more widely, especially in high-stakes areas such as lending and insurance, regulatory marketing strategies associated with Explainable AI increasingly influence competitive differentiation and policy compliance. When companies compete as a duopoly, they strategically select their XAI approach, product quality, and pricing to reflect the varying tastes of consumers. Firms make these decisions based on the competitive landscape and on how market leaders perceive the emerging global regulatory landscape for algorithmic transparency (Mohammadi et al., 2024).

XAI regulation is gaining traction globally, a prominent example being the European Union's General Data Protection Regulation (GDPR). Adopted in 2016, the GDPR grants individuals a "right to explanation" for decisions made by automated systems that significantly affect them, particularly in legal or financial contexts (Goodman & Flaxman, 2017). This has been praised by consumer rights groups and by some technology companies, such as Meta, which see it as an opportunity to enhance data governance and consumer trust. Similarly, France's Digital Republic Act mandates explanations of algorithmic decisions by public bodies, reflecting a growing consensus on the importance of explainability (Liu & Wei, 2021).

These regulatory shifts interact directly with marketing strategy. In the model referenced here (Mohammadi et al., 2024), policy-makers can either mandate XAI, requiring all firms to offer a set level of explanation, or implement optional XAI, where firms decide whether to adopt the policy-specified explanation level. Interestingly, optional XAI nearly always yields the same welfare benefits as mandatory XAI while lowering enforcement and monitoring costs, because firms can use XAI as a strategic instrument of differentiation. They optimize explanation levels according to how strongly consumers value transparency, enabling them to avoid damaging price wars. This market-based asymmetry often generates welfare benefits, particularly when consumers place distinct values on transparency.

A universal full-explanation requirement, in which AI decisions must be broken down into comprehensible components, remains more controversial. Global debates epitomize the tension between innovation and regulation: for instance, although GDPR-style laws are being considered in the U.S., with the FCRA as a partial analogue, critics maintain that such regulation would chill innovation and note that many AI models are inherently difficult to explain (Liu & Wei, 2021).

Thought leaders such as Peter Norvig ask why AI should be held to a standard of explainability even higher than that of human decision-makers, whose intuition is itself very hard to articulate (Adadi & Berrada, 2018). Framed this way, a regulatory strategy embraced by marketing can more readily support both innovation and consumer protection. Companies that align emerging policy landscapes with their marketing and product strategies, under a flexible, contextual, and multilayered XAI policy, stand to gain the most as XAI matures.

Strategic Market Gains from Explainability

Given the heightened dependence on consumer trust and regulatory scrutiny in high-profile industries such as financial services, healthcare, and insurance, positioning XAI as part of a brand and market strategy is comparatively straightforward. Loan approvals, insurance pricing, and fraud detection, for example, all produce decisions that carry legal weight and are sometimes emotionally charged for consumers. Companies that use XAI to show the public how their algorithms operate and make decisions are therefore viewed as more transparent, ethically sound, and customer-oriented (Rane et al., 2023).

XAI thus becomes a marketing tool when firms include explainability in their marketing narrative and use it to differentiate themselves from competitors. This is also consistent with corporate social responsibility (CSR) objectives, allowing companies to position themselves as champions of responsible AI and ethical innovation. When organizations demonstrate their dedication to demystifying AI, they both fulfil regulatory requirements and build emotional trust among users, including those adversely affected by AI-based decisions.

Moreover, explainability alleviates ambiguity, a common cause of consumer dissatisfaction and backlash, especially in cases of outright algorithmic denial or failure. Clear and comprehensible explanations can help avoid consumer complaints, regulatory scrutiny, and media-fuelled allegations, and reduce reputational risk, legal liability, and crisis communication costs (Aros et al., 2024).

For instance, if a credit applicant is rejected, a clear account of the factors behind the refusal (creditworthiness, monthly income, or loan purpose) may still leave the applicant satisfied even though the outcome is unfavourable. Research also shows that transparent AI practices can boost brand equity, Net Promoter Score (NPS), customer lifetime value, and customer retention, all critical measures in financial marketing. By building a reputation for fairness and transparency, firms can attract more informed and loyal customer segments, reduce churn, and gain competitive advantage without necessarily engaging in unsustainable price competition.

Discussion

The integration of Explainable AI into business and regulatory contexts marks a critical inflexion point in how firms use and disclose AI-driven decision-making. As this research indicates, XAI has transformative potential beyond its technical instantiation: it is becoming a regulatory requirement, a competitive marketing differentiator, and a strategic necessity. In regulated industries such as finance, where algorithmic outcomes can directly affect consumers' credit scores and financial health, XAI enables compliance with legal frameworks such as the European Union's General Data Protection Regulation and the proposed U.S. Algorithmic Accountability Act, while also building consumer confidence in a world that is becoming increasingly savvy about AI technologies (Aros et al., 2024).

The discussion demonstrates that XAI's value lies in its multiple dimensions. First, from a regulatory standpoint, mandates such as the EU's right to explanation signal a global trend toward AI transparency. While the general belief is that mandatory XAI regulation enhances societal welfare, the referenced model shows that optional XAI may deliver the same outcomes if well designed, encouraging market-based differentiation (Grennan et al., 2022). Firms that adopt higher levels of explainability do so as a form of signalling to consumer segments that value transparency, which mitigates price wars and can enable personalized marketing.

Second, XAI has numerous marketing implications. Since transparency is becoming a new metric for brand equity, firms that share the logic behind their AI tools (for example, credit approval decisions or fraud detection alerts) are seen as more ethical and customer-oriented. Including XAI in CSR initiatives or user education campaigns can strengthen brand positioning and support customer retention, and it can mitigate reputational risks arising from incomprehensible algorithmic decisions.

Implementing XAI is best treated as a staged process. The main elements involve creating effective transparency systems and processes, engaging customers with explanations of the underlying complexity, providing coaching and implementation support, ensuring reliability and safety through feedback-collection systems, and resolving deployment issues.

However, the road to effective implementation is not simple. Operational challenges include defining a "sufficient" explanation, managing the trade-off between accuracy and interpretability, and reconciling conflicting stakeholder expectations. These challenges point to the need for context-specific standards. Organizations evidently need dedicated AI governance bodies that set XAI standards and ensure that legal, technical, and business teams can work together to comply without inhibiting innovation. This is all the more important because organizational readiness for XAI varies widely. The discussion also points to the importance of maintaining a feedback loop in which user responses to AI decisions inform continuous improvement of model design and explanatory mechanisms. This dynamic approach supports both regulatory compliance and customer satisfaction.

It is also critical to note that, even though a full-explanation model has strong appeal, more explanation is not necessarily better. Evidence shows that overwhelming customers with technical detail can cause disorientation, antipathy, or disengagement, an effect attributable to cognitive load. The cognitive accessibility of explanations therefore deserves proper attention and should be differentiated across user groups. This supports the conclusion that XAI is not only a matter of technical design but also of effective communication. The discussion has outlined the need to integrate explainability into the whole AI lifecycle, from model development to public narrative. Indeed, XAI is distinctive because it requires making AI discernible, understandable, communicable, and responsible.

Recommendations

To harness the full potential of AI while ensuring regulatory compliance, public trust, and strategic differentiation, businesses must take a proactive, structured approach to implementing explainable AI (XAI). The following recommendations offer a pathway for operationalizing XAI across organizational functions:

Embed Explainability into Responsible AI Frameworks: Organizations should specify the set of explanations their AI systems must provide so that, given a model's purpose, covariates, and downstream effects, a user can assess the relative merits and modes of action of the proposed algorithms.

Establish a Cross-Functional AI Governance Committee: An effective AI governance committee is central to institutionalizing explainability. This body should include stakeholders from business leadership, technical development teams, legal and compliance experts, and ethics advisors. Its responsibilities include:

• Setting organization-wide standards for AI transparency.

• Developing a risk-based taxonomy to classify AI use cases according to their need for explainability.

• Defining escalation protocols for high-risk or legally sensitive models.

• Evaluating trade-offs between model complexity, performance, and transparency.

Such committees enable firms to create consistent, scalable, and context-specific explanation policies that balance user needs, regulatory demands, and business objectives.

Operationalize Use-Case-Specific Review Processes: Because AI applications vary widely in purpose and impact, companies must conduct structured, case-by-case assessments of explainability needs.

Invest in Explainability-Driven Talent and Tooling: Sustainable implementation of XAI requires ongoing investment in:

• Talent: Professionals with interdisciplinary expertise in technology, ethics, law, and policy.

• Technology: Context-appropriate explainability tools that integrate with AI development pipelines and support both ex-ante (pre-deployment) and post-hoc (output-focused) explanation techniques.

• Research: Continuous investigation into evolving regulatory frameworks and emerging best practices to future-proof systems.

• Training: Cross-functional education programs to upskill internal teams and ensure a consistent understanding of XAI's role, benefits, and limitations.

Firms should carefully evaluate off-the-shelf and open-source tools, recognizing that generic post-hoc explanations may not capture the full complexity or context of high-impact AI decisions.

Reasonably Balance Explainability and Model Performance: Increasing explainability might require simplifying models, but the trade-off should be assessed case by case. In some cases, teams use surrogate models such as logistic regression to approximate a complex model's predictions in a more interpretable form (a minimal sketch of this approach follows below). Nonetheless, companies need to verify that these surrogates represent the underlying decision logic faithfully and helpfully, especially in regulated domains.
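As a hedged illustration of the surrogate approach mentioned above, the sketch below fits a logistic regression to a black-box model's predictions and reports its fidelity (agreement with the model it explains). The data and models are synthetic placeholders, not a recommendation of specific tools or thresholds.

```python
# Hedged sketch: a global surrogate model. A logistic regression is fit to
# reproduce a complex model's predictions so its coefficients can serve as
# an interpretable approximation; fidelity should be reported alongside it.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=8, random_state=1)
black_box = RandomForestClassifier(n_estimators=200, random_state=1).fit(X, y)

# Train the surrogate on the black-box model's *predictions*, not the labels.
bb_pred = black_box.predict(X)
X_scaled = StandardScaler().fit_transform(X)
surrogate = LogisticRegression(max_iter=1000).fit(X_scaled, bb_pred)

# Fidelity: how often the surrogate agrees with the model it explains.
fidelity = accuracy_score(bb_pred, surrogate.predict(X_scaled))
print(f"Surrogate fidelity: {fidelity:.2%}")
print("Coefficients (standardized features):", surrogate.coef_[0].round(2))
```

A surrogate with low fidelity should not be used to justify individual decisions; in regulated domains the fidelity figure itself belongs in the model documentation.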

Encourage Transparency and Accountability: Finally, enterprises must treat explainability as both a compliance mandate and a strategic asset. By incorporating explainable outputs into their marketing, user communications, and customer service processes, companies can cultivate consumer trust, enhance system usage, and differentiate themselves in competitive marketplaces.

Conclusion

Explainable AI (XAI) is critical in financial services for ensuring transparency, trust, and compliance in the AI era. As regulatory regimes become stricter and customers increasingly demand transparency, banks and financial service providers can no longer afford to exclude XAI from their operations, both for legal reasons and because doing so would discredit them in the market.

XAI breakthroughs are already simplifying complex decision-making and helping organizations achieve better visibility and proximity to their customers. By embracing explainability in AI governance and marketing, organizations in the financial industry can inspire integrity and innovation by turning transparency into a competitive differentiator.

Moreover, explainability can empower end users by improving their understanding of how decisions affecting their financial well-being are made. Beyond simply increasing users' confidence, this clarity makes for more sophisticated engagement with financial services and products. XAI-focused companies may benefit from fewer customer complaints, less regulatory intervention, and a more loyal customer base. As innovation is balanced against responsibility, explainability will be crucial to building an open, human-centred AI ecosystem.

References

Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138-52160.


Ahmad, N., & Joseph, C. (2025). Enhancing Financial Decision-Making with Explainable AI: fair machine learning models for real estate and insurance applications. In ResearchGate. Retrieved June 9, 2025, from https://www.researchgate.net/publication/388614616_Enhancing_Financial_Decision-Making_with_Explainable_AI_Fair_Machine_Learning_Models_for_Real_Estate_and_Insurance_Applications

Anang, A. N., Ajewumi, O. E., Sonubi, T., Nwafor, K. C., Arogundade, J. B., & Akinbi, I. J. (2024). Explainable AI in financial technologies: Balancing innovation with regulatory compliance. International Journal of Science and Research Archive, 13(1), 1793–1806.


Aros, L. H., Molano, L. X. B., Gutierrez-Portela, F., Hernandez, J. J. M., & Barrero, M. S. R. (2024). Financial fraud detection through the application of machine learning techniques: a literature review. Humanities and Social Sciences Communications, 11(1).


Boyd, R., Kennedy, O., Stevens, M., & Cole, J. (2024). Explainable AI models for enhancing transparency in Financial Decision-Making. In ResearchGate. Retrieved June 9, 2025, from https://www.researchgate.net/publication/392413352_Explainable_AI_Models_for_Enhancing_Transparency_in_Financial_Decision-Making

Fletcher, G.-G. S., & Le, M. M. (2020). The future of AI accountability in the financial markets. In Duke University School of Law. Retrieved June 10, 2025, from https://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=7081&context=faculty_scholarship

Giovine, C., & Roberts, R. (2024). Building AI trust: The key role of explainability. McKinsey & Company.


Goodman, B., & Flaxman, S. (2017). European Union regulations on Algorithmic Decision making and a “Right to Explanation.” AI Magazine, 38(3), 50–57.


Grennan, L., Kremer, A., Singla, A., & Zipparo, P. (2022). Why businesses need explainable AI—and how to deliver it. McKinsey & Company.


Lakshmanan, A. (2024, December 13). Exploring explainable AI: Financial services. Aspire Systems - blog. https://blog.aspiresys.com/artificial-intelligence/exploring-explainable-ai-xai-in-financial-services-why-it-matters/

Lalchand, S., Srinivas, V., Maggiore, B., & Henderson, J. (2024, May 10). Generative AI is expected to magnify the risk of deepfakes and other fraud in banking. Deloitte Insights. https://www2.deloitte.com/us/en/insights/industry/financial-services/financial-services-industry-predictions/2024/deepfake-banking-fraud-risk-on-the-rise.html

Liu, B., & Wei, L. (2021). Machine gaze in online behavioral targeting: The effects of algorithmic human likeness on social presence and social influence. Computers in Human Behavior, 124, 106926.


Mohammadi, B., Malik, N., Derdenger, T., & Srinivasan, K. (2024). Regulating Explainable Artificial Intelligence (XAI) May Harm Consumers. Marketing Science.


Montagnani, M. L., Najjar, M.-C., & Davola, A. (2024). The EU Regulatory approach(es) to AI liability, and its Application to the financial services market.

Rane, N., Choudhary, S., & Rane, J. (2023). Explainable Artificial Intelligence (XAI) approaches for transparency and accountability in Financial Decision-Making. SSRN Electronic Journal.


Report TC9214. (2025, February). AI in Finance Market Size, Share, Growth Report - 2030. MarketsandMarkets. Retrieved June 9, 2025, from https://www.marketsandmarkets.com/Market-Reports/ai-in-finance-market-90552286.html

Ridzuan, N. N., Masri, M., Anshari, M., Fitriyani, N. L., & Syafrudin, M. (2024). AI in the Financial Sector: The Line between Innovation, Regulation and Ethical Responsibility. Information, 15(8), 432.


Yeo, W. J., Van Der Heever, W., Mao, R., Cambria, E., Satapathy, R., & Mengaldo, G. (2025). A comprehensive review on financial explainable AI. Artificial Intelligence Review, 58(6).


Received: 13-Jun-2025, Manuscript No. AMSJ-25-15994; Editor assigned: 14-Jun-2025, PreQC No. AMSJ-25-15994(PQ); Reviewed: 18- Jun-2025, QC No. AMSJ-25-15994; Revised: 10-Jul-2025, Manuscript No. AMSJ-25-15994(R); Published: 04-Jul-2025
