Author(s): Ilavarasan Rajendran
As the financial services industry incorporates artificial intelligence, AI is becoming an intrinsic component of everyday operations. However, the 'black box' nature of AI decision processes raises critical issues of accountability, fairness, and consumer trust. This opacity creates serious legal, ethical, and economic questions around responsibility, neutrality, fairness, and user trust. This research explores the potential of explainable AI (XAI) as both a compliance and a marketing instrument across multiple financial applications. Drawing on recent regulations (e.g., the EU's GDPR and US financial legislation), it highlights the limitations of applying traditional regulatory frameworks to AI systems. The paper examines how the design of a regulatory mandate (mandatory versus optional XAI) affects market welfare and shows that, in particular cases, nearly the same outcomes can be achieved with optional XAI requirements, reducing compliance costs. Based on these results, the study suggests that integrating XAI into compliance and marketing may enable financial institutions to gain a competitive advantage in an era of growing demands for AI transparency. The study also examines the boundaries of existing regulation of AI in the marketplace and supports a multi-layered regulatory model to ensure AI accountability and reduce the risk of regulatory arbitrage. Such a model is considered essential for sustaining human oversight and data quality. Ultimately, the study urges flexible approaches to analyzing how AI can harm markets and consumers.