Navigating the Landscape of Artificial Intelligence: How Can The Financial Sector Make Use of Generative AI

Wei You Pan and Jack Yallop

#GenAI

In the ever-evolving landscape of financial technology, the conversation around artificial intelligence (AI), and particularly generative AI, is gaining momentum. AI has been part of the financial landscape for decades, but the advances in generative AI bring greater benefits as well as new risks that financial institutions need to consider in such a heavily regulated industry.

While the potential benefits of generative AI are significant, and many institutions are still weighing adoption, a measured approach is needed when moving from proof of concept into production. In an edition of the Fintech Finance News Virtual Arena, several notable industry thought leaders from HSBC, Capgemini, and MongoDB came together to explore how the financial sector can make use of generative AI and what financial institutions must consider in their AI strategy.

Watch the panel discussion, "How can the financial sector make use of generative AI today," with HSBC, MongoDB, and Capgemini. Hear from:

  • EJ Achtner, Office of Applied Artificial Intelligence at HSBC

  • Dan Pears, Vice President, UK Practice Lead at Capgemini

  • Wei You Pan, Director, Financial Services Industry Solutions at MongoDB

  • Doug Mackenzie, Chief Content Officer at FF News

Addressing the challenges of generative AI

While financial technologists have always had to deal with persistent issues like risk management and governance, the adoption of generative AI in fintech adds challenges that AI specialists have long contended with, such as inherent biases and ethical concerns. One challenge that stands out for generative AI is hallucination: the generation of content that is not accurate, factual, or reflective of the real world. AI models may produce information that sounds plausible but is entirely fictional.

Generative AI models, especially in natural language processing, might generate text that is coherent and contextually appropriate but lacks factual accuracy. This poses challenges in different domains, including misinformation and content reliability. Examples of such challenges or risks may include:

  • Misleading financial planning advice: In financial advisory services, hallucinated information may result in misleading advice leading to unexpected risks or missed opportunities.

  • Incorrect risk assessments for lending: Inaccurate risk profiles may lead to poor risk assessments for loan applicants that can cause a financial institution to approve a loan at a higher risk of default than the firm would normally accept.

  • Sensitive information in generated text: When generating text, models may inadvertently include sensitive information from the training data. Adversaries can craft input prompts to coax the model into generating outputs that expose confidential details present in the training corpus.

It is thus paramount that financial institutions understand the technological impact, scale, and complexity associated with AI, and with generative AI in particular. A strategic and comprehensive approach that encompasses technology, data, ethics, and organizational readiness is critical. Here are some key considerations for financial institutions adopting such a strategy:

  • Hallucination mitigation: Mitigating hallucination in generative AI is challenging, but several strategies and techniques can reduce the risk of generating inaccurate or misleading information. One promising strategy is retrieval-augmented generation (RAG), which incorporates an information retrieval step into the generation process so that generated content is grounded in real-world knowledge. Vector search is a popular mechanism for implementing the RAG architecture: it retrieves the documents most relevant to the input query and supplies them as context to the large language model (LLM), helping it generate a more informed and accurate response.

  • Data quality and availability: Before adopting AI, take a step back to ensure that the data used for AI training and decision-making is high quality, relevant, and accurate, and that it can be accessed in real time.

  • Education: Investing in training programs is key to addressing the skills gap in AI, ensuring the workforce is equipped to manage, interpret, and collaborate with AI technologies. For AI adoption to succeed, a culture of learning and development is vital, giving employees the tools they need for their personal and professional growth. Furthermore, promoting awareness of potential vulnerabilities and continuously refining models to strengthen their resilience against hallucination, biases, adversarial manipulation, and other weaknesses are essential to the success of generative AI applications.

  • Develop new governance, frameworks, and controls: Before going live, create safe and secure environments for testing and learning that allow you to fail fast in a safe manner. Moving headfirst into production with direct contact with customers can result in the wrong governance methods being implemented.

  • Monitoring and continuous improvement: Implement robust monitoring systems to measure and understand financial impacts, change impacts, scale, and complexity associated with the adoption of AI.

  • Scalability and integration: Design AI systems with scalability in mind to accommodate growing datasets and evolving requirements.

  • Security and privacy: Implement robust cybersecurity measures to safeguard AI models and the data they rely on. Techniques such as adversarial training, input sanitization, and incorporating privacy-preserving mechanisms can help mitigate the risk of generative AI inadvertently revealing private data. Incident response plans should be part of the cybersecurity measures, as well as regular education of the relevant stakeholders on security and privacy.
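As a rough illustration of the RAG pattern described under hallucination mitigation above, the sketch below retrieves the documents most relevant to a query and grounds the prompt in them. It uses a toy bag-of-words similarity in place of a real embedding model and vector store (such as MongoDB Atlas Vector Search), stops short of the actual LLM call, and all document contents and function names are illustrative assumptions.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words 'embedding'; a real RAG pipeline would use a
    trained embedding model (this is only an illustrative stand-in)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Rank stored documents by similarity to the query: the role a
    vector search index plays in a production RAG architecture."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, documents):
    """Ground the LLM prompt in retrieved context to reduce hallucination."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{context}\nQuestion: {query}")

# Hypothetical knowledge-base entries.
docs = [
    "Fixed-rate mortgages keep the same interest rate for the whole term.",
    "Credit card APRs in this example portfolio range from 18% to 25%.",
    "Our savings accounts compound interest monthly.",
]
prompt = build_prompt("How do fixed-rate mortgages work?", docs)
```

In production, `retrieve` would be replaced by a vector search query against an indexed collection, and `prompt` would be sent to the LLM rather than printed.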

How MongoDB can help you overcome your data challenges

When adopting advanced technologies like AI and ML, which require data as their foundation, organizations often grapple with integrating these innovations into legacy systems. This is particularly true for use cases such as fraud prevention, where the platform must integrate with external sources to perform accurate analysis on complete data. The inflexibility of existing systems is a significant pain point, hindering the seamless incorporation of cutting-edge technologies. MongoDB, serving as an operational data store (ODS) with a flexible document model, enables financial institutions to efficiently handle large volumes of data in real time. By integrating MongoDB with AI/ML platforms, businesses can develop models trained on the most accurate and up-to-date data, addressing the critical need for adaptability and agility in the face of evolving technologies.

Legacy systems, marked by their inflexibility and resistance to modification, present another challenge in the pursuit of leveraging AI to enhance customer experiences and improve operational efficiency. Integration struggles also persist, especially in the financial sector, where the uncertainty of how AI models will evolve over time requires a scalable infrastructure. MongoDB's developer data platform future-proofs businesses with a flexible data schema capable of accommodating any data structure, format, or source. This flexibility facilitates seamless integration with different AI/ML platforms, allowing financial institutions to adapt to changes in the AI landscape without extensive modifications to their infrastructure.
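The schema flexibility described above can be sketched with a small example: records of different shapes coexist in one collection and new fields appear without a migration. An in-memory list stands in for a live MongoDB collection here, and all field names and values are hypothetical.

```python
# Heterogeneous transaction records coexisting in one "collection",
# as a flexible document model allows (illustrative in-memory stand-in
# for a MongoDB collection; field names are hypothetical).
transactions = [
    {"_id": 1, "type": "card", "amount": 42.50, "merchant": "Grocer Ltd"},
    {"_id": 2, "type": "wire", "amount": 900.00,
     "counterparty": {"iban": "GB00TEST", "name": "ACME"}},
    # A new field (fraud_score) added later, with no schema migration:
    {"_id": 3, "type": "card", "amount": 12.00, "merchant": "Cafe Co",
     "fraud_score": 0.91},
]

def find(collection, query):
    """Minimal match-by-equality lookup, mimicking a find() filter."""
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in query.items())]

card_txns = find(transactions, {"type": "card"})
high_risk = [t for t in transactions if t.get("fraud_score", 0) > 0.8]
```

Because documents need not share a fixed schema, downstream AI/ML feature pipelines can start consuming the new `fraud_score` field immediately while older records remain valid.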

Concerns regarding the security of customer data, especially when shared with third parties through APIs, further complicate the adoption of innovative AI technologies. Legacy systems can stand in the way of innovation as they are often more vulnerable to security threats due to outdated security measures. MongoDB’s modern developer data platform addresses these challenges with built-in security controls across all data. Whether managed in a customer environment or through MongoDB Atlas, a fully managed cloud service, MongoDB ensures robust security with features such as authentication (single sign-on and multi-factor authentication), role-based access controls, and comprehensive data encryption. These security measures act as a safeguard for sensitive financial data, mitigating the risk of unauthorized access from external parties and providing organizations with the confidence to embrace AI and ML technologies.

If you would like to discover more about building AI-enriched applications with MongoDB, take a look at the following resources: