How banks can ensure customer safety in Gen AI adoption

ChatGPT, one of the most popular generative AI chatbots in the world. Photo by Mojahid Mottakin via Unsplash.

Gen AI is expected to add up to US$340b in new value to banks. But capturing a slice of that value will not be easy.

Generative AI (Gen AI) is all the rage amongst companies today. Globally, more than 70% of companies are exploring real use cases for Gen AI, according to a study by Forrester. For experts, it's clear why: firms that actively harness Gen AI to enhance experiences, offerings, and productivity will realise outsized growth and outpace their competition, Forrester said in a report.

The benefits of Gen AI are especially enticing for the banking industry, which stands to gain up to US$340b in new value from the technology, according to a study by McKinsey & Co.

“By harnessing Gen AI, banks and financial institutes can now improve personalisation of their services to individual customers based on their preferences and behaviour. They can also generate synthetic data that closely resembles real-life scenarios, which can aid training and address biases that may exist in historical datasets,” Andy Cease, director of product marketing for Entrust, told Asian Banking & Finance via exclusive correspondence.
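One concrete way to picture the synthetic-data idea is simple statistical resampling of an under-represented class, so that a model trained on historical records sees a more balanced picture. The sketch below is a minimal illustration under assumed conditions, not Entrust's method: the fraud label, the column layout, and the Gaussian fit are all hypothetical, and it presumes the feature columns are numeric.

    # Minimal sketch (hypothetical column names and labels): sample synthetic
    # rows for an under-represented class from a Gaussian fitted to that class.
    import numpy as np
    import pandas as pd

    def synthesize_minority_rows(df: pd.DataFrame, label_col: str,
                                 minority_value, n_new: int,
                                 seed: int = 0) -> pd.DataFrame:
        """Draw n_new synthetic rows that statistically resemble the minority class."""
        rng = np.random.default_rng(seed)
        minority = df[df[label_col] == minority_value].drop(columns=[label_col])
        mean = minority.mean().to_numpy()
        cov = np.cov(minority.to_numpy(), rowvar=False)  # assumes numeric features
        samples = rng.multivariate_normal(mean, cov, size=n_new)
        synthetic = pd.DataFrame(samples, columns=minority.columns)
        synthetic[label_col] = minority_value
        return synthetic

    # Hypothetical usage: rebalance a skewed historical transactions table
    # before training, so a rare "is_fraud" class is better represented.
    # transactions = pd.read_csv("historical_transactions.csv")
    # augmented = pd.concat([transactions,
    #                        synthesize_minority_rows(transactions, "is_fraud", 1, 5_000)],
    #                       ignore_index=True)

Real programmes typically use far more sophisticated generators, but the principle is the same: synthetic records fill the gaps that historical data leaves.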

With the adoption of Gen AI, however, come concerns around security and privacy.

“The possibilities of Gen AI are endless, prompting banks and financial institutes to reshape their strategies. However, as with any adoption of new technologies, to use Gen AI effectively, organisations need to think first about what their users and customers need and want,” Cease noted.

Asian Banking & Finance interviewed three industry leaders and experts from IBM Technology, LexisNexis Risk Solutions, and Entrust to learn how banks can ensure customer safety when adopting Gen AI.

 

John J. Duigenan, General Manager, Financial Services Industry, IBM Technology:

"When you ask a company, “What do you think about generative AI?” they probably think about two or three questions: Can I trust it? How do I get started? What happens if I don't get started– What opportunity am I losing [and] what am I automatically losing against my competitors? 

"This notion of trust is a boardroom topic everywhere right now. We can extend that trust into our clients' environments. The other piece of trust that I think is crucial, and we're starting to see this is the notion of regulation. Regulators in the EU have already moved with the AI act, President Biden has already moved with an executive order. But I think it's important, [whilst] we will see the regulatory landscape transform from here, no matter what, bad actors will still be bad actors and they will still use [and] misuse AI. There will be little that any of us can do about bad actors, other than to be able to detect them and prevent them in the future.

"The public has every reason to be concerned. There's an aspect of responsibility and trust around all of this that matters. Thinking about this in IBM terms, I would say from the very beginning of AI, we understood that trust and ethics and responsibility was super important. The reason we can build models in a certain way, is because we've had some of our most senior leaders in AI ethics writing our policies and ensuring that we live our policies. Our clients expect us to deliver trusted AI, because they equally know that this idea – that AI cannot be trusted – is out there in the public mindset. And so clients have a choice: they have a choice to buy consumer grade general AI, which arguably cannot be trusted. Or they can bet by an AI solution that has trust as an inherent capability built into it. The models that we build can be trusted. They don't... they're not used for deep fake work. They're used for business. And so by that very nature, when a client of ours buys a trusted solution from IBM, they can extend that trust to their clients."

 

Thanh Tai Vo, Director of Fraud and Identity Strategy, APAC, LexisNexis Risk Solutions:

"Cybercriminals are exploiting deepfake technology to steal identities.They fabricate fraudulent documents and manipulate facial features or voices to establish counterfeit accounts or other applications in the victim’s name.

"Scammers can use deepfake technology in authorized push payment (APP) fraud to replace, alter or mimic someone's face in video or voice. This is especially troublesome as victims will believe fraudsters are actually loved ones.

"Businesseses should ensure they incorporate multiple layers of defense including behavioral intelligence, digital intelligence and beneficiary insights. Behavioral intelligence serves as a passive yet proactive method to identify and understand people’s usage patterns at the start of a transaction across the entire customer lifecycle, providing a smooth customer experience. It only introduces additional steps into the customer journey if there’s a perceived higher risk. This approach enables businesses to detect risk signals, protect customers from attacks, and safeguard the company’s reputation."

 

Andy Cease, Director of Product Marketing, Entrust:

"Whilst Gen AI has enormous potential, in the wrong hands it has the capacity to wreak havoc; it’s a double-edged sword that brings both significant benefits and risks to organisations.

"For cyber criminals, Gen AI not only increases the scale of cyber-attacks but also lowers the skill and resource barriers for executing them. 

"Deepfakes and synthetic identities have posed significant challenges for banks and financial institutes seeking to prevent fraudulent account openings and takeovers.

"But the good news is that just as Gen AI is a potent tool for cybercriminals, it is also a powerful cybersecurity tool. By generating synthetic data in volume, banks and financial institutes can train and refine fraud detection models more quickly and effectively than before, allowing them to improve the robustness of their solutions. 

"At the same time, beyond technology – it’s also important for banks and financial institutes to play an active role in raising awareness amongst users about Gen AI and its potential for misuse– and educate on how to spot deepfakes, phishing, and other malicious uses of AI-generated content."
