Customer service technology provider Glia has introduced what it calls the banking industry's first contractual guarantee against AI hallucinations and prompt injection attacks, aiming to address growing concerns about the reliability and security of artificial intelligence tools used by financial institutions.
The New York-based company said the guarantee will apply to its Banking AI platform, used by more than 700 banks and credit unions, and will ensure that incorrect or misleading AI-generated responses are never presented to customers or members.
AI hallucinations occur when generative AI systems produce inaccurate or fabricated information, a risk that has raised compliance and reputational concerns across the financial services industry.
Glia said its platform eliminates that risk through a proprietary approvals framework that separates the AI's interpretation of customer questions from the system that generates responses. While the platform uses large language models to understand customer intent, it does not allow the AI to improvise answers in real time; replies are drawn from pre-approved content.
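The pattern Glia describes can be sketched in miniature: a model is used only to map a message to an intent label, and every customer-facing reply is drawn from a vetted library, so free-form model output never reaches the customer. The code below is an illustrative sketch of that general architecture, not Glia's actual implementation; all names (`APPROVED_RESPONSES`, `classify_intent`, `respond`) are hypothetical, and a keyword matcher stands in for the LLM call.

```python
# Illustrative sketch of an "approvals" pattern: the model only classifies
# intent, and every reply shown to the customer comes from a pre-approved
# library. This is NOT Glia's actual system; names and logic are hypothetical.

APPROVED_RESPONSES = {
    "card_lost": ("We've noted your card as lost. A replacement will be "
                  "mailed within 5-7 business days."),
    "branch_hours": "Most branches are open 9am-5pm, Monday through Friday.",
}

ALLOWED_INTENTS = set(APPROVED_RESPONSES)


def classify_intent(message: str) -> str:
    """Stand-in for an LLM intent classifier. A real system would call a
    model here, but its output would still be constrained to a fixed label
    set, never free text."""
    text = message.lower()
    if "lost" in text and "card" in text:
        return "card_lost"
    if "hours" in text or "open" in text:
        return "branch_hours"
    return "unknown"


def respond(message: str) -> str:
    intent = classify_intent(message)
    # Only vetted text can reach the customer. Anything outside the approved
    # set -- including prompt-injection attempts that try to steer the model
    # toward arbitrary output -- falls through to human escalation.
    if intent in ALLOWED_INTENTS:
        return APPROVED_RESPONSES[intent]
    return "Let me connect you with a representative who can help."
```

Because the model's role is reduced to choosing among fixed labels, a hallucinated or injected response has no path to the customer: the worst-case outcome of a misclassification is the wrong pre-approved message or an escalation, not fabricated content.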
"Our platform makes negative impacts from AI hallucinations and prompt injection attacks not just improbable, but actually impossible," said Justin DiPietro, Glia's chief strategy officer and co-founder.
The company also said the system protects against prompt injection attacks, in which malicious actors attempt to manipulate AI systems into providing sensitive information or performing unauthorized actions.
Glia CEO Dan Michaeli said the approach is designed to give financial institutions the efficiency benefits of AI automation while maintaining the security, governance and trust required in banking.
© Arc, All Rights Reserved.