Houston expert discusses AI use in banking and phishing scams
With artificial intelligence technologies easily accessible and growing in popularity, consumers and business owners alike should be aware of both the benefits and risks of using generative AI tools in banking and finance. While data-driven AI creates opportunities to further drive innovation in banking, the data-reliant nature of the industry makes it a natural target for scammers looking to intercept personal and business finances and sensitive customer information.
While banks and other financial service providers use AI to scan for anomalies, errors and patterns associated with known fraud techniques, criminals are using AI to improve their chances of perpetrating fraud. For this reason, consumers and businesses should guard their data with the same diligence they use to guard cash and other valuable physical property.
Privacy and accuracy
For entrepreneurs and businesses of all sizes, it is important to keep in mind the practical applications of AI beyond the trending headlines, whether implementing the technology into everyday internal business practices, or into client-facing solutions.
When feeding information into AI, it is best to maintain a defensive position and be proactive about not disclosing sensitive or private information. Also, rely on sound judgment when deciding when and how to use AI technologies. From a business standpoint, privacy should be embedded into a financial system’s design and leaders should be transparent about the technologies used within a given system.
Technologies like ChatGPT are large language models trained on massive datasets, including documents and web pages from across the internet. This poses a risk because some sources of that data are inaccurate. When seeking financial advice via AI technologies, it is best to conduct research by curating and limiting the dataset, then talking through your unique financial position in person with your trusted banker and IT staff or consultants.
Phishing and business email compromise via AI
Historically, phishing and business email compromise, or BEC, attempts have been more easily recognizable and often detected due to grammatical errors and unnecessary punctuation. With technologies like ChatGPT, scammers are now better equipped to draft well-written content that can fool a person into thinking a communication is legitimate. Phishing can lead people to click links or open attachments that harbor malware or other viruses, which can lead to account takeover. With BEC, a person might be fooled into thinking an email is from a legitimate sender. Scams like these can lead to the disclosure of sensitive information or the acceptance of fraudulent transaction instructions or changes, ultimately resulting in money being sent to a fraudster.
AI voice generators
AI voice generators can be used to mimic the voice of anyone, including bankers, C-suite leaders and customers. If people are fooled into believing they have received a voicemail from, or are talking to, a person they know, they may accept instructions from a fraudster, such as approving transactions or providing sensitive or private information, resulting in fraud.
AI can also create fake identities, including AI-developed photos of individuals, and other false information. These fake identities could be used to create accounts for fraudulent purposes.
AI is here to stay
AI is forecast to have a lasting impact on the banking industry. Whether on the business or consumer side, it will be important to embrace the innovation and enhancements generative AI will continue to produce, while maintaining a cautious stance around protecting client and business information and finances. Fraud prevention practices will need to keep evolving alongside the fast-paced growth of generative AI in banking.
Ken Smiley is treasury management division manager of Amegy Bank and a fraud protection expert.