Fake it until you make it

Brand equity is everything. Great R&D, production, marketing, sales, and service are all needed to bring a brand to life and make it flourish for generations. Brands like AMEX, Ford, and Maxwell House are intergenerational powerhouses – and they have the most to lose to AI fraud, because when you get burned in a brand scam, like ordering a case of Maxwell House coffee at 50% off with your Visa card, you lose faith in human nature as well as in Maxwell House and Visa.

WHAT CAN YOU, AS AN ADVERTISING AND MARKETING PROFESSIONAL, DO TO MAKE THE BRANDS YOU SUPPORT MORE FRAUD-PROOF?

Hint: fewer online sales are better.

Visa now employs artificial intelligence to reduce fraudulent transactions as scammers themselves take to AI.

"We look at over 500 different attributes around [each] transaction, we score that and we create a score – there's an AI model that can do that. We do about 300 billion transactions a year," said James Mirfin, global head of risk and identity solutions. 

Fraudsters use generative AI to make their scams more convincing than ever, leading to unprecedented losses for consumers, according to a Visa report.

The company prevented $40 billion in fraudulent activity from October 2022 to September 2023 – nearly double the amount it prevented the previous year.

Scammers use AI to generate primary account numbers (PANs) and test them repeatedly. The PAN is the card identifier, usually 16 digits long, though it can run up to 19 digits in some instances.
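A PAN is not a random digit string: its last digit is a checksum computed with the public Luhn algorithm, which is why generated numbers can be screened for structural validity before they are ever tested against a network. Here is a minimal sketch of that check (the function name and usage are illustrative, not taken from any Visa tooling):

```python
def luhn_valid(pan: str) -> bool:
    """Return True if a 16-19 digit primary account number (PAN)
    passes the Luhn checksum."""
    digits = [int(d) for d in pan if d.isdigit()]
    if not 16 <= len(digits) <= 19:
        return False
    total = 0
    # Walk right to left, doubling every second digit;
    # a doubled digit above 9 has 9 subtracted.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# The widely published Visa test number passes; a transposed digit fails.
print(luhn_valid("4111111111111111"))  # True
print(luhn_valid("4111111111111112"))  # False
```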

Using AI bots, criminals repeatedly attempt to submit online transactions through a combination of primary account numbers, card verification values (CVV) and expiration dates until they get an approval response. This method, known as an enumeration attack, leads to $1.1 billion in fraud losses annually, comprising a significant share of overall global losses due to fraud, according to Visa.
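The fingerprint of an enumeration attack is volume: thousands of near-identical authorization attempts, almost all declined. A common first-line defense is a velocity check over a sliding time window; the sketch below is a simplified illustration, and the window, threshold, and per-merchant keying are assumptions rather than Visa's actual rules:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # illustrative sliding window
MAX_DECLINES = 5      # illustrative threshold

_declines: dict[str, deque] = defaultdict(deque)

def looks_like_enumeration(merchant_id: str) -> bool:
    """Record a declined authorization for this merchant and return True
    once declines inside the window exceed the threshold."""
    now = time.time()
    q = _declines[merchant_id]
    q.append(now)
    # Evict declines that have fallen out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    return len(q) > MAX_DECLINES
```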

To reduce fraudulent transactions, Visa looks at over 500 different attributes around each transaction. Each one is assigned a real-time risk score that helps detect and prevent enumeration attacks in card-not-present transactions – purchases processed remotely, without a physical card passing through a reader or terminal.

"Because we're looking at a wide range of different attributes and we're evaluating every single transaction, we see new types of fraud emerging – and our model will see them and catch them," Mirfin said. "Our AI model scores those transactions as high risk, allowing our customers to decide not to approve those transactions."
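In outline, the pipeline Mirfin describes is: extract attributes from the transaction, have a trained model turn them into a risk score, and let the issuer act on a threshold. The toy sketch below shows only that shape – the attribute names, weights, and cutoff are invented for illustration and bear no relation to Visa's actual model:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    card_not_present: bool   # remote purchase, no card reader or terminal
    country_mismatch: bool   # issuer country differs from merchant country
    declines_last_hour: int  # recent failed attempts on this account

def risk_score(tx: Transaction) -> float:
    """Toy stand-in for a trained model: fold a handful of the
    ~500 attributes into a 0-1 risk score."""
    score = 0.0
    score += 0.3 if tx.card_not_present else 0.0
    score += 0.2 if tx.country_mismatch else 0.0
    score += min(0.4, 0.1 * tx.declines_last_hour)  # enumeration signal
    score += min(0.1, tx.amount / 10_000)
    return min(score, 1.0)

# The network scores the transaction; the issuing bank makes the call.
tx = Transaction(amount=900.0, card_not_present=True,
                 country_mismatch=True, declines_last_hour=4)
if risk_score(tx) > 0.8:  # illustrative cutoff
    print("flag as high risk – issuer may decline or step up authentication")
```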

Over the last five years, Visa has invested $10 billion in technology that helps reduce fraud and increase network security.

Generative AI enables fraud

Cybercriminals are turning to generative AI and other emerging technologies, including voice cloning and deepfakes, to scam people, Mirfin warned. Common schemes include:

  • Romance scams
  • Investment scams
  • “Pig butchering” – a scam tactic in which criminals build relationships with victims before convincing them to put their money into fake cryptocurrency trading or investment platforms.

In today's global marketplace, the criminals don't sit in a market, pick up a phone, and call someone – that's far too labor-intensive. They're using some level of artificial intelligence, whether it's voice cloning, a deepfake video, or social engineering, to enact different types of fraud, Mirfin said. Generative AI tools such as ChatGPT enable scammers to produce far more convincing phishing messages to dupe people.

With less than three seconds of audio, cybercriminals can clone your voice, according to the U.S.-based identity and access management company Okta. The clone can be used to trick family members into thinking you're in trouble and need money, or to trick banking employees into transferring funds out of your account.

Generative AI tools have also been exploited to create celebrity deepfakes that deceive fans, Okta said. Cybercriminals using generative AI can commit fraud far more cheaply by targeting multiple victims at once with the same or fewer resources, Deloitte's Center for Financial Services said in a report.

"Incidents like this will likely proliferate in the years ahead as bad actors find and deploy increasingly sophisticated, yet affordable, generative AI to defraud banks and their customers," the report said, estimating that generative AI could increase fraud losses to $40 billion in the U.S. by 2027, from $12.3 billion in 2023.

Earlier this year, an employee at a Hong Kong-based firm sent $25 million to fraudsters who had deepfaked the company's chief financial officer and instructed him to make the transfer.

Chinese state media reported a similar case in Shanxi province this year where an employee was duped into transferring 1.86 million yuan ($262,000) to a fraudster who used a deepfake of her boss in a video call.