Explaining the fraudulent world of shallowfakes and the criminal risks involved

Written by Tom Montague – Sales Director. 

Ever been tempted to use the freebie editing apps on your computer to make changes to the contents of a digital application form? You’re not alone. But beware, there are digital dangers in the shallow waters. And you may well be in receipt of fake, forged, or fraudulent documentation created by shallowfakery.

Growing numbers of fraudsters exploit readily available software tools to alter documents and images in order to falsify the information they contain – resulting in ‘shallowfakes’.
For many, the use of whizzy software for digital deceptions is most associated with ‘deepfakes’ – counterfeit visual or audio recordings generated using Artificial Intelligence (AI). Deepfakes have gained notoriety as they convincingly depict real people in unreal situations, often with mischievous or malicious intent.

Shallowfakes differ from deepfakes in that they rely on more accessible, surface-level technology to deceive, and they most commonly appear in routine interactions between vendors, service providers, and their customers.

Both types of fakery proliferate within digital fraud ecosystems as ‘synthetic media’. But shallowfakes can go beyond providing a duplicitous response to an information request. They harness technology to make hidden tweaks to standard wording, fabricate versions of official application forms, and doctor photographs tendered as evidence. In short, fraud.

Shallowfakes have a criminal lineage that reaches back to traditional paper- and print-based forgery and may be used either to unlawfully modify an existing template document or image, or to imitate one from scratch. According to law firm DAC Beachcroft, shallowfakes usually fall into one of two categories: proof of identity or address; and supporting evidence.

Overall, the law firm adds, any disclosure has the potential to be a shallowfake, but the crucial differentiator between a shallowfake and any other false document is the manipulation of genuine pre-existing media.

Examples range from insurance claims to credit eligibility applications, with names, dates and other information replaced or tampered with. Qualifications and birth certificates are also open to being digitally doctored before they’re submitted in support of job interviews and passport validations.

Verified instances of specific shallowfakes are rarely disclosed to the public. Some telling statistics, however, have emerged. TrustID’s latest ‘Trends in Fraudulent Identity Documents’ study found that in 2023, passports remained the most common form of fake document detected by its customers, making up 48 per cent of all the fake documents TrustID saw that year.

Writing in InsurTech, Resistant AI CEO Martin Rehak references an example involving a Canadian passport that was reused and submitted some 2,500 times over a 20-day timeframe – with one day registering more than 400 submissions, each with subtle changes to the name, address, and hairstyle on the portrait to avoid detection.

Other types of documents that may be “shallowfaked” include driver’s licences, utility bills, and account statements. Credentials needed to support a transaction or claim – such as invoices, contractual agreements, and terms and conditions – can also be added to the list.

Then there’s an additional level of pain and deception facing insurance claims handlers. A shallowfaker submitting a car accident claim may use image manipulation to add paint damage, dents, and more to photos tendered as evidence, making an accident appear worse than it really is.

Shallowfakes are a problem across vertical sectors, from education to financial services, but the insurance sector is probably the most targeted. Insurers identified 84,400 fraudulent claims in 2023, the Association of British Insurers says – that’s 11,800 more than the previous year.

Old crime, new dangerous digital approach

The use of maliciously altered digital media isn’t new, of course. Computer-based photo and graphics editors have been readily available for years. What’s an unprecedented challenge, experts say, is the escalating scale of the fraud. The same application form can be re-used dozens of times with just name, account, and address changed, creating multiple faked identities from a single template.

The subtle alterations between numerous versions of the forged forms can prove hard to spot using conventional fraud detection techniques. The more copies being processed, the more authentic they can appear.
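To make this concrete, one simple way such near-duplicates can be surfaced is by scoring the similarity between submissions: a form that is almost identical to an earlier one is a candidate for human review. The sketch below is purely illustrative (the sample records are invented, and real fraud-detection systems are far more sophisticated), using Python’s standard difflib:

```python
from difflib import SequenceMatcher

# Invented example records: the same application template reused with
# only the name changed between submissions. Illustrative only.
original = "Name: John Smith; Account: 12345678; Address: 1 High Street, Leeds"
tweaked = "Name: Jane Smith; Account: 12345678; Address: 1 High Street, Leeds"
unrelated = "Invoice 4471: plumbing repairs, total 350.00, due in 30 days"

def similarity(a: str, b: str) -> float:
    """Return a 0.0–1.0 similarity ratio between two submissions."""
    return SequenceMatcher(None, a, b).ratio()

# A near-duplicate of an earlier submission scores close to 1.0 and can
# be flagged for review; a genuinely different document scores low.
print(f"tweaked vs original:   {similarity(original, tweaked):.2f}")
print(f"unrelated vs original: {similarity(original, unrelated):.2f}")
```

The point of the sketch is the asymmetry it exposes: to a human eye each altered copy looks like a fresh, plausible form, but compared programmatically against previous submissions the reuse of a single template becomes obvious.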

So, what can be done to counter this rising tide of doctored documentation and phoney photos? 

Shallowfakes present known challenges for counter-fraud technology. For instance, very often the information submitted by fraudsters is entered into a system via self-service transactions and automated Straight-Through Processing. To stand a good chance of catching it, fraud detection must be active at the point of data submission.

In addition, doctored photos are typically supplied in the form of unstructured data, maybe as image files of differing file sizes and resolutions. Scrutinising very large unstructured datasets is time-consuming and heavy on data storage space.

Even where information capture templates are generated from a formal database as structured data, by the time they’re downloaded by fraudsters and then uploaded to a company’s customer registration system, it’s likely they’ve been sent as an unstructured file (such as a PDF). 

Can Artificial Intelligence be trained to help? Probably. AI companies working on the problem reckon that AI can significantly increase the chances of detecting fakes both shallow and deep. But our best bet, it seems, is combining AI with human instinct, attentiveness, and awareness. It’s this digital recipe that will root out many of these dodgy docs.

If you’d like to understand more about Howden, request a call back. 
