Jun 12, 2025

Shallowfakes vs Deepfakes: The New AI Fraud Frontier in Finance 


The rise of falsified and tampered documents and images presents new challenges for fraud detection and highlights some long-standing but underappreciated risks. 

Almost every part of society is affected – banks and insurers in financial services, health-care providers, government agencies from welfare to border protection, our most trusted educational institutions and, of course, small businesses and the public. 

On one side, sophisticated and hotly publicised deepfakes – hyper-realistic fake media created by AI – are emerging as a major threat.  On the other side, simpler shallowfakes (or “cheapfakes”) – low-tech but deceptive edits of images, videos or documents – are proliferating just as quickly.  

Together, they make it harder than ever to trust what we see or hear. This article explains what shallowfakes and deepfakes are, highlights the key risks and the differences between them, shows how modern technology (including AI) has made them more accessible and dangerous, and describes how agile technology innovators like Fortiro are fighting back with cutting-edge detection tools and methods. 

Shallowfake vs Deepfake – What’s the Difference? 

Shallowfakes manipulate real media, whereas deepfakes are AI-generated from the ground up. A shallowfake might be as simple as a doctored photo or a cheaply edited video clip.  

A deepfake leverages neural networks to generate new imagery or speech. For example, AI models can produce realistic faces or mimic someone’s voice, often fooling both humans and automated checks. 

This difference is important because it also influences what documents, images and processes of a business may be prone to deepfake or shallowfake attacks. 

Shallowfakes: Simple, Fast and Effective 

Shallowfakes are falsified images, videos, or documents created without advanced AI. In practice, these are simple manipulations using basic editing software or out-of-context media.  

For example, a fraudster might Photoshop different numbers on a bank statement or splice audio clips to change someone’s words. No machine learning is required – any bad actor with minimal skills can produce a shallowfake. Crucially, shallowfakes rely on tweaking authentic content in subtle ways, which can be surprisingly effective at fooling human reviewers. 

Whilst shallowfakes are perceived as ‘simple’, they are incredibly common and easy to create. ChatGPT, a popular GenAI service, has around 800 million monthly active users, whereas Microsoft Paint is estimated to have more than 1.5 billion – and is arguably easier to access, with less traceability. 

Mainstream image-editing software can be misused for fraud. Image manipulation programs like Photoshop, Apple Preview and PDF editors make it easy to alter financial documents (e.g. changing numbers on documents such as payslips, tax assessments and invoices). 

In this video below, we demonstrate how easy it is to change a document using Apple Preview, a free photo editor pre-installed on every Mac device sold – of which there are hundreds of millions in use.

Despite the name, shallowfakes are not a “lesser” threat – they can be just as dangerous. In fact, their very simplicity makes them accessible to virtually anyone, and their subtlety can evade detection.  

Deepfakes – The New Threat 

Deepfakes, meanwhile, refer to synthetic media generated with artificial intelligence (typically deep learning models). Deepfake technology uses AI algorithms (often trained on numerous data samples) to generate convincingly realistic yet false content, including images, documents, audio, or video that never actually existed. 

In other words, a deepfake can completely fabricate someone’s face or voice or create a wholly fake document or image from scratch, rather than just altering an existing one.  

In the video below, we demonstrate how ChatGPT can be used to quickly create a payslip/paystub with minimal prompting. 

Generative AI: Making Fraud Easier – and Harder to Detect 

The rise of generative AI has dramatically lowered the barrier to creating fake documents and images. What once required skilled editors or data scientists can now be done with off-the-shelf tools. Fake content has never been easier to create — or harder to catch.  

Advanced AI image and voice generators are widely available, and even basic editing apps have grown more powerful. This means that fraudsters don’t need professional tools or elite editing skills to produce fake content – a laptop and a few software downloads will suffice. 

According to a Deloitte analysis, new generative AI tools now enable anyone to create deepfake videos, voices, and documents at a low cost, with illicit toolkits being sold online for as little as $20. Generative AI has further accelerated this trend by automating what were once manual edits. 

As AI-generated content increases, both in online media and as malicious activity, detecting what’s real versus fake is a growing challenge. Studies indicate many fraud signals are practically invisible to the human eye – for instance, minute inconsistencies or metadata clues in a 90-page bank statement that a human reviewer would never notice.  

Even for tech companies, keeping up is hard. In the realm of audio, for example, the industry acknowledges that it is behind in developing tools to reliably identify AI-generated voices. In short, generative AI has equipped criminals with a cost-effective, scalable method to create fraudulent content, while making it more challenging to detect through traditional means. 

Our Perspective: Use of Shallowfakes & Deepfakes for Fraud 

Shallowfakes

Use case

  • Manipulating a single document or image, or a small group of them 
  • Using readily available software, which is practically untraceable 
  • Preserving a document or image’s template, layout, font, etc. (e.g. bank statements, tax assessments, a photo of a particular location or object) 

Constraints

  • Access to the original media is required
  • Generally manual, so scaling up production is time-consuming

Real-world example


Shallowfake Insurance Claim: In the insurance sector, scammers have doctored accident photos to support bogus claims. One common ploy is photoshopping license plate numbers onto images of wrecked vehicles to make it appear that an insured car was totalled. Zurich Insurance’s Head of Fraud noted an uptick in claims where fraudsters simply plant a different registration number onto a salvage-yard car and submit it as evidence of a crash – a cursory glance makes the photo seem legitimate. Such low-tech fakery, while not using AI, has been effective in tricking claims handlers who “take it at face value”. 

Deepfakes

Use case

  • Document generation: creating believable but falsified documents e.g. payslips/paystubs, invoices, or IDs
  • Identity impersonation: generating realistic voices or faces to mimic individuals in audio/video (e.g. scams, social engineering)
  • Scalable misinformation: mass-producing fake content to influence public opinion, manipulate media, or pollute information ecosystems.

Constraints

  • Requires some prompting or model-training knowledge when the output must match a known template, appearance or voice 
  • Requires reasonable skill to prompt effectively and extract a useful output 

Real-world example


Synthetic Identities in Banking: Financial institutions are now battling deepfake identities used for fraud. FinCEN reports that criminals combine AI-generated profile images with stolen personal data to create entirely synthetic videos of individuals to pass KYC/Onboarding checks.  
 
At Fortiro, our network has also shared stories with us of these identities passing liveness testing using the latest in technology to combat this exact risk. Fraudsters then open bank accounts and access credit, which serve as funnels to launder illicit funds from other schemes.  

These examples illustrate that AI-powered fraud is no longer theoretical – it’s happening now across different financial domains – from fake insurance claims and loan application documents to voice-and-video impersonations.  

The Growing Threat to the Financial Sector 

The financial sector finds itself squarely in the crosshairs of these emerging threats. Banks, non-bank lenders, insurers and payments providers all face the prospect of account takeovers, fraudulent withdrawals/payments, and loan defaults facilitated by both shallowfakes and deepfakes.  

Insurance companies are seeing spikes in fake claims supported using fabricated evidence. Government agencies administering benefits or verifying identities (passport offices, welfare programs, etc.) must now contend with AI-generated IDs and documents designed to slip through identity checks. Even the general public is at risk, as seen with deepfake investment scams, fake invoices (with business e-mail compromise) or voice-cloned phone calls extorting the vulnerable; anyone can be a target of AI-enabled social engineering. 

Analysts project exponential growth in losses: AI-driven fraud could cost the industry tens of billions of dollars in the next few years if left unchecked. The implications go beyond individual losses – there is a risk of systemic erosion of trust in digital transactions and verification processes. If every video call, document, or voice message could be a potential fake, financial institutions will need to double down on security measures to maintain confidence. As Zurich’s fraud chief warned, “Will it get to a point where you doubt everything that’s in front of you?”

Regulators are Alarmed, and for Good Reason 

Regulators and financial authorities are sounding the alarm. In late 2024, the U.S. Treasury’s FinCEN warned of rising fraud schemes “associated with the use of deepfake media” created with generative AI. Suspicious activity reports involving deepfakes have spiked, especially cases of fake identity documents used to bypass banks’ verification checks. 

The U.S. Federal Trade Commission and Better Business Bureau have likewise issued consumer alerts about AI-driven impersonation scams.  

All evidence suggests an escalating threat landscape: financial institutions are experiencing a surge in AI-assisted fraud attempts, and losses are mounting. Deloitte estimates AI-generated content contributed to over $12 billion in fraud losses last year, a figure that could reach $40 billion in the U.S. by 2027.

At Fortiro, we have received reports of organisations facing deepfake AI fraud challenges, particularly in identity verification and Know Your Customer (KYC) processes. Fortiro presented at a recent insurance summit, noting that ~30% of people polled failed to identify an AI-generated insurance claim photograph. 

Fighting Back with AI: How Fortiro and Others Stay Ahead 

In the face of AI-powered fraud, the most effective defence is to embrace and leverage AI and innovative technology as an equally powerful tool against fraud. Just as criminals are leveraging new tools, so too are financial institutions. 

At Fortiro we think it’s important to be proactive leaders in this fight, using cutting-edge techniques to stay one step ahead of forgers and impersonators. The key approach to countering shallowfakes and deepfakes in financial services is deep learning detection models – AI, the same technology often used to commit the fraud.   

These models are trained to learn the kinds of patterns that typically distinguish genuine from AI-generated content. These patterns may not be visible to the human eye, but deep learning detection models can identify inconsistencies in compression, texture, or statistical noise that often occur during synthetic generation. This allows them to make highly accurate distinctions between authentic and tampered or fake content, even when the differences are subtle. 
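To make the idea of “statistical noise inconsistencies” concrete, here is a minimal, hypothetical Python sketch (standard library only, and emphatically not Fortiro’s implementation). It scores each block of an image by the spread of adjacent-pixel differences – a crude proxy for local sensor or compression noise – and flags blocks whose noise level is a statistical outlier, the kind of signal a spliced or generated region can leave behind:

```python
import statistics

def block_noise(pixels, block=8):
    """Score each block by the spread of adjacent-pixel differences,
    a crude stand-in for local sensor/compression noise."""
    h, w = len(pixels), len(pixels[0])
    scores = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            diffs = [
                pixels[y][x + 1] - pixels[y][x]
                for y in range(by, by + block)
                for x in range(bx, bx + block - 1)
            ]
            scores[(by, bx)] = statistics.pstdev(diffs)
    return scores

def flag_inconsistent_blocks(pixels, z=3.0):
    """Flag blocks whose noise level is a strong outlier versus the rest
    of the image - a classic sign of a pasted or synthetic region."""
    scores = block_noise(pixels)
    vals = list(scores.values())
    mean, sd = statistics.mean(vals), statistics.pstdev(vals)
    if sd == 0:  # perfectly uniform noise everywhere: nothing to flag
        return []
    return [pos for pos, s in scores.items() if abs(s - mean) / sd > z]
```

Given a textured image with one unnaturally smooth patch pasted in, the smooth block’s near-zero noise score stands out against the image-wide norm and gets flagged. Production deep-learning detectors learn far richer versions of this signal automatically, rather than relying on a hand-written heuristic like this one.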

A critical area of focus is using AI to authenticate documents and data files submitted to financial institutions. Fortiro is a prime example, leveraging AI for document fraud detection. Our platform uses artificial intelligence in clever ways to detect issues such as document tampering. When a customer uploads a document or image as part of a loan or insurance application, Fortiro automatically analyses it for anomalies or signs of falsification. This can include checking the document’s digital metadata and properties, comparing fonts and layouts to known genuine templates, cross-verifying figures against expected ranges, and running image forensics on scans. By catching subtle inconsistencies that a human might miss, such automated verification tools can instantly flag forged or AI-generated documents.  
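As an illustration of the simplest of these signals – a document’s digital metadata – here is a small, hypothetical Python sketch (standard library only, not Fortiro’s implementation). It reads the standard PDF metadata keys (`/CreationDate`, `/ModDate`, `/Producer`) straight out of the raw bytes and surfaces two common tampering tells: a long gap between creation and modification, and a producer string from an image editor rather than a payroll or banking system:

```python
import re
from datetime import datetime

# Standard PDF metadata keys; dates use the spec's D:YYYYMMDDHHMMSS format.
PDF_DATE = re.compile(rb"/(CreationDate|ModDate)\s*\(D:(\d{14})")

def parse_pdf_dates(pdf_bytes):
    """Pull creation/modification timestamps out of raw PDF metadata."""
    return {
        key.decode(): datetime.strptime(stamp.decode(), "%Y%m%d%H%M%S")
        for key, stamp in PDF_DATE.findall(pdf_bytes)
    }

def flag_suspicious(pdf_bytes, max_gap_days=1):
    """Return human-readable flags for common document-tampering tells."""
    flags = []
    dates = parse_pdf_dates(pdf_bytes)
    if "CreationDate" in dates and "ModDate" in dates:
        gap = (dates["ModDate"] - dates["CreationDate"]).days
        if gap > max_gap_days:
            flags.append(f"modified {gap} days after creation")
    producer = re.search(rb"/Producer\s*\(([^)]*)\)", pdf_bytes)
    if producer and b"Photoshop" in producer.group(1):
        flags.append("produced by an image editor, not a document system")
    return flags
```

A genuine system-generated payslip is typically created and never re-saved; a statement whose `/ModDate` trails its `/CreationDate` by weeks, or whose producer is an editing tool, warrants a closer look. Real platforms combine dozens of such checks with template comparison and image forensics rather than relying on any single signal.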

These AI capabilities run alongside traditional checks, which ensure that each document is comprehensively reviewed in a transparent and methodical manner, not just left up to the AI to work in a “black box”. 

Fortiro is revolutionising document fraud prevention, enabling organisations to process and check thousands of documents automatically and stop fake ones in their tracks. The payoff is twofold: dramatically reduced fraud losses and a faster, smoother experience for legitimate customers (since honest applications sail through while flagged ones get slowed for manual review). 

Conclusion: Proactive Defence in the Age of AI Fraud 

The advent of shallowfakes and deepfakes marks a new era of fraud, one where seeing is no longer believing.  

For financial services providers, this means evolving beyond traditional fraud checks and embracing next-generation defences. The threat is growing, but so is the industry’s response. Banks, insurers, and agencies are increasingly collaborating with tech partners and FinTech innovators to shore up their defences. Agile providers like Fortiro exemplify this proactive stance, delivering tools that can verify authenticity at scale and speed, effectively fighting AI with AI. 

In the coming years, staying ahead of AI-powered fraud will require constant vigilance and innovation. It will require a combination of technology, process, and education, including deploying sophisticated detection algorithms, instituting multi-layer verification protocols, and training staff and customers to be aware of new scam tactics. The financial sector has faced waves of change before – from online banking fraud to phishing – and has adapted each time.  

Get a demo today

Get a demo of Fortiro’s income document verification platform to see how it can help you.