
The growth of artificial intelligence (AI) and its increasing usage across almost every industry around the world may represent either a major opportunity or a threat, depending on your perspective. From automating manual, repetitive tasks to analysing large volumes of data and enhancing decision-making, AI is truly revolutionising how we live and work. However, this innovation comes in parallel with emerging risks, and organisations are operating in increasingly murky waters where the potential for highly advanced, AI-based fraud may be growing. So, what threats does AI pose to businesses, and how can HR teams mitigate the risks for their companies in this rapidly evolving landscape?
Rising fraud levels
Just as AI has made it far easier to analyse a spreadsheet or create a tailored shopping list, it has also ramped up the potential for individuals to exploit these tools for fraudulent purposes. There is no doubt that the business environment is growing increasingly hostile. A 2025 report from CIFAS, the UK's fraud prevention service, revealed a 28% increase in insider threats and employee fraud over the past two years, and in 2024, over a third (38%) of those cases occurred within the first three months of employment. This suggests that some fraudulent actors are not just slipping through the cracks but are actively targeting organisations that lack fraud mitigation processes.
Technology is the main driver of this growth. In the past, HR teams may have been content with a CV, an interview, and a handful of references (alongside statutory DBS checks, where required) as sufficient steps to verify a candidate's background and confirm they are who they claim to be. However, AI-backed tools can now fabricate entire digital personas, complete with false credentials and references.
The potential impact of employee fraud goes far beyond operational outcomes: failing to thoroughly vet potential and current employees can result in financial losses, legal risks, and reputational damage. Put simply, a single bad hire can have knock-on effects that extend far beyond the organisation's internal workings.
In fact, the use of so-called “deepfake” technology in criminal activity has surged by 2,137% in the past three years and now constitutes 6.5% of all fraud cases. The capabilities of the platforms being exploited will only become more advanced, and the pace of development is growing exponentially. In a 2025 survey, 53% of businesses in the UK and US reported being targeted by deepfake fraud, and only 10% of that group were able to spot the risks before they were impacted. These figures are likely to rise in the coming years as AI adoption and complexity continue to accelerate globally.
A new reality
These are staggering numbers, and AI-backed employee and candidate fraud is becoming a daily reality for many organisations globally. Size and scale do not dissuade attackers: larger firms often represent a bigger potential “prize”, while smaller firms may be seen as easier to breach, offering softer entry points for fraudulent candidates or bad actors.
In addition to technological growth, broader changes taking place in the world of employment could provide a platform for employee fraud. Vast numbers of people now work remotely or on a hybrid basis, making it harder for organisations to verify identities, qualifications, and experience, particularly where sophisticated tools are being used to create fraudulent documents. With managers having fewer, if any, in-person touchpoints with their staff, the potential for deception rises significantly. This is only heightened by the growing prevalence of digital onboarding, which, while convenient and efficient, lowers the threshold for a fraudulent candidate to gain access to sensitive data or systems.
However, while AI is one of the drivers behind the rise in fraud of all types, HR teams should not shy away from embracing technology themselves. Technology remains one of the most valuable tools organisations can leverage to protect themselves. The correct use of modern platforms, combined with a human hand on the tiller, can be highly effective, and modern systems have built-in prevention mechanisms that can alert hiring teams when candidates may be attempting to trick the system. That said, a more holistic cultural shift is also needed to counteract these evolving threats.
New vetting models
In this new world, HR teams should adapt the way they view vetting, moving away from seeing it as a one-off task for new hires and towards treating it as a continuous process for all employees. The traditional model of conducting screening once at the start of a contract may no longer be fit for purpose at a time when threats are increasingly dynamic and commonplace. Ongoing vetting can mitigate insider fraud, stop those who pose further risks in their tracks and provide early warning of issues that may arise in the future.
HR teams now serve on the front line of this complex new environment and must navigate changing employment legislation, talent shortages, retention problems, and more challenging security threats driven by hiring in an increasingly AI-dominated world. That makes leveraging every tool at their disposal to protect themselves and their employers ever more important.
Artificial intelligence represents one of the factors reshaping the employment world, and it is a collective challenge to ensure that vetting integrity and rigour keep pace with wider innovation.
Digital Identity solutions can play a role in mitigating these growing threats. By leveraging advanced biometric verification, facial recognition, and liveness detection, these tools can help distinguish between real individuals and synthetic or manipulated media, reducing the risk of impersonation and identity theft.
Additionally, Digital Identity may improve confidence that employees are who they claim to be during onboarding and throughout their tenure, minimising opportunities for internal fraud or credential misuse.
Rolf Bezemer is Executive VP and General Manager International at First Advantage