
Former fraudster Alex Wood, who now advises organisations on how to avoid scams.
This article is about how Alex Wood, a poacher turned gamekeeper of sorts, now advises organisations on how to prevent fraud. Nevertheless, I want to begin by mentioning the time that he pretended to be the 13th Duke of Marlborough to gain access to some of London’s most opulent hotels.
Why start with this? Honestly, it captures my imagination. Fraud isn’t something to be celebrated but of all the frauds I’ve learned about, this one is probably my favourite.
Wood, who now co-hosts BBC Radio 4’s Scam Secrets podcast, described the story to me in a recent conversation:
A follow up chat with Alex will be available soon on the brand new LFG! podcast 🚀🚀🚀
“I was literally living in five-star luxury for seven months, never swiping a credit card. The hotels thought they were in touch with my diary secretary at Blenheim Palace and the overriding agreement was that you never ask the Duke for money. You don’t trouble him for a card… The Ritz, Claridges, The Dorchester, everything.
These bills were getting sent back to Blenheim Palace. Blenheim Palace was saying, ‘what the fuck is going on?’ They reported it to the police and it all came unstuck after seven months.”
I can’t imagine this scenario without somewhat admiring the brass neck required to swagger into a lavish hotel and not pay. Wood later tells me that on one occasion, while checking into Claridge’s, he was presented with a platter of steaming hot flannels and spoken to as though he were royalty.
“I was thinking to myself, ‘mate, I’ve been scamming you,’” he reflects.
The Palace eventually cottoned on, and Wood’s prize for successfully fooling the hotels was a further three-and-a-half years of free accommodation, this time in prison.
While it’s not Wood’s most notorious act, as he would go on to steal £26 million from businesses over nine months via APP (authorised push payment) fraud, I’ll admit that I love the Fake Duke story. But let’s get back to today: he’s now safeguarding organisations and helping them to combat increasingly sophisticated scams. As any fintech enthusiast knows, technology is opening up new frontiers for the industry, which means it’s also creating new attack vectors and opportunities for hackers and scammers.
AI as a Fraud Enabler
Let’s start with the obvious issue of AI. In general terms, it’s adding efficiency and personalisation to criminal activity, and this can take many forms. Wood explains that AI is proving particularly valuable in helping attackers carry out due diligence on the companies they plan to target, and that research which once took many hours can now be carried out in seconds.
It’s not just about finding information; AI enhances attack strategy too. Tools like ChatGPT and Claude can be fed with prompts that enable malicious activity. It’s not as simple as asking your favourite chatbot how to commit the perfect crime, as you won’t receive an answer, but Wood informs me that criminals will get the results they need if they ‘switch sides’, pretend to be concerned about fraud, and ask for information on sophisticated attacks to look out for and prevent.
This, again, takes seconds, and Wood’s work with financial institutions has repeatedly shown that these time savings are a critical advantage for attackers. “A fintech or bank will have to work out what the hell fraudsters are doing, then start thinking about how to implement something to counter that, test it, get permission to roll it out. And it takes bloody weeks.”
Deepfakes are another major risk. AI has made it far easier to copy someone’s image and voice, and one outcome of this is ‘zero-touch fraud’: attacks made without any direct human contact. Zero-touch frauds are not only easier to carry out, they also help criminals stay hidden. Wood explains that his biggest scam might have gone unpunished had he carried it out today.
“The silver bullet when I was prosecuted for APP fraud was a recording of my voice. They had that on their phone system, but now that silver bullet wouldn’t exist. It makes prosecution and detection a lot harder. People can be anywhere in the world now committing this fraud.”
A Hat Tip to a Fraud
Wood’s criminal days are behind him, but I ask if there’s any recent scam that he’s been impressed with on a strictly technical level. He mentions the attack on UK engineering firm Arup, in which a financial controller was sent a phishing email by the company’s CFO requesting a confidential transaction.
The employee’s initial doubts were soothed when the CFO and several high-ranking colleagues, some of whom were familiar to him, joined a video call encouraging him to transfer money. All of the figures who joined him on the call, it transpired, were AI-generated deepfakes. $25 million was stolen from the company.
“This wasn’t somebody you’d think of as vulnerable,” says Wood. “This was an expert accountant. He was completely convinced he was on a call with all of his execs.”

A still from our podcast, in which Alex explains how he was recently scammed. The irony isn’t lost on him.
What Should Fintechs be Worried About Today?
Besides AI, there are several other threats that should be on our radars. While we might worry about attacks from outside the business, one of the most concerning challenges is managing internal threats from people already on the payroll. These aren’t always easy to identify, but sudden changes in character, or persistently starting work early and/or finishing late, can be indicators. Likewise, someone who doesn’t take holidays might be taking you down from the inside. Ironically, some of these traits might be typical of high performers, or be expected by your worst VC connections on LinkedIn.
Internal threats may even be simple accidents made under pressure. Wood sees this occasionally in his advisory work with law firms, where a desire to rush through a deal can lead to compromised due diligence. “It’s very important to take a step back and just review everything very thoroughly,” he says.
Another emerging problem is synthetic identity fraud, where new identities are created with a blend of genuine and fake information. These identities have been known to pass through KYC checks, with attacks taking place at a later date once onboarding is complete. This isn’t a straightforward thing to manage, but large organisations within the UK now have more than sufficient incentive to respond due to the Failure to Prevent Fraud offence.
The UK's Failure to Prevent Fraud offence came into force in September 2025, for organisations meeting at least two of the following criteria: 1) 250+ employees, 2) £36m+ turnover, 3) £18m+ total assets.
Organisations may be held criminally liable where an ‘associated person’ (including employees, agents, subsidiaries, contractors, consultants, or anyone performing services for or on behalf of the organisation) commits fraud intending to benefit the organisation or its clients.
A further threat that Wood identifies is IMSI catchers, which are mobile phone surveillance devices disguised as cell towers. “We’re seeing IMSI catchers now mounted on drones over busy retail areas. They draw data from people’s phones and they have no idea that it’s happening.” In some respects, this is the data age’s answer to pickpocketing.
Wood tells me that the only protection against this is to double down on multi-factor authentication.
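It’s worth spelling out why app-based authentication holds up here: an IMSI catcher can intercept traffic that crosses the mobile network, including SMS one-time codes, but authenticator apps generate time-based one-time passwords (TOTP, per RFC 6238) locally from a shared secret and the current time, so no code is ever transmitted. As a minimal illustrative sketch (not any specific vendor’s implementation), the derivation can be done with Python’s standard library alone:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at_time=None, digits: int = 6, step: int = 30) -> str:
    """Derive an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of time steps since the Unix epoch.
    counter = int((time.time() if at_time is None else at_time) // step)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte, mask the sign bit.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890" at T=59
# yields 94287082 (8 digits), i.e. 287082 at the default 6 digits.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at_time=59))  # → 287082
```

Because both the server and the device hold the secret and agree on the clock, the code is verified without anything crossing the airwaves for a rogue cell tower to capture.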
Who is a Fraudster?
More concerns remain, but a final issue to raise is personal bias and our clouded judgements when assessing a person’s propensity for fraud. Wood, a classically trained violinist who had performed for the Queen and played alongside Elton John many times, serves as an illustration here. Few would have anticipated that a successful musician who’d once earned £30,000 per month might fall into a life of crime. Even so, a career-stopping repetitive strain injury and a hefty mortgage proved to be transformative in the worst way.
“The fraudster can literally be anybody,” says Wood. “Some people think a fraudster is somebody in a hoodie in their mum’s basement, or somebody in a West African call centre. But we actually have huge homegrown risks.”
This is true when looking for accomplices as well as instigators. For example, criminals often seek out people willing to offer up their bank accounts to assist with a crime. These compromised accounts are known as mule accounts, and have traditionally been owned by people aged 25 and under. Wood tells me that more people in their 50s, and even 60s, are now involved in these crimes as economic challenges mount.
Inside the Mind of a Criminal
We rarely know who we’re dealing with when tackling a scammer, and their motives and attitudes vary. Wood is very candid when reflecting on his behaviour and his perspectives now help to inform the defensive strategies of fintechs and other organisations.
“I recognised that I had sort of psychopathic traits. I was living in a grandiose way, I had no real empathy for others, I was able to lie easily, becoming manipulative and really having no regrets about doing that sort of stuff.”
That said, Wood mentions that many fraudsters are prone to the delusion of the ‘victimless crime’, convincing themselves that they are modern-day Robin Hoods despite the very personal impact of their actions. For instance, in romance scams, victims are “not just robbed of their life savings, they’re robbed of the person they thought they were falling in love with.”
Speaking of his own wrongdoings, Wood explains that his turnaround was partly prompted by learning about the human impact of his actions. In one instance, the theft of £1.3m in working capital from a business led to 40 staff losing their jobs. In another situation, the consequences felt even more personal.
“One guy said he suffered a stroke, and I felt empathy then. I realised that it’s not just the banks we’re hitting. There’s a real human cost here.”
While Wood eventually reached a moment of redemption, we have to remove the rose-tinted specs. Many scammers are either remorseless or blind to the worst consequences of their activities.
Final Thoughts
We’re currently in a ‘golden era’ for fraud. All of the ‘personalisation at scale’ excitement that we apply to fintech is equally relevant for those looking to scam and steal. It’s important to be aware of this, and also to keep track of the new rules and standards that AI is setting for KYC and onboarding. Technology improvements are only perpetuating the ‘cat and mouse’ game between criminals and fraud prevention.
One of the most intriguing things here is that most of the core elements of fraud remain unchanged. The impacts and causes of fraud are still primarily human, and you don’t necessarily need to be doing anything ‘wrong’ to be at risk. To elaborate a little:
Tech-enabled insiders could be your biggest threat
Our biases and instincts can be manipulated to deceive us
A sense of shame can prevent people from reporting crime
Employees need to be trained to prevent fraud
Economic struggles are often the origin story of a criminal career
To finish, I ask Wood for a top level view on how he advises companies:
“It’s mostly going in and scaring the hell out of them. Showing them how easy it is, how convincing fraud attacks can be, and how good cyber attacks can be.”
If you want to know more about Alex’s story and how you can prevent attacks at your company, tune into Episode 1 of the LFG! Podcast (coming soon). Likewise, tune in to Scam Secrets on BBC Radio 4.
Get involved with LFG 🚀🚀🚀
This newsletter exists to tackle the big questions and curiosities of fintech. If you’re building something exciting, get in touch at [email protected].
If you enjoyed this piece and are seeking a writer for editorial, marketing or whitepaper copy, also use the email address above.
Features are not endorsements unless explicitly stated.
