In the age of artificial intelligence, software is taking over many roles previously occupied by humans, and online scams are no exception. Cybersecurity experts are now warning LinkedIn users to be wary of an AI-generated phishing campaign designed to cheat people out of their personal information and money.
The latest scam involves AI-generated profiles for phony companies, which target aspiring job applicants. As a result, LinkedIn has been forced to block tens of millions of fraudulent accounts in recent months.
Fake LinkedIn profiles
The rise in fake LinkedIn profiles, and the scams that follow, targets job seekers left vulnerable by a wave of tech layoffs and a volatile job market. Several sophisticated recruitment scams have taken place over the last few months, with fraudsters cheating the recently laid off by offering them jobs that do not exist.
Oscar Rodriguez, LinkedIn's vice president of product management, said, as reported by Money Life:
There's certainly an increase in the sophistication of the attacks and the cleverness.
Researchers at a US cybersecurity firm discovered a fraudulent whitepaper aimed at sales executives looking to convert their leads more effectively. The ad's graphic design is believed to have been created by the AI image tool DALL-E, and it is also thought that the text in the whitepaper was written by an AI chatbot.
After accessing the whitepaper, users were directed to a call-to-action button, which generated a form that auto-filled with the user's personal LinkedIn details. It is believed that the account page, named simply 'Sales Intelligence', was set up to harvest users' personal details for criminal purposes.
Read more:
⋙ Is the four-day work week a scam? This is what the results from the trial in UK companies show
⋙ WhatsApp users: Beware of costly new scam that will put your money and data at risk
How to protect against cybercrime
James Bores, a cybersecurity consultant with Bores Consultancy, explained that it is in fact quite difficult to recognise fraudulent AI-generated content:
In terms of what people can do, the depressing answer is not much. There are tools to try and detect AI-generated content, but they are unreliable.
Some of them work, and some of them don’t, and these will only become more unreliable as the technology gets better.
Cyber experts have warned that details as simple as a user's name, email, LinkedIn profile, and date of birth could be the first step to more serious fraud, such as identity theft.
Researchers have urged users to be wary of long passages of flawless, error-free text, as this could be the work of an AI chatbot; humans normally make some mistakes. However, this signal can also mislead, as text full of mistakes is typically the sign of a human scammer with poor English. The simplest advice from Bores: if something seems too good to be true, it probably is.
A LinkedIn spokesperson said, as reported by the Evening Standard:
We continually evolve our teams and technology to stay ahead of new forms of scams so that we always help keep our members’ experiences safe.
We also encourage our members to report suspicious activity or accounts so that we can take action to remove and restrict them, as we did in this case.
Read more:
⋙ Facebook Marketplace: Police warn millions of this new scam that is on the rise
Sources used:
- The Evening Standard: 'AI-generated content appears in LinkedIn phishing scams, but can users spot it?'
- Money Life: 'Fraudsters hit LinkedIn with recruitment scam wave amid tech layoffs'