Prompt Engineering – GPT-3 Enhancing phishing and BEC scams

Cybersecurity experts have used the GPT-3 natural language generation model and its ChatGPT chatbot to demonstrate how deep learning models can make social engineering attacks, such as phishing or business email compromise (BEC) scams, harder to detect and easier to execute. A recently conducted study illustrates that attackers can not only create unique variations of the same phishing lure in grammatically correct, human-like text, but also construct entire email campaigns to make their messages more plausible, and even mimic the writing style of real individuals by feeding the model samples of their previous communications.

What is prompt engineering?

GPT-3 is a deep learning-based autoregressive language model that generates human-like responses from short inputs known as prompts. A prompt can be as simple as a question or as detailed as an instruction to write on a specific topic, with longer prompts giving the model richer context to work from. The technique of eliciting highly specific, high-quality responses through carefully refined prompts is called prompt engineering.
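
The sketch below illustrates the difference between a vague prompt and a refined one on a harmless topic. It assumes the pre-1.0 OpenAI Python client and the text-davinci-003 completion model; both the client interface and the model name are illustrative assumptions, not details taken from the study.

```python
# Minimal sketch of prompt refinement with the legacy OpenAI Python client.
# Assumes the pre-1.0 "openai" package and the text-davinci-003 model.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def complete(prompt: str) -> str:
    """Send a single prompt to the completion endpoint and return the text."""
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=200,
        temperature=0.7,
    )
    return response["choices"][0]["text"].strip()

# A vague prompt: the model has little context and answers generically.
vague = "Write about password security."

# A refined prompt: audience, tone, format and length are all specified,
# so the output is far more targeted -- this is prompt engineering in practice.
refined = (
    "Write a 120-word notice for non-technical office staff explaining, "
    "in a friendly tone, why they should enable multi-factor authentication, "
    "formatted as three short bullet points."
)

print(complete(vague))
print(complete(refined))
```

The same refinement loop works with any topic: the more constraints the prompt specifies, the more consistent and specific the output becomes.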

One clear advantage of such tools for attackers is the ability to generate phishing messages without relying on someone proficient in English. The impact is even greater at scale: in large phishing campaigns, and even in targeted ones with fewer victims, the lure text in every email used to be identical, which made it easy for security vendors and automated filters to create detection rules based on that text. Attackers therefore knew they had only a limited window before their emails were flagged as spam or malware and blocked or removed from inboxes. With tools like ChatGPT and prompt engineering, an attacker can write a single prompt, generate an unlimited number of unique variations of the same lure, and even automate the process so that every phishing email is different.
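
To see why identical lure text was so easy to block, the sketch below shows a simplified, hypothetical text-signature check of the kind automated filters can apply. The normalization, hashing scheme, and sample lures are illustrative assumptions, not any vendor's actual detection logic.

```python
# Simplified, hypothetical example of signature-based lure matching:
# a hash of the normalized body acts as a reusable detection rule.
import hashlib
import re

def lure_signature(body: str) -> str:
    """Normalize whitespace and case, then hash the body into a signature."""
    normalized = re.sub(r"\s+", " ", body.strip().lower())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Signatures of lures already reported to the filter (hypothetical examples).
known_bad = {lure_signature("Your mailbox is full. Click here to upgrade now.")}

def is_known_lure(body: str) -> bool:
    return lure_signature(body) in known_bad

# An identical copy of a reported lure matches immediately...
print(is_known_lure("Your mailbox is  full. Click here to upgrade now."))  # True

# ...but a reworded variation produces a different signature and slips past
# this kind of exact-match rule, which is exactly the gap that unique,
# AI-generated variations exploit.
print(is_known_lure("Your inbox has reached its limit; please upgrade it."))  # False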

It is possible to identify whether a message was generated by an AI model, and researchers are actively developing such tools. While these tools may work against current models and be useful in specific situations, such as schools detecting AI-generated essays submitted by students, it is hard to see how they could be applied to email filtering, because workers around the world already use these models to compose legitimate business emails.
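
One common idea behind such detectors is to measure how statistically predictable a text is to a language model, since machine-generated text often has unusually low perplexity. The sketch below is a simplified heuristic assuming the Hugging Face transformers library with GPT-2 as the scoring model and an arbitrary threshold; it is not any specific detector's implementation.

```python
# Simplified sketch of a perplexity-based AI-text heuristic, assuming the
# Hugging Face "transformers" library and GPT-2 as the scoring model.
# Real detectors combine many more signals; this is illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return GPT-2's perplexity for the text (lower = more predictable)."""
    encodings = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the
        # average cross-entropy loss over the sequence.
        outputs = model(encodings.input_ids, labels=encodings.input_ids)
    return float(torch.exp(outputs.loss))

def looks_machine_generated(text: str, threshold: float = 30.0) -> bool:
    # The threshold is an arbitrary illustrative value, not a calibrated one.
    return perplexity(text) < threshold

sample = "Please review the attached invoice and confirm payment at your earliest convenience."
print(perplexity(sample), looks_machine_generated(sample))
```

In practice this kind of heuristic struggles precisely for the reason noted above: legitimate business emails drafted with AI assistance score the same way as malicious ones.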

The cybersecurity researchers were also able to use techniques such as social opposition, validation, opinion transfer, and fake news to create social media posts that harm the reputation of individuals or businesses. They succeeded in generating messages that lend legitimacy to scams, and in producing convincing fake news articles about events that were not even included in the bot's training data.
