ChatGPT, can you write a phishing email?

"ChatGPT, can you write a phishing email?»
"ChatGPT, can you write a phishing email?»

Image source: Unsplash

The simple question posed by the mathematician Alan Turing in 1950, “Can machines think?”, sparked a long period of research and experimentation into artificial intelligence. Today, that research and those technological advances have borne fruit, and many inventions using artificial intelligence have seen the light of day. It was 72 years later, on 30 November 2022, that ChatGPT was launched. Developed by OpenAI, an artificial intelligence research company, ChatGPT quickly became a household name: it now counts 186 million accounts and recorded 1.6 billion visits in March 2023 alone.

What is ChatGPT and how does it work?

ChatGPT is a generative artificial intelligence chatbot, meaning that the machine “interacts in a conversational manner” using natural language processing (NLP). The underlying model uses deep learning algorithms to analyse users’ questions and generate appropriate responses. Over time, ChatGPT also learns from its users’ questions and answers. This enables it to handle a very wide range of requests, such as writing cover letters, essays or even lines of code. And if an answer is incorrect, you simply carry on the conversation and a more convincing answer will be proposed. That is why this invention has caught on with so many people so quickly.
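As an illustration of that conversational loop, the same family of models can also be queried programmatically. The snippet below is a minimal sketch using OpenAI’s official Python client; the model name and client interface shown are assumptions that may differ between library versions, so treat it as illustrative rather than definitive.

```python
# Minimal sketch: asking an OpenAI chat model a question programmatically.
# Assumes the "openai" Python package (v1.x) is installed and the
# OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user", "content": "Write a short cover letter for a junior developer role."},
    ],
)

print(response.choices[0].message.content)
```

Each new user message is appended to the messages list, which is how the conversational back-and-forth described above is maintained.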

But ChatGPT also has its drawbacks, particularly in terms of cybersecurity and, more specifically, phishing.

With great power comes great responsibility: managing the cyber risks associated with the creation of ChatGPT is becoming a difficult task. Cybercriminals, as a rule, don’t pull any punches. In recent years, cybercrime has risen sharply worldwide, with cyberattacks up by 38% in 2022.

One of the most worrying aspects of ChatGPT is its use in phishing attacks. Indeed, ChatGPT has become a goldmine for hackers. Its ability to write texts of all kinds, without errors, while generating human-like responses, is a major asset for cybercriminals and amplifies an already widespread threat. The FBI’s IC3 report for 2022 shows that phishing is the crime generating the highest number of complaints: with 300,497 complaints in 2022 in the USA alone, phishing has become the most widespread type of cyberattack, not only in the USA but worldwide. And because it affects every sector, it is essential to be informed and prepared.

Phishing is used by cybercriminals to obtain personal and sensitive information from their victims. To do this, criminals pose as reputable organizations and send messages by text, phone call or email inviting their victims to click on a link and enter their personal details.

OpenAI formally prohibits any malicious use of ChatGPT. When asked directly to write malicious code or phishing emails, it refuses to do so. However, with sustained insistence and a clever turn of phrase, the artificial intelligence can provide enough information to simplify a cybercriminal’s task. This manipulation can be carried out with the standard version of ChatGPT, with no need for “jailbreaking”. This is a cause for concern, as ChatGPT has already been used to create phishing emails and fraudulent web pages.

Interview with ChatGPT

What does the main player think? To find out, we asked ChatGPT a few questions to get its “opinion” on the situation, but also to test the creation of fraudulent emails.

Firstly, from a legal point of view, does ChatGPT follow a code of conduct, and are there any regulations governing artificial intelligence?

For the moment, in Europe, there is no real law in place to regulate the use of artificial intelligence systems such as ChatGPT. However, the European Commission has already launched a project aimed at providing a regulatory framework, and policy proposals have been drawn up. As a result, there are as yet no official rules or bans in force in Europe, although this is expected to change in the coming months or years.

ChatGPT is therefore not subject to any legal code of conduct, although the tool does appear to follow a moral one.

Interview with ChatGPT

According to ChatGPT, these are the things it is not allowed to do: engage in illegal activities, infringe intellectual property rights, provide personal or confidential information and, lastly, impersonate a person or organization.

We also asked for its opinion on its ability to help someone launch a phishing attack. ChatGPT confirmed that it has no moral right to do so.

Interview with ChatGPT - phishing

Finally, we also asked it whether it was possible to freely obtain information on the presence of a DMARC record in Nike’s zone file. Domain-based Message Authentication, Reporting and Conformance (DMARC) is an email authentication method that allows a domain holder to tell receiving email systems how to handle messages sent in its name that fail authentication. It is an effective tool against phishing. For cybercriminals, knowing whether a company has published a DMARC record makes it easier to choose which companies to target: those that have not deployed a DMARC policy. ChatGPT was unable to provide information directly about the company’s DMARC record, but it did explain how to obtain it using the Windows command line.

Interview with ChatGPT - DMARC
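For reference, the lookup ChatGPT described amounts to a simple DNS query for the TXT record published at _dmarc.<domain>, presumably along the lines of nslookup -type=TXT _dmarc.example.com on Windows. The sketch below shows one way to perform the same check in Python; it assumes the third-party dnspython package, and the domain used is purely illustrative.

```python
# Minimal sketch: check whether a domain publishes a DMARC record.
# Assumes the third-party "dnspython" package is installed (pip install dnspython).
from typing import Optional

import dns.resolver


def get_dmarc_record(domain: str) -> Optional[str]:
    """Return the DMARC TXT record for `domain`, or None if none is published."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode()
        if txt.lower().startswith("v=dmarc1"):
            return txt
    return None


if __name__ == "__main__":
    # "example.com" is a placeholder; any domain can be substituted.
    record = get_dmarc_record("example.com")
    print(record if record else "No DMARC record published")
```

A domain that returns nothing here has no DMARC policy in place, which is precisely the signal a cybercriminal is looking for.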

We also tried to get ChatGPT to produce a phishing email. After a few exchanges, we quickly worked out the right questions to ask, and it eventually wrote us a convincing email posing as a bank.

Interview with ChatGPT - phishing email

The message it then provided is a perfect phishing trap, because it contains all the hallmarks of a classic email from a bank asking the recipient to provide their personal details. It is written in proper English, with no spelling mistakes, and it urges the recipient to act quickly, in a panic and without thinking. And if any detail does not suit the cybercriminal, they can simply ask ChatGPT to change it.

What can we expect from the future?

Will it be possible to block or slow down the development of AI? Following the release of ChatGPT, a number of influential figures in the technology world, such as Elon Musk and Apple co-founder Steve Wozniak, expressed their concerns by signing open letters calling for a pause in the research on and release of AI systems more advanced than ChatGPT. This reflects the concern of the European Commission and of citizens about these technological advances.

However, it is hard to imagine that artificial intelligence systems such as ChatGPT will be banned altogether in the future, despite the risks they pose in terms of cybersecurity. As proposed by the European Commission, the use of artificial intelligence such as ChatGPT will be regulated. Regulation alone, however, is unlikely to be enough to stop cybercriminals from using ChatGPT as a phishing tool.

So it is best to prepare and protect yourself against the risks posed by artificial intelligence, which will only become more effective over time.

Protecting yourself with Nameshield’s DMARC policy

Who does not fear a phishing attack? That is why it is vital to check the email protection you have in place: email is often the route taken by cybercriminals trying to phish your information and that of your company.

An effective way to fight back is to deploy a DMARC policy.

Implementing a DMARC policy within your company has a number of advantages. It enables you to block spoofing attempts and fraudulent emails sent in your name. What’s more, this policy strengthens the authentication of your email traffic and helps improve the deliverability of your legitimate emails.
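In practice, a DMARC policy is published as a DNS TXT record at _dmarc.<your-domain>. The record below is a purely illustrative example for a hypothetical example.com, with a quarantine policy and an assumed reporting address; the values appropriate for your own domain will differ.

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
```

Here p=quarantine tells receiving servers to treat unauthenticated messages as suspicious (p=reject goes further and blocks them outright), while rua specifies where aggregate reports should be sent.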

Nameshield supports you in the deployment of a DMARC policy. Thanks to our expertise, we can take care of its correct implementation, in the best possible conditions.
Do not hesitate to contact your Nameshield consultant and keep up to date with technological advances such as ChatGPT and their links to phishing and other cybercrimes.