Lately, I’ve been reading a lot about nefarious uses for large language models like ChatGPT. One thing various writers are predicting is the use of AI tools to improve the quality of phishing attacks. I can understand their concerns. A lot of current phishing attacks can be spotted because of spelling and grammar errors, awkward phrasing and poor structure.
Generative AI has the potential to create well-written, hard-to-spot emails and texts on practically any subject. I began to wonder if tools like ChatGPT could also be used to automate time-consuming tasks like compiling information about a specific person.
I wanted to know whether a cybercriminal targeting me specifically could delegate the work of quickly gathering information about me to ChatGPT.
At the moment, it looks like OpenAI has guardrails in place (see screenshot above). I haven’t tested any of ChatGPT’s brethren, but I hope other developers will build their tools responsibly and do what they can to prevent harmful uses of their technology.
What other downsides will we see from these kinds of tools in the near future?