Recently, we’ve seen a lot of online buzz regarding the use of ChatGPT as a password generator. While ChatGPT has proven that it can be a great tool, it’s also demonstrated some glaring weaknesses (AI hallucinations are no joke).
So, is it really a good idea to trust something as sensitive as password creation to ChatGPT's AI, especially when OpenAI collects personal information included in your ChatGPT input? The jury is still out, but here's what we do know:
Initially, ChatGPT was trained on information extracted from a wide variety of data sources, with a training-data cutoff in 2021. Since its launch, however, the AI chatbot has continued to analyze user input to improve its interactions.
Now, people are going to ChatGPT for everything from content creation to financial advice (neither of which we’d recommend by the way). However, the technology isn’t a miracle source of unending accurate information. On the contrary, because it bases its responses on input from people, it’s flawed by design.
And this doesn’t bode well for those who are starting to rely on ChatGPT to generate secure passwords.
The upside to using ChatGPT as a secure password generator is its ability to quickly produce passwords that fit your specifications. You can provide a simple prompt asking for strong 12-character passwords with a mix of letters and numbers, and immediately get a list of suggestions.
Why use ChatGPT instead of an actual random password generator designed specifically for this purpose? We have no idea.
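For perspective, a purpose-built random password generator is just a few lines of code running locally, drawing on a cryptographically secure random source rather than a language model's predictions. Here's a minimal sketch in Python (the function name and defaults are ours, for illustration only):

```python
import secrets
import string

def generate_password(length: int = 12) -> str:
    """Build a password from letters and digits using the OS's
    cryptographically secure random number generator."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run, never logged anywhere
```

Because the randomness comes from `secrets` (backed by the operating system's CSPRNG) and nothing leaves your machine, there's no prompt history, no training data, and no shared list of "best passwords" for anyone else to request.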
The downsides carry more weight than the pros. Because ChatGPT is constantly being influenced by the input of people, it relies heavily on their responses to generate new content.
So, if you ask it for a list of the 10 most secure passwords it can produce, you may get the same list as anyone else who asks the same question. That may not seem like a big deal. But if a cybercriminal asks ChatGPT the same question, they now have a fresh list of common passwords to try against different accounts. Maybe the risk isn't sky-high right now, but any time people create a cool piece of technology that relies on data, criminals find a way to abuse it.
Criminals have already been using ChatGPT as a way to create phishing content, impersonate certain professionals, write or improve on malware, and as a means to commit fraud. All of this depends on a combination of the original data ChatGPT learned from, and the new data that’s being constantly put in.
If people are asking ChatGPT to generate “strong” passwords, then the bot has access to these as well. And, any data you input lives in the OpenAI system until you decide to delete your entire account. While your conversations themselves may be deleted, the lessons the chatbot learned from them continue to influence interactions.
Truly secure password generators and managers don't store your random passwords in forms that are easily accessible to hackers, but ChatGPT isn't held to the same standards. In fact, on March 20, 2023, ChatGPT had its first major data breach, which may have exposed the personal info and prompts of more than a million users.
If those prompts contained a list of “secure passwords” someone had used across platforms, then hackers could easily attach those to a user’s personal information to gain access to accounts.
Needless to say, sharing sensitive information of any kind with ChatGPT is a risk no one should take.
With all of this in mind, you're much better off using a traditional secure password generator with proper safeguards in place (we're partial to Cloaked).
ChatGPT can be a fun way to spend your lunch hour or even a good way to quickly generate content, but it hasn’t reached a level of security that inspires trust. And with its reliance on data, it probably never will.
Join the Cloaked Community and stay on top of the latest news, privacy tips, product updates, and conversations around online security.