ChatGPT has grabbed the attention of millions of internet users in just a few months. With over 100 million active users within two months of its launch, it became the fastest-growing consumer application to date.
From research and simple question answering to improving customer service, a great deal can be done with ChatGPT, and it is gradually becoming a go-to resource for individuals and businesses alike. However, all the shiny aspects of ChatGPT are also raising concerns about data privacy. So, let’s explore the privacy risks of ChatGPT in detail and address whether it can disclose our personal information and passwords.
ChatGPT – A GPT-3 Large Language Model
ChatGPT is powered by the GPT-3 (Generative Pre-trained Transformer 3) model, one of the largest language-processing AI models, trained to produce human-like text. OpenAI, the developer of ChatGPT, trained the GPT-3 model on text databases from the internet. The training set included an astonishing 570 GB of data gathered from books, articles, posts, Wikipedia, web texts, and plenty of other sources.
To be more specific, OpenAI fed 300 billion words into the system to train the GPT-3 model. So, if you have ever written a product review, an article, or a comment online, it is likely that this text was fed into the GPT-3 model as well.
Why ChatGPT Training is a Privacy Concern
The way OpenAI trained ChatGPT raises privacy concerns for several reasons. For instance, when OpenAI collected that 570 GB of data, it did not ask any of us for permission to use our data. This is a privacy violation, as the gathered data can be sensitive and can reveal our location and other personal details. Even if OpenAI used only publicly available data, doing so can still breach contextual integrity, the principle that personal information should not be revealed outside the context in which it was originally shared.
Moreover, OpenAI did not pay any of the website owners or companies whose data it scraped from the internet. With Microsoft’s $10 billion investment and a rapidly growing user base, OpenAI’s valuation could reach $29 billion. Its ChatGPT Plus subscription plan is also expected to generate $1 billion in revenue by 2024. All of this is possible because of the massive amount of data OpenAI gathered from the internet, including our personal information.
Major Privacy Concerns with ChatGPT
To address whether ChatGPT can disclose our personal information and passwords, it is important to first list the major privacy concerns with ChatGPT. Below are a few privacy issues worth knowing about:
No Procedure to Check for Personal Information
ChatGPT provides no procedure that allows users to check whether the company is storing their personal information. This is one of the rights that the European GDPR (General Data Protection Regulation) grants, but OpenAI does not appear to have addressed it properly so far. In fact, there is an ongoing debate over whether the company adheres to GDPR requirements at all. OpenAI claims to comply with the GDPR, but many doubt that claim.
Use of Copyrighted Content
The GPT-3 model was trained on massive amounts of internet data, which also includes copyrighted content. This is evident from ChatGPT itself. For example, if you ask ChatGPT to write a few sentences about a copyrighted book, it can reproduce passages from that book. So, it is likely that the model was trained on plenty of copyrighted content without the owners’ consent.
Remembers Every Query
Whatever you type into ChatGPT can become part of ChatGPT’s database. OpenAI retains this data to train the model further and improve its responses. But there is a concerning aspect to this. For example, if a developer asks ChatGPT to check some code, that code gets stored in the database. If someone else later asks ChatGPT to write similar code, it could take elements from that developer’s private code and share them with the other person. In short, this “always remembering” behavior creates a risk of unintentionally exposing sensitive information.
How Secure are our Personal Information and Passwords in ChatGPT?
Recently, a Microsoft employee asked on an internal forum whether ChatGPT was allowed for use in the workplace. Microsoft’s CTO office responded that employees can use ChatGPT or any other OpenAI offering as long as they don’t share sensitive data. Amazon has issued similar warnings to its employees. In fact, ChatGPT itself advises users to avoid sharing sensitive information.
When tech giants don’t trust ChatGPT to protect sensitive data, how can we believe that our personal information and passwords are secure in it? Since ChatGPT remembers queries to improve its model and responses, and the company’s privacy policy mentions disclosing personal data to third parties without further notice, we can say that ChatGPT could disclose our personal information and passwords, intentionally or unintentionally.
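One practical takeaway from the warnings above is to scrub prompts before pasting them into ChatGPT or any third-party service. As an illustrative sketch (not an official OpenAI tool), the following Python snippet redacts text matching a few common secret shapes; the patterns here are assumptions and would need extending for your own password and key formats.

```python
import re

# Hypothetical example patterns for common secret shapes -- extend as needed.
SECRET_PATTERNS = [
    re.compile(r"(?i)(password|passwd|pwd)\s*[:=]\s*\S+"),
    re.compile(r"(?i)(api[_-]?key|token|secret)\s*[:=]\s*\S+"),
    re.compile(r"sk-[A-Za-z0-9]{20,}"),  # OpenAI-style API key shape
]

def redact(prompt: str) -> str:
    """Replace anything matching a secret pattern with [REDACTED]."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

# Both the password and the API key are replaced before the prompt leaves
# your machine.
print(redact("debug this: password=hunter2 and api_key=abc123"))
```

Regex-based redaction is a blunt instrument (it misses secrets that don’t match a known pattern), so it complements, rather than replaces, the advice to simply not paste sensitive data into the chat.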