OpenAI under investigation for data privacy violation and false ChatGPT responses

OpenAI, creator of the popular conversational AI tool ChatGPT, is being investigated by the Federal Trade Commission (FTC) to determine whether the chatbot violated data privacy laws and shared false information about certain people. Although ChatGPT has dominated headlines thanks to its abilities, its rise to the top has been marked by controversies. 

In a 20-page document called a “Civil Investigative Demand” sent to OpenAI, the U.S. agency requested company records on how it mitigates risks related to its AI models. It’s worth noting that this investigation is the AI company’s most significant face-off with a regulatory agency in the country. Among other things, the document asks OpenAI to explain how it obtained the data used to train the large language models that power ChatGPT, whether through data scraping or purchases from third parties.

For clarity, data scraping is the use of automated programs to extract data from public websites for purposes such as training AI models or targeted advertising. Data scraping is generally permitted in the U.S., though Twitter recently filed a suit against four unnamed entities for extreme data scraping.
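
For a concrete picture of how simple scraping can be, here is a minimal Python sketch (assuming the requests and beautifulsoup4 libraries, with a placeholder URL) that fetches a public page and extracts its text:

```python
import requests
from bs4 import BeautifulSoup

# Illustrative sketch only. The URL is a placeholder, not a real data
# source; large-scale scrapers crawl many sites and should respect
# robots.txt and each site's terms of service.
URL = "https://example.com/some-public-page"

response = requests.get(URL, timeout=10)
response.raise_for_status()  # stop if the page did not load

soup = BeautifulSoup(response.text, "html.parser")

# Pull the visible paragraph text -- the kind of content that could end
# up in a training corpus or an ad-targeting profile.
paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]
scraped_text = "\n".join(paragraphs)

print(scraped_text[:500])  # preview the first 500 characters
```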

In response, OpenAI’s CEO, Sam Altman, tweeted that his company would cooperate with the FTC to demonstrate that the technology used to power ChatGPT and other products is “safe and pro-consumer.”

The FTC document also asks the tech company to explain the steps it has taken to measure the extent to which ChatGPT can make inaccurate and misleading statements about people. This part of the investigation aims to determine whether the chatbot could be used to damage people’s reputations. The FTC also asks OpenAI to shed light on its products and how they’re marketed, and requests insights into how new products are vetted before launch.

If OpenAI is found to have violated the law, the FTC can either fine the company or place it under a consent decree, an agreement that would prescribe how it handles data going forward. Interestingly, Amazon was fined $25 million last month for failing to comply with child data privacy laws, and fellow tech giants Meta and Twitter have also been fined for similar offenses.

Aside from data privacy infringement and potentially false ChatGPT output, the FTC also asked OpenAI to provide records of a bug in March this year that briefly exposed the chat histories and payment details of some active users. Addressing the situation, OpenAI’s official statement said in part: “It was possible for some users to see another active user’s first and last name, email address, payment address, credit card type and the last four digits (only) of a credit card number, and credit card expiration date. Full credit card numbers were not exposed at any time.”

While the company’s statement said the incident affected 1.2% of ChatGPT Plus subscribers, the FTC asked for the specific number of affected users, the number of complaints it received over the matter, and any changes to its policies since the incident, among other details.

Now’s the time for comprehensive AI regulation

Yesterday, Technext reported that a businessman in Tanzania sued Vodacom, his telecom operator, after discovering vast amounts of his personal data on ChatGPT, allegedly collected without his consent. His case and many others underscore the threat AI models pose in an unregulated environment.

The fight against AI tools like ChatGPT

Ideally, ChatGPT can help a student with homework or an employee with a task at work. But how does the bot handle the data users feed it, and how safe is users’ personal information? Two months ago, Samsung forbade its staff from using ChatGPT and other bots after a security incident in which a worker uploaded sensitive code to the platform. A few other companies have followed suit.

Recall that ChatGPT is trained on information from many sources, including public websites and conversations with users, so a company secret ending up on a public platform can have serious consequences. That’s why governments should prioritize AI regulation to ensure that companies handle data correctly. As AI adoption spreads, stakeholders must strive to keep it in check. The EU recently passed a major AI regulation bill that promotes data privacy and prescribes hefty fines for erring companies. The U.S. and the rest of the world should follow suit.

