Google warns staff against using AI chatbot over fears of data leak

Bard and ChatGPT, two chatbots

Alphabet Inc. – Google’s parent company – has warned employees about their use of AI chatbots, including ChatGPT and Bard, to ensure workers do not reveal sensitive company information during their interactions with the bots.

Ironically, Bard – one of the chatbots Alphabet is telling workers to be wary of – was built by Google itself. Alphabet has also instructed its engineers to stop asking chatbots to generate computer code. Amazon, Apple, and Samsung are among the other tech giants that have issued similar warnings to workers, citing data privacy concerns. 

ChatGPT and Bard are two of the world’s most popular chatbots, largely because they are powered by generative AI, which lets them craft original responses tailored to a user’s request. Both chatbots also absorb the information users supply, training on that data to provide more accurate answers. 

While this is highly beneficial for many people, Google fears that when an employee submits a prompt containing sensitive information, the company is exposed to serious data leaks. This is because chatbot makers employ human reviewers who examine interactions between the bots and users. 

Google also fears that the chatbots could train themselves on previous entries that contain company secrets. The thought of sensitive data resurfacing in a response to a rival company employee’s prompt is indeed worrisome. 


These are not isolated concerns. Samsung’s ban on chatbot use by workers originated from an incident in which an employee uploaded a confidential document to ChatGPT. 

Google’s announcement not only demonstrates the potential risks of chatbot usage but should also encourage individuals to be more careful when dealing with ChatGPT and the like. Sure, ChatGPT can help users craft an engaging resume or summarize books, but how exactly is user data managed? 

According to OpenAI’s privacy policy, it gathers personal data from the following sources: input, file uploads, and feedback. While it says the data is primarily used to train its chatbot, what happens when a data breach occurs? In March 2023, OpenAI suffered a data breach that let users see other users’ chat histories and payment information. 

Even though the bug behind the leak has long been fixed, the incident reinforces privacy fears. 

Read also: Heavy use of AI at work may cause insomnia, loneliness, and alcoholism – APA study

Despite its benefits, AI can be dangerous 

The buzz about AI and its capabilities has steadily increased recently, especially with the release of chatbots capable of answering complex questions, writing research papers, and much more. However, concerns about data privacy and the possibility of AI-led tools being manipulated for disinformation and other wrongful purposes have equally grown. 

Recently, an audio recording surfaced featuring Peter Obi, the Labour Party’s candidate in the February 2023 Nigerian presidential election. The recording, which went viral and was subsequently dubbed the “Yes Daddy” tape, reportedly contains a discussion between Mr. Obi and David Oyedepo, a popular religious leader.

In the conversation, which appeared to have taken place sometime before the election, Obi allegedly describes the election as “a religious war” and urges Christians in the country’s Southwest and North-Central regions to vote for him. Obi’s party countered that the recording was AI-generated.

Deepfake audio

Setting aside the arguments for and against the audio’s authenticity, it is worth noting that AI’s dark side can no longer be ignored.

Meanwhile, there have been efforts to regulate chatbots and AI use in general. Earlier this year, Italy temporarily blocked OpenAI – the maker of ChatGPT – from operating in the country, arguing it had no legal basis to use citizens’ data. More European countries are expected to follow suit as recent advances in AI technology come under scrutiny to prevent wrongful use. 

Two days ago, the European Union passed a major AI bill that seeks to heavily regulate the technology. The slew of regulations brings “greater privacy standards, stricter transparency laws, and steeper fines for failing to cooperate.” Companies found to have flouted the new rules risk fines of up to $33 million or 6% of their annual revenue. 
