OpenAI has announced that it will pay users up to $20,000 for identifying and reporting bugs found in its artificial intelligence products, especially the now-viral ChatGPT. The Microsoft-backed company has partnered with Bugcrowd, a bug bounty platform.
Bug bounty programmes, which are common in the tech industry, entail companies paying users for reporting bugs or other security flaws in tech products.
Under the programme, the company disclosed cash rewards ranging from $200 for “low-severity findings” to $20,000 for “exceptional discoveries.” It also said the programme would enhance transparency and collaboration, which it believes will help uncover weaknesses in its products.
“This initiative is an essential part of our commitment to developing safe and advanced AI,” wrote Matthew Knight, OpenAI’s head of security, in the blog post announcing the programme. “As we create technology and services that are secure, reliable and trustworthy, we would like your help.”
To keep the programme focused on genuine security threats, Bugcrowd’s page lists a number of safety issues that are out of scope for rewards. These include jailbreak prompts, attempts to coerce the AI model into writing malicious code, and queries designed to make the chatbot use inappropriate language towards users.
Origin of OpenAI’s bug bounty programme
On 16 March 2023, Greg Brockman, president and co-founder of the AI research laboratory, quoted a 22-year-old University of Washington student and jailbreak-prompt enthusiast who had created a site cataloguing jailbreak attempts and posted a Twitter thread on some of ChatGPT’s vulnerabilities. Brockman mentioned that OpenAI had been considering starting a network of “red-teamers” to detect weak spots.
OpenAI was established in 2015 and released ChatGPT to the public in November, sparking a surge of interest in AI software. Earlier this year, Microsoft, its major backer, committed to investing an extra $10 billion in the business and began integrating an OpenAI-powered chat service into its Bing search engine.
ChatGPT has been used to write college-level essays, poetry and computer code, to plan meals, and to create budgets, often with human-like accuracy.
However, ChatGPT has also been found to give false answers to queries and to contradict itself. Since its public release, users have tried to push the chatbot to its limits with “jailbreak” prompts, which attempt to cleverly bypass built-in safeguards intended to prevent harmful behaviour, such as producing hate speech or giving instructions for committing a crime.
OpenAI’s bug bounty programme is not the first of its kind: other companies have long offered bounties to people who uncover bugs in their systems. Amazon, AT&T, Bumble, Buzzfeed, Chime, Coinbase, and Google (with its Chrome browser) are among the tech companies that have walked this path.