Spotting bugs in ChatGPT can now earn users up to $20,000

ChatGPT creator OpenAI has announced that users of the artificial intelligence chatbot who flag bugs in the system will be rewarded with up to $20,000.

The company said on Tuesday that its new programme will reward users between $200 and $20,000 for finding software bugs within ChatGPT, OpenAI’s plugins, the OpenAI API and other related services.

“We are inviting the global community of security researchers, ethical hackers, and technology enthusiasts to help us identify and address vulnerabilities in our systems,” OpenAI said, adding that it will be offering cash rewards based on the severity and impact of the reported issues.

“Our rewards range from $200 for low-severity findings to up to $20,000 for exceptional discoveries,” it said.

Acknowledging that “vulnerabilities and flaws” can emerge in the complex technology, the American company said it has partnered with the bug bounty platform Bugcrowd to streamline the submission and reward process.

“We invite you to report vulnerabilities, bugs, or security flaws you discover in our systems. By sharing your findings, you will play a crucial role in making our technology safer for everyone,” OpenAI said.

There are also guidelines and rules of engagement released by the company for what won’t be rewarded.

These include getting the AI model to “say bad things to you” and “write malicious code”.

OpenAI urged users who find bugs to report the discovered vulnerabilities "promptly" and without attaching conditions.

“Do not engage in extortion, threats, or other tactics to elicit a response under duress,” it said in its rules of engagement.

The company’s latest move comes following reports of potential risks of data breach and privacy concerns with the use of the AI chatbot.

ChatGPT was banned in Italy last month, with authorities saying the AI service would be investigated for how the platform protects user data, especially those of minors.

Data regulators in Germany, as well as watchdogs in France and Ireland, have said they are also looking into the rationale behind Italy's ban on ChatGPT.

Several universities in Japan have also warned their faculty that when using generative AI tools like ChatGPT for assessing and translating unpublished research results, the data can be unintentionally leaked, "partially or completely", to the service provider.

“There is a risk that information that should not be leaked to the outside, such as information about the entrance examination and personal information of students and faculty members, will be transmitted to service providers through generation AI, etc, and there is a risk that it will be presented as an answer to other users,” Tohoku University said.