In a mass event at a hacker conference, hackers will be allowed to try to break AI models like ChatGPT. The developers will use that information to strengthen their models.
All of this is in coordination with the Biden administration and the White House's Blueprint for an AI Bill of Rights.
No sooner did ChatGPT get unleashed than hackers started “jailbreaking” the artificial intelligence chatbot — trying to override its safeguards so it could blurt out something unhinged or obscene.
But now its maker, OpenAI, and other major AI providers such as Google and Microsoft, are coordinating with the Biden administration to let thousands of hackers take a shot at testing the limits of their technology.
Some of the things they'll be looking to find: How can chatbots be manipulated to cause harm? Will they share the private information we confide in them with other users? And why do they assume a doctor is a man and a nurse is a woman?
“This is why we need thousands of people,” said Rumman Chowdhury, a coordinator of the mass hacking event planned for this summer’s DEF CON hacker convention in Las Vegas that’s expected to draw several thousand people. “We need a lot of people with a wide range of lived experiences, subject matter expertise and backgrounds hacking at these models and trying to find problems that can then go be fixed.”