OpenAI is building a red teaming network to tackle AI safety – and you can apply

Trending 2 months ago
OpenAI Red Teaming

OpenAI

OpenAI’s ChatGPT has accumulated complete 100 cardinal users globally, highlighting some nan affirmative usage cases for AI and nan request for much regulation. OpenAI is now putting together a squad to thief build safer and much robust models. 

On Tuesday, OpenAI announced that it is launching its OpenAI Red Teaming Network composed of experts who tin thief supply penetration to pass nan company’s consequence appraisal and mitigation strategies to deploy safer models. 

Also: Every Amazon AI announcement coming you’ll want to cognize about 

This web will toggle shape really OpenAI conducts its consequence assessments into a much general process involving various stages of nan exemplary and merchandise improvement cycle, arsenic opposed to “one-off engagements and action processes earlier awesome exemplary deployments,” according to OpenAI. 

OpenAI is seeking experts of each different backgrounds to dress up nan team, including domain expertise successful education, economics, law, languages, governmental science, and psychology, to sanction only a few. 

Also: How to usage ChatGPT to do investigation for papers, presentations, studies, and more

But OpenAI says anterior acquisition pinch AI systems aliases connection models is not required. 

The members will beryllium compensated for their clip and taxable to non-disclosure agreements (NDAs). Since they won’t beryllium progressive pinch each caller exemplary aliases project, being connected nan reddish squad could beryllium arsenic insignificant arsenic a five-hour-a-year clip commitment. You tin use to beryllium a portion of nan web done OpenAI’s site. 

In summation to OpenAI’s reddish teaming campaigns, nan experts tin prosecute pinch each different connected wide “red teaming practices and findings,” according to nan blog post. 

Also: Amazon is turning Alexa into a hands-free ChatGPT

“This web offers a unsocial opportunity to style nan improvement of safer AI technologies and policies, and nan effect AI tin person connected nan measurement we live, work, and interact,” says OpenAI. 

Red teaming is an basal process for testing nan effectiveness and ensuring nan information of newer technology. Other tech giants, including Google and Microsoft, person dedicated reddish teams for their AI models.

More
Source