ChatGPT Creator OpenAI Is Testing Content Moderation Systems

Content moderation has been one of the trickiest problems on the web for years. Deciding what content should be allowed on a given platform is inherently subjective, which makes it a difficult topic for anyone to approach. OpenAI, the company that created ChatGPT, believes it can help and has been testing GPT-4’s capacity for content moderation. It utilises the large multimodal model to build a content moderation system that is scalable, consistent and customizable.

Image Source: thestar.com

In a blog post, the company claimed that GPT-4 can not only assist with content moderation decisions but also help develop policies and rapidly iterate on changes to existing ones, cutting the process down from months to hours.

OpenAI says the model can interpret the many rules and nuances of content policies and adapt instantly to policy updates. According to the company, this leads to more consistently labelled content.

OpenAI asserts that GPT-4’s moderation capabilities enable businesses to complete about six months’ worth of work in a single day.

“We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of a large number of human moderators,” OpenAI’s Lilian Weng, Vik Goel and Andrea Vallone wrote. “Anyone with OpenAI API access can implement this approach to create their own AI-assisted moderation system.”

Source: engadget.com
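
OpenAI’s post does not include sample code, but the approach it describes, writing a platform policy as a prompt and asking GPT-4 to label content against it, is simple to sketch. The snippet below is a minimal illustration using the openai Python SDK; the policy text, the label names and the moderate() helper are hypothetical assumptions for this example, not OpenAI’s published moderation setup.

# A minimal sketch of AI-assisted moderation via the OpenAI
# Chat Completions API. Policy text, labels and model choice are
# illustrative assumptions, not OpenAI's published configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical platform policy written as plain text. In the workflow
# OpenAI describes, policy experts revise this document and re-run
# labelling to see how the model's judgments shift.
POLICY = """You are a content moderator. Classify the user's post
against this policy:
- S1: threats or incitement of violence
- S2: harassment targeting an individual
- OK: content that violates neither rule
Respond with exactly one label: S1, S2, or OK."""

def moderate(post: str, model: str = "gpt-4") -> str:
    """Ask the model to label a single post under the policy."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": post},
        ],
        temperature=0,  # keep labels as consistent as possible
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(moderate("I disagree with this article."))  # expected: OK

In the iteration loop the blog post describes, the model’s labels would be compared against a small set of human-labelled examples; disagreements point to ambiguities in the policy text, which can then be revised and re-tested in hours rather than months.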

As has been thoroughly documented, manually reviewing distressing material can take a serious toll on human reviewers’ mental health, especially when it involves graphic content. In 2020, Meta agreed to pay more than eleven thousand moderators a minimum of one thousand dollars each in compensation for mental health problems that may have resulted from reviewing content uploaded to Facebook.

“Judgments by language models are vulnerable to undesired biases that might have been introduced into the model during training. As with any AI application, results and output will need to be carefully monitored, validated and refined by maintaining humans in the loop,” OpenAI’s blog post reads.

Source: engadget.com

AI moderation systems are not flawless. Major platforms have long used artificial intelligence in their moderation pipelines, yet even with these technological advances they frequently make poor moderation calls. It will be interesting to see whether OpenAI’s technology can avoid the big moderation pitfalls that other companies have run into over the years.
