OpenAI issues a statement after accusations and bans on the use of ChatGPT

You know the saying: the nail that sticks out gets hammered down? Well, it seems things have not been easy for OpenAI since its successful launch at the end of last year. In addition to countless criticisms on social media, a group of influential figures in global technology asked the company to pause AI development for a while. On top of that, the company was banned from operating in Italy last week. In response, the owner of ChatGPT published a statement addressing the concerns raised by experts: security and privacy. In a tone of mea culpa, OpenAI published last Wednesday (5) the measures it has taken to improve the technology.

The release, published on the company's website, covers topics including privacy, data security, the protection of children using the tool, and useful new features.


OpenAI's statement

The company recognizes that AI tools offer many benefits to people but also present real risks. Before releasing any new system, it says it carries out rigorous testing and brings in external experts for feedback.

OpenAI says it has adopted measures to protect children

Another critical focus of OpenAI is the protection of children. The company requires people to be 18 or older — or 13 or older with parental approval — to use its AI tools and has strict policies against hateful, harassing, violent or adult content, among other categories. 

In addition, the company says it makes significant efforts to minimize the potential for its models to generate content that harms children, and it works with organizations such as Khan Academy to build customized safety solutions for educational environments.
