January 29, 2023

Meta has built a massive new language AI—and it’s giving it away for free


Pineau helped change how research is published at several major conferences, introducing a checklist that researchers must submit alongside their results, including their code and details of how experiments are run. Since joining Meta (then Facebook) in 2017, she has championed that culture in its AI lab.

“The promise of open science is why I came here,” she says. “I wouldn’t be here on any other terms.”

Ultimately, Pineau wants to change how we judge AI. “What we call state-of-the-art today can’t just be about performance,” she says. “It has to be state-of-the-art in terms of responsibility as well.”

Still, handing out a large language model is a bold move for Meta. “I can’t tell you that there’s no risk of this model producing language that we’re not proud of,” Pineau says. “It will.”

Weighing risk

Margaret Mitchell, one of the AI ethics researchers Google forced out in 2020, who is now at Hugging Face, sees the OPT release as a positive step. But she thinks there are limits to transparency. Has the language model been tested with sufficient rigor? Do the expected benefits outweigh the foreseeable harms, such as the generation of misinformation, or racist and toxic language?

“Exposing a large language model to the world, where a wide audience can use it or be affected by its output, comes with responsibilities,” she says. Mitchell notes that the model could generate harmful content not only on its own but also through the downstream applications that researchers build on top of it.

Meta AI audited OPT to remove some harmful behaviors, but the point, Pineau says, is to release a model that researchers can learn from, warts and all.

“There were a lot of conversations about how to do this in a way that lets us sleep at night, knowing that there is a non-zero risk in terms of reputation, a non-zero risk in terms of harm,” she says. She dismisses the idea that a model should not be released because it is too risky, which is the reason OpenAI gave for not releasing GPT-3’s predecessor, GPT-2. “I understand the weaknesses of these models, but that’s not a research mindset,” she says.
