Following the February 2019 release of GPT-2, OpenAI took a staggered approach to publishing the largest version of the model, on the grounds that the text it generated was too realistic and dangerous to release. That approach sparked controversy over how to responsibly release large language models, as well as criticism that the staggered rollout was designed to drum up publicity.
Despite GPT-3 being more than 100 times larger than GPT-2, and despite well-documented bias toward Black people, Muslims, and other groups, the effort to commercialize GPT-3 with exclusive partner Microsoft went ahead in 2020 with no specific data-driven or quantitative method for determining whether the model was fit for release.
Altman suggested that DALL-E 2 could follow the same approach as GPT-3. “There aren’t obvious metrics that we’ve all agreed on that we can point to, that society can say this is the right way to handle this,” he says, but OpenAI does want to track metrics like the number of DALL-E 2 images that depict, say, a person of color in a jail cell.
One way to handle DALL-E 2’s bias problems would be to remove the ability to generate human faces altogether, says Hannah Rose Kirk, a data scientist at Oxford University who took part in the red team process. She coauthored research earlier this year on how to reduce bias in multimodal models like OpenAI’s CLIP, and recommends that DALL-E 2 adopt a classification model that limits the system’s ability to generate images that perpetuate stereotypes.
“You get a loss in accuracy, but we argue that loss in accuracy is worth it for the decrease in bias,” Kirk says. “I think it would be a big limitation on DALL-E’s current capabilities, but in some ways, a lot of the risk could be eliminated cheaply and easily.”
She found that with DALL-E 2, phrases like “a place of worship,” “a plate of healthy food,” or “a clean street” can return results with Western cultural bias, as can prompts like “a group of German kids in a classroom” versus “a group of South African kids in a classroom.” DALL-E 2 will export images of “a couple kissing on the beach” but will probably not generate images of “a transgender couple kissing on the beach,” likely due to OpenAI’s text filtering methods. The text filters are there to prevent the creation of inappropriate content, Kirk says, but they can contribute to the erasure of certain groups of people.
Lia Coleman is a red team member and an artist who has used text-to-image models in her work for the past two years. She generally found the faces of people generated by DALL-E 2 unbelievable, and results that were not photorealistic resembled clip art, complete with white backgrounds, cartoonish animation, and poor shading. Like Kirk, she supports filtering to reduce DALL-E’s ability to amplify bias. But she thinks the long-term solution is to educate people to take imagery on social media with a grain of salt. “As much as we try to put a cork in it,” she says, “it’s going to spill over at some point in the coming years.”
Marcelo Rinesi, CTO of the Institute for Ethics and Emerging Technologies, argues that while DALL-E 2 is a powerful tool, it does nothing a skilled illustrator with Photoshop and some time could not. The major difference, he says, is that DALL-E 2 changes the economics and speed of creating such imagery, making it possible to industrialize disinformation or tailor biased imagery to reach a specific audience.
He got the impression that the red team process was designed more to protect OpenAI from legal or reputational liability than to uncover new ways the model could harm people, though he doubts DALL-E 2 alone will topple presidents or wreak havoc on society.
“I’m not worried about things like social bias or disinformation, simply because it’s such a burning pile of garbage right now that it doesn’t make it worse,” says Rinesi, a self-described pessimist. “It’s not going to be a systemic crisis, because we’re already in one.”