— Weird Dall-E Mini Generations (@weirddalle) July 25, 2022
According to reports, Dayma plans to launch a premium version of his image generator.
“I would get interesting feedback and suggestions from the AI community,” Dayma told NPR. “And it became better, and better, and better” at generating images until it reached what Dayma referred to as “a viral threshold.”
This April, OpenAI revealed a more powerful AI text-to-image tool, DALL-E 2. According to the company, this successor to DALL-E can produce photorealistic images and illustrations that look as if a professional artist made them.
Partly because of its concerns about misuse, OpenAI has provided access to DALL-E and DALL-E 2 only to select users, including artists and computer scientists, who must abide by stringent rules. As of July 12, the company had invited 100,516 people to try DALL-E. It says this approach lets it “learn about the technology’s capabilities and limitations.”
Paleontologists discovering a giant human skull made with DALL-E 2 https://t.co/cbrAJrSJm0
— Best Dalle2 Pics (@Dalle2Pics) July 10, 2022
Several developers have acknowledged the potential for harm when such generators fall into the wrong hands. Deepfakes, convincing machine-learning-generated false images of celebrities and politicians, have become a concern for AI researchers, lawmakers, and nonprofits that work on online abuse and harassment.
Prof Toby Walsh, an AI researcher and author of a book on the morality of AI, told The Guardian that the kind of technology that powers DALL-E makes it easier to create fake images. “We are seeing deep fakes being used all the time, and the technology is going to allow still images, but ultimately also video images, to be synthesized [more easily] by bad actors,” he said.
Users have entered questionable prompts, resulting in images that probably shouldn’t exist. OpenAI says it filters the system’s training data and blocks keywords that could produce explicit content. It has implemented a safety policy that filters out images that could be sexual, violent, or otherwise inappropriate, along with a second filter that tries to remove images that could be propagandistic. The DALL-E 2 team also reviews every image to weed out anything inappropriate.