Published: Oct 26, 2022
NOT ON THE CURRENT EDITION
This blip is not on the current edition of the Radar. If it was on one of the last few editions, it is likely that it is still relevant. If the blip is older, it might no longer be relevant and our assessment might be different today. Unfortunately, we simply don't have the bandwidth to continuously review blips from previous editions of the Radar.
Oct 2022
Assess

OpenAI's DALL·E caught everyone's attention with its ability to create images from text prompts. Now, Stable Diffusion offers the same capability but, critically, it's open source. Anyone with access to a powerful graphics card can experiment with the model, and anyone with sufficient compute resources can recreate the model themselves. The results are astounding but also raise significant questions. For example, the model is trained on image-text pairs obtained via a broad scrape of the internet and therefore will reflect societal biases, which means it may produce content that is illegal, upsetting or at the very least undesirable. Stable Diffusion now includes an AI-based safety classifier; however, given its open-source nature, people can disable the classifier. Finally, artists have noted that with the right prompts the model is adept at mimicking their artistic style. This raises questions about the ethical and legal implications of an AI capable of imitating an artist.
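For illustration, the sketch below shows what experimenting with the model on a local graphics card can look like. It assumes the Hugging Face diffusers library, the publicly released runwayml/stable-diffusion-v1-5 weights and a CUDA-capable GPU; none of these specifics come from the blip itself, and other distributions of the model work differently.

```python
# Minimal sketch: generating an image from a text prompt with Stable Diffusion,
# assuming the Hugging Face diffusers library and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

# Load the openly released weights; half precision keeps memory usage
# within reach of a consumer graphics card.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# Generate and save an image from a text prompt.
image = pipe("an astronaut riding a horse, oil painting").images[0]
image.save("astronaut.png")

# The pipeline ships with the safety classifier enabled by default; because
# the code is open source, it can be switched off (for example by passing
# safety_checker=None to from_pretrained), which is the concern noted above.
```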
