
Credit: Andrew Grush / Android Authority

Welcome to What’s New in AI, our weekly update where we bring you all the latest AI news, tools, and tips to help you excel in this brave new AI-driven world. Let’s start by focusing on the biggest (and currently breaking) story of the week:

AI models may be trained on real images that depict child abuse

An alarming new report from Stanford’s Internet Observatory has found that the LAION-5B dataset contains at least 3,200 images of suspected child sexual abuse. So far, at least a thousand of those images have been confirmed by Stanford in collaboration with the Canadian Centre for Child Protection and other anti-abuse groups. What is most concerning is that this dataset is currently used to train tools like Stability AI’s Stable Diffusion and Google’s Imagen image generators.