“The LAION-5B machine learning dataset used by Google, Stable Diffusion, and other major AI products has been removed by the organization that created it after a Stanford study found that it contained 3,226 suspected instances of child sexual abuse material, 1,008 of which were externally validated”
The dataset is a massive part of the AI ecosystem, used by Google and Stable Diffusion. The removal follows discoveries made by Stanford researchers, who found thousands of instances of suspected child sex...
Oh, so they _can_ remove data from training sets. How about that.
Let’s not forget this too: apnews.com/article/ai-a...
A new report warns that the proliferation of child sexual abuse images on the internet could become much worse if something is not done to put controls on artificial intelligence tools that generate d...
Fucking idiots. Indiscriminate scraping of data like this is a totally amateur move I’ve only seen done by dumb college students who had to learn their lesson the hard way.
Well this is incredibly fucking disturbing
So CS has learned only the most basic lessons from the history of the Lena image? Great 😐
Google, Yahoo, Facebook, Dropbox, etc. regularly filter these hash values and report users. Stunning that there was no check done here.
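For context on what that filtering looks like: a minimal sketch, assuming a simple exact-match check against a set of known-bad hashes. (The hash set and function names here are hypothetical; production systems like those at the companies above use perceptual hashes such as PhotoDNA, which also catch visually similar images, not just byte-identical files.)

```python
import hashlib

# Hypothetical set of known-bad SHA-256 digests. Real deployments pull
# from curated databases (e.g. NCMEC hash lists) and use perceptual
# hashing rather than plain cryptographic hashes.
KNOWN_BAD_HASHES = {
    # SHA-256 of the empty file, used here purely as a stand-in entry
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def should_block(file_bytes: bytes) -> bool:
    """Return True if this file's SHA-256 digest is in the blocklist."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES

# An empty file matches the stand-in entry above; other content does not.
print(should_block(b""))
print(should_block(b"some other file"))
```

Even a check this crude, run at scrape time, would have flagged exact copies of known material; the Stanford finding of 1,008 externally validated instances suggests nothing of the sort was in the pipeline.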
wow guys who would have thought ai would be used for unsavory and evil ends
wait wait wait how did an industry filled with amoral grifters, nepo babies, frat boy business school date r*pists and soulless corporate shills allow something like this to happen??