And yet they still specify that CSAM in private channels is totally fine by them
As a follow-up to our work on computer-generated CSAM, we took a closer look at the data used to train various generative models—most prominently, Stable Diffusion 1.5—to see to what degree CSAM itself might be present in the training data.
My comment on Jordan’s “Weaponization of CISA” report: “This report grossly misrepresents my work and the work of the MDM subcommittee — a volunteer group that served an advisory, not an operational role for CISA. …”
Hi, are we talking about AI risks this morning? Here’s a NYT story about my colleague @det.bsky.social https://www.nytimes.com/2023/06/24/business/ai-generated-explicit-images.html
A.I. companies have an edge in blocking the creation and distribution of child sexual abuse material. They’ve seen how social media companies failed.
In the course of conducting a large investigation into online child exploitation, SIO discovered serious failings with the child protection systems at Twitter. 1/6 https://www.wsj.com/articles/twitter-missed-dozens-of-known-images-of-child-sexual-abuse-material-researchers-say-58d44f7b
Social-media platform has now improved its detection system, Stanford Internet Observatory was told