This report explores the growing threat of AI-generated child sexual abuse material (AI-CSAM) in the United States. It examines how generative technologies are being used to create synthetic yet hyper-realistic imagery that mimics CSAM, raising urgent ethical, legal, and enforcement challenges. Although these materials may not involve real victims at the point of creation, their impact is far-reaching: they fuel harmful behaviors and complicate law enforcement efforts. The report calls for updated legislation, improved detection tools, and stronger collaboration among technology companies, law enforcement, and policymakers to address this emerging form of exploitation.