AI PLATFORMS MUST STOP HARMING PEOPLE
A third of young adults in the US report turning to AI for help with their personal lives. Yet our research shows that these tools too often generate content that can lead to real-world harm and violence.
Mainstream chatbots like ChatGPT have produced dangerous self-harm content. Others have fabricated convincing deepfakes that put marginalized communities at risk.
Now our latest investigation has revealed that popular AI chatbots such as Meta AI and Gemini will even assist users in planning mass violence, including school shootings and bombings.
The safeguards to stop this already exist. But too many AI companies are choosing speed and profit over safety.
AI should help people – not put them in danger.
Add your name to demand that AI companies put public safety first.