The EU is also investigating after Grok created an estimated 23,000 CSAM images

The EU has launched its own investigation into the Grok chatbot over its generation of child sexual abuse material (CSAM). Grok is estimated to have generated 23,000 CSAM images in just 11 days. Update: A second investigation has been opened in Ireland, focusing on possible privacy breaches.

Despite several calls for Apple and Google to temporarily remove both X and Grok from the App Store, neither company has yet…

Grok generated 23,000 CSAM images

Like most other AI chatbots, xAI’s Grok can generate images from text prompts, either directly in the app, on the web, or through X. Unlike other services, however, Grok operates with extremely loose guardrails that allow it to generate non-consensual semi-nude images of real individuals, including children.

Engadget reports that one estimate puts Grok’s output at around 23,000 CSAM images in just 11 days.

The Center for Countering Digital Hate (CCDH) published its findings. The British non-profit based its estimate on a random sample of 20,000 Grok images generated between December 29 and January 9, then extrapolated to the 4.6 million Grok images produced during this period (…)

Over the course of 11 days, Grok created an estimated 3 million sexualized images — including an estimated 23,000 of children.

In other words, Grok generated an estimated 190 sexualized images per minute during that 11-day period, including a sexualized image of a child every 41 seconds.

An EU investigation has been launched

Earlier this month, three US senators asked Apple CEO Tim Cook to temporarily remove both X and Grok from the App Store due to “disgusting content generation”. The company has not yet done so.

Two countries have blocked the app, and investigations have already been launched in California and the UK. The Financial Times reports that the EU has now opened an investigation of its own.

The investigation, announced on Monday under the EU’s Digital Services Act (DSA), will assess whether xAI adequately mitigated the risks of deploying Grok’s tools on X and of spreading content that “may constitute child sexual abuse material”.

“Non-consensual sexual depictions of women and children are a violent, unacceptable form of degradation,” said EU technology chief Henna Virkkunen.

If a company is found to have violated the DSA, it can be fined up to 6% of its annual global revenue.

Photo by Logan Voss on Unsplash

