Grok (the team) apologised for creating a single non-pornographic ("sexualised") image involving minors at a user's request. Something to be fixed (at least under current laws and morals), but it doesn't seem that bad.
Btw, the AI generation of pornographic images involving minors seems to be a perfect example of victimless crime.
> Btw, the AI generation of pornographic images involving minors seems to be a perfect example of victimless crime.
In what way? We’re not talking about fictional minors here, it’s taking real children and creating naked images using their face and body and publicly distributing it. Of course there’s a psychological effect on the minor in question.
Not sure if you're talking about this specific case (of which I don't know the details). I mean AI-generated pictures in general contain non-existent people, unless specifically asked otherwise.
Because (this is not my personal opinion, but my observations on LLM discourse):
1. As is usually the case, it is more or less accepted that AI and LLMs make mistakes. There have been cases of teenagers committing suicide at the suggestion and/or encouragement of chatbots. So this is just another of these "mistakes"
2. Going against Elon Musk means you go against free speech. This also means you will become a target of US government meddling, similar to how Donald Trump railed against the EU applying EU law to a company offering services in the EU (Twitter)
3. Elon Musk is also one of the most powerful men in the world, and getting banned from Twitter for reporting or sharing this is likely the best outcome you can hope for. The worst is harassment, as happened in the past to journalists who posted content unfavourable to Musk.
4. Grok issued an apology (as if AIs and LLMs had a conscience)
5. Grok and other AI chatbots are already being used for sexual purposes. Although they are intended to be used by adults and to produce content depicting adults, this is still (in my opinion) outrageous. I think legislation should prohibit AI chatbots from emulating intimate relationships with humans, but this hasn't happened, so this is a side effect of that.
6. It is already possible to generate CSAM through AI models. Cloud-hosted models tend to have guardrails, but people can modify, tune, and train open source models.
I am talking about this specific case, where CSAM is being generated from real photos of real children.