I found that the members of r/luckystar were saying very sexual things about characters who, while 18, had very childlike designs. And even setting that aside, they were saying shit like, “Cunny!” and, “Correction!” so, yeah.
I constantly reported every post and comment like that I came across, and eventually made a post asking why they were saying those things. I wasn’t hostile; in fact, I was rather calm about the situation. Meanwhile, they were all extremely rude and hostile toward my post, calling me names and such, and about 20 minutes later, my account was shadowbanned.
So, not only did Reddit moderation side with the pedos, they didn’t even have the decency to tell me what I did wrong. Given how suddenly it all happened, I’m just gonna assume the pedos mass-reported my account for God knows what. Any and all appeals I’ve made since then have received no response.
I still stuck around for a few months after that (that whole deal happened back in May of 2025), but the extreme leftist political dooming invading every corner of the site made it absolutely miserable to use.
Tonight, I finally put my foot down and said, “I quit!” I discovered this website and made an account here. I guess you can use this post to welcome me to the Fediverse.


But you’re okay with the content of the video being shared publicly, in principle, right?
Hmmm…
I mean, purely on principle? Sure. No one would have been harmed apart from the environmental damage, and once that’s done, nothing will undo it.
Psychological damage purely from exposure to and normalization of that kind of content, probably not ideal.
The muddying of the waters around Epstein’s guilt, also bad. (“That was fake, so any other news must also be fake.”)
Apart from the above sorts of things (though maybe there are others I didn’t think of off the top of my head): as long as no one watches it, it’s no more harmful than the sentence describing the idea in the first place.
So let’s remove the additional impacts to test the standard. Hypothetically, someone creates a tool that is freely available, has no environmental impact, doesn’t rely on problematic training data, or any of that.
It’s free for anyone to use to create and distribute photo-realistic images of what look exactly like 6-year-old girls in sexual situations.
You’re okay with this, because it’s not real and it’s just pictures on a screen. Right?
Sure, I guess, although it’s kind of inextricably linked to the damage of actually using it.
How is that different when it’s a drawing?
Because it’s less real. The amount of harm and/or damage is proportional to the realism. Using that shitty line drawing I made as an example: if I say the lines represent something objectionable, would that make it so? No, not really.
The closer to real, the greater the psychological damage to the viewer. However, it still does no actual harm to anyone else.
And the production of actual CSAM does harm children.
Like, this seems like blatantly obvious stuff: no one is harmed by someone making lines on paper. (Or, with modern tech, lines on a screen, but the idea is the same.)
So you agree that your line example is bad then. The lines are “less real” than a more detailed drawing.
I mean if you want to interpret some shitty line drawings as CSAM, knock yourself out.
The point I was trying (and clearly failing) to make is that judging images by their labels is stupid, but so is leaving their appearance entirely open to interpretation.
Hell, I hadn’t even considered LLMs, where a text description alone would be a problem, since an LLM could use it to generate an image.
I mean, if you want to say that sexual drawings of children are basically the same as a line, knock yourself out.
Pretty fucked up take though.