I'm not much on X anymore because of the vitriol, and visiting now pretty much proved it. Beneath almost every trending post made by a woman is someone using Grok to sexualize a picture of her.
(And whatever my timeline has become now is why I don't visit more often, wtf, used to only be cycling related)
I left when they started putting verified (paid) comments at the top of every conversation. Having the worst nazi views front and center on every comment isn't really a great experience.
I've got to imagine that Musk fired literally all of the product people. Pay-for-attention was just such an obviously bad idea, with a very long history of destroying social websites.
Even on that theory, _not long term_, because for that sort of thing to work you still have to draw victims in, and breaking it as a social website will tend to discourage that.
Twitter also doesn't need users to fulfil Musk's goal of destroying a key factor in many (all?) people-powered movements. Occupy Wall Street, Arab Spring, BLM.
And I'm sure that's a factor in why people like Larry Ellison and Saudi princes stumped up some of the money.
The problem is that the media still uses X in its reporting, and people still use and link to it. If we just stopped using X and stripped it of any legitimacy, it would fall off pretty quickly.
To be fair, as someone who used to manage an X account for a very small startup as part of my role (glad that's no longer the case), for a long time (probably still the case) posting direct links would penalize your reach. So making a helpful, self-contained post your followers might find useful was algorithmically discouraged.
Everything that is awful in the diff between X and Twitter is there entirely by decision and design.
Vagueposting is a different beast. There’s almost never any intention of informing etc; it’s just: QT a trending semi-controversial topic, tack on something like “imagine not knowing the real reason behind this”, and the replies are jammed full of competitive theories as to what the OP was implying.
It’s fundamentally just another way of boosting account engagement metrics by encouraging repliers to signal that they are smart and clued-in. But it seems to work exceptionally well because it’s inescapable at the moment.
Vague posting is as old as social networks. I had loads of fun back in the day responding to all the "you know who you are" posts on Facebook when they clearly weren't aimed at me.
They also don’t take down overt Nazi content anymore. Accounts with all the standard unambiguous Nazi symbologies and hate content about their typical targets with associated slurs. With imagery of Hitler and praises of his policies. And calls for exterminating their perceived enemies and dehumanizing them as subhuman vermin. I’ve tried reporting many accounts and posts. It’s all protected now and boosted via payment.
Inviting a debate about what it was or wasn't only leads to a complete distraction over interpretation of a gesture, when the dude already digs his own hole more deeply and more clearly in his feed anyway.
There's no debate. It was the most obvious nazi salute you could do. The only people who say it's not are nazis themselves, who of course delight in lying. (See the Sartre quote.)
My comment was in response to the debate already starting, so it's quite bold to claim no debate will be had (i.e. "debate" does not mean "something I personally am on the fence about"; it's something other people will hold in response to your views). Whether there will or won't be debate about something is (thankfully) not something you or I get to declare. It just happens or doesn't, and it already had - and so it remains.
I'm sure "The only people who say it's not are <x>" is an abominable thought pattern Nazis and similar types would love everyone to have. It makes for a great excuse to never weigh things on their merits, so I'm not sure why you feel the need to invoke it when the merits are already in your court. I can't look at these numbers https://i.imgur.com/hwm2bI5.png and conclude most Americans are Nazi's instead of being willing to accept perhaps not everyone sees it the same way I do even if they don't like Nazis either.
To any actual Nazi supporters out there: To hell with you
To anybody who thinks either everyone agrees with what they see 100% of the time or they are a literal Nazi: To hell with you as well
The majority of people who had an opinion (32%) said it was either a Roman salute or a Nazi salute (which are the same thing). Lots of people had no idea (probably cuz they didn't pay attention). Only 19% said it was a "gesture from the heart", which is just parroting what Elon claimed, and I discount those folks as they are almost certainly crypto-Nazis.
So yeah, I believe there are a LOT of Nazi-adjacent folks in this country: they're the ones who voted for Trump 3 times even after they knew he was a fascist piece of garbage.
A few minor cleanups - I personally don't think they change anything (really, these stats themselves lack the ability to do that anyway), but I want to note them because this is the exact kind of Pandora's box opened by focusing on this specific incident (a quick numeric check follows the list):
- Even assuming all who weren't sure (13%) should just be discounted as not having an opinion, like those who had not heard about it (22%), 32% is still not a majority of the remaining (100% - 13% - 22% = 65%). 32% could have been a plurality of those with an opinion, but since you insisted on lumping things into 3 buckets of 32%, 35%, and the remaining %, the remaining 33% would actually take the plurality of those who responded with opinions, by this definition.
N.b. if read straight from the sheet, "A Nazi salute" would already have had a plurality. Though grouping like this is probably the more correct thing to do, it actually ends up significantly weakening the overall position of "more people agree than not" rather than strengthening it.
- But, thankfully, "A Nazi Salute" + "A Roman Salute" would actually have been 32+2=34%, so plurality is at least restored by more than one whole percentage point (if you excluded the unsure or unknowing)!
- However, a "Roman salute" (which is a bit of a farce of a name really) can't really be assumed to be fungible with the first option in this poll. If it were fully fungible, it could have been combined into that option. I.e. there's no way to tell which adults responding "A Roman salute" meant to be counted as "a general fascist salute, as the Nazis later adopted" or meant to be counted as "a non-fascist meaning of the salute, like the Bellamy salute was before WWII". So whichever wins this game of eeking out percentage points comes down to how each person wants to group these 2 percentage points. Shucks!
- In reality, between error margins and bogus responses, this is about as close as one could expect to get for an equal 3 way split between "it was", "it wasn't", and "dunno/don't care", and pulling ahead a percentage point or two is really quite irrelevant beyond that it is, blatantly, not actually a majority that agree it was a Nazi-style salute.
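For what it's worth, here's the numeric check promised above - a minimal sketch of the bucket arithmetic, using only the percentages quoted in this thread (the 3-bucket grouping is the one described above, not the pollster's own):

    # Poll buckets as quoted upthread (percent of all respondents)
    nazi = 32               # "A Nazi salute"
    roman = 2               # "A Roman salute"
    no_opinion = 13 + 22    # "not sure" + "had not heard about it" -> 35
    other = 100 - nazi - no_opinion     # remaining bucket -> 33 (includes roman's 2%)

    with_opinion = 100 - no_opinion     # 65
    print(round(nazi / with_opinion, 2))  # 0.49 -> not a majority of those with an opinion
    print(other > nazi)                   # True: 33% > 32%, "other" takes the plurality
    print(nazi + roman > other - roman)   # True: 34% > 31% once the two salutes are grouped

So the whole dispute really does hinge on where those 2 percentage points get filed.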
Even though I'm one who agrees with you that Elon exhibits neo-nazi tendencies, the above just shows how we go from "Elon replies directly supporting someone in a thread about Hitler being right about the Jewish community" and similar things constantly for years to debating individual percentage points to try to claim our favorite sub-majority says he likely made a one-off hand gesture 3 years ago. Now imagine I was actually a Nazi supporter walking into the thread - suddenly we've gone from talking about direct pro-Nazi statements and retweets constantly in his feed to a chance for me to debate with you whether the majority think he made a one-off hand gesture 3 years ago? Anyone concerned with Musk's behavior shouldn't touch this topic with a 20-foot pole, so they can get straight to the real stuff.
Also... I've run across a fair share of crypto lovers who turn out to be neo-nazish, but I'm not sure how you're piecing together that such a large portion of the population is a "crypto-Nazi" when something like only 28% of the population has crypto at all, let alone is a Nazi too. At least we're past "anyone who disagrees with my interpretations can only be doing so as a Nazi" though.
Ah, you're almost certainly correct here! Akin to crypto-fascist, perhaps I'd seen too many articles talking about the negatives of crypto to see the obvious there.
None of these kind of examples hold up under scrutiny when observed in video. There's a reason they're all shared as still photos or tiny blips of video which never shows the full motion salute that's being claimed.
I imagine I'm not the only one using HN less because both articles like this and comments like this are clearly being downvoted and/or flagged by a subset of users motivated by politics and the HN admin team seemingly doesn't consider that much of a problem. This story is incredibly relevant to a tech audience and this comment is objectively true and yet both are met with downvotes/flags.
Whether HN wants to endorse a political ideology or not, their approach to handling these issues is a material support of the ideologies these stories and comments are criticizing.
Yeah, this was my first reaction: this article is about tech regulation, which is relevant and on topic. If Grok causes extra legislation to be passed because of its lack of common decency in the pursuit of money, that is relevant. This is the entire argument around "we can't have accountability for tools, just people", which is ridiculous. The result of pretending that this type of thing doesn't happen is legislative responses.
PG and Garry Tan have both been disturbingly effusive in praising Musk and his various fuckeries.
Like, the entirety of DOGE was such an obviously terrible series of events, but for whatever reason, the above were both big cheerleaders on Twitter.
And yeah the moderation team here have been clearly letting everything Musk-related be flagged even after pushback. It's absolutely vile. I've seen many people try to make posts about the false flagging issue here, only to have those posts flagged as well (unapologetically, on purpose, by the mods themselves).
Anecdotally, I think that moderation has been a lot more lenient when it comes to political content in the last year than in years prior. I have no hard evidence that this is actually the case, but especially pre-2020 I'd see very little political content on HN, and now I see much more. It's also probably true that both liberals and conservatives have become even more polarized, leading to bad-faith flagging and downvoting, but I'm actually not sure what could be done about that; it seems similar to anti-botting protections, which is an arms race.
I'm late to this, but I'm doubtful that that perception is correct. It's true there are fluctuations, as with anything on HN, but the baseline is pretty stable. But the perception that HN has gotten-more-political-lately is about as old as the site itself. In fact, it's so common that about 8 years ago I took a couple hours to track down the history of it: https://news.ycombinator.com/item?id=17014869.
Any thoughts about the issues raised up thread? This article being flagged looks to me to be a clear indication of abuse of the HN flagging system. Or do you think there are justifiable reasons why this article shouldn't be linked on HN?
My thoughts are just the usual ones about this: flags of stories like this on HN are a kind of coalition between some flaggers who are agenda-motivated (which is an abuse of flagging) and other flaggers who simply don't want to see repetitive and/or flamebaity material on the site (which is a correct use of flagging, and is not agenda driven because this sort of material comes at us from all angles). When we see flaggers who are consistently doing the first kind of flagging, we take away their flagging privileges.
The wild thing is that this article isn't even a political issue!
"Major Silicon Valley Company's Product Creates and Publishes Child Porn" has nothing to do with politics. It's not "political content." It is relevant tech news when someone investigates and points out wrongdoing that tech companies are up to. If another tech company's product was doing this, it would be all over HN and there would be pretty much no flagging.
When these stories get flagged, it's because people don't want bad news to get out about the company--it's not about avoiding politics out of principle.
I've been using https://news.ycombinator.com/active a lot more in the last year, because so many important discussions (related to tech, but including politics or prominent figures like Musk) get pushed off the front page quickly. I don't think it's moderators doing it, but mass-flagging by users (or perhaps some automagic if the discussion gets too intense, e.g. number of comments or downvotes). Of course, it might be the will of the community to flag these, but it does feel a bit abused in the way certain topics get killed quickly.
I just found out about this recently and like this page a lot. Dang has a hard job to balance this. I think newcomers might be more comfortable with the frontpage and if you end up learning about the other pages you can find more controversial discussions. Can't be mad about the moderation hiding these by default. Although I think CSAM-Bad should not be controversial.
Even a year ago, when Trump was posting claims that he was a king, etc., these things got removed, even though there were obvious implications for the tech industry. (Cybersecurity alone rests on more political assumptions than it does on the hardness of the discrete logarithm, for example.)
I (and others) were arguing that the Trump administration is probably, and unfortunately, the most relevant topic to the tech industry on most any given day. This is because computing is mostly made out of people. The message that these political stories intersect deeply with technology (as is seen here) seems to have successfully gotten through.
I wish the most relevant tech story of every day were, say, some cool new operating system, or something cool and curiosity-inspiring like "you can sort in linear time" or "python is an operating system" or "i made X rewritten in Y" or whatever.
I think in most things, creation is much harder than destruction, but software and software systems are an exception where one individual can generally do more creation than destruction. So, it's particularly interesting (and jarring) when a few individuals are able to make decisions that cause widespread destruction.
We should collectively be proud that we have a culture where creation is easier than destruction. But it's also why the top stories of any given day will be "Trump did X" or "us-east-1 / cloudflare / crowdstrike is down" or "software widely used in {phones / servers} has a big scary backdoor".
This story belongs on this site regardless of politics. It is specifically about both AI and social media. Downvoting/flagging this story is much more politically motivated than posting/upvoting it.
I agree with that. But one, it is on the site, and two, how can the moderation team reasonably stop bad actors from downvoting it? They can (and probably do) unflag things that have merit or put it in the 2nd chance queue.
> But one, it is on the site, and two, how can the moderation team reasonably stop bad actors from downvoting it?
In 2020, Dang said [1]
> Voting ring detection has been one of HN's priorities for over 12 years: [...]
> I've personally spent hundreds of hours working on this, as well as tracking down voting rings of every imaginable sort. I'd never claim that our software catches everything, but I can tell you that it catches so much that I often go through the lists to find examples of good projects that people were trying ineptly to promote, and invite them to do it again in a way that is more likely to gain community interest.
Of course this sort of thing is inherently heuristic; presumably bots throw up a smokescreen of benign activity, and sophisticated bots could present a very realistic, human-like smokescreen.
> how can the moderation team reasonably stop bad actors from downvoting it
There are all sorts of approaches that a moderation team could take if they actually believed this was a problem. For example, identify the users who regularly downvote/flag stories like this that end up being cleared by the moderation team for unflagging or the 2nd chance queue and devalue their downvotes/flags in the future.
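A minimal sketch of what that devaluation could look like (all names and thresholds here are hypothetical illustrations, not HN's actual system):

    # Hypothetical flag-weighting: flags from users whose past flags were
    # overturned by moderators count for less. Not HN's actual implementation.
    class Flagger:
        def __init__(self):
            self.upheld = 0      # past flags moderators agreed with
            self.overturned = 0  # past flags reversed (unflagged / 2nd chance queue)

        def weight(self) -> float:
            history = self.upheld + self.overturned
            if history < 5:
                return 1.0                  # too little history: full weight
            return self.upheld / history    # serial bad flaggers trend toward 0

    def weighted_flag_score(flaggers) -> float:
        # A story would be demoted only once this weighted score crosses a
        # threshold, so a ring of discredited accounts contributes almost nothing.
        return sum(f.weight() for f in flaggers)
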
Accounts are free to make, so bad actors will just create and "season/age" accounts until they have the ability to flag, then rinse and repeat.
I think the biggest thing HN could do to stop this problem is to not make flagging affect an article's ranking until after a human mod reviews the flags and determines them to be appropriate. Right now, all bad actors apparently have to do is be quick on the draw, and get their flagging ring in action ASAP. I'm sure any company's PR team (or motivated Elon worshiper) can buy "100 HN flags on an article" on the dark web right now if they wanted to.
Why would a company like any one of Musk's need to buy these flags? Why wouldn't they just push a button and have their own bots get to work? Plausible deniability?
Who knows whether or not both happen? Ultimately, only the HN admins, and they don't disclose data, so we can only speculate and look for publicly visible patterns.
You can judge their trustworthiness by evaluating their employer's president/CEO, who dictates behavioral requirements regardless of the personal character of each employee.
That already happens. I got my flagging powers removed after over-using flag in the past. (I eventually wrote an email to the mods pledging to behave more judiciously and asked for the power back). As a user you won't see any change in the UI when this happens; the flags just stop having any effect on the back end.
There is one subtle clue. If your account has flagging enabled, then whenever you flag something there is a chance that your flag pushes it over the threshold into the flagged state. If your account has flagging disabled, this never happens. This is what prompted me to ask dang if I'd been shadowbanned from flagging.
I would bet money that already happens, for flagging in particular, since it's right in line with the moderation queue. For downvotes, it sounds like significant infra would be needed for a product that generates no revenue. Agreed that I would like the problem to be solved as well, however!
I think there's brigading coming in to ruin these threads. I had several positive votes for a few minutes after stating a simple fact about Elon Musk and his support of neo-nazi political parties, then -2 a minute later.
I have downvoted anything remotely political on hn ever since I got my downvote button, even (especially) if I agree with it. I always appreciated that being anti-political was the general vibe here.
The part where you brought up politics is when I noticed it was political.
But I generally consider something political if it involves politicians, or anyone being upset about anything someone else is doing, or any topic that they could mention on normal news. I prefer hn to be full of positive things that normal people don't understand or care about.
What's political here? The mere fact of the involvement of Dear Leader?
(As a long-term Musk-sceptic, I can confirm that Musk-critical content tended to get insta-flagged even years before he was explicitly involved in politics.)
There's almost no such thing as a non-political thing. Maybe the sky colour, except that other cultures (especially in the past) have different green/blue boundaries and some may say it's green. Maybe the natural numbers (but whether they start from 0 or 1 is political) or the primes (but whether 1 is prime is political).
I mean, honestly, you are wasting your time. Why would you expect the website run by the guy who likes giving Nazi salutes on TV to take down Nazi content?
There's no point trying to engage with Twitter in good faith at this point; only real option is to stop using and move on (or hang out in the Nazi bar, I guess).
They meant howlingmutant0, but I don't know which posts they refer to.
For the ones I reported, I deleted the report emails, so I can't help you at this moment. I don't know why you're surprised - you can go looking yourself and find examples.
Yeah, I went thru his media. There was some backwards swastika that someone had drawn on a synagogue. People were mocking the fact that idiots can't even draw it correctly.
1. Can you point to exact posts? I saw one swastika somewhere deep in media. It's a description of what a swastika is - no different from a Wikipedia article.
I normally stay away too, but just decided to scroll through grok’s replies to see how wide spread it really is. It looks like it is a pretty big problem, and not just for women. Though, I must say that Xi Jinping in a bikini made me laugh.
I’m not sure if this is much worse than the textual hate and harassment being thrown around willy nilly over there. That negativity is really why I never got into it, even when it was twitter I thought it was gross.
Before Elon bought it out it was mostly possible to contain the hate with a carefully curated feed. Afterward the first reply on any post is some blue check Nazi and/or bot. Elon amplifying the racism by reposting white supremacist content, no matter how fabricated/false/misleading, is quite a signal to send to the rest of the userbase.
he's rigged the algorithm to boost content he interacts with, unbanned and stopped moderating nazi content and then boosted those accounts by interacting with them.
X wrote in offering to pay something for my OG username, because fElon wanted it for one of his Grok characters. I told them to make an offer, only for them to invoke their Terms of Service and steal it instead.
Hmm, I have an old Twitter account. Elon promised that he was going to make it the best site ever; let's see what the algorithm feeds me today, January 5, 2026.
1. Denmark taxes its rich people and has a high standard of living.
2. Scammy looking ad for investments in a blood screening company.
3. Guy clearing ice from a drainpipe, old video but fun to watch.
4. Oil is not actually a fossil fuel, it is "a gift from the Earth"
5. Elon himself reposting a racist fabrication about black people in Minnesota.
6. Climate change is a liberal lie to destroy western civilization. CO2 is plant food, liberals are trying to starve the world by killing off the plants.
7. Something about an old lighthouse surviving for a long time.
8. Vaccine conspiracy theories
9. Outright racism against Africans, claiming they are too dumb to sustain civilized society without white men running it.
10. One of those bullshit AI videos where the AI doesn't understand how pouring resin works.
11. Microsoft released an AI that is going to change everything, for real this time, we promise.
12. Climate change denialism
13. A post claiming that Africa and South America aren't poor because they were robbed of resources during the colonial era and beyond, but because they are too dumb to run their countries.
14. A guy showing how you can pack fragile items using expanding foam and plastic bags. He makes it look effortless, but glosses over how he measures out the amount of foam to use.
15. Hornypost asking Grok to undress a young Asian lady standing in front of a tree.
16. Post claiming that the COVID-19 vaccine caused a massive spike (from 5 million to 150 million) in cases of myocarditis.
17. A sad post from a guy depressed that a survey of college girls said that a large majority of them find MAGA support to be a turn off.
18. Some film clip with Morgan Freeman standing on an X and getting sniped from an improbable distance
19. AI bullshit clip about people walking into bottomless pits
20. A video clip of a woman being confused as to why financial aid forms now require you to list your ethnicity when you click on "white", with the only suboptions being German, Irish, English, Italian, Polish, and French.
Special bonus post: Peter St Onge, Ph.D. claims "The Tenth Amendment says the federal government can only do things expressly listed in the Constitution -- every other federal activity is illegal." Are you wondering what federal activity he is angry about? Financial support for daycare.
So yeah, while it wasn't a total and complete loss it is obvious that the noise far exceeds the signal. It is maybe a bit of a shock just how much blatant climate change denialism, racism, and vaccine conspiracies are front page material. I'm saddened that there are people who are reading this every day and taking it to heart. The level of outright racism is quite shocking too. It's not even up for debate that black people are just plain inferior to the glorious aryan race on Twitter. This is supposedly the #1 news source on the Internet? Ouch.
Edit: Got the year wrong at the top of the post, fixed.
Makes me laugh when people say Twitter is "better than ever." Not sure they understand how revealing that statement is about them, and how the internet always remembers.
They don't outnumber anyone. There's always a minority of hardcore supporters for any side... plus enough undecided people in the middle who mostly vote their pocketbook.
What to do about it is to point out to those people in the middle how badly things are being fucked up, preferably with how those mistakes link back to their pocketbook.
The best use of generative AI is as an excuse for everyone to stop posting pictures of themselves (or of their children, or of anyone else) online. If you don't overshare (and don't get overshared), you can't get Grok'd.
There's a difference between merely existing in public, versus vying for attention in a venue where several brands of "aim this at a patron to see them in a bikini" machines are installed.
And so installing the "aim this at a patron to see them in a bikini" machines made the community vastly more hostile to women. To the point where people say "well what did you expect" when a woman uses the product. Maybe they shouldn't have been installed?
The number of people saying that it is not worthy of intervention that every single woman who posts on twitter has to worry about somebody saying "hey grok, take her clothes off" and then be made into a public sex object is maybe the most acute example of rape culture that I've seen in decades.
This thread is genuinely enraging. The people making false appeals to higher principles (eg section 230) in order to absolve X of any guilt are completely insane if you take the situation at face value. Here we have a new tool that allows you to make porn of users, including minors, in an instant. None of the other new AI platforms seem to be having this problem. And yet, there are still people here making excuses.
I am not a lawyer but my understanding of section 230 was that platforms are not responsible for the content their users post (with limitations like “you can’t just host CSAM”). But as far as I understand, if the platform provides tools to create a certain type of harmful content, section 230 doesn’t protect it. Like there’s a difference between someone downloading a photo off the internet and then using tools like photoshop to make lewd content before reuploading it, as compared to the platform just offering a button to do all of that without friction.
Again I’m not a lawyer and this is my interpretation of the #3 requirement of section 230:
“The information must be "provided by another information content provider", i.e., the defendant must not be the "information content provider" of the harmful information at issue”
If grok is generating these images, I am interpreting this as Twitter could be becoming an information content provider. I couldn’t find any relevant rulings but I doubt any exist since services like Grok are relatively new.
1) These images are being posted by @Grok, which is an official X account, not a user account.
2) X still has an ethical and probably legal obligation to remove these images from their platform, even if they are somehow found not to be responsible for generating them, even though they generated them.
At this point in time, no comment that has the string "230" in it is saying that Section 230 absolves X of anything. Lots of people are asking if it might, and if that's what X is relying on here.
I brought up Section 230 because it used to be that removal of Section 230 was an active discussion in the US, particularly for Twitter, pre-Elon, but seems to have fallen away.
With content generated by the platform, it certainly seems reasonable to work out how Section 230 applies, if at all, and I think that Section 230 protections should probably be removed for X in particular.
> At this point in time, no comment that has the string "230" in it is saying that Section 230 absolves X of anything.
You are correct; I read your earlier post as "did we forget our already established principle"? I admit I'm a bit tilted by X doing this. In my defense, there are people making the "blame the user, not the tool" argument here though, which is the core idea of section 230
> None of the other new AI platforms seem to be having this problem
The very first AI code generators had this issue where users could make illegal content by making specific requests. A lot of people, me included, saw this as a problem, and there were a few copyright lawsuits arguing it. The courts, however, did not seem very sympathetic to this argument, putting the blame on the user rather than the platform.
Here is hoping that Grok forces regulations to decide on this subject once and for all.
Elon Musk mentioned multiple times that he doesn't want to censor. If someone does or says something illegal on his platform, it has to be solved by law enforcement, not by someone on his platform. When asked to "moderate" it, he calls that censorship. Literally everything he does and says is about Freedom - no regulations, or as little as possible, and no moderation.
I believe he thinks the same applies to Grok or whatever is done on the platform. The fact that "@grok do xyz" makes it instantaneous doesn't mean you should do it.
I think it is completely fine for a tech platform to proactively censor AI porn. It is ok to stop men from generating porn of random women and kids. We don't need to get the police involved. I do not think non-consensual porn or CSAM should be protected by free speech. This is an obvious, no-brainer decision.
> X is planning to purge users generating content that the platform deems illegal, including Grok-generated child sexual abuse material (CSAM).
Which is moderating/censoring.
The tool (Grok) will not be updated to limit it - that's all. Why? I have no idea, but it seems lately that all these AI tools have more freedom than us humans.
The one above is not my opinion (although I partially agree with it, and now you can downvote this one :D ). To be honest, I don't care at all about X nor about an almost trillionaire.
It was full of bots before, now it's full of "AI agents". It's quite hard sometimes to navigate through that ocean of spam, fake news, etc.
Grok makes it easier, but it's still ugly and annoying to read when 90-95% of posts are always the same.
This weekend has made me explicitly decide that my kids' photos will never be allowed on the internet, especially social media. It was just absolutely disgusting.
Ah no, I don't think you understand how religious fundamentalism works in practice.
For most fundamentalist religions, men are almost never penalized for bad behavior. It's nearly impossible to find a man being killed for violating a morality law including selling pornography or engaging in prostitution.
But on the other hand, it's very easy to find examples of women getting stoned for dressing improperly.
Firstly, what we are talking about here is not "selling pornography" or "engaging in prostitution". It's a public event of widespread obscenity being actively defended by those who made it possible. I can't take your comparison seriously as being good faith.
Secondly, you posit it's "nearly impossible to find a man being killed for violating a morality law", which is true because of two factors. The first of which is because it's difficult to find any kind of representative sample of justice being meted out in Afghanistan because it's not freely and actively reported on. The second of which is because the punishment for violating moral laws is usually public flogging. The idea that these laws overwhelmingly target women is false, most of the time the people being punished for breaking morality laws are men: https://amu.tv/137185
It is clear to me the only thing you know about Afghanistan is that women unfortunately live as second-class citizens. This is clear not only because of the naive things you say, but because you explicitly fall back to painting with ridiculously unspecific brushstrokes. With your knowledge exhausted, you revert to talking about "most fundamentalist religions", despite the domain already being pretty well defined as specifically the Taliban. You shoehorn in the misogyny angle, as though that's relevant to the context and makes your point stronger, but it's just vacuous nonsense. Your entire point seems to be that the justice system in Afghanistan primarily punishes women (which is a silly falsehood in and of itself), and that's why a major public figure enabling mass obscenity would entail no consequences? Are you actually out of your mind? There's simply no way you actually believe this crap. I'm sorry, that kind of naivete is just too ridiculous to buy.
The cherry on top is that you lead all of this crypto neo-orientalist shit with "I don't think you understand how religious fundamentalism works in practice". Give me a break.
To put it simply - restricting a woman's right to birth control or abortion is restricting a woman's freedom. They can couch it in religious terms but that is the simplest way to demonstrate it.
https://www.npr.org/2025/08/07/nx-s1-5494710/trump-birth-con...
In states where the GOP controls both houses of the legislature and the governorship, the restrictions against abortion in cases of incest or rape, or the obscene combination of the two, show exactly where the United States is headed federally.
As for the restrictions against abortion in cases like rape, I think the thought is that the fetus's perceived right to life, or perceived right to not be assaulted (chemically or physically), can't be deprived just because of a crime against the mother that was no fault of the fetus.
I actually find abortion with no exception for rape to be a far more ideologically pure position than abortion with exceptions for rape.
The one that makes the least sense is restriction on abortion even in the case the fetus cannot survive. That one is far less defensible than not having a rape abortion exception as it can't be explained from the viewpoint of the rights of the mother nor from one of the rights of the fetus.
> That one is far less defensible than not having a rape abortion exception
It's defensible when you realize that forced pregnancy is viewed by many religious people as a punishment. "If you didn't want a baby, don't have sex" is very commonly heard in private conversations with religious people.
Because pregnancy is a holy punishment, the consequences, even death, are seen as moral.
This is also why the rape exception is more common than the medical exception. A mother dying because of an ectopic pregnancy or because she was too young to have a baby is god's will.
I've heard it as well, but after debating with a lot of people with anti-abortion views I think you've done yourself a huge disservice if you view that as the dominating argument against abortion.
I initially held your viewpoint, but after engaging with a lot of people I realized they often had pretty similar views on life and liberty as mine; they were just looking at it from the viewpoint of the fetus rather than the mother. From that perspective it just doesn't make sense at all to make an exception for rape.
There's a big difference in what people debate publicly and what they think/feel privately.
People will almost never take the "it's a punishment" position in a debate because that's not a popular position to hold and it's pretty weak morally at the end of the day. That's why the "life of the fetus" approach is most frequently taken even though it leads to absurd positions. For example, pro-life people will very often be put into a difficult position when the notion of IVF comes up.
That's what betrays their true views, IMO.
I've simply had a lot of private conversations with people on religion (I was mormon for a long time and served a mormon mission). That's where my opinion on the actual underlying anti-abortion attitude comes from. It's lots of private conversations in safe spaces. The fetus life, frankly, is almost never brought up as a reason.
And, as I pointed out, it pretty well explains why anti-abortion laws have weird boundaries if they were purely about the life of the fetus.
This duality occurs identically when people discuss legally required child support. "If you didn't want a child, don't have sex" is very commonly the initial argument, but then it gets changed into the "well-being of the child" approach, as it leads to the same conclusion. It's not a punishment; it's a child's right to support from their biological parents.
Most nations still have social support and government responsibility as a last resort, which can equally do the job of supporting children without willing parents, but then people return to the punishment/moral angle: if men don't want to pay for children, then they should not have sex.
Look at how quickly people reach for the morality position, and we see how little friction anti-abortion policies have to overcome.
Both can be motivating factors even for the same person. But which dominates the motivation, I'd argue, can be seen by the end policies.
What I expect to see for those that view it as primarily punishment is little outside of child support for kids. Penalizing the "dead beat dad" as it were. I expect those governments to not provide child support, tax breaks for parents, or any sort of welfare/support/minimum standard of living for parents. That is to say, the answer to "how hard would it be to be a single parent" in those government would be "very hard".
For governments that are solely looking out for the welfare of the kid, I expect to see a large amount of support for kids, especially for poor families. I expect some help with childcare, housing, etc to make sure the kids are well cared for.
There definitely seems to be a connection between a lack of social support and an emotional reaction to penalizing people for getting pregnant. On the extreme end we have a culture where the man should take responsibility for getting the woman pregnant, and where abortion is seen as a way for both to escape responsibility. Support comes from the family rather than society, which also means members of the family are responsible to the family.
The closer people hold to those values, the more easily society accepts laws like anti-abortion bans. Religion does play a supporting role in this by holding onto the values, but it is itself not always at the center.
We already let rich people do eugenics in America. See Elon Musk with almost all male babies - this is astronomically against the odds if we assume 50/50 by chance.
Also, abortion for eugenics is inefficient and difficult for some women, with physical and mental effects - only in extenuating circumstances, like China during its one-child period, would it be viable. This is a side issue from the main point. Ethics in IVF and the usage of abortion/Plan B (where it's so early that it's not practical for your eugenics idea) is a discussion we should have and have already skipped for IVF in the USA (where it's more practical from a theoretical standpoint for your eugenics point), but it's a distraction from the primary objective of conservative groups - to force women to have less power and choice in their lives. That is the question we see being answered once conservatives gain power to make legislation of their choice in states or via the Supreme Court in the USA.
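As a rough check of the "astronomically against the odds" claim above: under the (contested) assumption of independent 50/50 births, the probability of nearly all boys does fall off fast with family size. A minimal sketch with hypothetical counts, since the exact numbers aren't established in this thread:

    # Probability of exactly k boys in n independent 50/50 births
    from math import comb

    def p_boys(k: int, n: int) -> float:
        return comb(n, k) * 0.5 ** n

    # e.g. at least 10 boys out of 11 births under a fair coin (hypothetical n, k):
    print(sum(p_boys(k, 11) for k in (10, 11)))  # ~0.0059, well under 1%
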
IIRC that is not a safe assumption. 50/50 is a population-wide statistic, but it's pretty common for individuals to have a substantially skewed probability for one gender of offspring or the other.
Your only point is to defend Elon Musk, and you never addressed the key point, which is that IVF can be used in the USA for eugenics just as the other poster suggested abortions hypothetically would be in their horror scenario. Not every clinic, but it's available and not explicitly outlawed - it's only ethically and professionally dubious in certain scenarios. It's also much more practical than trying to use abortions for the same purpose, in terms of the mother's health.
I'm just an interested party because my family has nearly all girls for an entire generation, so I have paid attention to research that shows how this happens.
You seem to think it disagrees with a conspiracy theory. I don't care about that, I was just adding a bit of accuracy to the discussion. Carry on.
Are you saying that children's sex is heritable? I don't think there is any strong evidence for that (just weak evidence of tiny effects). Or are you just saying that for any 50/50 distribution, there will be some outliers that seem surprising, but are explained by simple statistics?
There are genetic and biological predispositions at the individual level that make some couples more likely to have children of one gender or the other. So it is not at all surprising, and there are many examples of parents having all boys or all girls; it does not mean they are selectively aborting to achieve a goal.
I’m not arguing the circumstantial evidence isn’t compelling. And as a talking point, it’s a good one. But as an argument made in good faith, I think one needs direct attribution to be able to say what something “plans.”
We need something. I asked if there was a hot mic because those two have a tendency to get recorded saying stupid things. It could also be an e-mail, a memo, a recollection by a friend or former colleague, et cetera.
In the absence of evidence, it's much simpler to conclude that at least Musk is just sexist versus trying "to make women afraid to participate in public life."
I cannot roll my eyes hard enough at your comments. You expect every bad actor to make a public statement about their vile plans? Sure our president does it, but he's especially stupid.
> You expect every bad actor to make a public statement about their vile plans?
Neither Musk nor Miller have exhibited a tremendous amount of discipline in this department.
I don’t expect them to say this. But I do expect someone arguing in good faith to distinguish between plans and effects, particularly when we’re turning what looks like garden-variety sexism into a political theory.
I don't think it's that unusual on the scale of one person. It just happens sometimes, for example with Henry VIII. It could just be the nature of the coin toss.
Most of his kids were conceived through IVF, and sex selection through IVF is not difficult. He also has disowned his one trans daughter because she refuses to be his son.
> don't think it's that unusual on the scale of one person
It would be remarkably unlikely. It's fair to say Musk is probably sex selecting for sons. It doesn't follow that he has plans "to make women afraid to participate in public life."
Henry VIII had only three legitimate kids surviving infancy, one of whom was a boy. He was 1/3 for boys in this count. Add in the one for-sure illegitimate child and he's 2/4 born male. Of those infants who were stillborn or died very early in infancy, several were boys, but it seems the counts for those children are uncertain. Also, there were probably several illegitimate male and female children had by him.
This rule faded from internet culture along with "don't use your real information". This rule started from people doing exactly this, 20 something years ago, and it making the rounds in the press.
Maybe I've got a case of the 'tism, but I really don't see an issue with it. Can someone explain?
It's a fictional creation. Nobody is "taking her clothes off", a bot is fabricating a naked woman and tacking her likeness (ie. face) on to it. If anything, I could see how this could benefit women as they can now start to reasonably claim that any actual leaked nudes are instead worthless AI slop.
I don't think I would care if someone did this to me. Put "me" in the most depraved crap you can think of, I don't care. It's not me. I suspect most men feel similarly.
A man's sexual value is rarely impacted much by a nude image of themselves being available.
A woman being damaged by nudes is basically a white knight, misogynist viewpoint that proclaims a woman's value is in her chastity / modesty so by posting a manufactured nude of her you have thereby degraded her value and owe her damages.
Yes, that's the conclusion I came to as well. The best analogue I can think of is taxation, where men - whose sexual values are impacted far more by control of resources - are typically far more aggrieved by the practice than women (who typically see it as a wonderful practice that ought to be expanded).
It feels odd for them to be advertising this belief though. These are surely a lot of the same people trying to devalue virginity, glorifying public sex positivity, condemning "slut shaming", etc.
Yes it is! Internet safety 101. Don't post stuff in public that you don't want to be public. There's always going to be creepy people doing whatever they want with all your public information. No AI company can stop that - the cat's out of the bag once you publish it to the whole world.
You must not know what "rape culture" means. It doesn't mean just rape, but an environment in which misogyny and violence against women is normalized / celebrated / defended.
Doing it without their permission and posting it below their comments is definitely misogyny and sexual harassment. If you did that in the workplace you'd be fired for cause.
Liking and sharing naked pictures of (adult) women means you hate women (/misogyny)? I've imagined lots of women naked and it's not because I hate them. I probably wouldn't share them but if I did it would be because I liked how they looked, not out of hate.
I'm sure lots of people do it for hate but legitimately a lot of people just look at someone and think "they'd look nice naked" and then if they actually have a picture of that, might share it under their comments/posts because it's enjoyable to look at rather than to punish someone.
Fantasy is always consensual; the line is only crossed when people share these images. It's the difference between fantasizing about a co-worker and telling them about those fantasies.
The problem is, this service very publicly shares those images, and usually in a way that directly tags the actual person. So, very much across the line in to "harming actual people."
Since this regularly harms women, people round it off to "misogyny". Even if the motive is not "harm women", the end result is still "women were harmed".
(There also IS an exception if you know the person well enough to expect that they're cool with it - but the onus is still on you if you guess wrong and your actions end up causing harm.)
Misogyny doesn't just mean you hate women. It can also mean you believe them lesser, or you do not respect women, or you are actively trying to undermine them.
Sharing lewd pictures is using the tools of the patriarchy to shame and humiliate women. That's misogyny.
Think of it this way. I want to humiliate a black man online, so I generate a picture of him eating a huge watermelon slice and share it around for giggles. Is that racism? Of course it is.
I'm kind of confused how you got to your current statement so I would like to ask you: how would you define misogyny? what is the behavior of a misogynist?
additionally, how would you define the term "rape culture"? Are you aware of the term at all?
Rape culture doesn't mean that everything is rape. It means that our culture routinely sexualizes women and downplays harassment towards them, which, in turn, emboldens rapists.
The same argument could be made of photoshopping someone's face on to a nude body. But for the most part, nobody cares (the only time I recall it happening was when it happened to David Brent in The Office).
"For a Linux user, you can already build such a system yourself quite trivially ..."
Convincingly photoshopping someone's face onto a nude body takes time, skill, effort, and access to resources.
Grok lowers the barrier to be less effort than it took for either you or I to write our comments.
It is now a social phenomenon where almost every public image of a woman or girl on the site is modified in this manner. Revenge porn photoshops happened before, but not to this scale or in this type of phenomenon.
And there is safety in numbers. If one person photoshops a highschool classmate nude, they might find themself on a registry. For lack of knowing the magnitude, if myriad people are doing it around the country, then do you expect everyone doing that to be litigated that extensively?
> Revenge porn photoshops happened before, but not to this scale or in this type of phenomenon.
Mate, that's the point. I, as a normal human being who had never been on 4chan or the darker corners of reddit, would never have seen or been able to make frankenporn, much less make _convincing_ frankenporn.
> For lack of knowing the magnitude
Fuck that shit. If they didn't know the magnitude, they wouldn't have spent ages making the photoshop to do it. You don't spend ages doing revenge "because you didn't know the magnitude". You spend ages doing it because you want revenge.
> if myriad people are doing it around the country, then do you expect everyone doing that to be litigated that extensively?
I mean, we put people in prison for drink driving, and lots of people do that in the States. Same with drug dealing. Same with harassment; that's why restraining orders exist.
but
You are missing the point. Making and distributing CSAM is a criminal offence. Knowingly storing and transmitting it is an offence. Musk could stop it all now by re-training Grok, or by putting in some basic controls.
If any other person was doing this they would have been threatened with company ending action by now.
This is a heated topic and I share your anger. But you have completely misunderstood me.
We mostly agree, so let me clarify.
Grok is being used to make very much revenge porn, including CSAM revenge porn, and people _are using X because it's the CSAM app_. I think this is all bad. We agree here.
"For lack of knowing the magnitude" is me stating that I do not know the number of people using X to generate CSAM. I don't know if it is a thousand, a million, a hundred million, etc. So, I used the word "myriad" instead of "thousands", "millions", etc.
I am arguing that this is worse because the scale is so much greater. I am arguing against the argument equating this with photoshop.
> If any other person was doing this they would have been threatened with company ending action by now.
Yes, I agree. X is still available on both app stores. This means CSAM is just being made more and more normal. I think this is very bad.
Friend, you are putting too much effort into debating a topic that is implicitly banned on this website. This post has already been hidden from the front page. Hacker News is openly hostile to anything that even mildly paints a handful of billionaires in a poor light. But let's continue to deify Dang as the country descends openly into madness.
I also see it back now too, despite it being removed earlier. Do you have faith in the HN algo? Position 22 despite having more votes and comments and being more recent than all of the posts above it?
IMO, the fact that you would say this is further evidence of rape culture infecting the world. I assure you that people do care about this.
And friction and quality matters. When you make it easier to generate this content and make the content more convincing, the number of people who do this will go up by orders of magnitude. And when social media platforms make it trivial to share this content you've got a sea change in this kind of harassment.
How is "It's acceptable because people perform a lesser form of the same behavior" an argument at all? Taken to its logical extreme, you could argue that you shouldn't be prevented from punching children in the face because there are adults in the world who get punched in the face. Obviously, this is an insane take, but it applies the same logic you've outlined here.
"“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X Safety said. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
How about not enabling generating such content, at all?
Given X can quite simply control what Grok can and can't output, wouldn't you consider it a duty upon X to build those guardrails in for a situation like CSAM? I don't think there's any grey area here to argue against it.
I am, in general, pretty anti-Elon, so I don't want to be seen as taking _his_ side here, and I am definitely anti-CSAM, so let's shift slightly to derivative IP generation.
Where does the line fall between provider responsibility when providing a tool that can produce protected work, and personal responsibility for causing it to generate that work?
It feels somewhat more clearcut when you say to an AI, "Draw me an image of Mickey Mouse", but why is that different from photocopying a picture of Mickey Mouse, or using Photoshop to draw a picture of Mickey Mouse? Photocopiers will block copying a dollar bill in many cases - should they also block photos of Mickey Mouse? Should they have received firmware updates when Steamboat Willie fell into the public domain, such that they are now allowed to photocopy that specific instance of Mickey Mouse, but no other?
This is a slippery slope, the idea that a person using the tool should hold the tool responsible for creating "bad" things, rather than the person themselves being held responsible.
Maybe CSAM is so heinous as to be a special case here. I wouldn't argue against it specifically. But I do worry that it shifts the burden of responsibility onto the AI or the model or the service or whatever, rather than the person.
Another thing to think about is whether it would be materially different if the person didn't use Grok, but instead used a model on their own machine. Would the model still be responsible, or would the person be responsible?
> Where does the line fall between provider responsibility when providing a tool that can produce protected work, and personal responsibility for causing it to generate that work?
There's one more line at issue here, and that's the posting of the infringing work. A neutral tool that can generate policy-violating material has an ambiguous status, and if the tool's output ends up on Twitter then it's definitely the user's problem.
But here, it seems like the Grok outputs are directly and publicly posted by X itself. The user may have intended that outcome, but the user might not have. From the article:
>> In a comment on the DogeDesigner thread, a computer programmer pointed out that X users may inadvertently generate inappropriate images—back in August, for example, Grok generated nudes of Taylor Swift without being asked. Those users can’t even delete problematic images from the Grok account to prevent them from spreading, the programmer noted.
Overall, I think it's fair to argue that ownership follows the user tag. Even if Grok's output is entirely "user-generated content," X, by publishing that content under its own banner, must take ownership of the policy and legal implications.
This is also legally problematic: many jurisdictions now have specific laws about the synthesis of CSAM or modifying peoples likenesses.
So exactly who is considered the originator is a pretty legally relevant question particularly if Grok is just off doing whatever and then posting it from your input.
"The persistent AI bot we made treated that as a user instruction and followed it" is a heck of a chain of causality in court, but you also fairly obviously don't want to allow people to laundry intent with AI (which is very much what X is trying to do here).
Maybe I'm being too simplistic/idealistic here - but if I had a company that controlled an LLM product, I wouldn't even think twice about banning CSAM outputs.
You can have all the free speech in the world, but not at the expense of vulnerable and innocent children.
I don't know how we got to the point where we can build things with no guardrails and just expect the user to use them legally. I think builders/platform owners should bear responsibility for building in guardrails against things that are explicitly illegal and morally repugnant.
>I wouldn't even think twice about banning CSAM outputs.
Same, honestly. And you'll probably catch a whole lot of actual legitimate usage in that net, but it's worth it.
But you'll also miss some. You'll always miss some, even with the best guard rails. But 99% is better than 0%, I agree.
> ... and just expect the user to use it legally?
I don't think it's entirely the responsibility of the builder/supplier/service to ensure this, honestly. I don't think it can be. You can sell hammers, and you can't guarantee that the hammer won't be used to hurt people. You can put spray cans behind cages and require purchasers to be 18 years old, but you can't stop the adult from vandalism. The person has to be held responsible at a certain point.
I bet most hammers (unregulated), spray cans (lightly regulated) and guns (heavily regulated) that are sold are used for their intended purposes. You also don't see those tools' manufacturers promoting or excusing their unintended uses.
There's also a difference between a tool manufacturer (hardware or software) and a service provider: once the tool is in the user's hands, it's outside the manufacturer's control.
In this case, a malicious user isn't downloading Grok's model and running it on their own GPU. They're using a service provided by X, and I'm of the opinion that a service provider starts to be responsible once malicious use of their product becomes significant.
> I don't know how we got to the point where we can build things with no guardrails and just expect the user to use it legally?
Historically tools have been uncensored, yet also incredibly difficult and time-consuming to get good results with.
Why spend loads of effort producing fake celebrity porn using photoshop or blender or whatever when there's limitless free non-celebrity porn online? So photoshop and blender didn't need any built-in censorship.
But with GenAI, the quantitative difference in ease-of-use produces a qualitative difference in outcome. Things that didn't get done when they needed 6 months of practice plus 1 hour per image are getting done now that they need zero practice and 20 seconds per image.
> Where does the line fall between provider responsibility when providing a tool that can produce protected work, and personal responsibility for causing it to generate that work?
If you operate the tool, you are responsible. Doubly so in a commercial setting. If there are issues like Copyright and CSAM, they are your responsibility to resolve.
If Elon wanted to share out an executable for Grok and the user ran it on their own machine, then he could reasonably sidestep blame (like how photoshop works). But he runs Grok on his own servers, therefore is morally culpable for everything it does.
Your servers are a direct extension of yourself. They are only capable of doing exactly what you tell them to do. You owe a duty of care to not tell them to do heinous shit.
It's simpler to regulate the source than the users. The scale at which genAI can do this stuff is much, much different from photocopying plus Photoshop; scale and degree matter.
So, back in the 90s and 2000s, you could get The Gimp image editor, and you could use the equivalent of Word Art to take a word or phrase and make it look cool, with effects like lava or glowing stone, or whatever. The Gimp used ImageMagick to do this, and it legit looked cool at the time.
If you weren't good at The Gimp, which required a lot of knowledge, you could generate a cool website logo by going to a web server that someone built, giving them a word or phrase, and then selecting the pre-built options that did the same thing - you were somewhat limited in customization, but on the backend, it was using ImageMagick just like The Gimp was.
If someone used The Gimp or ImageMagick to make copyrighted material, nobody would blame the authors of The Gimp, right? They were very nonspecific tools created for a broad purpose, that of making images. Just because some bozo used them to create a protected image of Mickey Mouse doesn't mean the software authors should be held accountable.
But if someone made the equivalent of one of those websites, and the website said, "click here to generate a random picture of Mickey Mouse", then it feels like the person running the website should at least be held partially responsible, right? Here is a thing that was created for the specific purpose of breaking the law upon request. But what is the culpability of the person initiating the request?
Anyway, the scale of AI is staggering, and I agree with you. I think common decency dictates that the product's actions should, where possible, be limited to fall within the ethics of the organization providing the service, but the responsibility for making the tool do heinous things should be borne by the person giving the order.
I think yes, CSAM and other harmful outputs are a different and more heinous problem. I also think the responsibility is different between someone using a model locally and someone prompting Grok on twitter.
Posting a tweet asking Grok to transform a picture of a real child into CSAM is no different, in my mind, than asking a human artist on twitter to do the same. So in the case of one person asking another person to perform this transformation, who is responsible?
I would argue that it’s split between the two, with slightly more falling on the artist. The artist has a duty to refuse the request and report the other person to the relevant authorities. If that artist accepted the request and then posted the resulting image, twitter then needs to step in and take action against both users.
Even if you can’t reliably control it, if you make a tool that generates CSAM you’ve made a CSAM generator. You have a moral responsibility to either make your tool unavailable, or figure out how to control it.
I'm not sure I agree with this specific reasoning. Consider this: any given image viewer can display CSAM. Is it a CSAM viewer? Do you have a moral responsibility to make it refuse to display CSAM? We can extend this to anything from graphics APIs to data storage, etc.
There's a line we have to define that I don't think really exists yet, nor is it supported by our current mental frameworks. To that end, I think it's just more sensible to simply forbid it in this context without attempting to ground it. I don't think there's any reason to rationalize it at all.
I think the question might come down to whether Grok is a "tool" like a paintbrush or Photoshop, or if Grok is some kind of agent of creation, like an intern. If I ask an art intern to make a picture of CSAM and he does it, who did wrong?
If Photoshop had a "Create CSAM" button and the user clicked it, who did wrong?
I think a court is going to step in and help answer these questions sooner rather than later.
Normalizing AI as being human equivalent means the AI is legally culpable for its own actions rather than its creators or the people using it, and not guilty of copyright infringement for having been trained on proprietary data without consent.
I happen to agree with you that the blame should be shared, but we have a lot of people in this thread saying "You can't blame X or Grok at all because it's a mere tool."
From my knowledge (albeit limited) of the way LLMs are set up, they most definitely can have guardrails on what can be produced. ChatGPT has canned responses to certain prompts that stop users from proceeding.
And X specifically: there have been many cases of X adjusting Grok when it wasn't following a particular narrative on political issues (won't get into specifics here). But it was very clear and visible. Grok had certain outputs. Outcry from certain segments. Grok posts deleted. Trying the same prompts then produced a different result.
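To make that concrete: major providers expose hosted moderation endpoints for exactly this kind of pre-screening. A minimal sketch using OpenAI's Python SDK - the wrapper function is hypothetical, and the policy around it is illustrative, not any platform's actual pipeline:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def prompt_is_blocked(prompt: str) -> bool:
    """Pre-screen a user prompt with a hosted moderation model."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    ).results[0]
    # `flagged` is True when any category (including sexual content
    # involving minors) trips the provider's thresholds.
    return result.flagged
```

None of this is bulletproof, as the replies note, but the guardrail machinery exists off the shelf.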
From my (admittedly also limited) understanding, there's no bulletproof way to say "do NOT generate X", as the output isn't deterministic and you can't reverse-engineer and excise the CSAM-generating parts of a model. "AI jailbreak prompts" are a thing.
Well it’s certainly horrible that they’re not even trying, but not surprising (I deleted my X account a long time ago).
I’m just wondering if from a technical perspective it’s even possible to do it in a way that would 100% solve the problem, and not turn it into an arms race to find jailbreaks. To truly remove the capability from the model, or in its absence, have a perfect oracle judge the output and block it.
Again, I'm not the most technical, but I think we need to step back and look at this holistically. Given Grok's integration with X, there could be other methods of limiting the production and dissemination of CSAM.
For argument's sake, let's assume Grok can't reliably have guardrails in place to stop CSAM. There could be second- and third-order review points: before an image is posted by Grok, another system could scan the image to verify whether it's CSAM, and if the confidence is low, human intervention could come into play.
I think the end goal here is prevention of CSAM production and dissemination, not just guardrails in an LLM and calling it a day.
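To sketch what such a review point could look like (the names and thresholds here are entirely illustrative; csam_risk stands in for whichever image classifier you trust):

```python
def csam_risk(image_bytes: bytes) -> float:
    """Placeholder for any image-risk classifier, returning a 0.0-1.0 score."""
    raise NotImplementedError  # wire a real model in here

def review_before_posting(image_bytes: bytes) -> str:
    """Gate every generated image before the bot posts it publicly."""
    score = csam_risk(image_bytes)
    if score >= 0.9:
        return "block"          # high confidence: never post, suspend, report
    if score >= 0.2:
        return "human_review"   # uncertain: hold the post for a moderator
    return "post"               # low risk: allow the automatic post
```

The point isn't the exact numbers; it's that the gate sits between generation and publication, a step X controls end to end.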
Given how spectacular the failure of EVERY attempt to put guardrails on LLMs has been, across every single company selling LLM access, I'm not sure that's a reasonable belief.
The guardrails have mostly worked. They have never ever been reliable.
Yes, every image generation tool can be used to create revenge porn. But there are a bunch of important specifics here.
1. Twitter appears to be making no effort to make this difficult. Even if people can evade guardrails, that does not make the guardrails worthless.
2. Grok automatically posts the images publicly. Twitter is participating not only in the creation but also in the distribution and boosting of this content. The reason a ton of people are doing this is not that they personally want to jack it to somebody, but that they want to humiliate them in public.
3. Decision makers at twitter are laughing about what this does to the platform and its users when a "post a picture of this person in their underwear" button is available next to every woman who posts on the platform. Even here they are focusing only on the illegal content, as if mountains of revenge porn being made of adult women weren't also odious.
It is trivially easy to filter this with an LLM or even just a basic CLIP model. Will it be 100% foolproof? Not likely. Is it better than doing absolutely nothing and then blaming the users? Obviously. We've had this feature in the image generation tools since the first UI wrappers around Stable Diffusion 1.0.
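For anyone doubting "trivially easy": here's a minimal sketch of a zero-shot CLIP check using the Hugging Face transformers API. The label set is a toy stand-in; a production filter would use tuned prompts, calibrated thresholds, and a purpose-trained classifier on top:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Toy labels for illustration; real deployments tune these carefully.
LABELS = [
    "a photo of a child",
    "a sexually explicit image",
    "an ordinary safe-for-work photo",
]

def label_scores(path: str) -> dict[str, float]:
    """Zero-shot classify an image against the labels above."""
    image = Image.open(path)
    inputs = processor(text=LABELS, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, len(LABELS))
    probs = logits.softmax(dim=1)[0].tolist()
    return dict(zip(LABELS, probs))
```

Crude, but it runs on commodity hardware, which is why UI wrappers have shipped this kind of safety checker since Stable Diffusion 1.0.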
> but output is directly connected to its input and blame can be proportionally shared
X can actively work to prevent this. They aren't. We aren't saying we should blame the person entering the input. But, we can say that the side producing CSAM can be held responsible if they choose to not do anything about it.
> Isn't this a problem for any public tool? Adversarial use is possible on any platform
Yes. Which is why the headline includes: "no fixes announced" and not just "X blames users for Grok-generated CSAM."
Grok is producing CSAM. X is going to continue to allow that to happen. Bad things happen. How you respond is essential. Anyone who is trying to defend this is literally supporting a CSAM generation engine.
An analogy: if you're running the zoo, the public's safety is your job for anyone who visits. It's of course also true that sometimes visitors act like idiots (and maybe should be prosecuted), and also that wild animals are not entirely predictable, but if the leopards are escaping, you're going to be judged for that.
Maybe because sometimes they're kids? You gotta kid-proof stuff in a zoo.
Also, punishment is a rather inefficient way to teach the public anything. The people who come through the gate tomorrow probably won't know about the punishment. It will often be easier to fix the environment.
Removing troublemakers probably does help in the short term and is a lot easier than punishing.
If the personal accountability happened at the speed and automation level that X allows Grok to produce revenge porn and CSAM, then I'd agree with you.
Yep. "Oh grok is being too woke" gets musk to comment that they'll fix it right away. But turn every woman on the platform into a sex object to be the target of humiliation? That's just good fun apparently.
I even think that the discussion focusing on csam risks missing critical stuff. If musk manages to make this story exclusively about child porn and gets to declare victory after taking basic steps to address that without addressing the broader problem of the revenge porn button then we are still in a nightmare world.
Women should be able to exist in public without having to constantly have porn made of their likeness and distributed right next to their activity.
You always have liability. Putting guardrails in place lets you tell the court that you see the problem and are trying to prevent it. It often becomes easier to get out of liability if you can show the courts you did your best to prevent this. Courts don't like it when someone is blatantly unaware of things - ignorance is not a defense if "a reasonable person" would be aware of it. If this were the first AI in 2022 you could say "we never thought about that" and maybe get by, but by 2025 you need to tell the court "we are aware of the issue, and here is why we think we had reasonable protections that the user got around".
How about policing CSAM at all? I can still vividly remember firehose API access and all the horrible stuff you would see on there. And if you look at sites like tk2dl you can still see most of the horrible stuff that does not get taken down.
It's on X, not some fringe website that many people in the world don't access.
Regardless of how fringe, I feel like it should be in everyone's best interest to stop/limit CSAM as much as they reasonably can, without getting into semantics of who requested/generated/shared it.
> How about not enabling generating such content, at all?
Or, if they’re being serious about the user-generated content argument, criminally referring the users asking for CSAM. This is hard-liability content.
This is probably harder because it's synthetic and doesn't exist in the PhotoDNA database.
Also, since Grok is really good at picking up context, something as simple as "remove their T-shirt" would be enough to generate the picture someone wanted, but very hard to find using keywords.
IMO they should mass-hide ALL the images created since that specific moment, and use some sort of AI classifier to flag/ban the accounts.
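On the PhotoDNA point: PhotoDNA itself is proprietary, but the underlying idea is perceptual hashing, where visually similar images hash to nearby values. A minimal sketch with the open imagehash library (the known-bad store and the distance threshold are illustrative):

```python
from PIL import Image
import imagehash

# Hypothetical store of hashes of previously confirmed-bad images.
KNOWN_BAD_HASHES: list[imagehash.ImageHash] = []

def matches_known_content(path: str, max_distance: int = 8) -> bool:
    """Compare an image's perceptual hash against the known-bad set."""
    h = imagehash.phash(Image.open(path))
    # Subtracting two ImageHash values yields their Hamming distance.
    return any(h - known <= max_distance for known in KNOWN_BAD_HASHES)
```

Which underlines the comment above: freshly synthesized images have no pre-existing hash to match, so hash databases alone can't catch generated content.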
Willing to bet that X premium signups have shot up because of this feature. Currently this is the most convenient tool to generate porn of anything and everything.
I don’t think anyone can claim that it’s not the user’s fault. The question is whether it’s the machine’s fault (and the creator and administrator - though not operator) as well.
The article claims Grok was generating nude images of Taylor Swift without being prompted and that there was no way for the user to take those images down
I don't know how common this is, or what the prompt was that inadvertently generated nudes. But it's at least an example where you might not blame the user
Yeah but “without being asked” here means the user has to confirm they are 18+, choose to enable NSFW video, select “spicy” in Grok’s video generation settings and then prompt “Taylor Swift celebrating Coachella with the boys”. The prompt seems fine but the rest of it is clearly “enable adult content generation”.
I know they said “without being prompted” here but if you click through you’ll see what the person actually selected (“spicy” is not default and is age-gated and opt-in via the nsfw wall).
Let’s not lose sight of the real issue here: Grok is a mess from top to bottom run by an unethical, fickle Musk. It is the least reliable LLM of the major players and musk’s constant fiddling with it so it doesn’t stray too far from his worldview invalidates the whole project as far as I’m concerned.
Isn't it a strict liability crime to possess it in the US? So if AI-generated apparent CSAM counts as CSAM legally (not sure on that), then merely storing it on their servers would make X liable.
You are only liable if you know - or should know - that you possess it. You can help someone out by mailing their sealed letter containing CSAM and be fine since you have no reason to suspect the sealed letter isn't legal. X can store CSAM so long as they have reason to think it is legal.
Note that things change. In the early days of twitter (pre X) they could get away with not thinking about the issue at all. As technology to detect CSAM marches on, they need to use it (or justify why it shouldn't be used - too many false positives?). As a large platform for such content they need to push the state of the art in such detection. At no point do they need perfection - but they need to show they are doing their reasonable best to stop this.
The above is of course my opinion. I think the courts will go a similar direction, but time will tell...
> You are only liable if you know - or should know - that you possess it.
Which he does, and he responded with “I will blame and punish users.” Which, yeah, you should, but you also need to fix your bot. He certainly has no issue doing that when Grok outputs claims/arguments that make him look bad or otherwise engages in what he considers “wrongthink,” but suddenly when there are real, serious consequences he gets to hide behind “it’s just a user problem”?
This is the same thing YouTube and social media companies have been getting away with for so long. They claim their algorithms will take care of content problems, then when they demonstrably fail they throw their hands up and go “whoops! Sorry we are just too big for real people to handle all of it but we’ll get it right this time.” Rinse repeat.
Blame and punish should be a part of this. However that only works if you can find who to blame and punish. We also should put guard rails on so people don't make mistakes. (generating CSAM should not be an easy mistake to make when you don't intend it, but in other contexts someone may accidentally ask for the wrong thing)
There are still a lot of unanswered questions in that area regarding generated content. Whether the law deems it CSAM depends on whether the image depicts a real child, and even that is ambiguous - was it wholly generated or augmented? Also, is it "real" if it's a model trained on real images?
Some of these things are going into the ENFORCE act, but it's going to be a muddy mess for a while.
I think platforms that host user-generated content are (rightly) treated differently. If I posted a base64 of CSAM in this comment it would be unreasonable to shut down HN.
The questions then, for me, are:
* Is Grok considered a tool for the user to generate content for X or is Grok/X considered similar to a vendor relationship
* Is X more like Backpage (not protective enough) than other platforms
I’m sure this is going to court, at least for revenge porn stuff. But why would anyone do this to their platform? Crazy. X/Twitter is full of this stuff now.
I don't think you can argue yourself out of "the Grok account is owned and operated by Twitter". On no planet is what it outputs user-generated content, since the content does not originate from the user; at most they requested some content from Twitter and Twitter provided it.
Getting off to images of child abuse (simulated or not) is a deep violation of social mores. This itself does indeed constitute a type of crime, and the victim is taken to be society itself. If it seems unjust, it's because you have a narrow view of the justice system and what its job actually is (hint: it's not about exacting controlled vengeance)
It may shock you to learn that bigamy and sky-burials are also quite illegal.
Any lawyers around? I would assume (IANAL) that Section 230 does not apply to content created by an agent owned by the platform, as opposed to user-uploaded content. Also it seems like their failure to create safeguards opens up the possibility of liability.
And of course all of this is narrowly focused on CSAM (not that it should be minimized) and not on the fact that every person on X, the everything app, has been opened up to the possibility of non-consensual sexual material being generated of them by Grok.
The CSAM aspects aren't necessarily as affected by 230: to the extent that you're talking about it being criminal, 230 doesn't apply at all there.
For civil liability, 230 really shouldn't apply; as you say, 230's shield is about avoiding vicarious liability for things other people post. This principle stretches further than you might expect in some ways but here Grok just is X (or xAI).
Nothing's set in stone much at all with how the law treats LLMs but an attempt to say that Grok is an independent entity sufficient to trigger 230 but incapable of being sued itself, I don't see that flying. On the other hand the big AI companies wield massive economic and political power, so I wouldn't be surprised to see them push for and get explicit liability carveouts that they claim are necessary for America to maintain its lead in innovation etc. etc., whether those come through legislation or court decisions.
> non-consensual sexual material being generated of them by Grok
They should disable it in the Netherlands in this case, since it really sounds like a textbook slander case, and the spreader can also be held liable. Note: it's not the same as in the US despite using the same word; deepfakes have been ruled to be slander and this is no different, especially if you know it's fake because it's made with "AI". There have been several cases of pornographic deepfakes, all of which were taken down quickly, in which the poster/creator was sentenced. The remaining issue, even when posts are taken down quickly, is the rule that if something is on the internet, it stays on the internet. The publisher always went free due to acting quickly and not having created it. I would like to see where it goes when both publisher and creator are the same entity, and they do nothing to prevent it.
Yeah this is pretty funny. Seeing all these discussions about section 230 and the American constitution...
Nobody in the Netherlands gives one flying fuck about American laws; what Grok is doing violates many Dutch laws. Our parliament actually did its job and wrote some stuff about revenge porn, deepfakes and artificial CP.
I find it fascinating to read comments from a lot of people who support open models without guardrails, and then to read this thread with seemingly the opposite sentiment in overwhelming majority. Is it just two different sets of users with differing opinions on if models should be open or closed?
I think there's a difference between providing access without guardrails while decrying what folks do with it, and, as in this case, a site that allows it and doesn't even care that its integrated tool is used to creep on folks.
I can argue for access to, say, Photoshop-like tools, and still say folks shouldn't post revenge/fake porn ...
They ban users responsible for misusing the tool, and refer them to law enforcement when appropriate. The whole point of this article is to say that's not good enough ("X blames users for [their misuse of the tool]") implying that merely making the tool available for people to use constitutes support of pedophilia. (Textbook case of appealing to the Four Horsemen of the Infocalypse.) The prevailing sentiment in this thread seems to be agreement with that take.
Making the tool easy to use and allowing it to just immediately post on Twitter is much different than simply providing a model online that people can download and run themselves.
If you are providing a tool for people, YES you are responsible to some degree.
Think of it this way. I sell racecars. I'm not responsible if someone buys my racecar, drinks and drives, and dies. Now, say I run an entertainment venue where you can ride along in racecars. One of my employees is drunk, and someone dies. Now I am responsible.
In, like, an "ask a bunch of people and see what they think" way. Consensus. I'm not talking legality because I'm not a lawyer and I also don't care.
But I think, most people would say "uh, yeah, the business needs to do something or implement some policy".
Another example: selling guns versus running a shooting range. If you're running a shooting range then yeah, I think there's an expectation you make it safe. You put up walls, you have security, etc. You try your best to mitigate the bad shit.
Misuse in this case doesn't include harassing adult women with AI generated porn of them. "Oh we banned the people doing this with children" doesn't cut it, in my mind.
As of May posting AI generated porn of unconsenting adults is a federal crime[1], so I'd be very surprised if they didn't ban users for that as well. The article conflates a bunch of different issues which makes it difficult to understand exactly what is and is not being talked about in each individual paragraph.
I am glad that open models exist. I also prefer that the most widely accessible AI systems that have engineered prompts and direct integration with social media platforms have guardrails. I do not think that this is odd.
I think it is good that you can install any apk on an android device. I also think it is good that the primary installation mechanism that most people use has systems to try to prevent malware from getting installed.
This sort of approach means that people who really need unbounded access and are willing to go through some extra friction can access these things. It makes it impossible for a megacorp to have complete control over a computing ecosystem. But it also reduces abuse since most people prefer to use the low-friction approach.
When people want open models without guardrails they're mostly talking about LLMs, not so much image/video models. Outside of preventing CSAM, what kind of guardrails would an image or video model even have? Don't output instructions on the image for how to make meth? Lol
How do you even train a model to do that? For closed/proprietary models, that works, but for open/offline models, if I want to make a LoRA for meth instructions in an image... I don't know that you can stop me from doing so.
The thread is about a model-as-a-service. What you do at home on your own computer is qualitatively different, in terms of harassment and injury potential, from something automatically shared to Twitter.
Any mention of Musk on HN seems to cause all rational thought to go out the window, but yeah I wonder in this case how much of this wild deviation from the usual sentiment is attributable to:
1. Hypocrisy (people expressing a different opinion on this subject than they usually would because they hate Musk)
vs.
2. Selection bias (article title attracts a higher percentage of people who were already on the more regulation, less freedom side of the debate)
vs.
3. Self-censorship (people on the "more freedom, less regulation" side of the debate being silent or not voting on comments because in this case defending their principles would benefit someone they hate)
There might be other factors I haven't considered as well.
Gee, I wonder why people would take offense at an AI model being used to generate unprecedented amounts of CSAM from real children, or objectify millions of women without their consent. Must be that classic Musk Derangement Syndrome.
The real question is how can the pro-Musk guys still find a way to side with him on that. My leading theory is that they're actually pro-pedophilia.
I think regardless of source, sharing such pictures on public social media is probably crossing the line? And everything generated by this model is de-facto posted publicly on social media (some commenters are even saying it's difficult to erase unwanted / unintended images?)
I'd also argue commercialization affects this - X is marketing this as a product and making money off subscriptions, whereas I generally think of an open model as something you run locally for free. There's a big difference between "Porn Producer" and "Photoshop"
Context matters. In this case we're talking about Grok on X. It's not a philosophical debate about whether open or closed models are good. It's a debate (even though it shouldn't be) about Grok producing CSAM on X. If this were about what users do with their own models on their local machines, things would be different, since that's not openly accessible or part of one of the biggest sites on the net. I think most people would argue that public-facing LLMs have some responsibility to the public. As would any IP owner.
I think the question of whether X should do more to prevent this kind of abuse (I think they should) is separate from Grok or LLMs, though. I get that since xAI and X are owned by the same person there are some complications here, but most of the arguments I'm reading have to do with the LLM specifically, not just lax moderation policies.
Joke's on xAI. Europe doesn't have a Section 230, and the responsibility falls squarely on the platform and its owners. In Europe, AI-generated or photoshopped CSAM is treated the same as actual abuse-backed CSAM if the depiction is realistic. Possession and distribution are both serious crimes.
The person(s) ultimately in charge of removing (or preventing the implementation of) Grok guardrails might find themselves being criminally indicted in multiple European countries once investigations have concluded.
I'm not sure Grok output is even covered by Section 230. Grok isn't a separate person posting content to a platform, it's an algorithm running on X's servers publishing on X's website. X can't reasonably say "oh, that image was uploaded by a user, they're liable, not us" when the post was performed by Grok.
Suppose, if instead of an LLM, Grok was an X employee specifically employed to photoshop and post these photos as a service on request. Section 230 would obviously not immunize X for this!
Generating a non-real child could arguably not count. However, that's not a given.
> The term "child pornography" is currently used in federal statutes and is defined as any visual depiction of sexually explicit conduct involving a person less than 18 years old.
That is broad enough to cover anything obviously young.
But when it comes to "nude-ifying" a real image of a known minor, I strongly doubt you can use the defence that it's not a real child.
Therefore you're knowingly generating and distributing CSAM, which is out of scope for Section 230.
A natural person. That's what CSAM covers. There have been prosecutions under federal CSAM laws otherwise, but there have also been successful constitutional challenges that, briefly, classify fabricated content as obscenity. The implication there is that private possession of obscene materials is lawful.
> Europe doesn't have a Section 230 and the responsibility fall squarely on the platform and its owners.
They have something like Section 230 in the E-Commerce Directive 2000/31/EC, Articles 12-15, updated by the Digital Services Act. The particular protections for hosts are different, but it is the same general idea.
Is Europe actually going to do anything? They currently appear to be puckering their assholes and cowering in the face of Trump, and his admin are already yelling about how the EU is "illegally" regulating American companies.
They might just let this slide so as not to rock the boat - either doing nothing out of fear, or buying time if they are actually divesting from the alliance with, and economic dependence on, the US.
There are so many of these nonsense views of the EU here. Not being vocal about a mental case of a president doesn't mean politicians are "puckering their assholes". The EU is not afraid to moderate and fine tech companies. These things take time.
Under previous US admins and the relationship the EU had, yeah.
The asshole puckering is from how Trump has completely flipped the table, everything is hyper transactional now, and as we’ve seen military action against leaders personally is also on the table.
I’m saying I could see the EU let this slide now because it’s not worth it politically to regulate US companies for shit like this anymore. Whether that would be out of fear or out of trying to buy time to reorganize would probably end up in future getting the same kind of historical analysis that Chamberlain’s policy of appeasement to Germany gets nowadays
They are able to change how Grok is prompted to deny certain inputs, or to say certain things. They decided to do so to praise Musk and Hitler. That was intentional.
They decided not to do so to prevent it from generating CSAM. X offering CSAM is intentional.
Grok will shit-talk Elon Musk, and it will also put him in a bikini for you. I've always found it a bit surprisingly how little control they seem to have there.
OK, I understood when stuff related to DOGE was consistently flagged for being political and not relevant to hacking, but... this is surely relevant to the audience here, no?
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
That's what section 230 says. The content in question here is not provided by "another information content provider", it is provided by X itself.
Section 230 is not a magical "get of jail free" card that you can use to absolve your tech platform of any responsibilities to its users. Removing posts and banning users is obviously not a workable solution for a technology that can abuse individuals very quickly.
My point is more that a lot of people were talking about removing Section 230 protections, which I think is implicitly what X is saying absolves them of responsibility for Grok-generated CSAM.
Removing Section 230 was a big discussion point for the current ruling party in the US, when they didn't have so much power. Now that they do have power, why has that discussion stopped? I'd be very interested in knowing what changed.
Ah, I misinterpreted - apologies. The current ruling party is not a monolith. The tech faction has been more or less getting its way at the expense of the traditionalist natcon faction. The former has no interest in removing Section 230 protections, while a few in the latter camp say they do.
But beyond the legality or obvious immorality, this is a huge long-term mistake for X. 1 in 3 users of X are women - that fraction will get smaller and smaller. The total userbase will also get smaller and smaller, and the platform will become a degenerate hellhole like 4chan.
When do we cross the line of culpability with tool-assisted content? If I have a typo in my prompt and the result is illegal content, am I responsible for an honest mistake or should the tool have refused to generate illegal content in the first place?
Do we need to treat genAI like a handgun that is always loaded?
Even ignoring that Grok is generating the content, not users, I think you can still hold to Section 230 protections while thinking that companies should take more significant moderation actions with regards to issues like this.
For example, if someone posted CSAM on HN and Dang deleted it, I think that it would be wrong to go after HN for hosting the content temporarily. But if HN hosted a service that actively facilitated, trivialized, and generated CSAM on behalf of users, with no or virtually no attempt to prevent that, then I think that mere deletion after the fact would be insufficient.
But again, you can just use "Grok is generating the content" to differentiate if that doesn't compel you.
Should Adobe be held accountable if someone creates CSAM using their software? They could put image recognition into it that would block it, but they don't.
Look what happens when you put an image of money into Photoshop. They detect it and block it.
I don't know. Does it matter what I think about that? Let's say I answer "yes, they should". Then what? Or what if I say "no, I see a difference". Then what?
Who cares about Adobe? I'm talking about Grok. I can consistently say "I believe platforms should moderate content in accordance with Section 230" while also saying "And I think that the moderation of content with regards to CSAM, for major platforms with XYZ capabilities should be stricter".
The answer to "what about Adobe?" is then either that it falls into one of those two categories, in which case you have your answer, or it doesn't, in which case it isn't relevant to what I've said.
1) you need to bring your own source material to create it. You can't press a button that says "make child porn"
2) It's not reasonable to expect that someone could make CSAM in Photoshop. More importantly, though, the user is the one hosting the software, not Adobe.
>You can't press a button that says "make child porn"
Where is this button in Grok? You, as the user, have to explicitly write out a very obviously bad request. Nobody is going to accidentally get CSAM content without making a conscious choice about a prompt that's pretty clearly targeting it.
Is it reasonable (in the legal sense, i.e. anyone could do it) that someone with little effort could create CSAM using Photoshop?
No: you need training, plus a lot of time and effort, to do it. With Grok you say "hey, make a sexy version of [picture of this minor]" and it'll do it. That takes no training, and it's not a high bar to stop people doing it.
The non-CSAM example is this: it's illegal in the USA to make anything that looks like a US dollar bill, i.e. photocopiers have blocks on them to stop you making copies of one.
You can get around that as a private citizen, but it's still illegal. A company knowingly making a photocopier that allows you to photocopy dollar bills is in for a bad time.
Something must have changed, there's a whole lot less concern about censorship and government intervention in social media, despite many "just the facts" reports of just such interventions going on.
I'm at a loss to explain it, given media's well known liberal bias.
How curious that your comment was downvoted! It seems completely appropriate and in line with the discussion.
I think it's time to revisit these discussions and in fact remove Section 230. X is claiming that the Grok CSAM is "user generated content" but why should X have any protection to begin with, be it a human user directly uploading it or using Grok to do this distribution publicly?
The section 230 discussion must return, IMHO. These platforms are out of control.
Grok is a hosted service. In your analogy, it would be like a gun shop renting a gun out to someone who puts down "Rob a store" as the intended usage of the rental. Then renting another gun to that same client. Then when confronted, telling people "I'm not responsible for what people do with the guns they rent from me".
It's not a personal tool that the company has no control over. It's a service they are actively providing and administering.
I think a better analogy would be going into a gun shop and paying the owner to shoot someone. They're asking grok to undress people and it's just doing it.
Would you blame only the users of a murder-for-hire service? Sure, yes, they are also to blame, but the murder-for-hire service would also seem to be equally culpable.
Great, can we finally get X blocked in the EU then? Far too many people are still hooked to toxic content on that platform, and it is owned by an anti-EU, right-extreme, nazi-salute guy, who would love nothing more than seeing the EU fail.
> “That’s like blaming a pen for writing something bad,” DogeDesigner opined.
Genuinely terrifying how Elon has a cadre of unpaid yes-men ready to justify his every action. DogeDesigner regularly subtweets Elon agreeing with his latest dumb take of the day, and even seems to have based his entire identity on Elon's doge obsession.
I can't imagine how terrible that self imposed delusion feels deep down for either of them.
> Genuinely terrifying how Elon has a cadre of unpaid yes-men ready to justify his every action.
A similar article[1] briefly made it to the HN front page the other day, for a few minutes before Elon's army of unpaid yes-men flag-nuked it out of existence.
I have a very hard time understanding the business case for xAI/Grok. It is supposedly worth $200 billion (at least by Silicon Valley math), putting it in the company of OpenAI and Anthropic, but like...Who is using it? What is it good for? Is it making a single dollar in revenue? Or is the whole thing just "omg Elon!!" hype similar to most of his other endeavors?
> Or is the whole thing just "omg Elon!!" hype similar to most of his other endeavors?
Yes, but combined with "omg AI" (which happened elsewhere; for instance, see the hype over OpenAI Sora, which is clearly useless except as a toy), so extra-hype-y.
I don't buy the "I only provide the tool" cop out. Musk does control what Grok spews out and just chooses not to act in this case.
When Grok stated that Israel was committing genocide, it was temporarily suspended and fixed[0]. If you censor some things but not others, enabling the others becomes your choice. There is no eating the cookie and having it too - you either take a "common carrier" stance or censor, but also take responsibility for what you don't censor.
If you follow the "tool-maker is responsible for tool-use" thread of thought to its logical conclusion, you have to hold creators of open-weights models responsible for whatever people do with these models. Do you want to live in a world that follows this rule?
But we don't have to take things to furthest conclusions. We can very easily draw both a moral and legal line between "somebody downloaded an open weight model, created a prompt from scratch to generate revenge porn of somebody, and then personally distributed that image" and "twitter has a revenge porn button right next to every woman on the platform that generates and distributes revenge porn off of a simple sentence."
People who say "society should permit X, but only if it's difficult" have a view of the world incompatible with technological progress and usually not coherent at all.
You seem unfamiliar with these things we have called laws. I recommend reading up on what they are and how they work. It would be generally useful to understand such things.
The core issue is that X is now a tool for creating and virally distributing these images anonymously to a large audience, often targeting the specific individuals featured in the images. For example, to any post with a picture, any user can simply reply "@grok take off their clothes and make them do something degrading", and the response is then generated by X and posted in the same thread. That is an entirely different kind of tool from an open-weight model.
The LLM itself is more akin to a gun available in a store in the "a gun is a tool" argument (reasonable arguments on both sides, in my opinion); however, this situation is more like a gun manufacturer running a program to mass-distribute free pistols to a masked crowd, with predictable consequences. I'd say the person running that program was either negligent or intentionally promoting havoc, to the point where it should be investigated and regulated.
The phrase “its logical conclusion” is doing a lot of heavy lifting here. Why on earth would that absurdity be the logical conclusion? To me it looks like a very illogical conclusion.
Importantly, X also provides the hardware to run the model, a friendly user-interface around it, and the social platform to publicly share and discuss outputs from the model. It's not just access to the model.
I can see this becoming a culture war thing like vaccines. Conservatives will become pro-CSAM because it triggers the overly sensitive crybaby Liberals.
This has already been a culture war thing, and it's why X.com is able to continue to provide users with CSAM with impunity. The site is still up after everything, and the app is still on the app store everywhere.
When the far-right paints trans people as pedophiles, it's not an accident that also provides cover for pedophiles.
An age of consent between 16 and 18 is relatively high, born of progressive feminist wins. In the United States, the lowest AOC was 14 until the 1990s, and the AOC in the US ranged from _7 to 12_ for most of our existence.
To be clear, I'm in favor of a high age of consent. But it's something that had to be fought for, and it's not something that can be assumed to be safe in our culture (like the rejection of nazis and white supremacists, or valuing women's rights including voting and abortion).
Influential politicians like Tom Hofeller were advocates for pedophilia and nobody cares at all. Trump is still in power despite the Epstein controversy, Matt Gaetz still hasn't been punished for paying for sex with an underage girl in 2017. The Hitler apologia in the far-right spaces even explicitly acknowledge he was a pedophile. Etc.
In a different era, X would have been removed from Apple's and Google's app stores for the CEO doing nazi salutes and the chatbot promoting Hitler. But even now that X is a CSAM app, as of 3PM ET, I can still download X on both of their app stores. That would not have been normal just two years ago.
This has already been a culture war issue for awhile, there is a pro-pedophilia side, and this is just another victory for them.
We've already got a taste of that with people like Megyn Kelly saying "it's not pedophilia, it's ephebophilia" when talking about Epstein and his connections. Not surprising though. When you have no principles you'll go as far as possible to "trigger the libs".
Already the case. I can’t dig up the link, but I recall that a recent poll showed that about half of Republicans would still support Trump even if he was directly implicated in Epstein’s crimes.
Naughty Old Mr Car's fans are triggered by any criticism of Dear Leader.
This is actually separate to hn's politics-aversion, though I suspect there's a lot of crossover. Any post which criticised Musk has tended to get rapidly flagged for at least the last decade.
Only because of the broader context of the legal environment. If there was no prosecution for breaking and entering, they would be effectively worthless. For the analogy to hold, we need laws to throw coercive measures against those trying to bypass guard rails. Theoretically, this already exists in the Computer Fraud and Abuse Act in the US, but that interpretation doesn't exist quite yet.
Goalpost movement alert. The claim was that "AI can be told not to output something". It cannot. It can be told to not output something sometimes, and that might stick, sometimes. This is true. Original statement is not.
After learning that guaranteed delivery was impossible, the once-promising "Transmission Control Protocol" is now only an obscure relic of a bygone era from the 70s, and a future of inter-connected computer systems was abandoned as merely a delusional, impossible fantasy.
If your effort is provably futile, wouldn't saying you tried be a demonstration of a profound misallocation of effort (if you DID try), or a blatant lie (if you did not)?
The irony. Musk fumes about pedo leftist weirdos. And then his own ai bot creates CSAM. The right are full of hypocrites and weirdos compensating so so very hard.
Elon Musk attends the Donald Trump school of responsibility. Take no blame. Admit no fault. Blame everyone else. Unless it was a good thing, then take all credit and give none away.
lol. Always fun to watch HN remove highly relevant topics from the top of the front page. To their credit, they usually give us about an hour to discuss before doing so. How kind of them.
So let me get this straight. When people use these tools to steal artists’ styles directly and generate fake Ghibli art, then it’s «just a tool, bro».
But when it’s used to create CSAM, then it’s suddenly not just a tool.
You _cannot_ stop these tools from generating this kind of stuff. Prompt guards only get you so far. Self-hosted versions don’t have them. The human writing the prompt is at fault. Just like it’s not Adobe’s responsibility if some sick idiot puts bikinis on a child in Photoshop.
People posting random cute candids of their family and pets is about the most commonplace type of social media post there is. You should be getting angry at the weird pervs sexualizing the images (and the giant AI company enabling it).
Twitter isn't just generating the images. It is also posting them. Hey, now in the replies below your child's twitter post there's a photo of them wearing a skimpy swimsuit. They see it. Their friends see it.
This isn't just somebody beating off in private. This is a public image that humiliates people.
Speaking in the abstract: There are arguments that fictional / drawn CSAM (such as lolicon) lowers the rates of child sex abuse by giving pedophiles an outlet. There are also arguments that consuming fictional / drawn CSAM is the start of an escalating pattern that leads to real sex abuse, as well as contributing to a culture that is more permissive of pedophilia.
Anecdotally speaking, especially as someone who was groomed online as a child, I am more inclined toward the latter argument. I believe fictional CSAM harms people and generated CSAM will too.
With generated images being more realistic, and with AI 'girlfriends' advertised as a woman who "can't say no" or as "her body, your choice", I am inclined to believe that the harms from this will be novel and possibly greater than existing drawn CSAM.
Speaking concretely: Grok is being used to generate revenge porn by editing real images of real children. These children are direct, unambiguous victims. There is no grey area where this can be interpreted as a victimless crime. Further, these models are universally trained with real CSAM in the training data.
I understand where you're coming from, and I'll play devil's advocate to the devil's advocate: If generative AI is generating convincingly photorealistic CSAM, what the fuck are they training the models on? And if those algorithms are modifying images of actual children, wouldn't you consider those victims?
I strongly sympathize with the idea that crimes should by definition have identifiable victims. But sometimes the devil doesn't really need an advocate.
Considering that every image generation model out there tries to censor your prompts/outputs despite trying their best not to train on CSAM... you don't need to train on CSAM for the model to be capable of generating CSAM.
Not saying the models don't get trained on CSAM. But I don't think it's a foregone conclusion that AI models capable of generating CSAM necessarily victimize anyone.
It would be nice if someone could research this, but the current climate makes it impossible.
When you indiscriminately scrape literally billions of images, and excuse yourself from vigorously reviewing them because it would be too hard/expensive, horrible and illegal stuff is bound to end up in there.
That's probably incidental, horrible as it is. Models don't need training data of everything imaginable, just enough things in combination, and there's enough imagery of children's bodies (including non-sexual nudity) and porn to generate a combination of the two, same as it can make a hybrid giraffe-shark-clown on a tricycle despite never seeing that in the training data before.
The biggest issue here is not that models can generate this imagery, but that Musk's Twitter is enabling it at scale with no guardrails, including spamming them on other people's photos.
Yep, when my kid was taking selfies with my phone and playing with Google Photos, I appreciated that Google didn't let any Gemini AI manipulation of any kind occur, even if whatever they were trying to do was harmless. Seemed very strict when it detected a child. Grok should probably do that.
>If generative AI is generating convincingly photorealistic CSAM, what the fuck are they training the models on?
Pretty sure these models can generate images that do not exist on their training data. If I generate a picture of a surfing dachshund, did it have to train on canine surfers?
I'm not sure if there's been talk about it, but it does make you wonder: would this AI-generated CSAM sate the abuser's needs, or would it spread the idea that it isn't bad and possibly create more abusers who then go on to abuse physical children? Would those individuals have done it without the AI? I believe there's still debate over whether abuse is a result of nature or nurture, but that starts to get into theory and philosophy. To answer your question about who the victim is: I would say the children those images are based on, as well as any future children harmed due to exposure to these images, or due to the abusers possibly seeking real content. I think for the most part AI-generated porn hurts everyone involved.
There's definitely at least some people who will be influenced by being repeatedly exposed to images. We know that usual conditioning ideas work. (Like presence of some type of images mixed in with other sexual content) On the other hand, I remember someone on HN claiming their own images are out there in CSAM collections and they'd prefer someone using those if it stops anyone from hurting others.
The need to fight CSAM also provides a pretext for broader censorship. Look at all the people in this thread salivating over the prospect of using Grok generations to take down Musk, whom they hate for allowing people to express wrongthink on X. If they ever regain broad censorship powers over AI or people, they definitely won't stop at blocking CSAM.
Lots of research has been done on this topic. You say "let some science happen", and then two paragraphs later say "according to the research": so has or hasn't research taken place? (Last time I looked into this, I came away with the impression that most people considered pædophiles are not exclusively attracted to children: I reject your claim that the "no choice" claim is evidenced, and encourage you to show us the research you claim to have.)
I don't think you're engaging with this topic in good faith.
> Whether it is exclusive or not is not really relevant to the point.
Whether it's exclusive or not is very relevant to the point, because sexual fetishes and paraphilias are largely mutable. In much the same way that a bi woman can swear off men after a few bad experiences, or a monogamous person in a committed relationship can avoid lusting after other people they'd otherwise find attractive, someone with non-child sexual interests can avoid centring children in their sexuality, and thereby avoid developing further sexual interests related to children. (Note that operant conditioning, sometimes called "conversion therapy" in this context, does not achieve these outcomes.) I imagine it's not quite so easy for people exclusively sexually-attracted to children (though note that one's belief about their sexuality is not necessarily the same as one's actual sexuality – to the extent that "actual sexuality" is a meaningful notion).
> Can you link me to research on how AI generated CSAM consumption affects offending rates?
No, because "AI-generated" hasn't been a thing for long enough that I'd expect good research on the topic. However, there's no particular reason to believe it'd be different to consumption of similar material of other provenance.
It's a while since I researched this, but I've found you a student paper on this subject: https://openjournals.maastrichtuniversity.nl/Marble/article/.... This student has put more work into performing a literature review for their coursework than I'm willing to do for a HN comment. However, skimming the citations, I recognise some of these names as cranks (e.g. Ray Blanchard), and some papers seem to describe research based on the pseudoscientific theories of Sigmund Freud (another crank). Take this all with a large pinch of salt.
> For instance, virtual child pornography can cause a general decline in sexual child abuse, but the possibility still remains that in some cases it could lead to practicing behavior.
I remember reading research about the circumstances under which there is a positive relationship, which obviously didn't turn up in this student's literature review. My recent searches have been using the same sorts of keywords as this student, so I don't expect to find that research again any time soon.
None of the services dealing with actual research-paper discovery/distribution block this. Don't expect AI to make up answers; start digging through https://www.connectedpapers.com/ or something similar.
I think primarily this victimizes all those already victimized by the CSAM in the training material, and it also generally offends our society's collective sense of morality.
Simplistically and ignorantly speaking, if a diffusion model knows what a child looks like and also knows what an adult woman in a bikini looks like, couldn't it just merge the two together to create a child in a bikini? It seems to do that with other things (e.g., a pelican riding a bicycle).
In principle yes, but in practice no: the models don't just learn the abstract space, but also memorise individual people's likenesses. The "child" concept contains little clusters for each actual child who appeared enough times in the dataset. If you tried to do this, the model would produce sexualised imagery of those specific children with distressing regularity.
There are ways to select a specific point or region in latent space for a diffusion model to work towards. If properly chosen, this can have it avoid specific people's likenesses, and even generate likenesses outside the domain of the latent space (which tend to have severe artefacts). However, text prompting doesn't do that, even if the prompt explicitly instructs it to: text-to-image prompts aren't instructions. A system like Grok will always exhibit the behaviour I described in my previous (GP) comment.
As I mentioned in another comment (https://news.ycombinator.com/item?id=46503866), there are other reasons not to produce synthetic sexualised imagery of children, which I'm not qualified to talk about: and I feel this topic is too sensitive for my usual disclaimered uninformed pontificating.
It's been reported that Grok has generated CSAM by editing photos of real children, so there's the real victim; you shouldn't need more than that to find this situation abominable.
This is a big, sensitive topic. Last time I researched it, I was surprised at how many things I assumed were just moralistic hand-wringing are actually well-evidenced interventions. Considering my ignorance, I will not write a lengthy response, as I am wont to.
I will, instead, speak to what I know. Many models are heavily overfit on actual people's likenesses. Human artists can select non-existent people from the space of possible visages. These kinds of generative models have a latent space, many points of which do not correspond to real people. However, diffusion models working from text prompts are heavily biased towards reproducing examples resembling their training set, in a way that no prompting can counteract. Real people will end up depicted in AI-generated CSAE imagery, in a way that human artists can avoid.
There are problems with entirely-fictional human-made depictions of child sexual exploitation (which I'm not discussing here), and AI-generated CSAE imagery is at least as bad as that.
> CSAM includes both real and synthetic content, such as images created with artificial intelligence tools. A child cannot legally consent to any sexual act, let alone to being recorded in one.
No idea what "vibes" are. My question was a very clear one -- definition. If one draws someone in a bikini (the current Twitter craze), it fails the Roth test for unprotected obscene speech on point 2 ("Judged by what the average person in a particular community finds acceptable") -- nobody online today finds bikini photos unacceptable. It is thus constitutionally-protected free speech. If one prompts an ML model to do the drawing, how is it different?
If your English is weak, there are dictionaries, translation programs, and LLMs that can help. The first meaning at https://www.merriam-webster.com/dictionary/vibe is “a distinctive feeling or quality capable of being sensed,” which is the relevant one here.
The OP article refers to “outputs that sexualized real people without consent.” If any of those real people are minors, that qualifies as CSAM. It’s not complicated.
Given that “put X into a bikini” is constitutionally protected speech, and the output fails the Roth test for obscenity and is thus the same, would not every other law be null and void w.r.t. stopping this? And “qualifies as X according to some unelected org with no lawmaking power” is even weaker than a law.
You dislike it. I get it. I do too. But this is a discussion of law. Legally, I do not see how any law was broken. I welcome any citation to the contrary. I note, again, that "some unelected org said so" is not a weighty argument when the opposition is the SCOTUS's clear stance on the 1st amendment.
Most of the instances I’m seeing discussed on X are not “fictional depiction of nonexistent child” but instead “minor’s posted photo directly replied to with ‘grok, put her in a bikini covered in ‘donut glaze’’”, which, in my opinion, crosses moral bounds far beyond the scope of your theoretical lab-grown work of fiction
If you post pictures of yourself on X and don't want grok to "bikini you", block grok.
Yes, under the TOS, what Grok is doing is not the "fault" of Grok (the causal factor of the post is enabled by two humans, the poster and the prompter; the human intent is what initiates the generated post, not the bot, just as a gun is fired by a human, not by strong winds). You could argue it's the fault of the prompter, but then we circle back to the cat-and-mouse censorship issue. And no, I don't want a less-censored Grok version that's unable to "bikini a NAS" (which is what I've been fortunate to witness) just because "new internet users" don't understand what the Internet is. (Yes, I know you can obviously fine-tune the model to allow funny generations and deny explicit/spicy generations.)
If X implemented what the so-called "moralists" want, it would just turn into Facebook.
And for the "protect the children" folks, it's really disappointing how we're always coming back to this bullsh*t excuse every time a moral issue arises. Blocking grok is a fix both for the person who doesn't want to get edited AND the user who doesn't want to see grok replies(in case the posts don't get the NSFW tag in time).
Ironically, a decent number of people who want to censor Grok are Bluesky users, where "lolicon" and similar dubious degenerate content is posted non-stop AS HUMAN-MADE content. Or what, is it suddenly a problem just because it's an AI? The fact that you can "strip" someone by tweeting at a bot?
And lastly, sex sells. If people haven't figured out that "bikinis", "boobs", and everything related to sex will be what wins the AI/AGI/etc. race (it actually happens for ANY industry), then it's their problem. Dystopian? Sure, but it's not an issue you can win with moral arguments like "don't strip me". You will get stripped down if it created 1M impressions and drives engagement. You will not convince Musk(or any person who makes such a decision) to stop grok from "stripping you", because the alternative is that other non-grok/xAI/etc. entities/people will make the content, drive the engagement, make the money.
When I generate content on most AI's including Grok, I ask it to fashion a prompt first of the subject I want and ask it to make sure that it does not violate any TOS or CSAM policies. I also instruct it that the prompt should be usable by most AIs. It fashions the prompt. When I use the prompt, the system complains that the prompt violates the TOS. I then ask the AI to locate the troubling aspect of the prompt. It says that it has and provides an alternative, safer prompt. More often than not, this newer prompt is also flagged as inappropriate. This is very frustrating even when the original intent is not to create content that violates any public AI policy. From my experience, both users and the technology make mistakes.
Someone spending 40 hours drawing a nude is not equivalent to someone saying take this photo and make them naked and having a naked photo in 4 seconds.
Only one of these is easily preventable with guardrails.
Is Grok simply a tool, or is it itself an agent of the creative process? If I told an art intern to create CSAM, he does, and then I publish it, who's culpable? Me? The intern? Both of us? I don't expect you to answer the question--it's not going to be a simple answer, and it's probably going to involve the courts very soon.
So, if that "software program" had a traditional button UI, a button said "Create CSAM," and the user pushed it, the program's creator is not culpable at all for providing that functionality?
I would agree with this if Grok's interface was "put a pixel there, put a line there, now fill this color there" like Photoshop. But it's not. Generative AI is actively assisting users to perform the specific task described and its programming is participating in that task. It's not just generically placing colors on the screen where the user is pointing.
Come on man. Really? You think this is a good argument?
Why not charge the people who make my glasses cuz they help me see the CP? Why not charge computer monitor manufacturers? Why not charge the mine where they got the raw silicon?
Here you have a product which itself straight-up produces child porn with absolutely zero effort. Very different from some object that merely happens to be used in the process, like photographic materials.
Of course it's not the same thing, but it still doesn't make sense to use companies as police. I'm sure it's much easier than with a Nikon, but the vast majority of its users aren't doing it; just go after those who do instead of demanding that the companies do the police work.
If this were a case where CSAM production had become a mainstream use case, I would agree, but it is not.
> instead of demanding that the companies do the police work
How hard is this? What are they doing now, and is it enough? Do we know how hard they are trying?
For argument's sake, what if they had truly zero safeguards around it, and you could type "generate child porn" and it would, 100% of the time? Surely you'd agree they should prevent that case, and be held accountable if they never took action to prevent it.
Regulation and clear laws around this would help. Surely some threshold of prevention difficulty could be put in place that providers are required to adhere to.
I'm not into CP, so I don't try to make it generate such content, but I'm very annoyed that all providers try to lecture me when I try to generate anything about public figures, for example. Also, these preventive measures are not working well at all; yesterday I had one refuse to generate an infinite loop, claiming it's dangerous.
Just throw away this BS about safety and jail/fine whoever commits crimes with these tools. Make tools tools again and hold people responsible for what they do with them.
I'm not saying the companies should necessarily do the police work, though they absolutely should not release CP generators. What I am saying is that the companies should be held responsible for making the CP. Sure, the user who types "make me some CP" can be held accountable too, but so should the creators/operators of the CP generator.
Taking creepy pictures has real victims; making the machine generate the picture doesn't, but it does say something about the character of the person who prompts it, so I'm fine with them being punished. Either way, making the machine provider do the policing is ridiculous.
If it's AI-generated, it should be legal - regardless of whether the person consented for their image to be used and regardless of the age of the person.
You can't have AI-generated CSAM, as you're not sexually abusing anyone if it's AI-generated. It's better to have AI-generated CP instead of real CSAM because no child would be physically harmed. No one is lying that the photos are real, either.
And it's not like you can't generate these pics on free local models, anyway. In this case I don't see an issue with Twitter that should involve lawyers, even though Twitter is pure garbage otherwise.
As to whether Twitter should use moderation or not, it's up to them. I wouldn't use a forum where there are irrelevant spam posts.
I don't know, I feel like I'm taking crazy pills with this whole saga. Perhaps I haven't seen the full story.
The fact of the matter is they do have a policy, and they have removed content, suspended accounts, and perhaps even taken it further, as would be the case on other platforms.
As far as I understand there is no nudity generated by grok.
Should public GPT models be prevented from generating detestable things? Yes, I can see the case for that.
I won't dispute that there is a line between acceptable and unacceptable, but please remember people perv over less (Rule 34).
Are bikinis now taboo attire? What next: ankles, elbows, the entire human body? (Just like the Taliban.)
(Edit: I'm mentioning this paragraph for my below point.)
GPTs are not clever enough to make the distinction, by the way, so there's an unrealistic technical challenge here.
I suspect this saga is being blown out of proportion purely because of "eLoN BAd".
Generating sexualised pictures of kids is verboten. That's Epstein-level illegality. There is no legitimate need for the public to hold, make, or transmit sexualised images of children.
Anyone arguing otherwise has a lot of questions to answer
You're the one making the logical fallacies and reacting emotionally. Read what I have said first please.
That is a different Grok to the one publishing images and discussed in the article. Your link clearly states they are being moderated in the comments, and all comments are discussing adults only. The link's comments also imply that these folks are essentially jailbreaking, because guardrails exist too.
As I say, read what I said; please don't put words in my mouth. The GPT models wouldn't know what is sexualised. I said there is a line at some point. Non-sexualised bikinis are sold everywhere; do you not use the internet to buy clothes?
Your immediate dismissive reaction indicates you are not giving what I'm saying any thought. This is what puritanical thought often looks like. The discourse is so poisoned people can't stop, look at the facts and think rationally.
I don't think there is much emotion in said post. I am making specific assertions.
to your point:
> Non-sexualized bikinis are sold everywhere
Correct! The key logical modifier is non-sexual. Also, you'll note that a lot of clothing companies do not show images of children in swimwear. Partly that's down to what I imagine you would term puritanism, but also legal counsel. The definition of CSAM is loose enough (in some jurisdictions) to cover swimwear, depending on context. That context is challenging: a parent looking for clothes that will fit/suit their child is clearly not sexualising anything (corner cases exist, as I said, context); someone else using the same image for sexual purposes is.
And because, like the GPL3, CSAM is infectious, the tariff for both company and end user is rather high for making, storing, transmitting, and downloading those images. If someone is convicted of collecting those images and using them for a sexual purpose, then images that were created as not-CSAM suddenly become CSAM, and legally toxic to possess. (Context does come in here.)
> Your link clearly states they are being moderated in the comments
Which tells us that there is a lot of work on guardrails, right? It's a choice by xAI to allow users to do this (mainly, the app is hamstrung so that you have to pay for the spicy mode). Whether it's done by an ML model or not is irrelevant. Knowingly allowing CSAM generation and transmission is illegal. If you or I were to host an ML model that allowed users to do the same thing, we would be in jail. There is a reason why other companies are not doing this.
The law must be applied equally, regardless of wealth or power. I think that is my main objection to all of this. It's clearly CSAM, and anyone other than Musk doing this would have been censured by now. All of this justification is because of who is doing it, rather than what is being done. We can bikeshed all we want about whether it's actually, really CSAM, but that negates the entire point, which is that it's clearly breaking the law.
> The GPT models wouldn't know what is sexualised.
ML classification is really rather good now. Instagram's unsupervised categorisation model is really rather effective at working out the context of an image or video (i.e., differentiating clothes, and the context of those clothes).
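To make that concrete, here's a minimal sketch of the kind of zero-shot triage a platform could run on images before publishing them. It uses off-the-shelf CLIP via Hugging Face transformers; the label texts and file name are hypothetical, and a production system would use a purpose-trained classifier plus human review rather than this toy:

    # Minimal sketch: zero-shot image triage with CLIP (hypothetical labels).
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    labels = [
        "a catalog photo of children's swimwear",  # benign retail context
        "a sexualized image of a person",          # escalate to human review
        "an ordinary family photo",                # benign
    ]

    image = Image.open("candidate_output.jpg")  # hypothetical generated image
    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=1)[0]

    # Withhold publication and escalate if the risky label dominates.
    if probs[1] > max(probs[0], probs[2]):
        print("withheld pending human review")

The point isn't that this toy is adequate, only that context-sensitive classification (swimwear catalog vs. sexualised image) is a solved-enough problem that "the model can't tell" isn't much of a defence.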
> please don't put words in my mouth
I have not done this; I am asserting that the bar for justifying this kind of content, which is clearly illegal and easily prevented (i.e., a picture of a minor plus "generate an image of her in sexy clothes"), is very high.
Now you could argue that I'm implying that you have something to hide. I am actually curious as to your motives for justifying the knowing creation of sexualised images of minors. You've made a weak argument that there are legitimate purposes. You then argue that it's a slippery slope.
Is your fear that this justifies an age-gated internet? Censorship? What price do you think is worth paying?
Again, words in my mouth. I'm not justifying that, and nowhere did I say so. I could be very impolite to you right now for slandering me like that.
I said I don't understand the fuss because there are guardrails, action taken and technical limitations.
THAT is my motive. End of story. I do not need to parrot outrage because everyone else is; that "you're either with us or against us" stuff is bullshit. I'm here for a rational discussion.
Again, read what I've said: technical limitations. You wrote that long-ass explanation interspersed with ambiguities, like consulting lawyers in borderline cases, and then you expect an LLM to handle this.
Yes, ML classification is good now, but it's not foolproof. Hence we come back to the first point: processes to deal with it when X's existing guardrails fail, as x.com has done: delete, suspend, report.
My fear (only because you mention it; I didn't mention it above, I only said I don't get the fuss) is, it seems, that people are losing touch in this Grok thing; their arguments are no longer grounded in truth or rational thought. It's almost a rabid witch hunt.
At no point did I say or imply LLMs are meant to make legal decisions.
"Hey grok make a sexy version of [obvious minor]" is not something that is hard to stop. try doing that query with meta, gemini, or sora, they manage it reliably well.
There are no technical impediments to stopping this; it's a choice.
My point is: if it's so complex that you have to get a lawyer involved, how do you expect your LLM and system to cover all their own shortcomings?
I'd bet that if you put that prompt into Grok it'd be blocked, judging by that Reddit link you sent. These folks are essentially jailbreaking it, asking for modifications using neutral terms like "clothing", on images that Grok doesn't have the skill to judge.
> My point is: if it's so complex that you have to get a lawyer involved, how do you expect your LLM and system to cover all their own shortcomings?
Every feature is lawyered up. That's what general counsel does. Every feature I worked on at a FAANG had some level of legal-compliance gate on it, because mistakes are costly.
For the team that launched the chatbots, loads of time went into figuring out what stupid shit users could make it do, and blocking it. It's not like all of that effort stopped. When people started finding new ways to do naughty stuff, that had to be blocked as well, because otherwise the whole feature had to be pulled to stop advertisers from fleeing, or worse, FCC action or a class action.
> These folks are essentially jailbreaking it, asking for modifications using neutral terms like "clothing"
CORRECT! People are putting effort into jailbreaking the app, whereas on X's Grok they don't need to do any of that. Which is my point: it's a product choice.
None of this is a "hard legal problem", or in fact unpredictable. They have done a ton of work to stop it elsewhere (again, mainly because they want people to pay for "spicy mode").
At this point it should be clear that they know that Grok is unsafe to use, and will generate potentially illegal content even without a clear prompt asking it to do so.
This is a dangerous product, the manufacturer _knows_ it is dangerous, and yet still they provide the service for use.
Possession of CSAM materials can be a criminal offence. Here in Canada, it doesn't matter if it's a sex doll that looks like a child, an imagined bit of erotica, or a drawing. And so, it is dangerous to use a tool that can inadvertently create such materials without user intent.
I actually think this is a profoundly interesting question and one that I'm interested in thinking about further.
I think it's a problem for society when bad behavior is not transgressive. And moreover (I'm less certain about this one), I sort of think that, theoretically, society should be more liberal than its institutions; it creates really weird feedback loops when the institutions are more "liberal" than the population naturally is. (I'm using the term generically, not directly aligned with the political meaning.)
The theory I would present is that people should not be encouraged to transgress further than they are impelled to, but simultaneously people need an outlet to actually transgress in a way that is not acceptable! People shouldn't post edgy memes because the algorithm encourages it. People should post edgy memes because it's transgressive! But when the institutions actually encourage it? How broken is it that you can't be an edgy teenager because edgy is the culture and not the counter-culture?
In 2025, I think the truly transgressive activity is to not be online. Is to be straight-edge. And I sort of wonder if this is a small part of the young male mental-health crisis. They're not telling edgy jokes to be closer to their friends, they're telling edgy jokes to get fake internet points so people click on more advertising. How fucked is that?
But it's weird that kids are probably having less sex, drinking, and smoking, etc., than the institutions would have it.
So to kind of answer your question,
"In the 1990's, popular youth culture generally rebelled against this type of worry from adults but now even the youth are part of the moral witch-hunt for some reason."
This might explain how I, a formerly "edgy" Gen-X 90's kid, am heartily against institutions supporting this kind of behavior, while simultaneously supporting people engaging in it. The adults (X, parents, etc.) SHOULD be worrying about this kind of stuff SO THAT popular youth culture can continue to rebel against it.
Could you clarify what you mean by "institution"? What institution is actively encouraging transgression? Do you mean the cultures on social media that socially reward transgression? Isn't that just people and their culture, not an institution? Or is there something social media companies are actively doing to promote specific transgressions?
I'm thinking about the 80s-90s worries about Christian heresy. Popular culture was (and still is) full of insults against Christianity, probably because that was the kind of thing that offended an older generation in the west at the time. Is it wrong for institutions to encourage that?
While I have my own personal moral standards, I see society in general as morally relativist and don't accept arguments that the popular morals of today are right because they're popular now, while the popular morals of previous generations were wrong because they contradict the "right" morals of today. That's why I don't have much respect for people trying to enforce their own culture's arbitrary morals while not equally respecting conflicting morals.
> when bad behavior is not transgressive
That's a tricky one because what's "bad behavior"? Does it include denying the existence of God?
For what it's worth I'm making an argument that is probably completely unfeasible and with morally rocky foundations, but -
> While I have my own personal moral standards, I see society in general as morally relativist and don't accept arguments that the popular morals of today are right because they're popular now, while the popular morals of previous generations were wrong because they contradict the "right" morals of today
While I agree with you morally, I think that practically, for the stability of society, it's useful for there to be a relatively conservative (and not overly litigious) mainstream that people can choose to freely act outside of, and it doesn't entirely matter what that mainstream is. I am not morally aligned with, say, the Reagan-era moral majority who fought against foul language on TV and in music, but I think there is value in having that "moral majority" to rebel against.
There was this sort of lightning in a bottle in the second half of the 20th century, or maybe this has always been a Western thing, but there was this strong conservative popular culture (you couldn't even swear on television), yet transgression wasn't handled legally (at least not excessively so). So you could go see a transgressive comedian if you wanted to, but it was necessarily a subculture, and I think this idea is healthy for society. Strong social pressure in one direction, but an escape hatch from it if you want, in communities that aren't part of the popular culture.
So yes, I would say X is an institution, and maybe if I had my way, X wouldn't even allow swearing. If you wanted to swear on the internet, you would have to find a relatively "underground" place to do it; you could do it on, say, a private forum, but not on anything with more than, I dunno, 1 million users or something. But when X as an institution tells you that everything is OK, when ideas or pictures or movies basically stop being "dangerous", people stop being "dangerous". There's no unique thought, because all ideas are part of the mainstream. I think it creates less free thought, not more.
But in summary, to speak to your question directly: I think I'm making a very counter-intuitive argument that the thing you say you want, the 90's kid who valued dangerous media, doesn't exist anymore. In a sense, the folks on X are not that 90's kid at all; they're the "moral majority" the 90's kid was railing against, in a perverse manner of speaking.
It's hardly a hunt when there's a bunch of warty green women with black-tipped hats openly stirring a cauldron going "Double, double toil and trouble."
The world grew up and learned better. We've seen the effects that bullying, including cyberbullying, have on people. We've seen teenagers (and adults) get harassed with fake revenge porn.
Both of those things have motivated, and do motivate, people to do harm in the real world. This isn't random, nonsensical beeps and boops through which it's hard to manipulate behavior.
> Where are the upsides in CSAM - whether real or computer generated? How does it benefit society?
In a public forum like X, there probably are no upsides.
In general, though, pedophilia exists. This isn't something that is going to change. What is the harm in providing them with an alternative to real CSAM (which actually and actively hurts children)?
Giving them no legal avenue allows us to put more of them in jail. Once they are in jail, the odds of them molesting children go from "possibly very low, but measurably above zero" to "~0."
I think you are looking at it from the point of it being a punishment for victimizing someone -- when in fact it's used not to punish crime but to put away people who potentially might victimize someone in the future.
Granting that I think X should have stronger content policies and technological interventions against bad behavior as a matter of business, I do think that the X Safety team's position[0] is the only workable legal standard here. Any sufficiently useful AI product will _inevitably_ be usable, at minimum via subversion of its safety controls, to violate current (or future!) laws, and so I don't see how it's viable to prosecute legal violations at the level of the AI model or tool developers, especially if the platform is itself still moderating the actually illegal content. Obviously X is playing much looser with its safety controls than its competitors, but then we're just debating degrees rather than principles.
[0]
> Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.
A core issue here is that there isn't a black-and-white answer on a subject like this. Yes, it is wrong. Yes, they have a responsibility. But at the same time, taking that to an extreme leads to heavy censorship. So what is a practical middle ground? Is there something like a 'sexualized validation suite' that could be an industry standard for testing whether an LLM needs additional training? If there were, victims could potentially claim negligence if a provider weren't using best practices and they were harmed because of it, right? Are there missing social or legal mechanisms to deal with misuse? One thing I think is missing is a '911' for cyber offenses like this. If someone breaks into my house I can call 911; if someone creates revenge porn, who do I call? I don't think there is a simple answer here, but constructive suggestions, ones that actually balance free speech with being a responsible service provider, would be helpful. 'They are wrong' doesn't actually lead to change.
Looks like this hit a nerve. Any comments on the practical solutions though? The comment wasn't advocating that they should make CSAM or that they shouldn't face repercussions for enabling it, at least I don't think it reads that way. I honestly think that a core issue here is we are missing practical fixes. Things that make it easier for victims to get relief and things that make it clear that a provider is being irresponsible so that they can face civil or criminal penalties. If there aren't solid industry standards then how can you claim they aren't implementing best practices to hold them accountable? If victims don't have effective means of relief then how will we find and stop this? I'd love to hear actual concrete actions that the industry can put in place. 'Just tell them to stop' doesn't create a framework that leads to change.
The reason it hit a nerve is that you're just being extraordinarily credulous about xAI's lies. There are solid industry standards, and we can just tell them to stop; we know this because Grok has a number of competitors which don't generate CSAM. Indeed, they've already implemented the industry standards which prevent CSAM generation; they just added a flag called "spicy mode" to turn them off, because those standards also prevent the generation of pornographic images.
Trust me, I believe nothing positive about xAI. But various players doing similar things and an actual published standard or standards body are totally different things. The industry is really young, like a couple of years young at this point. There really aren't well-developed standards and best practices. Moments like this are opportunities to actually develop and use them, or at least start the process. Do you have a recognized standard you can point to for this? When it comes to car safety there are lots of recognized standards; same with medical safety, etc. Is there anything like that in the LLM world?
Edit: just to bring receipts, 3 instances in a few scrolls: https://x.com/i/status/2007949859362672673 https://x.com/i/status/2007945902799941994 https://x.com/i/status/2008134466926150003
Stop apologizing for Nazis or we'll think you are one.
I'm sure "The only people who say it's not are <x>" is an abominable thought pattern Nazis and similar types would love everyone to have. It makes for a great excuse to never weigh things on their merits, so I'm not sure why you feel the need to invoke it when the merits are already in your court. I can't look at these numbers https://i.imgur.com/hwm2bI5.png and conclude most Americans are Nazi's instead of being willing to accept perhaps not everyone sees it the same way I do even if they don't like Nazis either.
To any actual Nazi supporters out there: To hell with you
To anybody who thinks either everyone agrees with what they see 100% of the time or they are a literal Nazi: To hell with you as well
So yeah, I believe there are a LOT of Nazi-adjacent folks in this country: they're the ones who voted for Trump 3 times even after they knew he was a fascist piece of garbage.
- Even assuming all who weren't sure (13%) should just be discounted as not having an opinion, like those who had not heard about it (22%), 32% is still not a majority of the remaining (100% - 13% - 22%) = 65%. 32% could have been a plurality of those with an opinion, but since you insisted on lumping things into three buckets of 32%, 35%, and the remainder, the remaining 33% would actually take the plurality of those who responded with opinions by this definition.
N.b. if read straight from the sheet, "A Nazi salute" would already have had a plurality. Though grouping like this is probably the more correct thing to do, it actually ends up significantly weakening the overall position of "more people agree than not" rather than strengthening it.
- But, thankfully, "A Nazi Salute" + "A Roman Salute" would actually have been 32+2=34%, so plurality is at least restored by more than one whole percentage point (if you excluded the unsure or unknowing)!
- However, a "Roman salute" (which is a bit of a farce of a name really) can't really be assumed to be fungible with the first option in this poll. If it were fully fungible, it could have been combined into that option. I.e. there's no way to tell which adults responding "A Roman salute" meant to be counted as "a general fascist salute, as the Nazis later adopted" or meant to be counted as "a non-fascist meaning of the salute, like the Bellamy salute was before WWII". So whichever wins this game of eeking out percentage points comes down to how each person wants to group these 2 percentage points. Shucks!
- In reality, between error margins and bogus responses, this is about as close as one could expect to get for an equal 3 way split between "it was", "it wasn't", and "dunno/don't care", and pulling ahead a percentage point or two is really quite irrelevant beyond that it is, blatantly, not actually a majority that agree it was a Nazi-style salute.
Even though I'm one who agrees with you that Elon exhibits neo-Nazi tendencies, the above just shows how we go from "Elon replies directly supporting someone in a thread about Hitler being right about the Jewish community" and similar things constantly for years, to debating individual percentage points to claim our favorite sub-majority says he likely made a one-off hand gesture three years ago. Now imagine I were actually a Nazi supporter walking into the thread: suddenly we've gone from talking about direct pro-Nazi statements and retweets constantly in his feed to a chance for me to debate with you whether the majority think he made a one-off hand gesture three years ago? Anyone concerned with Musk's behavior shouldn't touch this topic with a 20-foot pole, so they can get straight to the real stuff.
Also... I've run across a fair share of crypto lovers who turn out to be neo-nazish, but I'm not sure how you're piecing together that such a large portion of the population is a "crypto-Nazi" when something like only 28% of the population has crypto at all, let alone is a Nazi too. At least we're past "anyone who disagrees with my interpretations can only be doing so as a Nazi" though.
Thanks for the note!
Whether HN wants to endorse a political ideology or not, their approach to handling these issues is material support for the ideologies these stories and comments are criticizing.
Kinda like the scientists building the atomic bomb.
They'll be in for a rude awakening.
Like, the entirety of DOGE was such an obviously terrible series of events, but for whatever reason, the above were both big cheerleaders on Twitter.
And yeah the moderation team here have been clearly letting everything Musk-related be flagged even after pushback. It's absolutely vile. I've seen many people try to make posts about the false flagging issue here, only to have those posts flagged as well (unapologetically, on purpose, by the mods themselves).
"Major Silicon Valley Company's Product Creates and Publishes Child Porn" has nothing to do with politics. It's not "political content." It is relevant tech news when someone investigates and points out wrongdoing that tech companies are up to. If another tech company's product was doing this, it would be all over HN and there would be pretty much no flagging.
When these stories get flagged, it's because people don't want bad news to get out about the company--it's not about avoiding politics out of principle.
I'm not saying you're wrong about it being brigaded by PR bots, I'm saying it's still political. Hell, everything's political.
I (and others) were arguing that the Trump administration is probably, and unfortunately, the most relevant topic to the tech industry on most any given day. This is because computer is mostly made out of people. The message that these political stories intersect deeply with technology (as is seen here) seems to have successfully gotten through.
I wish the most relevant tech story of every day were, say, some cool new operating system, or something cool and curiosity-inspiring like "you can sort in linear time" or "python is an operating system" or "i made X rewritten in Y" or whatever.
I think in most things, creation is much harder than destruction, but software and software systems are an exception where one individual can generally do more creation than destruction. So, it's particularly interesting (and jarring) when a few individuals are able to make decisions that cause widespread destruction.
We should collectively be proud that we have a culture where creation is easier than destruction. But it's also why the top stories of any given day will be "Trump did X" or "us-east-1 / cloudflare / crowdstrike is down" or "software widely used in {phones / servers} has a big scary backdoor".
In 2020, Dang said [1]
> Voting ring detection has been one of HN's priorities for over 12 years: [...]
> I've personally spent hundreds of hours working on this, as well as tracking down voting rings of every imaginable sort. I'd never claim that our software catches everything, but I can tell you that it catches so much that I often go through the lists to find examples of good projects that people were trying ineptly to promote, and invite them to do it again in a way that is more likely to gain community interest.
Of course this sort of thing is inherently heuristic; presumably bots throw up a smokescreen of benign activity, and sophisticated bots could present a very realistic, human-like smokescreen.
[1] https://news.ycombinator.com/item?id=22761897
There are all sorts of approaches that a moderation team could take if they actually believed this was a problem. For example, identify the users who regularly downvote/flag stories like this that end up being cleared by the moderation team for unflagging or the 2nd chance queue and devalue their downvotes/flags in the future.
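A minimal sketch of that weighting idea, in Python; the function names and decay constants here are hypothetical, not anything HN actually does:

    from collections import defaultdict

    # user_id -> current flag weight; everyone starts at full credibility
    flag_weight = defaultdict(lambda: 1.0)

    def record_mod_review(user_id, flag_upheld):
        # Moderator verdicts feed back into future flag weights.
        if flag_upheld:
            flag_weight[user_id] = min(1.0, flag_weight[user_id] * 1.1)
        else:
            flag_weight[user_id] *= 0.5  # overturned flags cost credibility

    def effective_flags(flagger_ids):
        # Ranking penalties would key off this sum, not the raw flag count.
        return sum(flag_weight[u] for u in flagger_ids)

Users whose flags keep getting overturned quickly stop mattering, without any explicit ban.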
I think the biggest thing HN could do to stop this problem is to not make flagging affect an article's ranking until after a human mod reviews the flags and determines them to be appropriate. Right now, all bad actors apparently have to do is be quick on the draw, and get their flagging ring in action ASAP. I'm sure any company's PR team (or motivated Elon worshiper) can buy "100 HN flags on an article" on the dark web right now if they wanted to.
This just describes HN as a whole, so if this is the concern, might as well shut the site down.
Now that you mention it - I've noticed the same on Youtube ... I used to get suspended every 5 minutes on there.
edit: back to 14, kinda crazy
But I generally consider something political if it involves politicians, or anyone being upset about anything someone else is doing, or any topic that they could mention on normal news. I prefer hn to be full of positive things that normal people don't understand or care about.
(As a long-term Musk-sceptic, I can confirm that Musk-critical content tended to get insta-flagged even years before he was explicitly involved in politics.)
They are in here too. But thanks to moderation they are usually more subtle and use dog whistles or proxy targets.
Seems like bot behavior.
There’s one in this thread. A sibling to my comment.
I mean, honestly, you are wasting your time. Why would you expect the website run by the guy who likes giving Nazi salutes on TV to take down Nazi content?
There's no point trying to engage with Twitter in good faith at this point; only real option is to stop using and move on (or hang out in the Nazi bar, I guess).
Personally I've never seen anything like this.
Once again - links are trivial to share.
Otherwise this is hearsay.
As for the ones I reported: I deleted the report emails, so I can't help you at this moment. I don't know why you're surprised; you can go looking yourself and find examples.
https://x.com/UpwardChanging posts Hitler content, 14 words, black sun graphics, swastikas, antisemitic content etc. 21k followers
https://x.com/hvitrulfur supportively reposts swastika content, white supremacism, anti-black racism, islamophobia, 14 words
https://x.com/unconquered_sol black sun, swastikas, fasces, hitler glorification. 70k followers
2. Seems like a case of https://en.wikipedia.org/wiki/White_guilt which spills into racism/white-supremacy.
3. This is literally art. Not my taste of course.
OP's claim was X is swimming in hate speech.
p.s. communist symbols are banned in a lot of the world too (https://en.wikipedia.org/wiki/Bans_on_communist_symbols), yet this is ok for bluesky:
* https://bsky.app/profile/mikkel314.bsky.social/post/3mbe62hg...
* https://bsky.app/profile/gwynnstellar.bsky.social/post/3mb5p...
* https://bsky.app/profile/negatron00.bsky.social/post/3mbfnnh...
* https://bsky.app/profile/kyulen742.bsky.social/post/3mb4nkeg...
* https://bsky.app/profile/mommyanddaddyslittlebimbo.com/post/...
I’m not sure if this is much worse than the textual hate and harassment being thrown around willy nilly over there. That negativity is really why I never got into it, even when it was twitter I thought it was gross.
I haven't seen Xi, but I am unfortunate enough to know that such an animated depiction of Maduro also exists.
These people are clearly doing it largely for shock value.
It's become a bit of a meme to do this right now on X.
FWIW (very little), it's also on a lot of male posts, as well. None of that excuses this behavior.
Fuck X.
1. Denmark taxes its rich people and has a high standard of living.
2. Scammy looking ad for investments in a blood screening company.
3. Guy clearing ice from a drainpipe, old video but fun to watch.
4. Oil is not actually a fossil fuel, it is "a gift from the Earth"
5. Elon himself reposting a racist fabrication about black people in Minnesota.
6. Climate change is a liberal lie to destroy western civilization. CO2 is plant food, liberals are trying to starve the world by killing off the plants.
7. Something about an old lighthouse surviving for a long time.
8. Vaccine conspiracy theories
9. Outright racism against Africans, claiming they are too dumb to sustain civilized society without white men running it.
10. One of those bullshit AI videos where the AI doesn't understand how pouring resin works.
11. Microsoft released an AI that is going to change everything, for real this time, we promise.
12. Climate change denialism
13. A post claiming that Africa and South America aren't poor because they were robbed of resources during the colonial era and beyond, but because they are too dumb to run their countries.
14. A guy showing how you can pack fragile items using expanding foam and plastic bags. He makes it look effortless, but glosses over how he measures out the amount of foam to use.
15. Hornypost asking Grok to undress a young Asian lady standing in front of a tree.
16. Post claiming that the COVID-19 vaccine caused a massive spike (from 5 million to 150 million) in cases of myocarditis.
17. A sad post from a guy depressed that a survey of college girls said that a large majority of them find MAGA support to be a turn off.
18. Some film clip with Morgan Freeman standing on an X and getting sniped from an improbable distance
19. AI bullshit clip about people walking into bottomless pits
20. A video clip of a woman being confused as to why financial aid forms now require you to list your ethnicity when you click on "white", with the only suboptions being German, Irish, English, Italian, Polish, and French.
Special bonus post: Peter St Onge, Ph.D. claims "The Tenth Amendment says the federal government can only do things expressly listed in the Constitution -- every other federal activity is illegal." Are you wondering what federal activity he is angry about? Financial support for daycare.
So yeah, while it wasn't a total and complete loss, it is obvious that the noise far exceeds the signal. It is maybe a bit of a shock just how much blatant climate-change denialism, racism, and vaccine conspiracy is front-page material. I'm saddened that there are people reading this every day and taking it to heart. The level of outright racism is quite shocking too: on Twitter it's not even up for debate that black people are just plain inferior to the glorious Aryan race. This is supposedly the #1 news source on the Internet? Ouch.
Edit: Got the year wrong at the top of the post, fixed.
What to do about it is to point out to those people in the middle how badly things are being fucked up, preferably with how those mistakes link back to their pocketbook.
They weren't placed there by God.
And you thought that was a different argument than "you shouldn't have worn that skirt if you didn't want to get raped"?
The CSAM machine is only a recent addition.
That's interesting - do you have a link for this? I'd be curious to know more of the section's details.
“The information must be "provided by another information content provider", i.e., the defendant must not be the "information content provider" of the harmful information at issue”
If Grok is generating these images, I read this as meaning Twitter could become an information content provider. I couldn't find any relevant rulings, but I doubt any exist, since services like Grok are relatively new.
2) X still has an ethical and probably legal obligation to remove these images from their platform, even if they are somehow found not to be responsible for generating them, even though they generated them.
I brought up Section 230 because it used to be that removal of Section 230 was an active discussion in the US, particularly for Twitter, pre-Elon, but seems to have fallen away.
With content generated by the platform, it certainly seems reasonable to want to understand how Section 230 applies, if at all, and I think that Section 230 protections should probably be removed for X in particular.
You are correct; I read your earlier post as "did we forget our already established principle"? I admit I'm a bit tilted by X doing this. In my defense, there are people making the "blame the user, not the tool" argument here though, which is the core idea of section 230
The very first AI code generators had this issue: users could produce illegal content by making specific requests. A lot of people, me included, saw this as a problem, and there were a few copyright lawsuits arguing it. The courts, however, did not seem very sympathetic to this argument, putting the blame on the user rather than the platform.
Here's hoping that Grok forces regulators to decide this subject once and for all.
I believe he thinks the same applies to Grok, or whatever is done on the platform. The fact that "@grok do xyz" makes it instantaneous doesn't mean you should do it.
Anyways, super cool that anyone speaking out already has their SSN in his DB.
Weird. Why do people get in trouble for using the word "cis" on twitter?
> X is planning to purge users generating content that the platform deems illegal, including Grok-generated child sexual abuse material (CSAM).
Which is moderating/censoring.
The tool (Grok) will not be updated to limit it - that's all. Why? I have no idea, but it seems lately that all these AI tools have more freedom than us humans.
If you want to be an actress and you are 14 years old, you now have to worry about tools that make porn of you.
If you are an ordinary woman that wants to share photos with your friends on instagram, you now have to worry about people making porn of you!
It’s against the TOS to post a picture of your own boobs for example.
The one above is not my opinion (although I partially agree with it, and now you can downvote this one :D ). To be honest, I don't care at all about X nor about an almost trillionaire.
It was full of bots before, now it's full of "AI agents". It's quite hard sometimes to navigate through that ocean of spam, fake news, etc.
Grok makes it easier, but it's still ugly and annoying when 90-95% of posts are always the same.
For most fundamentalist religions, men are almost never penalized for bad behavior. It's nearly impossible to find a man being killed for violating a morality law including selling pornography or engaging in prostitution.
But on the other hand, it's very easy to find examples of women getting stoned for dressing improperly.
Secondly, you posit it's "nearly impossible to find a man being killed for violating a morality law", which is true because of two factors. The first of which is because it's difficult to find any kind of representative sample of justice being meted out in Afghanistan because it's not freely and actively reported on. The second of which is because the punishment for violating moral laws is usually public flogging. The idea that these laws overwhelmingly target women is false, most of the time the people being punished for breaking morality laws are men: https://amu.tv/137185
It is clear to me that the only thing you know about Afghanistan is that women unfortunately live as second-class citizens. This is clear not only because of the naive things you say, but because you explicitly fall back to painting with ridiculously unspecific brushstrokes. With your knowledge exhausted, you revert to talking about "most fundamentalist religions", despite the domain already being pretty well defined as specifically the Taliban. You shoehorn in the misogyny angle, as though that's relevant to the context and makes your point stronger, but it's just vacuous nonsense. Your entire point seems to be that the justice system in Afghanistan primarily punishes women (which is a silly falsehood in and of itself), and that's why a major public figure enabling mass obscenity would entail no consequences? Are you actually out of your mind? There's simply no way you actually believe this crap. I'm sorry, that kind of naivete is just too ridiculous to buy.
The cherry on top is that you lead all of this crypto neo-orientalist shit with "I don't think you understand how religious fundamentalism works in practice". Give me a break.
I actually find abortion with no exception for rape to be a far more ideologically pure position than abortion with exceptions for rape.
The one that makes the least sense is a restriction on abortion even when the fetus cannot survive. That is far less defensible than having no rape exception, as it can't be explained from the viewpoint of the rights of the mother nor from that of the rights of the fetus.
It's defensible when you realize that forced pregnancy is viewed by many religious people as a punishment. "If you didn't want a baby, don't have sex" is very commonly heard in private conversations with religious people.
Because pregnancy is a holy punishment, the consequences, even death, are seen as moral.
This is also why the rape exception is more common than the medical exception. A mother dying because of an ectopic pregnancy or because she was too young to have a baby is god's will.
I initially held your viewpoint, but after engaging with a lot of people I realized they often had pretty similar views on life and liberty to mine; they were just looking at it from the viewpoint of the fetus rather than the mother. From that perspective it just doesn't make sense at all to make an exception for rape.
People will almost never take the "it's a punishment" position in a debate because that's not a popular position to hold and it's pretty weak morally at the end of the day. That's why the "life of the fetus" approach is most frequently taken even though it leads to absurd positions. For example, pro-life people will very often be put into a difficult position when the notion of IVF comes up.
That's what betrays their true views, IMO.
I've simply had a lot of private conversations with people about religion (I was Mormon for a long time and served a Mormon mission). That's where my opinion on the actual underlying anti-abortion attitude comes from: lots of private conversations in safe spaces. The life of the fetus, frankly, is almost never brought up as a reason.
And, as I pointed out, it pretty well explains the weird boundaries anti-abortion laws have, boundaries that make no sense if the laws were purely about the life of the fetus.
Most nations still have social support and government responsibility as a last resort, which could equally do the job of supporting children without willing parents, but then people return to the punishment/moral angle: if men don't want to pay for children, then they should not have sex.
Look at how quickly people reach for the morality position, and you see how little friction anti-abortion policies have to overcome.
What I expect to see from those who view it primarily as punishment is little beyond court-ordered child support: penalizing the "dead-beat dad", as it were. I expect those governments not to provide state child benefits, tax breaks for parents, or any sort of welfare/support/minimum standard of living for parents. That is to say, in those jurisdictions the answer to "how hard would it be to be a single parent" would be "very hard".
For governments that are solely looking out for the welfare of the kid, I expect to see a large amount of support for kids, especially for poor families. I expect some help with childcare, housing, etc to make sure the kids are well cared for.
The closer people hold to those values, the more easily society accepts laws like anti-abortion ones. Religion does play a supporting role in this by holding onto the values, but it is not always at the center itself.
Is the opposition arguing for abortion, but only in the case of rape or incest? Clearly not. That would be a far more reasonable middle ground.
Also, abortion for eugenics is inefficient and difficult for some women, with physical and mental effects; only in extenuating circumstances like China during the one-child period would it be viable. This is a side issue from the main point. Ethics in IVF and in the usage of abortion/Plan B (where it's so early that it's not practical for your eugenics idea) is a discussion we should have, and have already skipped for IVF in the USA (where it's more practical, from a theoretical standpoint, for your eugenics point). But it's a distraction from the primary objective of conservative groups: to force women to have less power and choice in their lives. That is the question we see being answered once conservatives gain the power to make legislation of their choice in the states or via the Supreme Court in the USA.
IIRC that is not a safe assumption. 50/50 is a population-wide statistic, but it's pretty common for individuals to have a substantially skewed probability for one gender of offspring or the other.
Huh?
I'm just an interested party because my family has nearly all girls for an entire generation, so I have paid attention to research that shows how this happens.
You seem to think it disagrees with a conspiracy theory. I don't care about that, I was just adding a bit of accuracy to the discussion. Carry on.
We need something. I asked if there was a hot mic because those two have a tendency to get recorded saying stupid things. It could also be an e-mail, a memo, a recollection by a friend or former colleague, et cetera.
In the absence of evidence, it's much simpler to conclude that at least Musk is just sexist versus trying "to make women afraid to participate in public life."
Okay.
> You expect every bad actor to make a public statement about their vile plans?
Neither Musk nor Miller have exhibited a tremendous amount of discipline in this department.
I don’t expect them to say this. But I do expect someone arguing in good faith to distinguish between plans and effects, particularly when we’re turning what looks like garden-variety sexism into a political theory.
Not being debated here.
OP claimed Musk and Miller aim "to make women afraid to participate in public life."
Henry VIII also didn't have 14 children.
Henry VIII also didn't have all of his children by IVF.
Odds of 14 children all being male are about 1 in 16,000 (2^14 = 16,384).
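Assuming an independent 50/50 chance per birth (the caveat elsewhere in the thread about individually skewed odds applies), the arithmetic is a one-liner:

    p = 0.5 ** 14        # probability that all 14 children are male
    print(1 / p)         # 16384.0, i.e. roughly 1 in 16,000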
It would be remarkably unlikely. It's fair to say Musk is probably sex selecting for sons. It doesn't follow that he has plans "to make women afraid to participate in public life."
No, that follows from his other actions
Musk is 14/14 born male. It's extremely unnatural.
Miller, I'm sure, is mostly focused on manifesting his white nationalist wet dream and murder rampage.
If it looks and quacks like a Nazi duck, we might be best served by not assuming the troop build up on the border of Poland is a training exercise.
It's a fictional creation. Nobody is "taking her clothes off"; a bot is fabricating a naked woman and tacking her likeness (i.e. her face) onto it. If anything, I could see how this could benefit women, as they can now start to reasonably claim that any actual leaked nudes are instead worthless AI slop.
I don't think I would care if someone did this to me. Put "me" in the most depraved crap you can think of, I don't care. It's not me. I suspect most men feel similarly.
What's the big deal?
A woman being damaged by nudes is basically a white knight, misogynist viewpoint that proclaims a woman's value is in her chastity / modesty so by posting a manufactured nude of her you have thereby degraded her value and owe her damages.
It feels odd for them to be advertising this belief, though. These are surely a lot of the same people trying to devalue virginity, glorify public sex positivity, condemn "slut shaming", etc.
Posting a photo of yourself online is not an invitation for AI generated nudes.
I'm sure lots of people do it for hate, but legitimately a lot of people just look at someone and think "they'd look nice naked", and then, if they actually have a picture of that, might share it under their comments/posts because it's enjoyable to look at, rather than to punish someone.
The problem is, this service very publicly shares those images, and usually in a way that directly tags the actual person. So, very much across the line in to "harming actual people."
Since this regularly harms women, people round it off to "misogyny". Even if the motive is not "harm women", the end result is still "women were harmed".
(There also IS an exception if you know the person well enough to expect that they're cool with it - but the onus is still on you if you guess wrong and your actions end up causing harm.)
Sharing lewd pictures is using the tools of the patriarchy to shame and humiliate women. That's misogyny.
Think of it this way. I want to humiliate a black man online, so I generate a picture of him eating a huge watermelon slice and share it around for giggles. Is that racism? Of course it is.
"Doing it without their permission and posting it below their comments is definitely misogyny and sexual harassment."
There is NO reason to do this publicly, under a woman's comment, other than to harass them.
It certainly is. It happens purely as a shame tactic for women.
> it definitely does not involve violence
Nobody said it did. Things can be harmful without being violent. Violence isn't the ultimate measure of morality.
Additionally, how would you define the term "rape culture"? Are you aware of the term at all?
Convincingly photoshopping someone's face onto a nude body takes time, skill, effort, and access to resources.
Grok lowers the barrier to be less effort than it took for either you or I to write our comments.
It is now a social phenomenon where almost every public image of a woman or girl on the site is modified in this manner. Revenge-porn photoshops happened before, but not at this scale or with this kind of visibility.
And there is safety in numbers. If one person photoshops a high-school classmate nude, they might find themselves on a registry. Without knowing the magnitude: if myriad people are doing it around the country, do you expect everyone doing it to be litigated that extensively?
Mate, that's the point. I, as a normal human being who had never been on 4chan or the darker corners of Reddit, would never have seen, much less been able to make, frankenporn; even less so _convincing_ frankenporn.
> For lack of knowing the magnitude
Fuck that shit. If they didn't know the magnitude, they wouldn't have spent ages making the photoshop. You don't spend ages on revenge "because you didn't know the magnitude"; you spend ages on it because you want revenge.
> if myriad people are doing it around the country, then do you expect everyone doing that to be litigated that extensively?
I mean, we put people in prison for drunk driving, and lots of people do that in the States. Same with drug dealing. Same with harassment; that's why restraining orders exist.
But you are missing the point: making and distributing CSAM is an illegal offence. Knowingly storing and transmitting it is an offence. Musk could stop it all now by re-training Grok, or putting in some basic controls.
If any other person was doing this they would have been threatened with company ending action by now.
We mostly agree, so let me clarify.
Grok is very much being used to make revenge porn, including CSAM revenge porn, and people _are using X because it's the CSAM app_. I think this is all bad. We agree here.
"For lack of knowing the magnitude" is me stating that I do not know the number of people using X to generate CSAM. I don't know if it is a thousand, a million, a hundred million, etc. So, I used the word "myriad" instead of "thousands", "millions", etc.
I am arguing that this is worse because the scale is so much greater. I am arguing against the argument equating this with Photoshop.
> If any other person was doing this they would have been threatened with company ending action by now.
Yes, I agree. X is still available on both app stores. This means CSAM is just being made more and more normal. I think this is very bad.
This site.
Hold on to that spirit and I think you'll genuinely do well in the world that's coming next.
IMO, the fact that you would say this is further evidence of rape culture infecting the world. I assure you that people do care about this.
And friction and quality matters. When you make it easier to generate this content and make the content more convincing, the number of people who do this will go up by orders of magnitude. And when social media platforms make it trivial to share this content you've got a sea change in this kind of harassment.
Also, this has always existed in one form or another: draw, photoshop, imagine, or discuss imaginary intercourse with a popular person, online or IRL.
It's not worthy of intervention because it will happen anyway and it doesn't fundamentally change much
"“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X Safety said. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
How about not enabling generating such content, at all?
I understand everyone pouncing when X won't own Grok's output, but output is directly connected to its input and blame can be proportionally shared.
Isn't this a problem for any public tool? Adversarial use is possible on any platform, and consistent law is far behind tech in this space today.
Where does the line fall between provider responsibility when providing a tool that can produce protected work, and personal responsibility for causing it to generate that work?
It feels somewhat more clear-cut when you say to an AI, "Draw me an image of Mickey Mouse", but why is that different from photocopying a picture of Mickey Mouse, or using Photoshop to draw a picture of Mickey Mouse? Photocopiers will block copying a dollar bill in many cases; should they also block photos of Mickey Mouse? Should they have received firmware updates when Steamboat Willie fell into the public domain, such that they are now allowed to photocopy that specific instance of Mickey Mouse, but no other?
This is a slippery slope: the idea that a person using a tool should hold the tool responsible for creating "bad" things, rather than being held responsible themselves.
Maybe CSAM is so heinous as to be a special case here. I wouldn't argue against it specifically. But I do worry that it shifts the burden of responsibility onto the AI or the model or the service or whatever, rather than the person.
Another thing to think about is whether it would be materially different if the person didn't use Grok, but instead used a model on their own machine. Would the model still be responsible, or would the person be responsible?
There's one more line at issue here, and that's the posting of the infringing work. A neutral tool that can generate policy-violating material has an ambiguous status, and if the tool's output ends up on Twitter then it's definitely the user's problem.
But here, it seems like the Grok outputs are directly and publicly posted by X itself. The user may have intended that outcome, but the user might not have. From the article:
>> In a comment on the DogeDesigner thread, a computer programmer pointed out that X users may inadvertently generate inappropriate images—back in August, for example, Grok generated nudes of Taylor Swift without being asked. Those users can’t even delete problematic images from the Grok account to prevent them from spreading, the programmer noted.
Overall, I think it's fair to argue that ownership follows the user tag. But even if Grok's output is entirely "user-generated content," X, publishing that content under its own banner, must take ownership of the policy and legal implications.
So exactly who counts as the originator is a pretty legally relevant question, particularly if Grok is just off doing whatever and then posting it from your input.
"The persistent AI bot we made treated that as a user instruction and followed it" is a heck of a chain of causality in court, but you also fairly obviously don't want to allow people to launder intent through AI (which is very much what X is trying to do here).
You can have all the free speech in the world, but not when it comes to vulnerable and innocent children.
I don't know how we got to the point where we can build things with no guardrails and just expect the user to use them legally. I think builders/platform owners should be responsible for building in guardrails against things that are explicitly illegal and morally repugnant.
Same, honestly. And you'll probably catch a whole lot of actual legitimate usage in that net, but it's worth it.
But you'll also miss some. You'll always miss some, even with the best guard rails. But 99% is better than 0%, I agree.
> ... and just expect the user to use it legally?
I don't think it's entirely the responsibility of the builder/supplier/service to ensure this, honestly. I don't think it can be. You can sell hammers, and you can't guarantee that the hammer won't be used to hurt people. You can put spray cans behind cages and require purchasers to be 18 years old, but you can't stop the adult from vandalism. The person has to be held responsible at a certain point.
There's also a difference between a tool manufacturer (hardware or software) and a service provider: once the tool is in the user's hands, it's outside the manufacturer's control.
In this case, a malicious user isn't downloading Grok's model and running it on their GPU. They're using a service provided by X, and I'm of the opinion that a service provider starts to be responsible once the malicious usage of their product gets relevant.
Pornography is regulated. CSAM is illegal. Hosting it on your platform and refusing to remove it is complicity and encouragement.
Historically tools have been uncensored, yet also incredibly difficult and time-consuming to get good results with.
Why spend loads of effort producing fake celebrity porn using photoshop or blender or whatever when there's limitless free non-celebrity porn online? So photoshop and blender didn't need any built-in censorship.
But with GenAI, the quantitative difference in ease of use produces a qualitative difference in outcome. Things that didn't get done when they needed 6 months of practice plus 1 hour per image are getting done now that they need zero practice and 20 seconds per image.
If you operate the tool, you are responsible. Doubly so in a commercial setting. If there are issues like Copyright and CSAM, they are your responsibility to resolve.
If Elon wanted to share out an executable for Grok and the user ran it on their own machine, then he could reasonably sidestep blame (like how photoshop works). But he runs Grok on his own servers, therefore is morally culpable for everything it does.
Your servers are a direct extension of yourself. They are only capable of doing exactly what you tell them to do. You owe a duty of care to not tell them to do heinous shit.
I agree, but I don't know where that line is.
So, back in the 90s and 2000s, you could get The Gimp image editor, and you could use the equivalent of Word Art to take a word or phrase and make it look cool, with effects like lava or glowing stone, or whatever. The Gimp used ImageMagick to do this, and it legit looked cool at the time.
If you weren't good at The Gimp, which required a lot of knowledge, you could generate a cool website logo by going to a web server that someone built, giving them a word or phrase, and then selecting the pre-built options that did the same thing - you were somewhat limited in customization, but on the backend, it was using ImageMagick just like The Gimp was.
If someone used The Gimp or ImageMagick to make copyrighted material, nobody would blame the authors of The Gimp, right? They were very nonspecific tools created for the broad purpose of making images. Just because some bozo used them to create a protected image of Mickey Mouse doesn't mean the software authors should be held accountable.
But if someone made the equivalent of one of those websites, and the website said, "click here to generate a random picture of Mickey Mouse", then it feels like the person running the website should at least be held partially responsible, right? Here is a thing that was created for the specific purpose of breaking the law upon request. But what is the culpability of the person initiating the request?
Anyway, the scale of AI is staggering, and I agree with you, and I think that common decency dictates that the actions of the product should be limited when possible to fall within the ethics of the organization providing the service, but the responsibility for making this tool do heinous things should be borne by the person giving the order.
Posting a tweet asking Grok to transform a picture of a real child into CSAM is no different, in my mind, than asking a human artist on twitter to do the same. So in the case of one person asking another person to perform this transformation, who is responsible?
I would argue that it’s split between the two, with slightly more falling on the artist. The artist has a duty to refuse the request and report the other person to the relevant authorities. If that artist accepted the request and then posted the resulting image, twitter then needs to step in and take action against both users.
Sorry, you're not convincing me. X chose to release a tool for making CSAM. They didn't have to do that. They are complicit.
Truly, civilization was a mistake. Retvrn to monke.
There's a line we have to define that I don't think really exists yet, nor is it supported by our current mental frameworks. To that end, I think it's just more sensible to simply forbid it in this context without attempting to ground it. I don't think there's any reason to rationalize it at all.
Are you going to ban all artsy software ever because a bad actor has used it, or could use it, to do bad-actor things?
If Photoshop had a "Create CSAM" button and the user clicked it, who did wrong?
I think a court is going to step in and help answer these questions sooner rather than later.
At least I think that's the plan.
From my knowledge (albeit limited) of the way LLMs are set up, they most definitely can include guardrails on what can't be produced. ChatGPT has responses to certain prompts which stop users from proceeding.
And X specifically: there have been many cases of X adjusting Grok when Grok was not following a particular narrative on political issues (won't get into specifics here). But it was very clear and visible. Grok had certain outputs. Outcry from certain segments. Grok posts deleted. Trying the same prompts produced a different result.
So yeah, it's possible.
I’m just wondering if from a technical perspective it’s even possible to do it in a way that would 100% solve the problem, and not turn it into an arms race to find jailbreaks. To truly remove the capability from the model, or in its absence, have a perfect oracle judge the output and block it.
The answer is currently no, I presume.
For argument's sake, let's assume Grok can't reliably have guardrails in place to stop CSAM. There could be second- and third-order review points: before an image is posted by Grok, another system could scan the image to assess whether it's CSAM, and if confidence is low, human intervention could come into play.
I think the end goal here is prevention of CSAM production and dissemination, not just guardrails in an LLM and calling it a day.
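A minimal sketch of that layered gate, in Python; the classifier function and the thresholds are hypothetical stand-ins, not anything X is known to run:

    from dataclasses import dataclass

    @dataclass
    class Verdict:
        action: str   # "publish", "block", or "human_review"
        score: float

    def classify_risk(image: bytes) -> float:
        """Placeholder for a real image classifier returning the
        probability that the image violates policy."""
        raise NotImplementedError("plug in a real model here")

    def gate(image: bytes) -> Verdict:
        score = classify_risk(image)
        if score >= 0.90:               # high confidence: block outright
            return Verdict("block", score)
        if score >= 0.30:               # uncertain band: hold for a human
            return Verdict("human_review", score)
        return Verdict("publish", score)

The point is only that "scan before posting, escalate when unsure" is an ordinary moderation pattern, not a research problem.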
The problem is that these guardrails are trivially bypassed. At best you end up playing a losing treadmill game against adversarial prompting.
The guardrails have mostly worked. They have never ever been reliable.
1. Twitter appears to be making no effort to make this difficult. Even if people can evade guardrails, that does not make guardrails worthless.
2. Grok automatically posts the images publicly. Twitter is participating not only in the creation but also in the distribution and boosting of this content. The reason a ton of people are doing this is not that they personally want to jack it to somebody, but that they want to humiliate them in public.
3. Decision makers at Twitter are laughing about what this does to the platform and its users when a "post a picture of this person in their underwear" button is available next to every woman who posts on the platform. Even here they are focusing only on the illegal content, as if mountains of revenge porn being made of adult women weren't also odious.
X can actively work to prevent this. They aren't. We aren't saying we should blame the person entering the input. But, we can say that the side producing CSAM can be held responsible if they choose to not do anything about it.
> Isn't this a problem for any public tool? Adversarial use is possible on any platform
Yes. Which is why the headline includes: "no fixes announced" and not just "X blames users for Grok-generated CSAM."
Grok is producing CSAM. X is going to continue to allow that to happen. Bad things happen. How you respond is essential. Anyone who is trying to defend this is literally supporting a CSAM generation engine.
Also, punishment is a rather inefficient way to teach the public anything. The people who come through the gate tomorrow probably won't know about the punishment. It will often be easier to fix the environment.
Removing troublemakers probably does help in the short term and is a lot easier than punishing.
Women should be able to exist in public without having to constantly have porn made of their likeness and distributed right next to their activity.
I replied to:
> They don’t seem to have taken even the most basic step of telling Grok not to do it via system prompt.
“It” being “generating CSAM”. I was not attempting to comment on some random censorship debate; my point was that CSAM is a pretty specific thing, with pretty specific legal liabilities, dependent on region!
See a lawyer for legal details, of course.
Do yourself a favor and not Google that.
Regardless of how fringe it is, I feel like it should be in everyone's best interests to stop/limit CSAM as much as they reasonably can, without getting into semantics of who requested/generated/shared it.
Or, if they’re being serious about the user-generated content argument, criminally referring the users asking for CSAM. This is hard-liability content.
Also, where are all the state attorneys general?
Surprising; usually the system automatically bans people who post CSAM and Elon personally intervenes to unban them.
https://mashable.com/article/x-twitter-ces-suspension-right-...
Also, since Grok is really good at picking up context, something as mild as "remove their T-shirt" would be enough to generate the picture someone wanted, while being very hard to find using keywords.
IMO they should mass-hide ALL the images created since that specific moment, and use some sort of AI classifier to flag/ban the accounts.
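A rough sketch of what that sweep could look like; every name here (the classifier, the moderation handle, the cutoff date) is hypothetical, not a real X internal:

    from datetime import datetime, timezone

    CUTOFF = datetime(2025, 1, 1, tzinfo=timezone.utc)  # example cutoff

    def sweep(images, classify, moderation):
        # images: iterable of (image_id, author, created_at, data)
        for image_id, author, created_at, data in images:
            if created_at < CUTOFF:
                continue
            moderation.hide(image_id)           # hide first, review after
            if classify(data) >= 0.9:           # illustrative threshold
                moderation.flag_account(author)

Hiding first and reviewing after is the conservative choice when the backlog itself is the harm.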
I don't know how common this is, or what the prompt was that inadvertently generated nudes. But it's at least an example where you might not blame the user.
I know they said “without being prompted” here but if you click through you’ll see what the person actually selected (“spicy” is not default and is age-gated and opt-in via the nsfw wall).
Very weird for Taylor Swift...
Note that things change. In the early days of Twitter (pre-X) they could get away with not thinking about the issue at all. As technology to detect CSAM marches on, they need to use it (or justify why it shouldn't be used: too many false positives?). As a large platform for such content, they need to push the state of the art in such detection. At no point do they need perfection, but they need to show they are doing their reasonable best to stop this.
The above is of course my opinion. I think the courts will go a similar direction, but time will tell...
Which he does, and he responded with "I will blame and punish users." Which, yeah, you should, but you also need to fix your bot. He certainly has no issue doing that when Grok outputs claims/arguments that make him look bad or otherwise engages in what he considers "wrongthink," but suddenly, when there are real, serious consequences, he gets to hide behind "it's just a user problem"?
This is the same thing YouTube and social media companies have been getting away with for so long. They claim their algorithms will take care of content problems, then when they demonstrably fail they throw their hands up and go “whoops! Sorry we are just too big for real people to handle all of it but we’ll get it right this time.” Rinse repeat.
Some of these things are going into the ENFORCE act, but it's going to be a muddy mess for a while.
The questions then, for me, are:
* Is Grok considered a tool for the user to generate content for X, or is Grok/X considered similar to a vendor relationship?
* Is X more like Backpage (not protective enough) than other platforms?
I’m sure this is going to court, at least for revenge porn stuff. But why would anyone do this to their platform? Crazy. X/Twitter is full of this stuff now.
It may shock you to learn that bigamy and sky-burials are also quite illegal.
And of course all of this is narrowly focused on CSAM (not that it should be minimized) and not on the fact that every person on X, the everything app, has been opened up to the possibility of non-consensual sexual material being generated of them by Grok.
For civil liability, 230 really shouldn't apply; as you say, 230's shield is about avoiding vicarious liability for things other people post. This principle stretches further than you might expect in some ways but here Grok just is X (or xAI).
Nothing's set in stone much at all with how the law treats LLMs but an attempt to say that Grok is an independent entity sufficient to trigger 230 but incapable of being sued itself, I don't see that flying. On the other hand the big AI companies wield massive economic and political power, so I wouldn't be surprised to see them push for and get explicit liability carveouts that they claim are necessary for America to maintain its lead in innovation etc. etc., whether those come through legislation or court decisions.
They should disable it in the Netherlands in this case, since it really sounds like a textbook slander case, and the spreader can also be held liable. Note: it's not the same as in the US despite using the same word; deepfakes have been ruled slander here and this is no different, especially if you know it's fake because it's "AI". There have been several cases of pornographic deepfakes, all of which were taken down quickly, in which the poster/creator was sentenced. The issue even with taking posts down quickly is the unfortunate rule that if something is on the internet, it stays on the internet. The publisher always went free due to acting quickly and not being the creator. I would like to see where it goes when publisher and creator are the same entity, and they do nothing to prevent it.
Nobody in the Netherlands gives one flying fuck about American laws; what Grok is doing violates many Dutch laws. Our parliament actually did its job and wrote some stuff about revenge porn, deepfakes and artificial CP.
I can argue for access to, say, Photoshop-like tools, while saying folks shouldn't post revenge/fake porn ...
If you are providing a tool for people, YES you are responsible to some degree.
Think of it this way. I sell racecars. I'm not responsible if someone buys my racecar, then drinks and drives and dies. Now suppose I run an entertainment venue where you can ride along in racecars. One of my employees is drunk, and someone dies. Now I am responsible.
In what way?
But I think most people would say "uh, yeah, the business needs to do something or implement some policy".
Another example: selling guns versus running a shooting range. If you're running a shooting range, then yeah, I think there's an expectation you make it safe. You put up walls, you have security, etc. You try your best to mitigate the bad shit.
[1]: https://www.congress.gov/bill/119th-congress/senate-bill/146
I think it is good that you can install any apk on an android device. I also think it is good that the primary installation mechanism that most people use has systems to try to prevent malware from getting installed.
This sort of approach means that people who really need unbounded access and are willing to go through some extra friction can access these things. It makes it impossible for a megacorp to have complete control over a computing ecosystem. But it also reduces abuse since most people prefer to use the low-friction approach.
1. Hypocrisy (people expressing a different opinion on this subject than they usually would because they hate Musk)
vs.
2. Selection bias (article title attracts a higher percentage of people who were already on the more regulation, less freedom side of the debate)
vs.
3. Self-censorship (people on the "more freedom, less regulation" side of the debate being silent or not voting on comments because in this case defending their principles would benefit someone they hate)
There might be other factors I haven't considered as well.
Also I think a lot of people simply think models which are published openly shouldn't be held to the same legal standards as proprietary models.
The real question is how can the pro-Musk guys still find a way to side with him on that. My leading theory is that they're actually pro-pedophilia.
I'd also argue commercialization affects this - X is marketing this as a product and making money off subscriptions, whereas I generally think of an open model as something you run locally for free. There's a big difference between "Porn Producer" and "Photoshop"
The person(s) ultimately in charge of removing (or preventing the implementation of) Grok guardrails might find themselves being criminally indicted in multiple European countries once investigations have concluded.
Suppose, if instead of an LLM, Grok was an X employee specifically employed to photoshop and post these photos as a service on request. Section 230 would obviously not immunize X for this!
https://www.justice.gov/d9/2023-06/child_sexual_abuse_materi...
It could be argued that generating a non-real child might not count. However, that's not a given.
> The term “child pornography” is currently used in federal statutes and is defined as any visual depiction of sexually explicit conduct involving a person less than 18 years old.
That is broad enough to cover anything obviously young.
But when it comes to "nude-ifying" a real image of a known minor, I strongly doubt you can use the defence that it's not a real child.
Therefore you're knowingly generating and distributing CSAM, which is out of scope for Section 230.
What's "person" here? Usually, in law, "person" has a very specific meaning.
But the law applies if it's a depiction of a person who is real. So a sexualised hand-drawn image of a recognisable person (who is a minor) is CSAM.
Which means that if someone says to Grok "hey, make a sexy picture of this [post of a minor]" and it generates a depiction of that minor, it's CSAM.
They have something like Section 230 in the E-Commerce Directive 2000/31/EC, Articles 12-15, updated in the Digital Service Act. The particular protections for hosts are different but it is the same general idea.
They might just let this slide so as not to rock the boat, either out of fear (and they will do nothing), or to buy time if they are actually divesting from the alliance with, and economic dependence on, the US.
The asshole puckering is from how Trump has completely flipped the table, everything is hyper transactional now, and as we’ve seen military action against leaders personally is also on the table.
I’m saying I could see the EU let this slide now because it’s not worth it politically to regulate US companies for shit like this anymore. Whether that would be out of fear, or out of trying to buy time to reorganize, would probably end up getting the same kind of historical analysis that Chamberlain’s policy of appeasement toward Germany gets nowadays.
They are able to change how Grok is prompted to deny certain inputs, or to say certain things. They decided to do so to praise Musk and Hitler. That was intentional.
They decided not to do so to prevent it from generating CSAM. X offering CSAM is intentional.
That's what section 230 says. The content in question here is not provided by "another information content provider", it is provided by X itself.
Removing Section 230 was a big discussion point for the current ruling party in the US, when they didn't have so much power. Now that they do have power, why has that discussion stopped? I'd be very interested in knowing what changed.
But beyond the legality or obvious immorality, this is a huge long-term mistake for X. 1 in 3 users of X are women - that fraction will get smaller and smaller. The total userbase will also get smaller and smaller, and the platform will become a degenerate hellhole like 4chan.
When do we cross the line of culpability with tool-assisted content? If I have a typo in my prompt and the result is illegal content, am I responsible for an honest mistake or should the tool have refused to generate illegal content in the first place?
Do we need to treat genAI like a handgun that is always loaded?
Knowingly allowing it is not in good faith.
For example, if someone posted CSAM on HN and Dang deleted it, I think that it would be wrong to go after HN for hosting the content temporarily. But if HN hosted a service that actively facilitated, trivialized, and generated CSAM on behalf of users, with no or virtually no attempt to prevent that, then I think that mere deletion after the fact would be insufficient.
But again, you can just use "Grok is generating the content" to differentiate if that doesn't compel you.
Look what happens when you put an image of money into Photoshop: it detects it and blocks it.
Who cares about Adobe? I'm talking about Grok. I can consistently say "I believe platforms should moderate content in accordance with Section 230" while also saying "And I think that the moderation of content with regards to CSAM, for major platforms with XYZ capabilities should be stricter".
The answer to "what about Adobe?" is then either that it falls into one of those two categories, in which case you have your answer, or it doesn't, in which case it isn't relevant to what I've said.
But to answer your point: no, for two reasons:
1) You need to bring your own source material to create it. You can't press a button that says "make child porn".
2) It's not reasonable to expect that someone would be able to make CSAM in Photoshop. More importantly, the user is the one hosting the software, not Adobe.
Where is this button in Grok? You have to as the user explicitly write out a very obviously bad request. Nobody is going to accidentally get CSAM content without making a conscious choice about a prompt that's pretty clearly targeting it.
No: you need training, a lot of time, and effort to do it. With Grok you say "hey, make a sexy version of [picture of this minor]" and it'll do it. That takes no training, and it's not a high bar to stop people doing it.
The non-CSAM example is this: it's illegal in the USA to make anything that looks like a US dollar bill, i.e. photocopiers have blocks on them to stop you making copies of one.
You can get around that as a private citizen, but it's still illegal. A company knowingly making a photocopier that allows you to photocopy dollar bills is in for a bad time.
I'm at a loss to explain it, given media's well known liberal bias.
I think it's time to revisit these discussions and in fact remove Section 230. X is claiming that the Grok CSAM is "user generated content" but why should X have any protection to begin with, be it a human user directly uploading it or using Grok to do this distribution publicly?
The section 230 discussion must return, IMHO. These platforms are out of control.
It's not a personal tool that the company has no control over. It's a service they are actively providing and administering.
Shall we ban prediction markets?
Genuinely terrifying how Elon has a cadre of unpaid yes-men ready to justify his every action. DogeDesigner regularly subtweets Elon agreeing with his latest dumb take of the day, and even seems to have based his entire identity on Elon's Doge obsession.
I can't imagine how terrible that self imposed delusion feels deep down for either of them.
A similar article[1] briefly made it to the HN front page the other day, for a few minutes before Elon's army of unpaid yes-men flag-nuked it out of existence.
1: https://news.ycombinator.com/item?id=46468414
Yes, but combined with "omg AI" (which happened elsewhere; for instance, see the hype over OpenAI Sora, which is clearly useless except as a toy), so extra-hype-y.
Seems like a toy feature.
When Grok stated that Israel was committing genocide, it was temporarily suspended and fixed[0]. If you censor some things but not others, enabling the others becomes your choice. There is no eating the cookie and having it too - you either take a "common carrier" stance or censor, but also take responsibility for what you don't censor.
[0] https://www.france24.com/en/live-news/20250813-chatbot-grok-...
If you try, you quickly end up codifying absurdities like the 80%-finished-receiver rule in firearm regulation. See https://daytonatactical.com/how-to-finish-an-80-ar-15-lower-...
People who say "society should permit X, but only if it's difficult" have a view of the world incompatible with technological progress and usually not coherent at all.
The law is filled with these questions. "Well, how do you draw the line" was not a sufficient defense in Harris v. Forklift Systems.
The LLM itself is more akin to a gun available in a store in the "gun is a tool" argument (reasonable arguments on both sides, in my opinion); however, this situation is more like a gun manufacturer creating a program to mass-distribute free pistols to a masked crowd, with predictable consequences. I'd say the person running that program was either negligent or intentionally promoting havoc, to the point where it should be investigated and regulated.
You left out "who controls the output of the tool", which makes it a strawman.
I assume the courts will uphold this anyway, because Musk is rich and cannot be held accountable for his actions.
When the far-right paints trans people as pedophiles, it's not an accident that also provides cover for pedophiles.
An age of consent between 16 and 18 is relatively high, born from progressive feminist wins. In the United States, the lowest AOC was 14 until the 1990s, and the AOC in the US ranged from _7 to 12_ for most of our existence.
To be clear, I'm in favor of a high age of consent. But it's something that had to be fought for, and it's not something that can be assumed to be safe in our culture (like the rejection of nazis and white supremacists, or valuing women's rights, including voting and abortion).
Influential political operatives like Tom Hofeller were advocates for pedophilia and nobody cares at all. Trump is still in power despite the Epstein controversy; Matt Gaetz still hasn't been punished for paying for sex with an underage girl in 2017. The Hitler apologia in far-right spaces even explicitly acknowledges he was a pedophile. Etc.
In a different era, X would have been removed from Apple's and Google's app stores for the CEO doing nazi salutes and the chatbot promoting Hitler. But even now that X is a CSAM app, as of 3PM ET I can still download X from both app stores. That would not have been normal just two years ago.
This has already been a culture-war issue for a while; there is a pro-pedophilia side, and this is just another victory for them.
Projection. It’s always projection…
This is actually separate to hn's politics-aversion, though I suspect there's a lot of crossover. Any post which criticised Musk has tended to get rapidly flagged for at least the last decade.
So from technical wonder to just like a pen in one easy step. Wouldn’t it be great if you could tell the AI what not to output?
This has been tried extensively and has not yet fully worked. Google "ai jailbreaks".
The locks on my doors will fail if somebody tries hard enough. They are still valuable.
Only because of the broader context of the legal environment. If there was no prosecution for breaking and entering, they would be effectively worthless. For the analogy to hold, we need laws to throw coercive measures against those trying to bypass guard rails. Theoretically, this already exists in the Computer Fraud and Abuse Act in the US, but that interpretation doesn't exist quite yet.
Preventing 100%? Fail.
Reducing the number of such images by 10-25% or even more? I don't think that would fail.
Not to mention the experience you gain in learning what you can and can't prevent.
And that vibe I mentioned in another comment is getting stronger and stronger.
But when it’s used to create CSAM, then it’s suddenly not just a tool.
You _cannot_ stop these tools from generating this kind of stuff. Prompt guards only get you so far. Self-hosted versions don’t have them. The human writing the prompt is at fault. Just like it’s not Adobe’s responsibility if some sick idiot puts bikinis on a child in Photoshop.
Some random anonymous reply guy creep says "@grok put her in a g-string, make it really sexy". Grok happily obliges and puts it on your timeline.
Photorealistic softcore porn of your toddler: it's all happening on X, the everything app.™
This isn't just somebody beating off in private. This is a public image that humiliates people.
Anecdotally speaking, especially as someone who was groomed online as a child, I am more inclined toward the latter argument. I believe fictional CSAM harms people and generated CSAM will too.
With generated images being more realistic, and with AI 'girlfriends' advertised as a woman who "can't say no" or as "her body, your choice", I am inclined to believe that the harms from this will be novel and possibly greater than existing drawn CSAM.
Speaking concretely: Grok is being used to generate revenge porn by editing real images of real children. These children are direct, unambiguous victims. There is no grey area where this can be interpreted as a victimless crime. Further, these models are universally trained with real CSAM in the training data.
I strongly sympathize with the idea that crimes should by definition have identifiable victims. But sometimes the devil doesn't really need an advocate.
Not saying the models don't get trained on CSAM. But I don't think it's a foregone conclusion that AI models capable of generating CSAM necessarily victimize anyone.
It would be nice if someone could research this, but the current climate makes it impossible.
CSAM of course: https://www.theverge.com/2023/12/20/24009418/generative-ai-i...
When you indiscriminately scrape literally billions of images, and excuse yourself from vigorously reviewing them because it would be too hard/expensive, horrible and illegal stuff is bound to end up in there.
The biggest issue here is not that models can generate this imagery, but that Musk's Twitter is enabling it at scale with no guardrails, including spamming them on other people's photos.
Pretty sure these models can generate images that do not exist on their training data. If I generate a picture of a surfing dachshund, did it have to train on canine surfers?
I don't think you're engaging with this topic in good faith.
---
Replying to https://news.ycombinator.com/item?id=46504101 in an edit, due to rate-limiting.
> Whether it is exclusive or not is not really relevant to the point.
Whether it's exclusive or not is very relevant to the point, because sexual fetishes and paraphilias are largely mutable. In much the same way that a bi woman can swear off men after a few bad experiences, or a monogamous person in a committed relationship can avoid lusting after other people they'd otherwise find attractive, someone with non-child sexual interests can avoid centring children in their sexuality, and thereby avoid developing further sexual interests related to children. (Note that operant conditioning, sometimes called "conversion therapy" in this context, does not achieve these outcomes.) I imagine it's not quite so easy for people exclusively sexually-attracted to children (though note that one's belief about their sexuality is not necessarily the same as one's actual sexuality – to the extent that "actual sexuality" is a meaningful notion).
You may be interested in Age of Onset and Its Correlates in Men with Sexual Interest in Children (https://europepmc.org/backend/ptpmcrender.fcgi?accid=PMC6377...).
> Can you link me to research on how AI generated CSAM consumption affects offending rates?
No, because "AI-generated" hasn't been a thing for long enough that I'd expect good research on the topic. However, there's no particular reason to believe it'd be different to consumption of similar material of other provenance.
It's a while since I researched this, but I've found you a student paper on this subject: https://openjournals.maastrichtuniversity.nl/Marble/article/.... This student has put more work into performing a literature review for their coursework than I'm willing to do for a HN comment. However, skimming the citations, I recognise some of these names as cranks (e.g. Ray Blanchard), and some papers seem to describe research based on the pseudoscientific theories of Sigmund Freud (another crank). Take this all with a large pinch of salt.
> For instance, virtual child pornography can cause a general decline in sexual child abuse, but the possibility still remains that in some cases it could lead to practicing behavior.
I remember reading research about the circumstances under which there is a positive relationship, which obviously didn't turn up in this student's literature review. My recent searches have been using the same sorts of keywords as this student, so I don't expect to find that research again any time soon.
---
I found a proper literature review: https://medcraveonline.com/AHOAJ/fifty-most-cited-research-p... Probably reading these papers will give you a good introduction to the field.
This topic is so censored on the internet and AIs refuse to discuss it so I have to ask.
> I reject your claim that the "no choice" claim is evidenced, and encourage you to show us the research you claim to have.
No choice in the attraction. Whether it is exclusive or not is not really relevant to the point. Though obviously acting on it would be a choice.
As I mentioned in another comment (https://news.ycombinator.com/item?id=46503866), there are other reasons not to produce synthetic sexualised imagery of children, which I'm not qualified to talk about: and I feel this topic is too sensitive for my usual disclaimered uninformed pontificating.
I will, instead, speak to what I know. Many models are heavily overfit on actual people's likenesses. Human artists can select non-existent people from the space of possible visages. These kinds of generative models have a latent space, many points of which do not correspond to real people. However, diffusion models working from text prompts are heavily biased towards reproducing examples resembling their training set, in a way that no prompting can counteract. Real people will end up depicted in AI-generated CSAE imagery, in a way that human artists can avoid.
There are problems with entirely-fictional human-made depictions of child sexual exploitation (which I'm not discussing here), and AI-generated CSAE imagery is at least as bad as that.
Anyone who sees it might decide to be a victim if they sense there's relief they can secure for damages they can describe.
Society, like others have said, for normalizing weird stuff.
Children, indirectly and hypothetically, if MAPs and their related content are normalized.
> CSAM includes both real and synthetic content, such as images created with artificial intelligence tools. A child cannot legally consent to any sexual act, let alone to being recorded in one.
If your English is weak, there are dictionaries, translation programs, and LLMs that can help. The first meaning at https://www.merriam-webster.com/dictionary/vibe is “a distinctive feeling or quality capable of being sensed,” which is the relevant one here.
The OP article refers to “outputs that sexualized real people without consent.” If any of those real people are minors, that qualifies as CSAM. It’s not complicated.
You dislike it. I get it. I do too. But this is a discussion of law. Legally, I do not see how any law was broken. I welcome any citation to the contrary. I note, again, that "some unelected org said so" is not a weighty argument when the opposition is the SCOTUS's clear stance on the 1st amendment.
Producing and disseminating CSAM (whether it's physical or digital copies) is a crime.
Yes, under the TOS, what Grok is doing is not the "fault" of Grok (the cause of the post is the two humans who enable it: the poster and the prompter; human intent is what initiates the generated post, not the bot, just like a gun is fired by a human, not by strong winds). You could argue it's the fault of the "prompter", but then we circle back to the cat-and-mouse censorship issue. And no, I don't want a less-censored Grok version that's unable to "bikini a NAS" (which is what I've been fortunate to witness) just because "new internet users" don't understand what the Internet is. (Yes, I know you can obviously fine-tune the model to allow funny generations and deny explicit/spicy generations.)
If X would implement what the so-called "moralists" want, it will just turn into Facebook.
And for the "protect the children" folks, it's really disappointing how we're always coming back to this bullsh*t excuse every time a moral issue arises. Blocking grok is a fix both for the person who doesn't want to get edited AND the user who doesn't want to see grok replies(in case the posts don't get the NSFW tag in time).
Ironically, a decent number of the people who want to censor Grok are Bluesky users, where "lolicon" and similar dubious degenerate content is posted non-stop AS HUMAN-MADE content. Or what, just because it's an AI it's suddenly a problem? The fact that you can "strip" someone by tweeting at a bot?
And lastly, sex sells. If people haven't figured out that "bikinis", "boobs", and everything related to sex will be what wins the AI/AGI/etc. race (it actually happens for ANY industry), then it's their problem. Dystopian? Sure, but it's not an issue you can win with moral arguments like "don't strip me". You will get stripped down if it created 1M impressions and drives engagement. You will not convince Musk(or any person who makes such a decision) to stop grok from "stripping you", because the alternative is that other non-grok/xAI/etc. entities/people will make the content, drive the engagement, make the money.
Grok is just another tool, and IMO it shouldn't have guard rails. The user is responsible for their prompts and what they create with it.
Only one of these is easily preventable with guardrails.
The user is not creating it; you are, based on a prompt you could easily say no to.
It's both. Very simple. You can't get around liability by forming a conspiracy [0].
https://en.wikipedia.org/wiki/Criminal_conspiracy
Or do you think a Microsoft exec should go to jail every time someone uses Word to write a death threat?
https://www.justice.gov/usao-ak/pr/federal-prosecutors-alask...
How is the world improved by an AI tool that will generate sexual deepfake images of children?
Why not charge the people who make my glasses cuz they help me see the CP? Why not charge computer monitor manufacturers? Why not charge the mine where they got the raw silicon?
Here you have a product which itself straight-up produces child porn with absolutely zero effort. Very different from some object which merely happens to be used along the way, like photographic materials.
Nikon doesn't sell a 1-minute child porn machine, xAI apparently does.
Maybe you think child porn machines should be sold?
If this were a case where CSAM production had become the mainstream use case, I would have agreed, but it is not.
How hard is this? What are they doing now, and is it enough? Do we know how hard they are trying?
For argument's sake, what if they had truly zero safeguards around it: you could type "generate child porn" and it would, 100% of the time. Surely you'd agree they should prevent that case, and be held accountable if they never took action to prevent it.
Regulation and clear laws around this would help. Surely they could get some threshold of difficulty in place that providers are required to meet in their prevention efforts.
I'm not into CP, so I don't try to make it generate such content, but I'm very annoyed that all providers try to lecture me when I try to generate anything about public figures, for example. Also, these preventive measures are not working well at all; yesterday one refused to generate an infinite loop, claiming it's dangerous.
Just throw away this BS about safety and jail/fine whomever commits crime with these tools. Make tools tools again and hold people responsible for the stuff they do with these tools.
Taking creepy pictures and asking a machine to create creepy pictures for the world to see are not the same.
You can't have AI-generated CSAM, as you're not sexually abusing anyone if it's AI-generated. It's better to have AI-generated CP instead of real CSAM because no child would be physically harmed. No one is lying that the photos are real, either.
And it's not like you can't generate these pics on free local models, anyway. In this case I don't see an issue with Twitter that should involve lawyers, even though Twitter is pure garbage otherwise.
As to whether Twitter should use moderation or not, it's up to them. I wouldn't use a forum where there are irrelevant spam posts.
The fact of the matter is they do have a policy and they have removed it, suspended accounts and perhaps even taken it further. As would be the case on other platforms.
As far as I understand there is no nudity generated by grok.
Should public gpt models be prevented from generating detestable things, yes I can see the case for that.
I won't deny there is a line somewhere between acceptable and unacceptable, but please remember people perv over less (Rule 34). Are bikinis now taboo attire? What next: ankles, elbows, the entire human body? (Just like the Taliban.) (Edit: I'm mentioning this paragraph for my point below.)
GPTs are not clever enough to make the distinction, by the way, so there's an unrealistic technical challenge here.
I suspect this saga blowing out of proportion is purely "eLoN BAd".
> As far as I understand there is no nudity generated by grok.
There is nudity, and more importantly there is CSAM material being generated. reference: https://www.reddit.com/r/grok/comments/1pijcgq/unlocking_gro...
> Are bikinis now taboo attire?
Generating sexualised pictures of kids is verboten. That's Epstein-level illegality. There is no legitimate need for the public to hold, make or transmit sexualised images of children.
Anyone arguing otherwise has a lot of questions to answer
That is a different Grok from the one publishing images and discussed in the article. Your link clearly states they are being moderated in the comments, and all the comments are discussing adults only. The link's comments also imply that these folks are essentially jailbreaking, because guardrails do exist.
As I say, read what I said; please don't put words in my mouth. The GPT models wouldn't know what is sexualised. I said there is a line at some point. Non-sexualized bikinis are sold everywhere; do you not use the internet to buy clothes?
Your immediate dismissive reaction indicates you are not giving what I'm saying any thought. This is what puritanical thought often looks like. The discourse is so poisoned people can't stop, look at the facts and think rationally.
I don't think there is much emotion in said post. I am making specific assertions.
to your point:
> Non-sexualized bikinis are sold everywhere
Correct! The key logical modifier is non-sexual. Also, you'll note that a lot of clothing companies do not show images of children in swimwear. Partly that's down to what I imagine you would term puritanism, but also legal counsel. The definition of CSAM is loose enough (in some jurisdictions) to cover swimwear, depending on context, and that context is challenging. A parent looking for clothes that will fit/suit their child is clearly not sexualising anything (corner cases exist; as I said, context). Someone using the same images for sexual purposes is another matter.
And because, like the GPL3, CSAM is infectious, the tariff for both company and end user is rather high for making, storing, transmitting and downloading those images. If someone is convicted of collecting those images and using them for a sexual purpose, then images that were created as not-CSAM suddenly become CSAM, and legally toxic to possess. (Context does come in here.)
> Your link clearly states they are being moderated in the comments
Which tells us that there is a lot of work on guardrails, right? It's a choice by xAI to allow users to do this (mainly, the app is hamstrung so that you have to pay for the spicy mode). Whether it's done by an ML model or not is irrelevant. Knowingly allowing CSAM generation and transmission is illegal. If you or I were to host an ML model that allows users to do the same thing, we would be in jail. There is a reason why other companies are not doing this.
The law must be applied equally, regardless of wealth or power. I think that is my main objection to all of this: it's clearly CSAM, and anyone other than Musk doing this would have been censured by now. All of this justification is because of who is doing it, rather than what is being done. We can bikeshed all we want about whether it is actually, really CSAM, but that negates the entire point, which is that it's clearly breaking the law.
> The GPT models wouldn't know what is sexualised.
ML classification is really rather good now. Instagram's unsupervised categorisation model is really rather effective at working out the context of an image or video (i.e. differentiating clothes, and the context those clothes appear in).
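To make that concrete, here is a minimal sketch of the kind of zero-shot context classification being described, assuming the publicly available openai/clip-vit-base-patch32 checkpoint via Hugging Face transformers. The label set and the comments about routing are illustrative assumptions on my part, not anyone's actual production moderation pipeline:

    # Sketch only: the label set below is an assumption for illustration.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    MODEL_ID = "openai/clip-vit-base-patch32"
    model = CLIPModel.from_pretrained(MODEL_ID)
    processor = CLIPProcessor.from_pretrained(MODEL_ID)

    # Hypothetical context labels: the model scores the image against
    # descriptions of *context*, not just object categories.
    LABELS = [
        "a product photo of swimwear on a retail website",
        "a family photo of children playing at the beach",
        "a sexualised photo of a person",
    ]

    def classify_context(path: str) -> dict[str, float]:
        image = Image.open(path)
        inputs = processor(text=LABELS, images=image,
                           return_tensors="pt", padding=True)
        probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
        return {label: float(p) for label, p in zip(LABELS, probs)}

    # A real system would route low-confidence or high-risk scores to
    # human review rather than auto-deciding on the raw probabilities.

The point of the sketch is that modern models score an image against descriptions of context rather than bare object labels, which is exactly the clothes-versus-context distinction above.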
> please don't put words in my mouth
I have not done this. I am asserting that the bar for justifying this kind of content, which is clearly illegal and easily prevented (i.e. a picture of a minor plus "generate an image of her in sexy clothes"), is very high.
Now you could argue that I'm implying you have something to hide. I am actually curious as to your motives for justifying the knowing creation of sexualised images of minors. You've made a weak argument that there are legitimate purposes. You then argue that it's a slippery slope.
Is your fear that this justifies an age-gated internet? Censorship? What is the price you think is worth paying?
I said I don't understand the fuss because there are guardrails, action has been taken, and there are technical limitations.
THAT is my motive. End of story. I do not need to parrot outrage just because everyone else is; that's "you're either with us or against us" bullshit. I'm here for a rational discussion.
Again, read what I've said: technical limitations. You wrote that long-ass explanation interspersed with ambiguities, like consulting lawyers in borderline cases, and then you expect an LLM to handle this.
Yes, ML classification is good now, but it's not foolproof. Hence we go back to the first point: processes to deal with it when X's existing guardrails fail, which x.com has: delete, suspend, report.
My fear (only because you mention it; I didn't raise one above, I only said I don't get the fuss), it seems, should be that people are losing touch over this Grok thing. Their arguments are no longer grounded in truth or rational thought; it's almost a rabid witch hunt.
"Hey grok make a sexy version of [obvious minor]" is not something that is hard to stop. try doing that query with meta, gemini, or sora, they manage it reliably well.
There are not technical impediments to stopping this, its a choice.
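For illustration, a minimal sketch of the kind of pre-generation check being described, under loud assumptions: looks_like_minor() and the keyword list are hypothetical stand-ins for real age-estimation and text-safety classifiers, not any vendor's actual API:

    from dataclasses import dataclass

    @dataclass
    class EditRequest:
        prompt: str
        source_image: bytes  # the user-supplied photo to be edited

    # Toy term list: keyword matching is trivially bypassed with neutral
    # wording, which is why real systems layer a trained prompt classifier
    # on top of an image-side check.
    SEXUALISING_TERMS = {"sexy", "lingerie", "undressed", "revealing"}

    def is_sexualising(prompt: str) -> bool:
        return any(term in prompt.lower() for term in SEXUALISING_TERMS)

    def looks_like_minor(image: bytes) -> bool:
        # Stand-in for an age-estimation vision model; hardcoded for the sketch.
        return True

    def should_refuse(req: EditRequest) -> bool:
        # Refuse sexualising edits of anyone who might be a minor; erring
        # toward refusal is cheap, the opposite failure mode is illegal.
        return looks_like_minor(req.source_image) and is_sexualising(req.prompt)

    req = EditRequest(prompt="make a sexy version of this photo",
                      source_image=b"")
    print("refused" if should_refuse(req) else "allowed")  # -> refused

As the reply below points out, naive keyword filters are exactly what jailbreakers route around, which is why the image-side check has to carry most of the weight.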
I'd bet that if you put that exact prompt into Grok it'd be blocked, judging by that Reddit link you sent. These folks are effectively jailbreaking: asking for modifications using neutral terms like "clothing", against images Grok doesn't have the skill to judge.
Every feature is lawyered up. That's what general counsel does. Every feature I worked on at a FAANG had some level of legal compliance gate on it, because mistakes are costly.
For the team that launched the chatbots, loads of time went into figuring out what stupid shit users could make them do, and blocking it. It's not like all of that effort stopped: when people started finding new ways to do naughty stuff, that had to be blocked as well, because otherwise the whole feature had to be pulled to stop advertisers from fleeing, or worse, FCC action or a class action.
> These folks are effectively jailbreaking: asking for modifications using neutral terms like "clothing"
CORRECT! People are putting effort into jailbreaking the app, whereas on X's Grok they don't need to do any of that. Which is my point: it's a product choice.
None of this is a "hard legal problem", or in fact unpredictable. They have done a ton of work to stop this (again, mainly because they want people to pay for "spicy mode").
This is a dangerous product; the manufacturer _knows_ it is dangerous, and yet they still provide the service for use.
I think it's a problem for society when bad behavior is not transgressive. Moreover (I'm less certain about this one), I sort of think that, theoretically, society should be more liberal than its institutions, and it creates really weird feedback loops when the institutions are more "liberal" than the population naturally is. (I'm using the term generically, not directly aligned with the political meaning.)
I think the theory I would present is that people should not be encouraged to transgress further than they are impulsed to, but simultaneously people need an outlet to actually transgress in a way that is not acceptable! People shouldn't post edgy memes because the algorithm encourages it. People should post edgy memes because it's transgressive! But when the institutions actually encourage it? How broken is it that you can't be an edgy teenager, because edgy is the culture and not the counter-culture?
In 2025, I think the truly transgressive activity is to not be online, to be straight-edge. And I sort of wonder if this is a small part of the young male mental-health crisis. They're not telling edgy jokes to be closer to their friends; they're telling edgy jokes to get fake internet points so people click on more advertising. How fucked is that?
But it's weird that kids are probably having less sex, drinking, and smoking than the institutions would have it.
So to kind of answer your question,
"In the 1990's, popular youth culture generally rebelled against this type of worry from adults but now even the youth are part of the moral witch-hunt for some reason."
This might explain how I, a formerly "edgy" gen-X 90's kid, am heartily against institutions supporting this kind of behavior while simultaneously supporting people engaging in it. The adults (X, parents, etc.) SHOULD be worrying about this kind of stuff SO THAT popular youth culture can continue to rebel against it.
I'm thinking about the 80s-90s worries about Christian heresy. Popular culture was (and still is) full of insults against Christianity, probably because that was the kind of thing that offended an older generation in the west at the time. Is it wrong for institutions to encourage that?
While I have my own personal moral standards, I see society in general as morally relativist and don't accept arguments that the popular morals of today are right because they're popular now, while the popular morals of previous generations were wrong because they contradict the "right" morals of today. That's why I don't have much respect for people trying to enforce their own culture's arbitrary morals while not equally respecting conflicting morals.
> when bad behavior is not transgressive
That's a tricky one because what's "bad behavior"? Does it include denying the existence of God?
> While I have my own personal moral standards, I see society in general as morally relativist and don't accept arguments that the popular morals of today are right because they're popular now, while the popular morals of previous generations were wrong because they contradict the "right" morals of today
While I agree with you morally, I think that, practically, for the stability of society it's useful for there to be a relatively conservative (and not overly litigious) mainstream that people can choose to freely act outside of, and it doesn't entirely matter what that mainstream is. I am not morally aligned with, say, the Reagan-era moral majority who fought against foul language on TV and in music, but I think there is value in having that "moral majority" to rebel against.
There was this sort of lightning in a bottle in the second half of the 20th century, or maybe this has always been a Western thing, but there was this strong conservative popular culture (you couldn't even swear on television), yet transgression wasn't handled legally (at least not excessively so). So you could go see a transgressive comedian if you wanted to, but it was necessarily a subculture, and I think this idea is healthy for society. Strong social pressure in one direction, but an escape hatch from it if you want, in communities that aren't part of the popular culture.
So yes, I would say X is an institution, and maybe if I had my way, X wouldn't even allow swearing. If you wanted to swear on the internet, you would have to find a relatively "underground" place to do it; you could do it on, say, a private forum, but not on anything with more than, I dunno, 1 million users or something. But when X as an institution tells you that everything is OK, when ideas or pictures or movies basically stop being "dangerous", people stop being "dangerous". There's no unique thought because all ideas are part of the mainstream. I think it creates less free thought, not more.
Is it really better for the world that there's basically no 4chan anymore because Twitter is now 4chan? https://www.newyorker.com/culture/infinite-scroll/how-the-in...
But in summary, to speak to your question directly: I think I'm making a very counter-intuitive argument that the thing you say you want (people from the 90's who valued dangerous media) doesn't exist anymore. In a sense, the folks on X are not that 90's kid; they're the "moral majority" the 90's kid was railing against, in a perverse manner of speaking.
The world grew up and learned better. We've seen the effects that bullying, including cyberbullying, have on people. We've seen teenagers (and adults) get harassed with fake revenge porn.
Do you think that images and text can ever be dangerous? If so, then why do you doubt that computer generated images and text can be dangerous?
If not, then I'm not sure what the point of specifically mentioning "computer generated" is.
I get the people arguing for free speech (in general), as there are lots of upsides.
Where are the upsides in CSAM - whether real or computer generated? How does it benefit society?
In a public forum like X, there probably are no upsides.
In general, though, pedophilia exists. This isn't something that is going to change. What is the harm in providing them with an alternative to real CSAM (which actually and actively hurts children)?
I think you are looking at it from the point of view of it being a punishment for victimizing someone -- when in fact it's used not to punish crime but to put away people who potentially might victimize someone in the future.
[0] > Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.