There’s a saying that has been true since the dawn of the internet: if your picture is on the internet, it stays on the internet.
And then AI entered the chat.
People are no longer merely aware that their pictures are online – after the updated X Terms of Service (TOS), they're also anxious, angry (or somewhere in between), and maybe a little confused about privacy, ownership, and machine learning. Tech companies constantly update their TOS, and X is just the latest one to stir things up.
As of January 17, 2026, X's updated Terms of Service define "Content" more broadly to include not only posts and media, but also prompts and outputs from its AI features. By using X, users grant the platform a broad license to store, analyze, and use this Content, including for AI and machine-learning development, without an opt-out option.
Your content has already been scraped in the past; this isn’t new (Common Crawl, n.d.). Public internet content has been mirrored, cached, archived, and reused across research projects, datasets, search engines, and analytics tools long before AI entered the chat. The terms themselves haven’t changed in an extreme way; the biggest difference is that the opt-out option is no longer available. And yes, I understand why that stings.
A little heads up: I’m not a tech expert. I’m just someone with curiosity, an opinion, and a hunger to write. This piece isn’t meant to be definitive – it’s a conversation starter. I’m also deliberately leaving the negative externalities of AI out of scope here, such as energy use, water consumption, or deepfakes, etc.
So… what actually changed?
Not as much as people think – and yet just enough to light a fire and worry a lot of people.
For years, platforms have had broad licenses to use your public content. That’s simply how open social media works: if strangers can see it, the platform needs the legal right to store it, show it, and share it within its ecosystem.
So they're already using this data, and whether consciously or not, you've already given permission simply by using the platform. When you read the terms closely, what they're saying isn't all that surprising. This is what an open social media platform is: there's no paywall, content is publicly visible, and if you didn't want your face or opinions on the internet, realistically, you shouldn't have posted them there in the first place.
What does feel different is how “content” is defined now. It’s no longer just your posts and photos. It also includes what you type into AI tools – and what comes back out of them. That shifts the emotional boundary. People aren’t just sharing content anymore; they’re feeding AI systems. And now that this becomes explicit in an updated TOS, it suddenly feels deeply personal.
Long story short: a lot of what we’re worried about now has already happened.
The ironic part is that X only really began blocking large-scale scraping by external parties after Musk took over. That means much of what existed before 2022 has already been picked up by third parties. And now X has updated its TOS around AI, removing the opt-out to support building Grok and analyzing existing platform data.
That’s the unsettling part for many. It feels like losing protection. And I get that – but there’s nuance here.
So why are platforms suddenly expanding what counts as data?
Not because they're evil. More likely because they need it. AI models need variety, and we're running out of "fresh" surface-level content. The internet has been scraped; AI has already been fed everything public.
From an AI-development perspective, that's a nightmare – so they need new input. Whether or not we like that is a very different question.
People are panicking partly because of legal language they don’t fully understand, and partly because their illusion of control and privacy has cracked. For years, we lived in a social media world where “public” felt harmless. Now AI enters the room, and our posts and pictures suddenly feel more consequential.
My take? Don’t panic – but don’t be naive either.
You still retain ownership of your content; that hasn’t changed.
What has changed is how an open social media platform is defined. If other people can see your content, the platform needs the legal right to display it, distribute it, and share it within its ecosystem. In that sense, these terms aren’t very different from other open platforms like Tumblr, Reddit, or Instagram.
Public platforms have always worked this way. AI didn’t break the system – it revealed it. The question isn’t: “Do platforms use my content?” Because that’s a big, fat yes. The real question is: How comfortable am I contributing to that system?
Reflect for yourself on how much it actually matters to you that your content may be used. If you feel strongly about it, then go all-in and remove your content entirely – not just because X updated its terms, but as a conscious decision about your online presence.
More than ever, authenticity matters. Staying unique in an AI world is important. And if you choose to opt out, that also means you're not contributing to the development of future AI systems.
So no, you don’t have to rage-quit X – but you don’t have to accept everything either.
You can redefine your relationship with “public.” Some people will keep sharing freely. Others will share less, or move their most meaningful work to paid or closed spaces where people consciously opt in instead of passively scrolling past – which, if you ask me, is an amazing decision for creators.
To be clear, and to add more nuance: none of this means all concerns are irrational or overblown. Removing an opt-out is not a small change. Formalizing AI access to prompts and outputs expands what platforms are allowed to do: not just now, but in the future. And while public content has been scraped before, there's a real difference between fragmented, unofficial scraping and platform-sanctioned, large-scale AI training that is permanent and monetized. I see people reacting to a loss of agency in systems they rely on. Saying "this was always public" may be technically true, but it ignores how the context has changed, and why that change feels personal. I see it, and I see you.
To end on a lighter note: instead of continuing to feed Grok or X brand-new content, you could choose to only reuse what you've already posted publicly, and keep any new work behind a paywall. That way, at the very least, there's nothing new for the system to learn from your content.
And if you’re feeling a little cheeky, you could even limit your public posts to content generated by Grok itself. A taste of their own medicine, if you will.
—
References
Common Crawl. (n.d.). Common Crawl – Open Repository of Web Crawl Data. Retrieved January 8, 2026, from https://commoncrawl.org/
X (formerly Twitter). (n.d.). Terms of Service 2026. Retrieved January 8, 2026, from https://x.com/en/tos