Paper by:
Simon Lermen, Daniel Paleka, Joshua Swanson, Michael Aerni, Nicholas Carlini, Florian Tramèr
It talks about deanonymizing those who write under a pseudonym. Sites like Reddit or Lemmy would be that type.
From the paper,
Given two databases of pseudonymous individuals, each containing unstructured text written by or about that individual, we implement a scalable attack pipeline that uses LLMs to: (1) extract identity-relevant features, (2) search for candidate matches via semantic embeddings, and (3) reason over top candidates to verify matches and reduce false positives.
Our results show that the practical obscurity protecting pseudonymous users online no longer holds and that threat models for online privacy need to be reconsidered.
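The candidate-search stage (2) above can be sketched roughly. This is a toy illustration, not the paper's pipeline: the `embed` function below is a hypothetical stand-in (a plain bag-of-words counter) for the semantic embedding model the authors use, and the profile strings are made up. It just shows the shape of the idea: embed each pseudonym's extracted features, then rank candidates by similarity to a query profile.

```python
# Toy sketch of embedding-based candidate search (stage 2 of the
# pipeline). All names and data here are hypothetical examples.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    """Hypothetical stand-in for a real semantic embedding model:
    a simple bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_candidates(query: str, db: dict[str, str], k: int = 3) -> list[str]:
    """Rank pseudonyms in `db` by similarity of their feature text
    to the query profile; return the top k matches."""
    q = embed(query)
    scored = sorted(db.items(),
                    key=lambda kv: cosine(q, embed(kv[1])),
                    reverse=True)
    return [name for name, _ in scored[:k]]

# Made-up feature summaries, as stage (1) might extract them.
profiles = {
    "FooFighterGroupie": "lives in new hampshire loves foo fighters concerts",
    "Yolanda43905": "nurse in ohio posts about gardening",
}
print(top_candidates("new hampshire foo fighters fan", profiles, k=1))
# → ['FooFighterGroupie']
```

A real attack would then pass the top candidates to an LLM for the verification step (3), which is what keeps false positives down.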
They can match writing styles, interests, and details that infer a job or city, or other unstructured information. That makes it possible to match unrelated pseudonyms to the same person. Like, FooFighterGroupie and Yolanda43905 are the same human, even though they never said it. It can also match a pseudonym to a real identity across sites, like someone who posted on LinkedIn under their real name. It takes less info than most people expect to figure out that Julia Greenberg of Cedarville, NH is FooFighterGroupie.
You can protect yourself by never giving away much info. But ofc sometimes that's the whole point! Talking about specific hobbies or w/e gives away info. Also change up your writing style + vocab, b/c it's a unique fingerprint.
I doubt this technique is used in a dragnet way… YET! But no reason it can't scale, if the cost of resources goes low enough. We could eventually see it become standard analysis to link people across sites and identities.
Use throwaway accounts with no more than 6 months of history or 100 comments.
Yes I’ve been worried about exactly this. I’m sure it’s very much within the realm of possibility these days.
As if we need more lessons in how cautious we should be with what we're putting on the internet. What was true 20 years ago hasn't changed.
So it seems that letting LLMs write sloppy posts for us can be useful after all. Maybe c/privacy should implement automatic AI reformatting XD
Previously, the advice was to translate your posts into one or two languages before posting. It seems that even rough content generated by large language models (LLMs) can help people fit in more easily.
I like how slop became “rough content” after translation.
Yah, there might be something to that. For protection against style + vocab matching.
It sucks though. I recently read that the more people use LLM assistants when they write, the more the whole virtual commons grows bland. It feeds back on itself.
Sigh. I just want a world where we can have nice things, and assholes don't try to ruin the nice things we could have.
You’re absolutely right! It’s not just subterfuge—it’s praxis.
I’ve been expecting to hear of something like this, it’s a natural evolution of LLM use cases and grimly inevitable.
It’s a damn good thing I’m a gun toting Ohio libertarian that never lies online at all
Definitely! I recall seeing you at the Lodge meetings.
We should go to the range sometime to get away from those dang liberals😎