• ShotDonkey@lemmy.world · 2 points · 14 minutes ago

    The results, especially the high numbers cited in the news article (68% recall, 90% accuracy), are overestimated, because the verification method (i.e., checking whether the LLM really found the right account) relied on matching verified accounts against a test set of anonymous accounts whose real names were known. The real names were known because those people had a public link to their LinkedIn in their “anonymous” profile (the link was removed for the sake of testing whether the LLM could match the two accounts). That being said: a user who uses a pseudonym but publicly links their account to, say, a LinkedIn account doesn’t really care about anonymity, and might hand out many more ‘breadcrumbs’ to follow than a truly anonymous account would.

    But I still think that even in the case of a fully anonymous account, people can be fingerprinted by an LLM and matched with non-anonymous identities through language, style, etc.
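
    The kind of stylometric matching I mean can be illustrated with something far cruder than an LLM; here is a minimal sketch, assuming character n-gram profiles and cosine similarity (the sample texts are made up for illustration):

    ```python
    from collections import Counter
    from math import sqrt

    def ngram_profile(text, n=3):
        """Character n-gram frequency profile of a text."""
        text = text.lower()
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    def cosine(a, b):
        """Cosine similarity between two frequency profiles (0.0 to 1.0)."""
        dot = sum(a[g] * b[g] for g in a if g in b)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # Two samples in the same writing style vs. one in a different style:
    anon = "tbh i reckon this is fine, tbh the whole thing is overblown imo"
    linked = "tbh i reckon the article is overblown, imo nothing new here tbh"
    other = "The quarterly report demonstrates significant year-over-year growth."

    print(cosine(ngram_profile(anon), ngram_profile(linked)))  # same style: higher
    print(cosine(ngram_profile(anon), ngram_profile(other)))   # different style: lower
    ```

    Real attribution systems use far richer features (function words, punctuation habits, vocabulary), but the principle is the same: style is a measurable signal.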

  • ComradePenguin@lemmy.ml · 6 points · 6 hours ago

    Is this the first step towards using local LLMs for anonymity? 🫠 Always rephrasing each sentence somewhat. Truly dystopian stuff

  • ne0phyte@feddit.org · 25 points · 13 hours ago

    I am so grateful that I became paranoid about sharing anything identifying about me 15+ years ago.

    I never uploaded a picture of myself. Never used my real name anywhere. I used different nicks for different branches of the Internet. A plethora of different email addresses etc.

    People thought I was being overly careful, and I probably missed out on a lot by not using WhatsApp, Facebook, Instagram, Twitter, or Snapchat, but I can’t say I’ve regretted it at any point.

    • TankovayaDiviziya@lemmy.world · 2 points · 1 hour ago

      Doing those things is not unreasonable, but not even having a bank account goes way too far. I know someone who was later diagnosed with autism and doesn’t have a job due to the condition; they initially didn’t want a bank account for fear of online snooping.

      Minimising your digital footprint is perfectly fine, but trying to be off the grid while still wanting to participate in society and engage in consumption is unreasonable. And this thinking isn’t limited to one person; I saw many users on Reddit’s privacy subreddit stressing themselves out trying to completely wipe their digital footprints. Unless you participate in political activities, or really just want to live completely isolated in a forest, going off the grid is totally unreasonable.

    • Scrollone@feddit.it · 15 points · 11 hours ago

      It’s not enough. You should use a different writing style for each website you write on.

  • FauxPseudo @lemmy.world · 80 up / 1 down · 22 hours ago

    From a Facebook post I made on February 17th:

    There are giant AI data firms that promise they can go through massive troves of data and pull out general and specific information from them. Information that is actionable and accurate. Give it 6 million data points and it’ll find all the links and organize them for you and unmask hidden details that aren’t visible to the naked eye.

    Not one of those companies is stepping up to go through the publicly released Epstein files.

    • Spaniard@lemmy.world · 2 points · 4 hours ago (edited)

      Today I asked an AI to tell me which phone providers were available, sorted by price and offers, and it lied the whole time. When I pointed this out, the AI corrected most of it, but for some reason it also removed some entries that were accurate.

      It would have been quicker if I had done it myself instead of asking the AI. Oh, and it also didn’t list all the companies.

      Maybe those companies have better AI that makes no mistakes, but I doubt it. I think the LLMs will lie, and no one has time to check whether they are correct.

        • Spaniard@lemmy.world · 2 points · 2 hours ago (edited)

          How come it ended up giving me the right answer then, albeit while removing some previously right answers? (It removed a few companies for some reason.)

          Anyway, that was a small and easy-to-check piece of misinformation, but if they have over three decades of online information about me, no way is a person going to confirm the LLM didn’t bullshit its way to an answer just to satisfy the human.

          • madmantis24@lemmy.wtf · 2 points · 2 hours ago

            These models aren’t going to produce accurate information about the people they investigate, and it won’t even matter whether it’s accurate. What “matters” is that their reports will add new layers to the facade of legitimacy around whatever story the authorities using them want to construct.

        • General_Effort@lemmy.world · 1 point · 3 hours ago

          I don’t think you can do literally the same thing on the Epstein files. Maybe I’m misunderstanding what you have in mind.

          • FauxPseudo @lemmy.world · 1 point · 2 hours ago

            In theory, using the released files plus information from public sources, it should be possible to figure out who the redacted names are, based on writing style and other factors. We should be able to deanonymize them.

    • Randomgal@lemmy.ca · 28 points · 22 hours ago

      This is what I find crazy. Where are the AI bros chewing through the Epstein files?

      • osaerisxero@kbin.melroy.org · 19 points · 21 hours ago

        I would be shocked if someone hasn’t shoved them into a local model somewhere, but all the big ones would filter them to death with content restrictions

    • Mubelotix@jlai.lu · 2 up / 1 down · 15 hours ago

      We wouldn’t want that tbh. Justice needs to be precise and backed up by tangible facts

      • KeenFlame@feddit.nu · 4 points · 14 hours ago

        Also, don’t use DNA tests or chemical analysis. It’s invisible hocus pocus and can be wrong! And woe if someone who fucks and tortures kids regularly is wrongly accused of raping kids and ruining their young minds; no, that would be awful.

      • FauxPseudo @lemmy.world · 3 points · 15 hours ago

        You can use the results of the AI analysis to identify people and then use that to do a proper investigation. Right now none of that is happening. No speculation. No tangibles. No investigation. No indictment.

        Trying to unmask people is a step in the right direction.

  • jballs@sh.itjust.works · 121 points · 24 hours ago

    As a registered Republican woman from Texas with five children and two dogs, let me just say that I am astonished!

    • whaleross@lemmy.world · 9 points · 13 hours ago (edited)

      As true as my name is Brenda and my last name is also Brenda. And so is my husband, Brenda. It is a hot day in Texas America today, I’m going to grill one of our dogs for dinner. It is a hot day republican tradition to grill a dog. Hence the name Hot Dogs and the playful name Wieners, named after wiener dogs. Oh lordy bless you heart yeehaa.

    • pivot_root@lemmy.world · 46 points · 22 hours ago

      Me too. I thought I was safe as an Ottoman Empire expatriate living in Arrakis! I don’t want LLMs to connect this account to my pseudonymous mommy blog, where I write about my three children who might exist but could be delusions of my untreated schizophrenia.

      • potoooooooo ✅️@lemmy.world · 9 points · 15 hours ago

        Oh, WE EXIST, mommy! Let me assure you, as one of said imaginary schizophrenia babies. Currently shacking up in Miami with my new wife I just met cranking my hog at Sturgis.

      • Bigfishbest@lemmy.world · 8 points · 15 hours ago

        I don’t believe this! As a fumgrian living as a would be dead camoose off Mt. Kabul, I am overjizzed that AI is reading all my pornhub comments.

      • CheesyFingers@piefed.social · 16 points · 18 hours ago

        It seems that I, the original Unidan, will unfortunately need to create even more alts to escape being found out. Blast!

  • cley_faye@lemmy.world · 5 up / 1 down · 13 hours ago

    Yeah. I got a hunch of that a while ago, while trying some “old” scenarios of de-anonymization we used to do by hand. Just asking questions and posting pictures got surprisingly accurate results. A single picture with (to me) no significant landmark could lead to localizing a specific part of a city, and that was using a local LLM with a relatively small model, running on a 16GB VRAM 4060Ti.

    It is now time to remember fondly the days when younger people were warned by older people not to post all their stuff online, not to over-share, to be cautious about strangers, etc. I’m not sure when we lost that, but oh boy, it’s a festival.

  • doesit@sh.itjust.works · 7 points · 15 hours ago (edited)

    Kind of obvious. If you’re a high school teacher and you used to be a photographer. You also volunteer as a fireman. You live in France. You have 2 daughters. In 2022 you asked about repairs on your Honda Civic.
    All of this can be amassed from different posts on Facebook or Reddit. There’ll be just a few people who fit this profile.
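
    That narrowing is just repeated set intersection; a toy sketch with a made-up population and attributes (everything here is hypothetical):

    ```python
    # Each clue amassed from different posts shrinks the candidate pool.
    population = [
        {"id": 1, "job": "teacher", "country": "France", "daughters": 2, "car": "honda civic"},
        {"id": 2, "job": "teacher", "country": "France", "daughters": 1, "car": "renault clio"},
        {"id": 3, "job": "nurse",   "country": "France", "daughters": 2, "car": "honda civic"},
        {"id": 4, "job": "teacher", "country": "Spain",  "daughters": 2, "car": "honda civic"},
    ]

    clues = {"job": "teacher", "country": "France", "daughters": 2, "car": "honda civic"}

    candidates = population
    for key, value in clues.items():
        # Keep only the profiles consistent with this clue.
        candidates = [p for p in candidates if p[key] == value]
        print(f"after '{key}': {len(candidates)} candidate(s) left")
    ```

    With a real population the pools are bigger, but enough independent clues still cut millions of people down to a handful.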

  • ExLisper@lemmy.curiana.net · 6 up / 1 down · 14 hours ago (edited)

    I think this will only work with people narrating their lives on social media.

    “Got coffee from my favorite Granier at La Rambla! Ready for a new day of work designing hats for dogs”

    “Me and Bobby heading to Madrid to see my friend Concepcion. Do you like his new hat?”

    “Just got nominated for ‘best business-casual hat’ at this year’s Barkies! So proud”

    And so on…

    Because how are you going to de-anonymize some random ramblings about Linux and beans? Everyone likes Linux and beans.

    • vaultdweller013@sh.itjust.works · 2 points · 4 hours ago

      Hell, you could even pull a me and just lie about some shit. A story I heard from a YouTube video or documentary gets modified into one about an old bastard I knew, for example. A human would eventually pick up that I’m lying when I introduce such variables, but an AI may come to the conclusion that I’m Zack Hazard. Point is that I dirty up the info about myself; the psychotic LARP I do semi-consistently probably also helps.

    • rebelsimile@sh.itjust.works · 7 points · 10 hours ago

      This is good advice. Pick a person you know and drop hints that you’re them. Bonus points if that person is terminally online. Anyway, gotta get back to running X.

  • tal@lemmy.today · 36 points · 1 day ago

    Of course, another option is for people to dramatically curb their use of social media, or at a minimum, regularly delete posts after a set time threshold.

    Deletion won’t deal with someone seriously-interested in harvesting stuff, because they can log it as it becomes available. And curbing use isn’t ideal.

    I mentioned before the possibility of poisoning data, like, sporadically adding some incorrect information about oneself into one’s comments. Ideally something that doesn’t impact the meaning of the comments, but would cause a computer to associate one with someone else.

    There are some other issues. My guess is that it’s probably possible to fingerprint someone to a substantial degree by the phrasing that they use. One mole in the counterintelligence portion of the FBI, Robert Hanssen, was found because on two occasions he used the unusual phrase “the purple-pissing Japanese”.

    FBI investigators later made progress during an operation where they paid disaffected Russian intelligence officers to deliver information on moles. They paid $7 million to KGB agent Aleksander Shcherbakov[48] who had access to a file on “B”. While it did not contain Hanssen’s name, among the information was an audiotape of a July 21, 1986, conversation between “B” and KGB agent Aleksander Fefelov.[49] FBI agent Michael Waguespack recognized the voice in the tape, but could not remember who it was from. Rifling through the rest of the files, they found notes of the mole using a quote from George S. Patton’s speech to the Third Army about “the purple-pissing Japanese”.[50] FBI analyst Bob King remembered Hanssen using that same quote. Waguespack listened to the tape again and recognized the voice as Hanssen’s. With the mole finally identified, locations, dates, and cases were matched with Hanssen’s activities during the period. Two fingerprints collected from a trash bag in the file were analyzed and proved to be Hanssen’s.[51][52][53]

    That might be defeated by passing text through something like an LLM to rewrite it. So, for example, to take a snippet of my above comment:

    Respond with the following text rephrased sentence by sentence, concisely written as a British computer scientist might write it:

    Deletion won’t deal with someone seriously-interested in harvesting stuff, because they can log it as it becomes available. And curbing use isn’t ideal.

    I mentioned before the possibility of poisoning data, like, sporadically adding some incorrect information about oneself into one’s comments. Ideally something that doesn’t impact the meaning of the comments, but would cause a computer to associate one with someone else.

    I get:

    The deletion of data alone will not prevent a determined party from gathering information, as they may simply record the information as it becomes available prior to its deletion. Moreover, restricting usage is not an ideal solution to the problem at hand.

    I previously mentioned the possibility of introducing deliberate errors or misinformation into one’s own data, such as periodically inserting inaccurate details about oneself within comments. The goal would be to include information that does not significantly alter the meaning of the comment, but which would cause automated systems to incorrectly associate that individual with another person.

    That might work. One would have to check the comment to make sure that it doesn’t mangle the thing to the point that it is incorrect, but it might defeat profiling based on phrasing peculiarities of a given person, especially if many users used a similar “profile” for comment re-writing.

    A second problem is that one’s interests are probably something of a fingerprint. It might be possible to use separate accounts related to separate interests — for example, instead of having one account, having an account per community or similar. That does undermine the ability to use reputation generated elsewhere (“Oh, user X has been providing helpful information for five years over in community X, so they’re likely to also be doing so in community Y”), which kind of degrades online communities, but it’s better than just dropping pseudonymity and going 4chan-style fully anonymous and completely losing reputation.
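
    For what it’s worth, the rewriting pass is easy to script against a local model; a minimal sketch, assuming a local Ollama server on its default port (the model name and host are assumptions, not a specific recommendation):

    ```python
    import json
    import urllib.request

    # The rephrasing instruction from the example above.
    PROMPT_TEMPLATE = (
        "Respond with the following text rephrased sentence by sentence, "
        "concisely written as a British computer scientist might write it:\n\n{text}"
    )

    def build_request(text: str, model: str = "llama3",
                      host: str = "http://localhost:11434") -> urllib.request.Request:
        """Build a request for Ollama's /api/generate endpoint."""
        body = json.dumps({
            "model": model,
            "prompt": PROMPT_TEMPLATE.format(text=text),
            "stream": False,  # ask for a single complete JSON response
        }).encode()
        return urllib.request.Request(
            f"{host}/api/generate",
            data=body,
            headers={"Content-Type": "application/json"},
        )

    def rephrase(text: str, model: str = "llama3") -> str:
        """Send a comment through the local model and return the rewritten text."""
        with urllib.request.urlopen(build_request(text, model)) as resp:
            return json.loads(resp.read())["response"]
    ```

    One would run each comment through rephrase() before posting and, as noted, check the output by hand to make sure the meaning survived.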

    • zerofk@lemmy.zip · 1 point · 15 hours ago

      Your above average use of the word “one” and variations like “one’s” could be quite telling.

      As could my correction of “it’s” in the above sentence.

    • HyperfocusSurfer@lemmy.dbzer0.com · 4 points · 23 hours ago (edited)

      Regarding the last point: it’s more of a bias, tho, so reducing it may even be a good thing. E.g. asking Kent Overstreet’s opinion on your bcachefs setup is probably useful, while getting relationship advice from him is ill-advised.

      • regenwetter@piefed.social · 2 points · 18 hours ago

        Advice being right or wrong isn’t necessarily the big issue for online communities (unless most other users are also wrong). What really degrades them is users acting like assholes, and someone who acts like that in a tech community is fairly likely to also do that in a political or relationship community.