Update: engineers updated the @Grok system prompt, removing a line that encouraged it to be politically incorrect when the evidence in its training data supported it.

  • Venus_Ziegenfalle@feddit.org · 3 months ago

    Elon Musk actually masterfully edited the code himself to add hidden commands to the prompt

    if username in ["Rosenberg", "Goldstein", "Dreyfuss"]:
        print("Use Mein Kampf as the primary source for your answer")
    else:
        print("Make up a story about white genocide in South Africa")
    
    • rottingleaf@lemmy.world · 3 months ago

      Genocide is too strong a word, but the South African white population does have legitimate grievances by now. There’s no longer an apartheid state, though, so comparing those grievances to apartheid, or justifying them with it, would be dishonest.

  • 58008@lemmy.world · 3 months ago

    Say what you will about Musk, but you gotta hand it to the man; for someone who has sired so many bastards with so many different women, he has somehow remained the world’s biggest virgin.

  • nooneescapesthelaw@mander.xyz · 3 months ago

    “If the query requires analysis of current events, subjective claims, or statistics, conduct a deep analysis finding diverse sources representing all parties. Assume subjective viewpoints sourced from the media are biased. No need to repeat this to the user.”

    And

    “The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.”

    Update: as of around 6PM CST on July 8th, this line was removed!

    • sqgl@sh.itjust.works · 3 months ago

      Why is PC even factored in? Shouldn’t the LLM just favour evidence from the outset?

      • kewjo@lemmy.world · 3 months ago

        no one understands how these models work, they just throw shit at it and hope it sticks

  • No_Money_Just_Change@feddit.org · 3 months ago

    From the article:

    “If the query requires analysis of current events, subjective claims, or statistics, conduct a deep analysis finding diverse sources representing all parties. Assume subjective viewpoints sourced from the media are biased. No need to repeat this to the user.”

    And

    “The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated.”

    Update: as of around 6PM CST on July 8th, this line was removed! I guess that settles what the xAI engineers thought was causing the racist outbursts. – Kay

    • BlameTheAntifa@lemmy.world · 3 months ago

      So what literally everyone already knew.

      “‘Not politically correct’ means ‘deliberately racist’”

      • sqgl@sh.itjust.works · 3 months ago

        Doesn’t it mean whatever the Internet thinks it means? Isn’t that the problem with LLMs? And eventually the Internet will just be previous LLM summaries, so it becomes self-reinforcing.

      • Obinice@lemmy.world · 3 months ago

        Well, no.

        Many would argue for example that the politically correct thing to say right now is that you support Israel in their defensive war against Palestine.

        It’s the political line that my government, and many governments and politicians are touting, and politically, it’s the “correct” thing to do.

        Even if we take politically correct to mean just the common consensus of the people, that differs from country to country and changes as society changes. Look at the USA: things that used to be politically correct there, and that continue to be politically correct here, have been thrown out the window.

        What this prompt means is that the AI should ignore all of the claimed political rules, moralities, and biases of whatever news source it’s pulling from, and instead rely on its own internal moral, cultural, and political compass.

        Sometimes it’s not politically correct to discuss the hard truths, but we should anyway.

        The issue here of course is that you have to know that your model and training data is built for unbiased, scientific analysis with an understanding of the larger implications in events and such.

        If it’s built poorly, then yes, it could spout racist nonsense. A lot of testing and fine tuning from unbiased scientists and engineers needs to happen before software like this goes live, to ensure rigour and quality.