• brucethemoose@lemmy.world · 3 months ago

    Nitpick: it was never ‘filtered’

    LLMs can be trained to refuse excessively (which is kinda stupid and has been shown to make them dumber), but the correct term is ‘biased’. If it were filtered, it would literally return empty responses for anything deemed harmful, or at least take noticeably longer while it retried.
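
    A rough sketch of the difference (Python, with made-up generate()/is_harmful() stand-ins for a model call and a moderation classifier; not any real API):

    ```python
    def generate(prompt: str) -> str:
        # Toy stand-in for a model call. With refusal training ('bias'),
        # the behavior is baked into the weights behind this one call.
        return f"model reply to: {prompt}"

    def is_harmful(text: str) -> bool:
        # Toy stand-in for an external moderation classifier.
        return "forbidden" in text

    def filtered_chat(prompt: str, max_retries: int = 3) -> str:
        # Actual filtering: a separate check runs AFTER generation.
        # Flagged replies get retried or dropped, so the user sees
        # either noticeable retry latency or an empty response.
        for _ in range(max_retries):
            reply = generate(prompt)
            if not is_harmful(reply):
                return reply
        return ""  # filtered: literally nothing comes back

    def biased_chat(prompt: str) -> str:
        # Refusal-trained model: nothing bolted on, answers immediately;
        # any refusal comes from the weights themselves.
        return generate(prompt)
    ```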

    They trained it to praise Hitler, intentionally. They didn’t remove any guardrails. Not that Musk acolytes would know any different.