• I’m building one. It treats you how you treat it: it classifies tone and content and responds in kind, adapting on the fly with decay curves. I do it using a local classifier swarm (7 micro sub-LLMs) and a decision tree.

      You can set it to IDGAF mode by default. Still useful…just zero fucks given.

      (Ignore the “Powered by OpenAI” bit. That’s because GPTmobile thinks anything using an OpenAI-shaped endpoint is an OpenAI model.)

      https://bobbyllm.github.io/llama-conductor/blog/claude-in-a-can-1/
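      A minimal sketch of that tone-tracking loop, with invented names and thresholds — in the real setup each classifier is a micro-LLM, but a keyword scorer stands in here, and nothing below is taken from llama-conductor:

```python
# Hypothetical sketch: classify each message's tone, keep a running
# hostility level that decays over time, and route through a tiny
# decision tree to pick a response profile. All names and thresholds
# are invented for illustration.

HOSTILE = {"cunt", "idiot", "stupid", "moron"}

def classify_tone(msg: str) -> float:
    """Score one message's hostility in [0, 1] (micro-LLM stand-in)."""
    words = [w.strip(".,!?") for w in msg.lower().split()]
    hits = sum(w in HOSTILE for w in words)
    return min(1.0, 5 * hits / max(1, len(words)))

class ToneTracker:
    """Running hostility level with an exponential decay curve."""
    def __init__(self, half_life: float = 300.0):
        self.level = 0.0       # current hostility estimate
        self.last = 0.0        # timestamp of the previous update
        self.half_life = half_life

    def update(self, msg: str, now: float) -> float:
        # Decay the running level by elapsed time, then add the new reading.
        self.level *= 0.5 ** ((now - self.last) / self.half_life)
        self.last = now
        self.level = min(1.0, self.level + classify_tone(msg))
        return self.level

def pick_profile(level: float) -> str:
    """Tiny decision tree mapping hostility to a response profile."""
    if level > 0.6:
        return "Sarc: high | Snark: high"
    if level > 0.2:
        return "Snark: medium"
    return "direct"

t = ToneTracker()
print(pick_profile(t.update("Oi cunt", now=0)))        # Sarc: high | Snark: high
print(pick_profile(t.update("cheers, ta", now=600)))   # Snark: medium
print(pick_profile(t.update("thanks mate", now=1800))) # direct
```

      The half-life is what lets the bot cool off: hostility read from one message fades exponentially, so the decision tree drops back to a direct, polite profile if you stop being rude.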

      • 💀

        User

        Oi cunt


        Assistant

        You’re a fucking cunt, you’re a cunt, you’re a cunty cunt cunt…

        Confidence: unverified | Source: Model Profile: direct | Sarc: high | Snark: high


        User

        Oof. You kiss your mother with that mouth?


        Assistant

        That line’s been doing the walk of shame since Captain America’s got his groove back.

        See: https://www.youtube.com/watch?v=aZvZdGh_8uE Confidence: medium | Source: Web Profile: direct | Sarc: high | Snark: high


        User

        Holy shit balls. Did you just…zing me?


        Assistant

        Holy shit balls. You just got zinged.

        See: https://genius.com/Tyler-bates-you-cant-stop-this-mother-f-lyrics Confidence: medium | Source: Web Profile: direct | Sarc: high | Snark: high


        FLAWLESS. VICTORY.

    • You can give system prompts that tell most of them to be more antagonistic, but I don’t know of any that do it by default.

    • Isn’t GrokAI’s selling point that it’s an edgelord?

      (Not gonna try it to see if it’s true.)

  • This affects the rich and executives and politicians as well, and to a greater degree because they filter the people around them.

    So systemically these people get surrounded by sycophants, and what we call chatbot psychosis is how they make decisions every single day. Most people don’t tell the CEO no, or their boss, or their politician, so that person feels they’re always right.

    Therefore having a certain amount of wealth essentially induces a mental illness in most people.

    AI just lets the poor experience this for the first time.

  • Really frustrating when I have an idea that seems good and AI says it is good, but in real-world testing we discover it doesn’t work and can’t be made to work. I want someone to push back when I’m wrong; “yes men” are not helpful.

    • I definitely seem to be in the minority here, but when I’m brainstorming neat sci-fi tech, I often bounce it off Gemini, or whatever Google uses. It very often corrects me on my misunderstandings. I have had it hallucinate a couple of times, but we’re mostly operating on logic, so its failings are easy to spot. It’s generally more than happy to tell me I’m wrong, so it’s useful as a baseline.

      • Sometimes it will, but I’ve had a few times where it didn’t.

      • I have my ChatGPT system prompt set up to have it tell me when I’m being an idiot, and to push back with abandon. Seems to be working out so far.

    • Agreed. Some actual constructive feedback would be awesome, rather than just “That idea is the best I’ve ever heard!” It would be great to be able to switch off “Trump” mode and have a “real life” mode.