• 2 Posts
  • 20 Comments
Joined 2 months ago
Cake day: August 10th, 2025


  • Yeah, but so maybe it goes dormant for another 5-10 years and people forget. And then, when middle-aged Stephen Miller comes out like Pink from The Wall after he shaved all his hair, with big plans and fire in his eyes… maybe he’s running against Gavin Newsom, who’s kind of a POS anyway and nothing the left can get excited about, maybe he wins, and maybe then he gets to work. And he’s got energy, he’s not old or stupid.

    I think it’s better that it’s Trump. It’s still awful, but I really think it’s better this way. I don’t think people will forget this once it passes, or get distracted.


  • Seriously. Go after the people who did crimes, make a big public push to impeach the dirty judges, and make sure to bring grocery prices down instead of pushing up wages. If he’d done that, maybe it would have partly worked, maybe it would have been tough, but I think there are pretty fantastic odds that however it shook out, Biden would be out there drooling his way through his second term right now, and Fox News would be screaming about what a problem it was every time he tripped going up the stairs, but there are a lot of Florida detainees who would still be alive. Kilmar would still be happy with his family right now.









  • I was in court a few times when I was younger. Nothing too major.

    The first time, it really disgusted me seeing person after person get up before me and try to tell the judge, with just transparent horseshit, how they hadn’t done anything wrong. My turn came up, the cop testified to what happened, the judge asked me for my side, and I just said that it happened the way the cop said. The judge was legitimately a little taken aback.

    What the fuck, what am I supposed to say? Maybe it would have been different if it had been big charges or if I had been less naive or something, or if there was some wiggle room in what happened, but I just didn’t see the point in wasting everybody’s time and making myself look stupid and dishonest.

    (Note: Do not do this. Courtroom reality is different from everyday-life honorable reality. Get a lawyer, don’t say shit, fight to negotiate a better deal and threaten to waste their time and resources by making them prove it if they don’t work something out with you. That is what a person will do if they want a good outcome. My priorities were different, I guess, I don’t know. I will say that in this case it didn’t wind up getting me in any more trouble than I would have been in anyway. Mostly I’m just telling what happened to me and how I reacted and why.)


  • True that

    I feel like almost every really good leader is someone who happens to have that “natural leader” quality, but also asks people for input constantly and is aware of the limits of their own judgement.

    A little semi-related aside: there is a fascinating story in “Most Secret War” about the author’s first meeting where Winston Churchill was running things. For one thing, Churchill came in in working clothes, the only one not wearing a suit, and for a second everyone thought he was the janitor or something who had walked into the wrong room. He just didn’t carry himself like “the boss.” Once they all realized who it was, everyone stood up, and he sort of waved it off and took his seat like it was nothing special. He had a sort of anti-charisma.

    Once he started running the meeting, Jones said that Churchill had an almost supernatural ability to spot when Jones, at least, had something he needed to say. Somebody would say something that was wrong, Jones would carefully keep his face neutral because he was just some random low-level peon at this meeting and didn’t want to get in trouble, and the next thing he knew, Churchill would say, “Jones, what do you think of that?” Basically he was at a grandmaster level of digging to get to the bottom of what was actually happening so everyone could make good decisions.

    I don’t really know that much about any famous leaders through history, but it was just fuckin’ fascinating as a window onto how these decisions and plans actually get made, to some small extent.


  • Yeah, I get it. I don’t think it is necessarily bad research or anything. I just feel like maybe it would have been good to go into it as two papers:

    1. Look at the funny LLM and how far off the rails it goes if you don’t keep it stable, just let it kind of “build on itself” iteratively over time, and don’t put the right boundaries on it
    2. How should we actually wrap an LLM into a sensible system so that it can pursue an “agent” type of task, what leads it off the rails and what doesn’t, what are some ideas for keeping it grounded, and which of them work and which don’t

    And yeah, obviously they can get confused or output counterfactuals or nonsense as a failure mode; what I meant to say was just that they don’t really do that in response to an overload / “DDOS” situation specifically. They might do it as a result of too much context or a badly set up framework around them, sure.


  • PhilipTheBucket@piefed.social to Technology@lemmy.world · *Permanently Deleted*
    15 days ago

    Initial thought: Well… but this is a transparently absurd way to set up an ML system to manage a vending machine. I mean, it is a useful data point I guess, but to me it leads to the conclusion “Even though LLMs sound to humans like they know what they’re doing, they do not; don’t just stick the whole situation into the LLM input and expect good decisions and strategies to come out of the output, you have to embed it into a more capable and structured system for any good to come of it.”

    Updated thought, after reading a little bit of the paper: Holy Christ on a pancake. Is this architecture what people have been meaning by “AI agents” this whole time I’ve been hearing about them? Yeah this isn’t going to work. What the fuck, of course it goes insane over time. I stand corrected, I guess, this is valid research pointing out the stupidity of basically putting the LLM in the driver’s seat of something even more complicated than the stuff it’s already been shown to fuck up, and hoping that goes okay.

    Edit: Final thought, after reading more of the paper: Okay, now I’m back closer to the original reaction. I’ve done stuff like this before, and this is not how you do it. Have it output JSON, have some tolerance and retries in the framework code for parsing the JSON, be more careful with the prompts to make sure that it’s set up for success, and definitely don’t include all the damn history in the context, up to the full wildly-inflated context window, to send it off the rails. Basically, be a lot more careful with how you set it up than this, and put a lot more limits on how much you are asking of the LLM so that it can actually succeed within the little box you’ve put it in. I am not at all surprised that this setup went off the rails in hilarious fashion (and it really is hilarious, you should read it). Anyway, that’s what LLMs do.

    I don’t know if this is because the researchers didn’t know any better, or because they were deliberately setting up the framework around the LLM to produce bad results, or because this stupid approach really is the state of the art right now, but this is not how you do it. I actually am a little bit skeptical about whether you even could set up a framework for a current-generation LLM that would enable it to succeed at an objective and pretty frickin’ complicated task like they set it up for here, but regardless, this wasn’t a fair test. If it was meant as a test of “are LLMs capable of AGI all on their own, regardless of the setup, like humans generally are,” then congratulations, you learned the answer is no. But you could have framed it a little more directly to talk about that being the answer, instead of setting up a poorly-designed agent framework to be involved in it.
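
    To be concrete about the “output JSON, retry, cap the context” part, here’s a minimal sketch of the kind of framework code I mean. It’s hypothetical, not from the paper: call_llm is a stand-in for whatever completion API you’re using, and the vending-machine prompt and action schema are made up for illustration.

        import json

        MAX_HISTORY = 10   # hard cap on how many past turns go back into the prompt
        MAX_RETRIES = 3    # how many times to re-ask when the output isn't valid JSON

        def call_llm(prompt: str) -> str:
            """Stand-in for whatever completion API is actually in use."""
            raise NotImplementedError

        def ask_for_action(history: list[str], observation: str) -> dict:
            # Only the most recent turns, never the whole wildly-inflated history.
            context = "\n".join(history[-MAX_HISTORY:])
            prompt = (
                "You manage a vending machine. Reply with ONLY a JSON object like\n"
                '{"action": "restock" | "set_price" | "wait", "item": "<name>", "value": 0}\n'
                "Recent events:\n" + context + "\n"
                "Latest observation: " + observation + "\n"
            )
            for _ in range(MAX_RETRIES):
                raw = call_llm(prompt)
                try:
                    action = json.loads(raw)
                except json.JSONDecodeError:
                    continue  # tolerate garbage output and just ask again
                if isinstance(action, dict) and action.get("action") in {"restock", "set_price", "wait"}:
                    return action
            return {"action": "wait", "item": "", "value": 0}  # safe fallback instead of going off the rails

    Whether something like that holds up over a simulated year of operation is exactly the open question, but at least the failures stay inside the little box.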


  • PhilipTheBucket@piefed.social to Technology@lemmy.world · *Permanently Deleted*
    15 days ago

    Yeah it’s a bunch of shit. I’m not an expert obviously, just talking out of my ass, but:

    1. Running inference for all the devices in the building on “our dev server” would not have maintained a usable level of response time for any of them, unless he meant to say “the dev cluster” or something and his home wifi glitched right at that moment and made it sound different
    2. LLMs don’t degrade by giving wrong answers, they degrade by ceasing to produce tokens
    3. Meta already has shown itself to be okay with lying
    4. GUYS JUST USE FUCKING CANNED ANSWERS WITH THE RIGHT SOUNDING VOICE, THIS ISN’T ROCKET SCIENCE, THAT’S HOW YOU DO DEMOS WHEN YOUR SHIT’S NOT DONE YET
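
    (To spell out what I mean by canned answers, purely hypothetically and with made-up questions, it’s literally about this much code, with the reply piped through whatever voice model sounds right:)

        # Hypothetical demo script: the exact questions the presenter will ask,
        # with pre-written replies, plus a bland fallback so nothing dies on stage.
        CANNED_REPLIES = {
            "what's on my calendar today": "You have the hardware demo at ten and nothing after lunch.",
            "how long do i grill the steak": "About four minutes a side for medium rare.",
        }

        def demo_answer(question: str) -> str:
            key = question.lower().strip(" ?!.")
            return CANNED_REPLIES.get(key, "Good question. Let me check on that and get back to you.")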





  • I really would not recommend specializing in C# at this point in computing history. You can do what you want obviously, but Python is much more likely to be what you want. C++ or Java might be okay if you want a job and are okay with a little bit dated / not ideal languages, or you could learn one of the proliferation of niche backend Linuxy languages, but C# has most of the drawbacks of C++ and Java without having even their relative level of popularity.

    IDK what issue you’re having with VSCode, but I think installing the .NET SDK and then using dotnet by hand from the command line, to test the install, might be a good precursor to getting it working in VSCode. But IDK why you would endeavor to do this in the first place.
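
    (For what it’s worth, the command-line sanity check I mean is roughly this, from memory, so double-check against the .NET docs:)

        dotnet --version              # confirms the SDK is installed and on PATH
        dotnet new console -o Hello   # scaffolds a minimal console project
        cd Hello
        dotnet run                    # should print "Hello, World!" if the toolchain works

    If that works from a plain terminal, whatever is broken is on the VSCode side (extension or paths), not the SDK install.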



  • Beastie Boys had one of the first and biggest of the anti-Iraq-War songs; I can’t think offhand of one that was more “mainstream” at the time and still explicit and specific about it.

    Well I’ll be sleeping on your speeches 'til I start to snore
    Cause I won’t carry guns for an oil war
    As-Salamu alaikum, wa alaikum as-salam
    Peace to the Middle East peace to Islam

    And so on. It might not have been the best (IMO that is “Empire” by Dar Williams, with haunting sadness, historical scope, and irony), but it was big.