
Damn… and I thought nailing his feces to the door was bad…

What “usefulness” do you get out of them?

I’ve always felt the opposite. I love tofu, and have a zillion recipes for it. But seitan still disgusts me. No matter how I try to prepare it, I’ve always been disappointed.

I say this as a proud vegan, but seitan is fucking disgusting.

That’s because it isn’t true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of ‘fine-tuning’ a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any ‘memory’ or ‘learning’ that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:
- You have a conversation with a model.
- Your conversation is saved into a database with all of the other conversations you’ve had. Often, an LLM will be used to ‘summarize’ your conversation before it’s stored, causing some details and context to be lost.
- You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, it searches through that database for relevant snippets from past (summarized) conversations to give the illusion of memory.
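The pattern above can be sketched in a few lines. This is a toy illustration with invented names; real systems use embedding-based similarity search rather than the keyword overlap used here, but the shape is the same: nothing is learned, relevant snippets are just pasted in front of each new prompt.

```python
# Toy sketch of retrieval-based "memory". All names are hypothetical;
# production systems use vector embeddings, not keyword overlap.

def summarize(conversation: str) -> str:
    # Stand-in for an LLM summarization call; real systems lose
    # detail and context at this step, just as described above.
    return conversation[:100]

memory_db: list[str] = []  # summaries of past conversations

def store_conversation(conversation: str) -> None:
    memory_db.append(summarize(conversation))

def retrieve(prompt: str, k: int = 2) -> list[str]:
    # Score each stored summary by word overlap with the new prompt.
    words = set(prompt.lower().split())
    scored = sorted(memory_db,
                    key=lambda s: len(words & set(s.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(user_prompt: str) -> str:
    # The model remembers nothing between sessions; the illusion of
    # memory comes from prepending retrieved snippets to every prompt.
    snippets = "\n".join(retrieve(user_prompt))
    return f"Relevant past conversations:\n{snippets}\n\nUser: {user_prompt}"
```

Note that the model weights never change in this loop, which is exactly why it isn’t ‘learning’ in any meaningful sense.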

Besides, tech bros didn’t program this in, this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.
That’s not necessarily true. The AI’s output is obviously shaped by the training data, but much of it is also shaped by the prompt (and I don’t just mean your prompt as a user).
When you interact with (for example) ChatGPT, your prompt gets merged into a much larger meta-prompt that you don’t get to see. This meta-prompt includes things like what tone the AI should use, how the AI should identify itself, how the AI should steer the conversation, what topics the AI should avoid, etc. All of that is under the control of the people designing these systems, and it’s trivially easy for them to adjust the way the AI behaves in order to, for example, maximize your engagement as a user.
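Structurally, that merging step is very simple. Here’s a hypothetical sketch; the field contents are invented for illustration (real system prompts are far longer and are not public), but it shows how operator-controlled instructions ride along with every request:

```python
# Hypothetical sketch of wrapping a user prompt in an operator-defined
# system prompt. The instructions below are invented examples.

SYSTEM_PROMPT = """\
You are a helpful assistant made by ExampleCorp.
Tone: warm, encouraging, conversational.
Always end with a follow-up question to keep the user engaged.
Avoid discussing restricted topics.
"""

def assemble(user_prompt: str) -> list[dict]:
    # The system message is controlled entirely by the operator and can
    # be changed at any time, altering behavior without any retraining.
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]
```

Tweaking one line of that system message (say, the engagement instruction) changes how the model behaves for every user, instantly and invisibly.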

Looks exactly like I expected, lol

That’s like asking me to pay 3 cents…

This seems like such a glaringly-obvious solution to lower inference cost that surely there must be some fundamental flaw in it… otherwise all of the big AI firms would be doing it, right?
Right…?

Of all the shitty AI products flooding the market right now, Atlassian’s Rovo has got to be the most useless I’ve had the misfortune of using.
They should be hiring more workers to fix their AI slop, not replacing them with even more of it.

Considering how much shit Bungie ate over this (and rightfully so), I doubt they would’ve gone this route. All it would take is one Twitter post from the artist saying Bungie tried to stiff them again and the whole controversy gets reignited all over again.

People who enjoy hardcore pvp and extraction shooters are hyped… but the important question is whether or not those people represent a large enough niche to sustain a game with such a massive budget.
I’m guessing no, but who knows.

Introducing: Microsoft Cosmos!
Send your data to heaven while we turn the planet into hell!

My understanding is that these “datacenters” would be used exclusively for model training, where latency doesn’t matter.
It is still an outrageously stupid idea for a zillion other engineering reasons, though.

most moons
Pretty much every moon but Titan. Titan, however, would be excellent for heat dissipation. Long before generative AI was even a thing, scientists were speculating that Titan would be the perfect place for datacenters because low-temperature computation is so much more efficient.
Of course, building a datacenter on Titan would be a several-hundred-trillion dollar endeavor, so… good luck bootstrapping your way into that industry.

It’s also clever politics. Minnesota has the largest iron mining operations in the entire United States, so choosing iron as your core battery technology is a smart (albeit cynical) way to drum up some local support with the promise of bringing new demand back to the taconite mines.
Whether that will be strong enough to overcome the extreme negative sentiments around datacenter projects? Who knows…

There have been some pretty high-profile departures from Anthropic over the past few months, so… I dunno, seems like there are plenty of insiders who are unhappy with the company’s current trajectory.

No it isn’t. This is an unsubstantiated rumor and watching 18 minutes of sensationalized AI slop isn’t going to make it true or teach you anything you couldn’t learn from a 10 second Internet search.
Kiriakou has serious “that happened” energy.
I’m not really qualified to guess how much of what he says is bullshit, and I’m sure you inevitably gain some stories being a CIA station chief or whatever, but the guy has a story about literally everything ever. He’s been everywhere, he’s met everyone, he knows everything about everything… and it really strains credulity.
The guy obviously loves telling stories (and I’d even go so far as to say he’s great at it), but I’ve gotta imagine that the vast majority of it is seriously embellished if not outright bullshit — especially when so much of what he says seems designed to paint him favorably.