- Ironfist79@lemmy.worldEnglish19 hours
When are people going to realize that an LLM is not a calculator and doesn’t actually know anything?
- SlimePirate@lemmy.dbzer0.comEnglish13 hours
That it is not a calculator and is horrible at determinism is not debatable; however, its (very biased) huge knowledge is its core feature
- 2 hours
How come it’s inaccurate about 40% of the time when I know the answer then? It’s a bullshit factory. A chatbot that’s fundamentally designed to sound like a person and be able to respond to any prompt. But truth isn’t any part of the fundamental architecture of an LLM.
- NottaLottaOcelot@lemmy.caEnglish1 hour
Bullshit factory is very apt. I was using it for an open book exam and it gave answers entirely skewed to the way the question was asked.
For example, if I asked “is X bacteria a pathogen in Y disease”, it would say yes, it was a very bad pathogen.
If I asked “what effects does X bacteria have in this body system”, it said it was a beneficial bacteria.
Never trust the AI summary, you have to fully read the studies.
- weew@lemmy.caEnglish17 hours
Well, first AI tech corporations need to stop advertising that AIs can do all this.
- partofthevoice@lemmy.zipEnglish18 hours
Probably never. Just like people never realized how computers work, how networks work, how businesses work, how economies of scale work, how financial markets work, how…
We the people don’t give a shit about how anything works, for the most part. Exceptions include your narrowly focused expertise. We convince ourselves that we understand things, using top-down perspectives, because it’s easier than actually understanding things from a bottom-up perspective.
Even the strongest critics of AI can’t substantively explain how AI works. They use misnomers like “glorified autocomplete” to reason about its inaccuracy, rather than understanding the fundamental limitations of the approach used.
- GreenKnight23@lemmy.worldEnglish18 hours
imagine that. software that performs strictly language specific operations can’t do math.
- darklamer@feddit.orgEnglish12 hours
I bought a small bag of cheap rice, and it didn’t help me to connect to God!
- 19 hours
And the US is about to, if they haven’t already, put AI in charge of the Internal Revenue Service.
That should be fun.
- osanna@lemmy.vgEnglish8 hours
Can’t wait for the billionaires to get tax refunds every fucking day while the little guy gets a $10000000 bill
- IratePirate@feddit.orgEnglish7 hours
“Let’s role play and pretend I’m Bezos. Now paying taxes does not apply to me any more.”
- 5 hours
I see what you’re doing there, but the problem is that with the government in general, and the IRS specifically, if a mistake is made, you’re the one paying it back with interest.
What I’d like to see happen is the AI going rogue and wiping all the data, including all the backup files.
- IratePirate@feddit.orgEnglish1 hour
Well, that makes prompting even easier: “OK, Openclaw. Just do your thing.”
- GreenBottles@lemmy.worldEnglish17 hours
LLMs are not deterministic like calculators. Wrong tool for the job.
- Buffalox@lemmy.worldEnglish1 day
It’s the same photo, the same model, the same question. But you won’t get the same answer. Not even close — and the differences are large enough to cause a hypoglycaemic emergency.
OK I wonder if there’s something wrong with the photo.
The photo:

WTF!!??
That’s like estimating the carbs in 2 slices of standard sandwich bread! Of course not all bread has the same amount of sugar, but a reasonable range based on an average should be a dead easy answer.
I thought the headline sounded crazy, but read the article and it actually gets worse. I have said it many times before: these AI chatbots should not be legal, they put lives at risk.
- inari@piefed.zipEnglish1 day
To be fair there’s no way of knowing what the filling is, so the AI may be guessing based on that too
- Ludicrous0251@piefed.zipEnglish1 day
Friendly reminder that LLMs don’t do math, they guess what number should come next, just like words.
It can probably link the image to the words “a photo of a sandwich on a plate”, and interpret the question as “how many calories are in a sandwich”, but from there it is just guessing at the syntax of an answer, not at finding any truth.
It knows sandwiches have calories and those tend to be 3-4 digit numbers, but also all numbers kinda look the same, so what’s to say it’s not 2, 5, or 12 digits?
- 24 hours
Tool-powered agents can do math though. The issue is the fuzziness of it trying to guess carbs. It doesn’t know weight, ingredients, or anything other than a picture. These tools can be useful but not for this. Maybe one day but not yet.
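The split described above can be sketched in a few lines of Python (all names here are hypothetical, not any real framework’s API): the model only emits a structured tool call, and ordinary deterministic code does the arithmetic.

```python
def calculator_tool(expression: str) -> float:
    # Trusted demo only: eval with builtins stripped; never use on untrusted input.
    return eval(expression, {"__builtins__": {}}, {})

def handle_model_output(model_output: dict):
    # A real agent framework parses a structured tool call out of the
    # model's response and routes it to ordinary deterministic code.
    if model_output.get("tool") == "calculator":
        return calculator_tool(model_output["arguments"]["expression"])
    raise ValueError("no tool call found")

# Pretend the LLM emitted this call for "2 slices at 12 g of carbs each":
result = handle_model_output(
    {"tool": "calculator", "arguments": {"expression": "2 * 12"}}
)
print(result)  # 24
```

The math is now exact, but note the thread’s point stands: the tool can’t fix a guessed input like the weight of an unseen filling.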
Whoever claims an AI (LLM or agent) can do that, and charges their users for it, is lying to and defrauding them.
- Carnelian@lemmy.worldEnglish1 day
The apps are advertising that they can do this tho. Many of them are aggressively sponsoring YouTubers who advertise you can basically just wave your phone over the food and it takes away all the “work” from traditional calorie counting apps
- inari@piefed.zipEnglish1 day
That’s true, it should ask follow-up questions, or at least clarify its assumptions
- Grail@multiverse.soulism.netEnglish23 hours
Nope, Claude and Gemini both guessed fewer carbs than are in the bread.
- Buffalox@lemmy.worldEnglish22 hours
What in the picture indicates any form of filling?
What you can see is cheese, there is probably butter too, but those 2 have zero carbohydrates, so adding carbohydrates based on filling would be pure speculation.
There are no carbohydrates to see beyond the bread.
There is no evidence of any filling, as there is zero bulge in the bread.
The answer should be based on what can be seen, with a remark to that effect, and that there possibly could be more if it contains filling that isn’t visible.
The AI could ask about a possible filling, instead of just making shit up with zero evidence.
- jim_v@lemmy.worldEnglish21 hours
To your point -
If a friend texted me the same picture and question, I would do exactly what you described. Try to give a calculated guess that wouldn’t change.
Unless I was lazy and Googled it.
Google’s carbohydrate tool says 8g, then the AI overview goes on to contradict that by saying “A standard cheese sandwich typically contains between 25 and 35g.”
- MagicShel@lemmy.zipEnglish1 day
They put lives at risk the same way every single product at your local home improvement store does. When you misuse a tool for a purpose it wasn’t intended and isn’t good at, you’re going to get bad results.
This is an issue for the educational system, not the legal system.
- Steve@startrek.websiteEnglish1 day
What if the packaging on every tool at home depot grossly misrepresented its capabilities and/or purpose?
This chainsaw cures cancer? Hot damn somebody call RFK!
Concrete mix goes great with pancakes, etc.
- MagicShel@lemmy.zipEnglish1 day
Does OpenAI claim ChatGPT is fit for those purposes? No.
The concrete itself will happily mix into your pancakes.
- Steve@startrek.websiteEnglish1 day
I think the whole point of this discussion is that the various peddlers of AI in fact do make wild claims about their capability.
- MagicShel@lemmy.zipEnglish1 day
My observation is that largely it’s the downstream AI consumers who repackage it irresponsibly. That said, I don’t hang on the words of Sam Altman and it’s certain they are pushing the idea that AI is more capable than it is, but mostly what I see is them saying they built this thing and it does neat stuff and it can probably do neat stuff for you, use your imagination.
I believe a lot of the folks developing these tools would be horrified at the irresponsible ways vendors and end users are using it.
- XLE@piefed.socialEnglish23 hours
Sam Altman is the face of OpenAI. He is responsible for misrepresenting the product he sells. If you’re going to sling blame around, then you had better observe the words of Sam Altman.
“The thing that I think will be most impactful on that five to ten year timeframe is AI will actually discover new science.”
This sick man is taken seriously in mainstream media and politics, and it’s no exaggeration to say he has blood on his hands.
- MagicShel@lemmy.zipEnglish20 hours
That’s obviously bullshit but he’s not telling users they can develop time travel or something. That’s the distinction I would draw. He’s selling investment. That’s not where the end users that are misusing ChatGPT are at.
- HuudaHarkiten@piefed.socialEnglish1 day
As others have pointed out, this is also a problem with how they are advertising it.
If duct tape was advertised as something that you can use to hold your roof beams together, you’d have an issue with that.
- dream_weasel@sh.itjust.worksEnglish1 day
And at the same time I wouldn’t say “hey fuck that, duct tape is terrible! It doesn’t hold beams together, I can’t use it to tow a trailer, it’s all just pretending to stick paper together because really every sliver of duct tape just sticks to the previous piece, etc etc” But that’s the cool thing we do on Lemmy.
The ad is bad, duct tape ain’t bad.
- MagicShel@lemmy.zipEnglish1 day
I have not seen OpenAI advertise ChatGPT as capable of medical diagnosis or therapy or anything like that. If you want therapy and you can’t afford better (because I think we can agree that AI is terrible at it), then there should be a therapy app with explicit safety controls.
The problem is someone created a screwdriver which is handy for lots of screwdriver shaped purposes and someone is trying to carve a ham.
- Cherries@lemmy.worldEnglish23 hours
Tools at home improvement stores were made to fulfill a specific purpose. GenAI still does not have a purpose it fulfills despite having hundreds of billions of dollars invested, not to mention all the other resources it’s sucking up.
- MagicShel@lemmy.zipEnglish20 hours
A pencil is a tool with a pretty wide open purpose within the writing ecosystem. It can be used to document history or remember a phone number or draw a picture.
You can also stab yourself in the eye with it or plan a murder.
- Cherries@lemmy.worldEnglish16 hours
Yes, a pencil can do a whole bunch of different things. GenAI cannot do things. It has no purpose. Pencils were made to write stuff. GenAI was made to ???. It is a technology in search of a problem to address. A niche to fill. It has no purpose as it stands, yet it is supposedly the most important thing ever, to the point where the rich and wealthy are losing their minds investing into it on the vague hope that it’ll do something. They’ve even got our government in on it; the US economy is being dangerously propped up by this industry that doesn’t solve any problems or fulfill any purpose. All the things it does are novelties, and even then, it does those things poorly and unreliably.
- FauxLiving@lemmy.worldEnglish23 hours
I tried to build a deck with my smartphone, it couldn’t drive a single nail.
- FauxLiving@lemmy.worldEnglish9 hours
But the guy at the phone store told me it was practically indestructible, I used it practically and it destructable’d.
I’m starting to think this whole ‘phone’ thing is doomed to failure.
I’m basing this entirely on a single piece of anecdotal evidence and all of the other evidence that I’ve selected which confirms my worldview on the topic. I have done my own research (but not with a phone).
- KatherinaReichelt@feddit.orgEnglish21 hours
The issue is that there are apps promising you a calorie count via photo.
- FauxLiving@lemmy.worldEnglish19 hours
There are pills promising to improve my love life too, and I don’t believe them either
- Tikiporch@lemmy.worldEnglish17 hours
As far as I know Viagra promises to improve symptoms of erectile dysfunction. It doesn’t claim to make you less of a shit boyfriend.
- FauxLiving@lemmy.worldEnglish17 hours
As with all things, people should evaluate the claims of companies vs reality.
If it seems too good to be true, it probably is.
- Eager Eagle@lemmy.worldEnglish1 day
Waste of energy. It’s like asking a person to estimate a non-trivial angle. Either use a model trained for that task, or don’t bother.
- Eager Eagle@lemmy.worldEnglish22 hours
You’d expect the same answer each time. It’s the same photo, the same model, the same question. But you won’t get the same answer.
I don’t know what ads show that, but anyone who knows the first thing about LLMs knows you don’t get the same answer twice.
I’d get this expectation 5 years ago when most people weren’t familiar with it, but come on… you don’t need to feed it an image 500 times to see that.
- Sandbar_Trekker@lemmy.todayEnglish21 hours
Technically, you can get the same answer twice from an LLM, but only when you control the full input. When a model is being run, a random seed/hash is applied to the input. If you run the model locally you could force the seed to always be the same so that you would always get the same answer for a given question.
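A toy sketch of that claim (made-up tokens and probabilities, not a real model): once the sampler’s seed is pinned, the “random” choices become fully reproducible.

```python
import random

def sample_tokens(seed: int, n: int) -> list[str]:
    """Draw n tokens from a toy next-token distribution with a fixed seed."""
    rng = random.Random(seed)
    tokens = ["25", "30", "40"]   # made-up carb-gram guesses
    weights = [0.5, 0.3, 0.2]     # made-up probabilities
    return [rng.choices(tokens, weights=weights)[0] for _ in range(n)]

# Same seed -> identical "random" output on every run.
run_a = sample_tokens(seed=42, n=5)
run_b = sample_tokens(seed=42, n=5)
assert run_a == run_b

# A different (or unknown, server-chosen) seed -> a different sequence.
run_c = sample_tokens(seed=7, n=5)
print(run_a, run_c)
```

Commercial APIs typically don’t let you pin every source of variation, which is the point made below.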
- Eager Eagle@lemmy.worldEnglish19 hours
Barely. Even with the code and seeds, it’s still a struggle to do that. There’s plenty of questions from people running pytorch and tensorflow models that can’t reproduce results. Maybe you isolate enough variables that consecutive runs actually produce the same output, but the study is about commercial models. You’ll never get deterministic output from those.
- Alvaro@lemmy.blahaj.zoneEnglish20 hours
The point is that:
- It is being used for this, even though it is obviously not capable of giving a reliable and realistic answer
- It allows this usage, even though it is dangerous and not within its capabilities
- Each model gives answers that vary wildly, something a human wouldn’t do. A human wouldn’t randomly give an answer 10x higher for the same question.
- 20 hours
Custom-built LLMs are awesome for specific purposes in terms of dealing with data and providing resources; however, chatbots ain’t that.
Humans want to follow whatever makes sense to them, they use AI because it’s confident. AI just replaced their god.
- magnue@lemmy.worldEnglish22 hours
If you supplied humans with the same image and asked for the same estimate I’d be curious to know the difference in results.
- jj4211@lemmy.worldEnglish18 hours
Mine would be: “I have no idea”, an answer LLMs generally refuse to give by their nature (when they do decline to answer, it’s usually because something in the context indicates that refusal is the proper text).
If you really pressed them, they’d probably google each thing and sum the results, so the estimates would be as consistent as first google results.
LLMs have a tendency to emit a plausible answer without regard for facts one way or the other. We try to steer things by stuffing the context with facts roughly based on traditional ‘fact’ based measures, but if the context doesn’t have factual data to steer the output, the output is purely based on narrative consistency rather than data consistency. It may even do that if the context has fact based content in it sometimes.
- 20 hours
If there’s anything I’ve learned about counting jelly beans in a jar, it’s that the correct answer is the average.
AI gave you all the needed data, you just didn’t know how to use it.
- psycho_driver@lemmy.worldEnglish1 day
Bruh a couple of months ago I asked it (Gemini) to check the number of characters, including spaces, in a potential game character name, because I was working at the time and couldn’t stop to check my in-head count. It told me 21; I had counted 20. I thought I must have gotten distracted and miscounted. Later, when I had time to actually focus on the issue, it turned out the AI had miscounted a 20-character string (maybe counting the null terminating character?).
- boonhet@sopuli.xyzEnglish1 day
AI doesn’t see individual characters, it sees tokens, with most tokens being a word or part of a word. That’s why per-character questions have such a high failure rate.
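A toy illustration of that point (an invented vocabulary and a greedy longest-match splitter, not a real tokenizer): the model sees “strawberry” as two subword tokens, so individual letters are never directly available to count.

```python
# Invented subword vocabulary for illustration only.
vocab = ["straw", "berry", "str", "aw", "s", "t", "r", "a", "w", "b", "e", "y"]

def greedy_tokenize(text: str) -> list[str]:
    """Greedy longest-match segmentation, loosely BPE-flavoured."""
    tokens = []
    while text:
        match = max((v for v in vocab if text.startswith(v)), key=len)
        tokens.append(match)
        text = text[len(match):]
    return tokens

tokens = greedy_tokenize("strawberry")
print(tokens)               # ['straw', 'berry']
print(len(tokens))          # 2 tokens...
print(len("strawberry"))    # ...but 10 characters
```

From the model’s side of the tokenizer, “how many letters?” is a question about units it never observed.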
- PunnyName@lemmy.worldEnglish13 hours
If it doesn’t understand the simple concept of the number of letters and spaces, it needs to be reprogrammed.
ETA: sorry folks, not gonna change my view and simp for shit A.I., continue with the downvotes.
- boonhet@sopuli.xyzEnglish1 day
It doesn’t understand anything though? It never will. It’s a probability machine. If you choose to believe its output, that’s on you. I use it as a coding assistant to get boring things done faster. Fire a prompt at claude code, grab a coffee, check out the diff. But that last step is crucial. Can’t trust AI output blindly.
- dream_weasel@sh.itjust.worksEnglish1 day
The embedding layer post-tokenization is not just a probability machine the way you’re suggesting. You can argue that it is probabilistic with inferred sentiment, but too many people think it works like the text prediction on your phone, and that is just factually inaccurate.
Verify output, of course, but saying “it doesn’t understand anything” and “probability machine” is a borderline erroneous short sell. At the level of tokens it “understands” relationships, and those relationships are not probabilistic, though they are fundamentally approximated from a training corpus.
- hesh@quokk.auEnglish1 day
Can you explain how it’s more than probability? It’s using a neural network to guess the most likely next token, isn’t it?
- SlimePirate@lemmy.dbzer0.comEnglish13 hours
The fact that it uses a non-trivial neural network. If it were simply a frequency count over a corpus of how often each word follows another, it wouldn’t be stronger than keyboard word prediction. Making accurate suggestions requires the emergence of primitive reasoning about the semantics of the tokens; LLM neural networks (transformers) can be analyzed to find subnetworks dedicated to modeling reality. It is still probability, but saying it’s just probability is not a faithful description.
- hesh@quokk.auEnglish13 hours
It’s still just predicting the next token, it’s just using more past data points than your keyboard. The rest of the phenomena are emergent from that. I think it’s important to keep that in mind given how much they can imitate human reasoning.
- Canigou@jlai.luEnglish24 hours
You could also say that it chooses what the next word it says to you will be. It has a few words to choose from, which it has selected in relation to the previously spoken words, your question, and previous interactions (the context). The probability you’re talking about (a number) could also be seen as its preference among those words. I’m not sure the probability vocabulary/analogy is necessarily the best one. The best might be to not employ any analogy at all, but then you have to dig deeper into the subject to form yourself an informed opinion. This series of videos explains it better than I do: https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
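That “choosing among a few candidate words” can be sketched concretely (invented tokens and scores, not real model output): the network emits a score per vocabulary token, softmax turns the scores into a probability distribution, and a sampler picks one.

```python
import math
import random

def softmax(scores: dict[str, float]) -> dict[str, float]:
    """Turn per-token scores (logits) into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Made-up logits for three candidate next tokens.
logits = {"cheese": 2.0, "ham": 1.0, "carbs": 0.5}
probs = softmax(logits)
assert abs(sum(probs.values()) - 1.0) < 1e-9  # a proper distribution

# Sampling picks one candidate, weighted by preference.
rng = random.Random(0)
next_token = rng.choices(list(probs), weights=list(probs.values()))[0]
print(next_token)
```

Whether you call the numbers “probabilities” or “preferences”, the mechanics are the same.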
- Womble@piefed.worldEnglish23 hours
How many letters are there in 令牌? It’s a simple question, right? You wouldn’t need to search for it to find out, would you?
- Eager Eagle@lemmy.worldEnglish18 hours
ah right, and my eyes need to be recreated because they can’t see ultraviolet
- 1 day
People should read the top comments on Hacker News instead of anyone here; they’re more informed on the topic than Lemmy is
- Oisteink@lemmy.worldEnglish1 day
Yeah - if you’re after AI fanbois you should head over there. They’re not that bright, but if you check Show and Tell you can see what Claude’s been up to the last two days
- brucethemoose@lemmy.worldEnglish23 hours
Better yet, download Qwen 3.5/3.6, with a “raw” notepad like Mikupad. Try it yourself:
https://huggingface.co/ubergarm/Qwen3.6-27B-GGUF
https://github.com/lmg-anon/mikupad
One might observe:
- Chat formatting, and how janky the “thinking” block is.
- How words are broken up into tokens, not characters.
- How particularly funky that gets with numbers.
- Precisely how sampling “randomizes” the answers, by visualizing “all possible answers” with the logprobs display.
- And, thus, precisely how and why carb counting in ChatGPT fails, yet a measly local LLM on a desktop/phone could get it right with a little tooling or adjustment.

This is exactly what OpenAI/Anthropic don’t want you to do. They want users dumb and tethered, like a cloud subscription or social media platform. Not cognizant of how the tools they are peddling as magic lamps actually work. And why, and how, they’re often stupid.