texture@lemmy.worldEnglish1 hour
i think reading the title of this post hurt my brain. like what are we doing here? making medical claims using sensationalist and meaningless language… seems unhelpful
- LuminousLuddite@lemmy.worldEnglish1 hour
I’ve smoked a lot of weed, drank, taken various substances, but AI does something distinct to my brain that doesn’t feel right. Like the “use it or lose it” principle being applied to general cognition. Even after a short time of use I’m now relying less on search engines.
- iglou@programming.devEnglish3 hours
Those are important studies, but nothing shocking. The conclusion to draw from them is the same one we’ve drawn from all technologies that have improved our lives to some degree: without them, we tend either to be incompetent, because losing access to them isn’t worth planning for, or to be demotivated, because why would we deprive ourselves of technology that makes our work so much less exhausting?
It doesn’t necessarily remove our capacity to think (and the article falsely generalises to critical thinking), it shifts what kind of thinking we do.
If AI is as good as or better than I am at writing code, then I’ll switch my brain to doing only the orchestration and architecture rather than the code-writing part. And yes, if you remove AI, the switch will cause me to perform worse than I did before AI, but not permanently, only until I get used to it again.
If an AI is better than a doctor at finding cancer indicators, then the doctor will focus their mind on finding solutions only, rather than splitting it between detection and solution.
This is not new, not bad, and I’ll even go to the extent of saying it’s a great use of AI: humans evolved for specialization. The less varied the tasks are, the better we are at the subset we specialize in. That’s what has driven our rapid technological and societal advances in the past millennia.
But, AI has many issues and many detrimental applications as well, so don’t see this comment as a full endorsement of AI.
- 2 hours
I’m reminded of “Johnny Mnemonic,” the 1995 movie. From IMDB:
In 2021, society is driven by a virtual Internet, which has created a degenerative effect called “nerve attenuation syndrome” or NAS. Megacorporations control much of the world, intensifying the class hostility already created by NAS.
It seems to me that AI is the real “virtual internet” here.
Although rather unrelated to the topic at hand, I also liked this quote:… [Henry] Rollins, who is uninterested in science fiction, joined the cast because he liked the film’s focus on an upcoming disadvantaged underclass.
- SunshineJogger@feddit.orgEnglish5 hours
I really do see the issue with AI. I see people around me outsource thinking to it too much. Like literally. As if they are happy that a machine can make their life choices for them. This is extremely worrying. It’s about how people use it.
- ExLisper@lemmy.curiana.netEnglish3 hours
I always thought recommendation algorithms would do it, but the progress stopped at some point. We had apps recommending videos, music, feeds, news and so on for a long time, but it never evolved into recommended careers or recommended places to live. Not in the sense where some algorithm that tracks you all the time tells you what your next important life choice should be. I don’t know anyone who’s using AI like that yet, but I can see it happening in the future.
lechekaflan@lemmy.worldEnglish5 hours
I don’t want it; all it does is negate years of learned experience and the ability to organically formulate ideas.

- Rioting Pacifist@lemmy.worldEnglish13 hours
The test seems kind of dogshit; you could make the same argument against any tool. Calculators or even abacuses would have the same effect.
I’m made to use it for work and it does speed up some tasks. However, for some stuff it ends up being like the experiment where not doing the work the first time means the whole process takes longer in the end.
- FauxLiving@lemmy.worldEnglish10 hours
To add to this, we already know that context switching causes a loss in performance.
A person who’s thinking about how to solve a problem one way and then has to suddenly think about solving it in another way will perform worse.
The Neuroscience Behind the Pain
Context switching isn’t just annoying — it’s neurologically expensive. When you shift from debugging a race condition to answering emails, your brain doesn’t simply “change tabs.” It goes through a complex process:
- Memory consolidation: Storing your current mental model
- Attention disengagement: Breaking focus from the current task
- Cognitive reloading: Building a new mental model for the next task
- Re-engagement: Getting back into flow
Research from Carnegie Mellon shows that even brief interruptions can increase task completion time by up to 23%. For complex cognitive work like programming, this cost multiplies dramatically.
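Purely as a back-of-the-envelope sketch of how that cost adds up (the compounding assumption and all numbers here are mine, not taken from the CMU research):

```python
# Hypothetical numbers only: assume each interruption adds ~23% to completion time
# and that repeated interruptions compound, to show how a one-hour task balloons.
base_minutes = 60
overhead = 0.23  # assumed per-interruption overhead

for interruptions in range(4):
    total = base_minutes * (1 + overhead) ** interruptions
    print(f"{interruptions} interruptions: ~{total:.0f} minutes")
```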
Here’s another article from CMU discussing the same thing: https://www.sei.cmu.edu/blog/addressing-the-detrimental-effects-of-context-switching-with-devops/
What this study shows is that a person who is faced with an unexpected context switch performs worse on a task than a user who has spent the last 12 questions performing the task the same way.
This exact problem would happen if you replaced AI with a calculator, or made a person swap from using paper to doing mental math. The problem here is context switching, not AI.
The way to ensure that the problem is AI and not the context switch would be to continue the test and see if the first group reverts back to baseline after 12 questions. 12 questions is how long the control group had to become acclimated to the task before their last context swap at the start of the test.
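As a rough sketch of that point (all numbers are made up for illustration, not taken from the paper), the two explanations predict different trajectories on the questions after the AI is removed: a transient context-switch penalty decays back toward baseline, while a genuine loss of ability does not.

```python
# Hypothetical illustration only: made-up numbers, not results from the study.
# If the post-switch drop is just a context-switch penalty, solve rates should
# recover over the following questions; if ability was genuinely lost, they shouldn't.

BASELINE = 0.70               # assumed solve rate before the unexpected switch
SWITCH_PENALTY = 0.20         # assumed drop right after the AI is pulled
RECOVERY_PER_QUESTION = 0.5   # assumed fraction of the remaining penalty recovered each question


def transient_switch(question: int) -> float:
    """Solve rate if the drop is only a context-switch effect that fades."""
    return BASELINE - SWITCH_PENALTY * (1 - RECOVERY_PER_QUESTION) ** question


def lasting_loss(question: int) -> float:
    """Solve rate if the drop reflects a lasting loss of ability."""
    return BASELINE - SWITCH_PENALTY


if __name__ == "__main__":
    print("question  transient  lasting")
    for q in range(8):
        print(f"{q:8d}  {transient_switch(q):9.2f}  {lasting_loss(q):9.2f}")
```

Running the remaining questions out past the switch would show which of those two curves the AI group actually follows.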
Also, of note, this is a paper on arXiv; it has not been published, so it has not gone through a peer-review process, which would certainly catch the failure to set up a proper control group.
- chunes@lemmy.worldEnglish8 hours
Context switching isn’t just X — it’s Y.
Are we sure this was written by a human?
- FauxLiving@lemmy.worldEnglish8 hours
AI being released was basically an apocalypse for people who use em dashes.
Here’s the most-cited, human-created (2001) paper on the topic of context-switching performance loss: https://www.apa.org/pubs/journals/releases/xhp274763.pdf
- chunes@lemmy.worldEnglish8 hours
Thanks.
And I’m all for em dashes. After all, I started using them after reading enough books. It’s just that particular construct that strikes me as especially LLM-y.
- luciferofastora@feddit.orgEnglish7 hours
AI was trained on human writing. If it produces a certain tone, then that’s probably a result of the material that was favoured in training it. That construction was common in human writing before it became common in AI too.
What makes it stick out is when AI uses it in contexts where humans normally wouldn’t, but this kind of assertion is common in scientific papers and articles. It would make sense to train an AI on scientific writing, since that tone sounds authoritative and like you have some idea of what you’re talking about.
So I don’t think this is an LLM-construct; it’s an instance of the original style that LLMs copy.
- FauxLiving@lemmy.worldEnglish8 hours
I’d like to see a study on that; I see it mentioned so much it’s almost achieved meme status.
It could very well be a Baader–(👀)Meinhof phenomenon.
- Buffalox@lemmy.worldEnglish15 hours
According to a new study by researchers at Carnegie Mellon, MIT, Oxford, and UCLA,
Study should be solid I guess.
participants who were given AI assistants (in this case, a chatbot powered by OpenAI’s GPT-5 model) would have the aid pulled from them without warning during the test
Wow, interesting idea. 👍
where they had their assistant removed, the AI group saw the solve rate fall off a cliff. They had a solve rate about 20% lower
And even worse IMO:
They also had nearly double the skip rate, meaning they simply chose not to solve the questions.
This seems very alarming IMO, because this indicates they lost some of their ability to think constructively on how to actually solve a problem!
I know there have always been some who cried wolf every time new technology has become available, like calculators and computers. Even dictionaries were once claimed to be harmful!
But maybe this time there is a real danger, because AI takes away a lot of the need to actually think creatively and constructively. And that’s an ability we must not lose.
The last paragraph of the article is even worse, as it mentions 2 studies that show these effects are also long term!!!
- scarabic@lemmy.worldEnglish12 hours
Changing the terms of the test in the middle of it, without warning, is disruptive. I’m not convinced it “fried their brains.” The same would happen with a calculator suddenly removed during the middle of an exam.
- Buffalox@lemmy.worldEnglish6 hours
You are disregarding the last paragraph, where 2 other studies showed similar results, without having the “disruptive” factor.
- howrar@lemmy.caEnglish10 hours
Or any task change really. You tell me that I’m here for a writing task, then halfway through it becomes a math test? There’s no way I’m doing anywhere near as well as if they told me what was happening ahead of time.
- 15 hours
When driving somewhere, if I set out with the mindset that I can’t rely on gps I can usually wing it and figure out where to go when a hiccup occurs. If I don’t, then I have a lot of trouble getting into that path finding mode when needed… similar to this maybe?
- 14 hours
Yeah exactly, because although it’s possible to do more with technology sometimes, you’re actively de-skilling at the same time. When we invented the written word yes it legitimately made everything better, but also we lost oral traditions and the capacity to memorize large volumes of storytelling, songs, and histories. Now you can burn the books, and the knowledge dies. It’s a real risk.
Everything is like this. Every technology has a cost beyond its price, and making a decision of whether to use it or not will always be in error unless you think about what you’re losing in the process.
Chloé 🥕@lemmy.blahaj.zoneEnglish11 hours
there have always been some who cried wolf every time new technology has become available, like calculators and computers
and they kinda have a point, really. people got worse at memorizing stuff by heart when writing was invented, and people got worse at mental calculus when calculators were invented.
but they allowed many things that were simply not possible. a calculation that takes me 2 minutes in wolfram alpha could take hours if not days to solve by hand!
ai, meanwhile, or at least the ai we’re sold, does not offer significant advantages (at best it saves a few minutes), at the cost of making us worse at thinking, a skill that is absolutely essential to have… and of course, that’s the point. the tech oligarchs want us to be dependent on their extremely expensive products.
- Buffalox@lemmy.worldEnglish6 hours
and people got worse at mental calculus when calculators were invented.
That may be true, but that is a much more limited problem than losing some of our ability for critical thinking and problem solving in general.
ai, meanwhile, or at least the ai we’re sold, does not offer significant advantages
This is very true; AI is even shown to hallucinate and give incorrect and harmful solutions. A calculator does NOT do that.
So not only is AI a danger to our critical thinking, we actually need critical thinking MORE when using AI.
- badgermurphy@lemmy.worldEnglish10 hours
But they’re using the hell out of it, too, right? They’re exactly the types of people that love and use it the most: managers and owners.
- FauxLiving@lemmy.worldEnglish9 hours
This paper shows that a person who has performed a task 12 times performs better than a person who has never performed the same task.
They also do not properly control for performance loss due to context switching which is a well known contributor to performance loss.
It’s a paper on arXiv, it hasn’t been peer reviewed or published.
- Buffalox@lemmy.worldEnglish6 hours
No, the test is not training; that’s a weird thing to claim. The switch is what is tested, and you disregard that 2 other tests have shown similar results: an actual decline in critical thinking and problem solving.
- iglou@programming.devEnglish3 hours
Not training, no, but warm-up. And no, it is not about critical thinking; it’s about reading comprehension and calculations.
- Womble@piefed.worldEnglish4 hours
The switch is what is being tested, yes, but it is not clear whether what is being measured in the switch is “AI fried their brains” rather than “context switching in the middle of a test”. If they wanted to make that point, it would be useful to run the maths test with a calculator group who also got it yanked halfway through. That way we would be able to see what proportion of the effect is over-dependence on AI removing critical thinking and what proportion is having your methods disrupted mid-task.
- Buffalox@lemmy.worldEnglish3 hours
The calculator test might be good for comparison, and I’m pretty sure that, given the same amount of time, a group allowed to use a calculator for half the test would solidly outperform a group not using calculators at all.
I was in 5th grade in 1975, and we were the first class to get calculators in 5th grade. Which became the standard for many years after.
I have never heard complaints about students being less capable of understanding basic math problems because they use calculators, although the idea of using calculators in schools was heavily debated. It’s similar to people not getting worse at spelling from using a dictionary.
- Buffalox@lemmy.worldEnglish6 hours
A calculator is not the same problem, it doesn’t reduce our general ability to think critically.
derAbsender@piefed.socialEnglish2 hours
As the study defines critical thinking: yes, it does.
The study claims, essentially, that relying on a machine that solves a problem for you lessens your critical thinking skills.
Their definition of “critical thinking” is just, at least to me, way off.
Just because I can comprehend stuff I read, for example, does not show critical thinking. It just shows I can repeat shit I read adequately.
It’s just bad science.
- iglou@programming.devEnglish3 hours
The studies referenced are about calculations, reading comprehension and work performance, not critical thinking.
The article is, like many, a bad one. It generalises what it should not.
- Buffalox@lemmy.worldEnglish3 hours
The sessions lasted about 10 minutes, suggesting that those who decided to rely heavily on AI to solve problems for them abandoned their critical thinking abilities in a matter of minutes.
- iglou@programming.devEnglish2 hours
As I said, this is a bad article. The experiment does not suggest that at all. The study does not mention critical thinking.
I’d say, however, that the proliferation of shitty news websites has caused readers to lose their critical thinking.
- Buffalox@lemmy.worldEnglish1 hour
In academia it is normal not to directly spell out things that are obvious to a person with academic knowledge of the subject; research papers are meant for scholars, who are supposed to be able to read and understand the consequences for themselves.
So you can’t use it as an argument that it isn’t spelled out, if it can be easily derived by a person who understands the subject.
Research papers do not spell out every possible consequence of their findings.
- iglou@programming.devEnglish1 hour
It isn’t spelled out because it is not a logical conclusion at all. Nothing in this test requires critical thinking to achieve.
Why are you defending an obviously terribly written article?
- NeilNuggetstrong@lemmy.worldEnglish14 hours
If I use AI for my personal coding projects I’ve found that if the task is unsolvable by the ai model, I’m not able to sit down and do it myself until the next day. It’s like I’ve got to reset my brain.
If I want to save time and use AI for a specific part of the code, it probably saves me 5 hours of work. But then I spend five hours yelling at the ai to try to get it to actually solve it. Next day I’ll just fix it myself in 2 hours.
- Buffalox@lemmy.worldEnglish5 hours
That sounds a lot like what the studies show. And IMO that sounds like a serious problem.
- NeilNuggetstrong@lemmy.worldEnglish1 hour
I’m really just tricking my brain to think I’m being more productive lmao.
But then again, some of the stuff I’m working on is in principle quite easy to do, but is also outside of my skill set; for these cases I benefit from using AI.
IMO the challenge is knowing how and when to use AI. Small companies using AI correctly can probably benefit massively from it. Although it’s risky
- redsand@infosec.pubEnglish15 hours
Also, and this is the big one for me: it’s 10% wrong on average. That’s really bad. 1 in 10 Google Gemini answers is bullshit.
- Buffalox@lemmy.worldEnglish5 hours
And the ability to think critically to detect it declines. So it’s doubly harmful!
ryannathans@aussie.zoneEnglish12 hours
1 in 5 human answers is probably bullshit so it sounds like you’re onto a winner
- GreenKnight23@lemmy.worldEnglish9 hours
Can confirm. Reddit is filled with abject brain-dead dumbasses. Since most content is AI-generated, it makes sense.
- Venator@lemmy.nzEnglish8 hours
Not sure about the method: to me it shows people are more willing to give up when the computer appears to be broken.
I think the control group needs to experience a similar computer service failure, but maybe just swap out the AI for a basic calculator tool, or a PDF with formulas, or a cheat sheet or something 😅
- nonentity@sh.itjust.worksEnglish14 hours
I’ll never understand how an explosively imprecise, statistically luke-warm, grey goo extrusion sphincter could ever be mistaken for intelligence.
AI doesn’t exist, it’s a vacuous marketing term.
LLMs have vanishingly narrow legitimate, defensible use cases, but their output is intrinsically inaccurate, and should never be used without supervision from relevant domain experts.
texture@lemmy.worldEnglish1 hour
theres plenty of legitimate use cases. your comment just sounds like youre repeating what everyone else says about it.
- nonentity@sh.itjust.worksEnglish25 minutes
There could be many use cases, and some of them may even be legitimate, but I’m yet to observe any which have broad applicability, and they should only ever be wielded by a responsible, expert adult.
texture@lemmy.worldEnglish7 minutes
its a multitool, its applicability neednt be broad imo. anyway, im glad we arent speaking in absolutes.
- 14 hours
@nonentity @technology I think the problem with your framing is it implies that humans are not also “explosively imprecise, statistically luke-warm, grey goo extrusion sphincter(s)”. We weren’t exactly living in a perfect world prior to AI, and all AI does is regurgitate what humans created. AI isn’t really changing the character of anything - and in several domains I’d argue it’s improving the baseline (coding for one).
- nonentity@sh.itjust.worksEnglish13 hours
It’s telling that you assumed the description applied exclusively to LLMs.
No one who persists in labelling LLMs as ‘AI’ should be treated as an authority on the subject, and I’d argue it’s one of the greatest indicators of how little they comprehend the situation.
astronaut_sloth@mander.xyzEnglish13 hours
THANK YOU! I studied AI in school, and it always bothers me when people think that LLMs are the only facet of AI. Between 2022-2024, I had a knee jerk reaction of explaining that AI is more than LLMs and that LLMs are really a small subset of the entire universe of AI, yadda yadda yadda. Now I’ve given up and roll my eyes as someone tries to tell me about the cool new Claude skill they built.
What’s funnier is people think I hate LLMs. That couldn’t be further from the truth; they are a fantastically interesting and innovative technology! “Attention is All You Need” is a great paper, and super impactful. I just hate that people are outsourcing their thinking to a chatbot and neglect the rest of my field of study.
- howrar@lemmy.caEnglish10 hours
LLMs are still a facet of AI though. It sounds like they’re saying it shouldn’t be categorized as AI at all.
Em Adespoton@lemmy.caEnglish13 hours
I’m confused. Aren’t you the one who referred to LLMs in a thread that was conflating LLMs with AI? The parent’s comment seems to be right on point.
It’s kind of like how we’ve lost the war on hacking.
Large language models fall under the current definition of artificial intelligence just as much as Cyc or Cog did in their day, or various expert systems and machine learning models, diffusion models, etc.
Pretty much any non-deterministic inference engine can be classified as an AI, including LLMs.
- ElReatonVaquer0@lemmy.worldEnglish16 hours
I think that if you use AI responsibly (as an assisting tool) like mentioned in the article, then you are pretty much on the safe side.
But when you have AI do everything for you, then there’s a big problem.
Personally I try not to use it at all, not a fan of all the problems that come with it.
- Buffalox@lemmy.worldEnglish15 hours
You clearly didn’t read the article, and you are dead wrong.
Except you are right that if you let the AI do everything, it’s worse, and you lose a lot of ability for critical thinking.
The last paragraph of the article even shows that other studies have found that using AI assistance over time has the long-term effect of lowering problem-solving abilities!!
Personally I try not to use it at all, not a fan of all the problems that come with it.
This is the way. 😀
- rynn@piefed.socialEnglish14 hours
My experience with using ai, and at this point I’d say this experience is extensive / daily, is that it gets things wrong A LOT and with a high degree of confidence in its position.
In the early stages of using it I felt my problem-solving desire start to slip, but after pushing through that and realizing I should not trust this any more than I’d trust human judgment, it’s more like having another person to work with. That’s helpful, but if I let my own thinking guard down at all I put myself at a lot of risk.
I hope most people that do use AI regularly eventually push through to this stage and we all will be way better off in the long run for the assistance.
I fear most people won’t push through. This study points to the obstacle; I’d love to see what can be done to help people overcome it. There’s probably room for AI usage training that we need to start considering.
- tidderuuf@lemmy.worldEnglish14 hours
Researchers: “Is the AI in the room with us now?”
Test Subjects: “No Asshole! You just took it from me while I was in the middle of using it!”
- 14 hours
I’ve been using ChatGPT since it came out and yet my brain isn’t nearly fried enough to fall for clickbait headlines this obvious.