cross-posted from: https://lemmy.bestiver.se/post/1052063
- Jakeroxs@sh.itjust.worksEnglish4 hours
On the math test, I’d be curious if a calculator was provided instead then taken away half-way through. Would we see the same drop off?
Gsus4@mander.xyzEnglish3 hours
There are two or three problems with this analogy:
- cost,
- calculators don’t do the whole thinking for you,
- centralization on large companies enshittifying everything once everyone is dependent.
- Jakeroxs@sh.itjust.worksEnglish3 hours
The only one actually relevant to “cognitive decline” is the second. Do we know what the math questions actually were?
Also I wasn’t making an analogy, I’m seriously asking if we’d see the same drop-off, as I think the root of the problem is more that humans will generally choose to use less effort rather than more, so any tool that reduces effort might see the same amount of drop-off in end result when taken away.
Going with analogies though, people having cars means fewer people learning about/using horses/carriages/bikes, and as cars become increasingly more complex and less repairable, fewer people put in the effort to learn how to fix them if something goes wrong.
Ultimately though I have to wonder what does that really matter in the long term? Did people stop doing/understanding math once calculators became common?
One of the points the paper makes is that people who used it to help rather than to solve the problem for them performed better once it was taken away, which aligns with my own observations on how people use certain tools vs seek to understand how those tools work and deepen their understanding. However again, is it really a problem that a majority of Americans (for example) don’t know how to change the oil in their car? Does that actually indicate they’re less intelligent or unable to rationalize/logically process information? Or do they generally put the effort that would be put into learning how their car works into other efforts?
Unfortunately I think many are simply too burned out with day to day life to care about much learning at all, which is a much larger issue IMO.
Though I will say I do think AI/LLMs will only reinforce that behavior. I’m not sure if that’ll be all bad, or really all that different from the existing status quo prior to their spread.
Edit: We could talk about the economic impact it will have, but the root cause is the same as all the other wealth inequality, and I can easily foresee how LLMs could be much more equitable rather than used as vehicles for enrichment.
merdaverse@lemmy.zipEnglish4 hours
This seems very obvious, but it’s good to have more studies supporting it.
aceshigh@lemmy.worldEnglish1 day
Joke’s on you. I never had critical thinking skills to begin with. Now I’m able to do more, even though I have no idea what I’m doing. Let’s see how this plays out.
TheFeatureCreature@lemmy.caEnglish1 day
I’m glad there are official studies being done to document this, but it’s also very obvious if you’ve spent any time around people in the past few years. The degradation of critical thinking and research skills is highly tangible and disturbing. Any country that isn’t addressing this significant intelligence gap is going to have an entire generation (or more) of brain-drained, unskilled citizens who can’t meaningfully contribute to the national workforce. For western countries that have already surrendered most of their manufacturing and innovation overseas, this will be even more devastating.
- 4 hours
You mean easily controllable and manipulatable peasants who can’t think far enough ahead to revolt?
Perfect.
- supernight52@lemmy.worldEnglish1 day
Wow, who would have thought that using a tool that actively takes away critical thinking from any reply it generates, and relying on that instead of engaging your brain, would cause negative effects on mental health and dexterity?
- wuffah@lemmy.worldEnglish1 day
The propensity of the average person to simply believe what they’re told is staggering, and I know because I do it all the time. It takes effort to seek out information, vet it, consider it, and then make a determination on the next information to seek or the next course of action. Deterministic, trustworthy information and abstracted concepts are extremely valuable to the brain, an organ that consumes roughly 20% of our body’s energy.
Before, computers performed tasks that were impossible for the human mind. Machine learning has been automating tasks impossible for humans such as computer vision or large dataset processing, but chatbots are the first technology that has really enabled automating human thought. In this new sense, directly offloading this cognitive work to a computer is literally letting it think for us.
The more reliant on this mode of thinking we become, the easier it is to transfer cognitively expensive work to a device that externalizes that energy cost. However, the trade-offs that are emerging are:
- Internal electric brain energy is traded for relatively inefficient external electricity production to feed circuits.
- The words generated by LLMs must still be verified and combined into coherent, dependable ideas and actions.
- The drive and skill required to develop good ideas that have value degrade without constant practice.
In the end, it takes only slightly less work to check and mentally process the output of an LLM chatbot than to perform the same thinking yourself, which defeats its purpose. If you skip that step of contextualizing it as possibly representing corporate interest and diluting meaning while offering a juicy cognitive shortcut, you’re becoming willingly complicit in your own digital brainwashing. This effect is also emergent and automatic; it doesn’t even have to be of nefarious purpose. It seems to be a procedural consequence of this mode of thinking.
What I really fear, and what is also emerging, is that eventually AI agents will become so advanced and trusted that their end-to-end capabilities will make mistakes and ulterior motives impossible to spot, and that they will become completely above the capability and desire for human scrutiny.
These digital brains we trained on all of human knowledge are now in the process of training us.
No1@aussie.zoneEnglish1 day
The propensity of the average person to simply believe what they’re told is staggering,
Goddammit, now I don’t know if I should believe you!
- wuffah@lemmy.worldEnglish4 hours
In that spirit, I’m here to tell you that I just made this up. I have no formal training in AI or LLMs, though I do know a little about computers and writing. I mostly wrote this in 30 minutes while making coffee so I could trade it for the little dopamine numbers in my doom square.
Avid Amoeba@lemmy.caEnglish1 day
People who used AI tools for hints and clarification had a much easier time once the chatbot was removed when compared to those who used the bot to essentially prompt the answers.
Probably important for people who want to get some of the benefits of AI without paying the heavier costs. This reminds me of how I used Wolfram Alpha to understand solving integrals in multivariate calculus. I paid for a subscription that allowed viewing the steps it took to reach a solution. That helped me understand how the different strategies get applied in integration.
- dream_weasel@sh.itjust.worksEnglish1 day
There is a data point missing here.
Do the same study and give some participants an LLM, some no LLM, and some a type-A subject matter expert for reference. It may also matter whether this person is a friend, coworker, or random passerby, but I would be willing to bet money that the same effect is present to a lesser (but still statistically significant) degree.
Maybe a future study can be further refined to build some scaffolding for more effective teaching/learning “on the job” or in general.
- Kowowow@lemmy.caEnglish1 day
I would be interested in how something like an internet-and-local-file librarian, or a conversational text search engine (does that make sense?), would compare to standard AI systems.