Not the parent, but LLMs don't solve anything; they let you do more work with less effort in some spaces, just as the horse-drawn plough didn't solve any problem that couldn't be solved by people tilling the earth by hand.
As an example, my partner is an academic, and the first step in working on a project is often a literature search of existing publications. This can be a long process, and even more so if you are moving outside your typical field into something adjacent (you have to learn what exactly you are looking for). I tried setting up a locally hosted, LLM-powered research tool: you ask it a question and it goes away, searches arXiv for relevant papers, refines its search query based on the abstracts it gets back, and iterates. At the end you get summaries of what it thinks is the current SotA for the asked question, along with a list of links to papers it thinks are relevant.
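For the curious, here is a rough sketch of one way such a loop could be wired up, assuming a local model served behind an OpenAI-compatible endpoint; the URL, model name, and prompts below are illustrative placeholders, not the actual setup:

```python
# Rough sketch of an iterative arXiv literature-search loop driven by a
# locally hosted LLM. The endpoint, model name, and prompts are assumptions
# for illustration only.
import requests
import xml.etree.ElementTree as ET

LLM_URL = "http://localhost:8000/v1/chat/completions"  # assumed local OpenAI-compatible server
MODEL = "local-model"                                   # placeholder model name
ATOM = "{http://www.w3.org/2005/Atom}"                  # Atom namespace used by the arXiv API

def ask_llm(prompt: str) -> str:
    """Send a single-turn prompt to the local model and return its reply."""
    resp = requests.post(LLM_URL, json={
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    })
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def search_arxiv(query: str, max_results: int = 10) -> list[dict]:
    """Query the public arXiv API and return title/abstract/link for each hit."""
    resp = requests.get("http://export.arxiv.org/api/query", params={
        "search_query": f"all:{query}",
        "max_results": max_results,
    })
    resp.raise_for_status()
    root = ET.fromstring(resp.text)
    return [{
        "title": entry.findtext(f"{ATOM}title", "").strip(),
        "abstract": entry.findtext(f"{ATOM}summary", "").strip(),
        "link": entry.findtext(f"{ATOM}id", "").strip(),
    } for entry in root.findall(f"{ATOM}entry")]

def literature_search(question: str, rounds: int = 3) -> str:
    """Iteratively refine an arXiv query from abstracts, then summarise the findings."""
    query = ask_llm("Turn this research question into a short arXiv keyword query "
                    f"(keywords only, no explanation): {question}")
    seen: dict[str, dict] = {}
    for _ in range(rounds):
        for paper in search_arxiv(query):
            seen[paper["link"]] = paper
        abstracts = "\n\n".join(p["abstract"] for p in list(seen.values())[:10])
        query = ask_llm("Given these abstracts, suggest a better arXiv keyword query "
                        f"for the question '{question}'. Reply with the query only.\n\n{abstracts}")
    titles = "\n".join(f"- {p['title']} ({p['link']})" for p in seen.values())
    return ask_llm(f"Summarise the current state of the art for: {question}\n\n"
                   f"Based on these papers:\n{titles}")

if __name__ == "__main__":
    print(literature_search("What are current approaches to low-rank adaptation of LLMs?"))
```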
It's not perfect, as you'd expect, but it turns a minute spent typing out a well-thought-out question into an hours-long head start on the research surrounding that question (and does it all without sending any data to OpenAI et al.). Getting you over the initial hump of not knowing exactly where to start is where I see a lot of the value of LLMs.
I'll concede that this seems useful for saving time in finding your starting point.
However.
Is speed, as a goal in itself, a worthwhile thing, or something that capitalist processes push us endlessly toward? Why do we need to be faster?
In prioritizing speed over slow, tedious personal research, aren't we putting ourselves in a position where we might overlook truly relevant research simply because it doesn't "fit" the "well thought out question"? I've often found research that isn't entirely in the wheelhouse of what I'm looking at but is actually deeply relevant to it. By using the method you propose, there's a good chance I never surface that research, because a glorified keyword search decided what counts as "relevant" instead of me fumbling around in the dark and finding a "Eureka!" moment of clarity in something initially seemingly unrelated.
Fuck being more productive!
AI is capitalism maximizing productivity and minimizing labour costs.
AI isn't targeting tedious labour; the people building it are going after art, music, and the creative process. They want to take the human out of the equation of pumping out more content to monetize.