previous lemmy acct: @[email protected] see also: @[email protected]

  • 3 Posts
  • 17 Comments
Joined 4 months ago
Cake day: June 13th, 2025

  • This argument strikes me as a tautology. “If we don’t care if it’s different, then it doesn’t matter to us”.

    But that ship has sailed. We do care.

    We care because the use of AI says something about our view of ourselves as human beings. We care because these systems represent a new serfdom in so many ways. We care because AI is flooding our information environment with slop and enabling fascism.

    And I don’t believe it’s possible for us to go back to a state of not-caring about whether or not something is AI-generated. Like it or not, ideas and symbols matter.



  • I mean, it can’t really do ‘every random idea’ though, right? Any output is limited to the way the system was trained to approximate certain stylistic and aesthetic features of imagery. For example, the banner image here follows a stereotypically AI-type texture, lighting, etc. This shows us that the system has at least as much control as the user.

    In other words: it is incredibly easy to spot AI-generated imagery, so if the output is obviously AI, then can we really say that the AI generated a “stock image”, or did it generate something different in kind?






  • The products of artisanal labour and factory labour may well be equivalent in terms of the end product’s use value, but they are not equivalent as far as the worker is concerned: the loss of autonomy and the loss of opportunity for thought, problem-solving, learning, and growing are part of the problem with capitalist social relations.

    I’m trying to say that AI has this social relation baked in, because its entire purpose is to have the user cede autonomy to the system.


  • I appreciate you describing the LTV distinctions between the thinkers, thank you, sincerely!

    I think the problem I have with AI - and it sounds like you agree at least partially - is that it positions human creative work, and human labour in general, as only a means to an end, rather than also as an end in itself.

    (I think this holds true even with something like a video game texture, which I would argue is indeed part of a greater whole of creative expression and should not be so readily discounted.)

    This makes AI something more along the lines of what Ursula Franklin called a ‘prescriptive technology’, as opposed to a ‘holistic technology’.

    In other words, the way a technology defines how we work implies a kind of political relation: if humans are equivalent to machines, then what is the appropriate way to treat workers?

    Is it impossible that there are technologies that are capitalist through and through?





  • The use of AI images without critique communicates to people that these things are normal and fine and inevitable and non-harmful. This isn’t about staging a ‘consumer boycott’ in terms of harming profits as much as it’s about not normalizing this kind of stuff culturally. The less acceptable this stuff is, the more likely people will be willing to push back against big tech more generally. It’s part of the same movement.

    And it’s additive, not zero-sum. No one is saying “don’t bother unionizing” because they’re too busy pushing back on the use of AI images in an online community. In fact, I’d reckon the broad societal pushback makes it more likely that people will be inspired to unionize!


  • Three problems with this:

    1. If computation means “anything that happens in the universe”, then the term ‘computation’ is redundant and meaningless.
    2. We do not know or understand all of the physical laws of the universe, or if those laws indeed hold universally.
    3. Our consciousness does not operate at the level of atomic physics; see Daniel Dennett’s ‘compatibilism’ defense of free will vs Robert Sapolsky’s determinism. If we’re vulgar materialists, then it follows that there is no free will, and thus no reason to advocate for societal change.