• 2 hours

    Fun story from this week: we had a chore on the frontend to bump to a new version of the UI framework. Fairly simple task, so off it went to a junior developer. Within a couple of hours there was a merge request ready to go. OK, a fairly normal amount of time to change the version, at least do a sniff test, and find nothing changed, so I go in expecting a few version bumps, maybe one or two tweaks… and I see the junior dev was proposing over 1,000 lines of added code… WTF…

    I crack it open and there was just a firehose of CSS rules, all marked ‘!important’. Looking at one example, it repeated the same selector with the exact same bunch of rules 5 times in a row. It was like it had found every possible derived CSS class-and-tag combination and defined ‘!important’ rules for most everything about it.

    So I find out that the junior dev asked the LLM to do the rebase, and at first it did what he expected: just changed the version and went. He tried it, and due to a framework change one element was misaligned by a little bit. So he gave that feedback to the LLM and tried again… and it failed, and he tried again and it failed, and after 5 rounds it finally got the element aligned, so he hit ‘merge request’. For fun I opened up his proposed change, and so much of it was dodgy CSS-wise because it screwed with so much stuff, but the junior dev had only concerned himself with the page as it opened.

    So I said screw it, I’ll do it myself, and added the single rule that was needed to adapt to the framework change, making it overall about a 5-line change including versioning and such.
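    To make the contrast concrete, here’s a sketch (the selectors and values are made up, not from the actual diff): the LLM-style pile of repeated ‘!important’ rules versus the single targeted override the framework change actually needed.

```css
/* LLM-style output (hypothetical selectors): the same declarations,
   force-applied with !important, repeated for every derived selector. */
.toolbar .btn-primary { margin-top: 4px !important; padding: 6px !important; }
.toolbar .btn-primary { margin-top: 4px !important; padding: 6px !important; }
.toolbar a.btn-primary:hover { margin-top: 4px !important; padding: 6px !important; }
/* …hundreds more lines in the same vein… */

/* The minimal fix: one rule that adapts the one misaligned element
   to the framework’s new default spacing, no !important needed. */
.toolbar .btn-primary { margin-top: 4px; }
```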

    Depressingly, I suspect an executive would consider me far less productive because I only did 5 lines of change and the junior dev would have done thousands…

  • We inadvertently let two juniors vibe code a project. They used their own Cursor subscriptions and did not tell anyone. They are no longer with us. Now there is this project with no documentation; they removed the LLM’s comments and then ran a linter. I checked the git history. Nobody can make sense of the code base. We can’t even empirically show whether it’s cheaper to fix it or to rewrite from scratch, so we just stare at it every day. Nobody can add any features. When you debug, you end up in abstraction hell, working through the buggiest nonsense code.

    • 42 minutes

      They are no longer with us.

      Hey, I’m annoyed by slop coding work as much as the next guy, but murder seems a bit much as a reaction…

  • Yeah, the big thing is that management has no sense of how little coding you actually do in a software engineering role. You spend so much more time understanding requirements, understanding how you can resolve roadblocks within your organization, and understanding what the hell the previously written code does.

    In particular, the last part is something that will most definitely take longer for vibecoded programs.
    The code is often needlessly complex, because:

    • folks throw in additional features with no restraint,
    • the AI will gladly generate a second implementation of stuff you already solved in the codebase, and
    • AI-generated code tends to just be noisy, because it takes rigorous logical reasoning to find the minimal solution.

    But you also just don’t have human beings who made all the detail decisions and can tell you why they’re important. In vibecoded code, all of these detail decisions are accidental and only ‘proven’ insofar as the accidental state the code happens to be in doesn’t explode in reality. If you need to tweak anything about it, you’re completely blind as to what’s actually important and what’s just in there because the AI figured it was the most likely thing to autocomplete.

  • 11 hours

    Good article. Any company doing any of those examples deserves to die.

    The companies that will pull ahead in the next 24 months are not the ones that adopt fastest. They are the ones whose judgment systems are mature enough that adoption does not break them.

    Yeah, judgement doesn’t seem to be a high priority for the AI-addled mind.

    • 3 hours

      There’s code over 20 years old still in use that I had written using that approach.

    • 3 hours

      Who cares if you hard coded all of the if statements, shit works.

  • 16 hours

    Oh no, I’m terrified to lose my “learning velocity.”

    holy corporate word salad

    • As long as you have a Golden Parachute in your contract!

      Wait… Why do none of us have Golden Parachutes…?

      • 16 hours

        There’s a Blackberry docu-drama streaming on Netflix now - (Jay Baruchel - Hiccup from How to train your Dragon / Dave from 2010 Sorcerer’s Apprentice - has a leading role, it fits him well…) Real life tales of golden parachutes, compressed decision making, consequences…

  • 17 hours

    At first I thought vibe coding was just coding stuff for fun, using whatever comes to your mind. Then I learned that it’s mostly just letting AI code for you and copy-pasting the code.

    Now I wonder if there are some cases of real vibe coding like my first assumption.

    • 15 hours

      Not copy-paste. Let the AI do it for you directly… else how would we get these entertaining stories of idiots letting AI delete their production database?

    • 16 hours

      The danger here is that many people think software is all about having code that seems to work when you try it. Those people have never been able to get past “Hello, World” in X for Dummies, so they don’t realize all the practical realities of software distribution, which are much more nuanced and complicated than just writing the code. They get their hands on some working code and wheeeee!!! Ship it!!!

      A while back I compared LLMs to lightsabers - and pointed out how many amputees are found in the Galaxy far far away that has lightsabers.

        • 10 hours

          Produce correct results even when encountering “edge cases.”

          Not crash, even when encountering “edge cases.”

          Work correctly in all deployment environments.

          Work correctly after scope creep multiplies the feature set by 3x, 10x, 30x… yeah, successful projects experience that kind of expansion.

          Work correctly after the operating environments shift under your feet - can the code be updated to work with the next version of Android? iOS? Windows? Linux? After “security updates” take away the infrastructure you were depending on for correct functioning?

          Will it scale to 100 users? 10,000? 10,000,000?

          What happens when “threat actors” actively target the system?

          What happens when your methods / development processes aren’t compliant with new government regulations?

          Are you ready for IP lawsuits, whether deserved or not?

    • 17 hours

      Yeah, vibe coding is such a fun term, too bad it’s used for this purpose.

    • There are a lot of folks saying that Bluesky’s recent outages were due to the vast amounts of vibe coding in their systems. It was days of not working.

      • 16 hours

        As an “I wonder” exercise… say that BlueSky wasn’t vibe coded, but instead was done “the old fashioned way” with 20x as many people taking 10x as long to produce the same product. Over that 10x as long timeframe, would they have experienced less or more total downtime with traditionally coded software? Not theoretically perfect software, the actual stuff that “professionals” building social media sites write?

        Also, if they had staffed up with the same number of people as were traditionally required, could those people respond to and correct issues more slowly or more quickly than a traditional team?

        LLMs are powerful tools, which have evolved fairly dramatically in the area of software development over the last 12 months. I suspect that as people learn to use them properly, safely, and appropriately, they are going to prove out to be quite useful. In the meantime, there will be mistakes made…

        • There was an article a bit ago explaining that most AI companies are running at a 95% loss: you know, spending 100, receiving 5. All that debt means the price of AI is about 20 times lower than it needs to be just to break even. The software teams that came to rely on AI to save costs will soon enough find themselves on the hook for this mountain of debt. Enshittification is real. Enshittification is coming. AI will not stay cheap, convenient, and free of advertising.

          • People forget this. Yes, it has real use in very narrow contexts; yes, it may get slightly better; but right now they are Juul getting the kids addicted to vapes, and it is drawing ungodly amounts of power to do so.

            • The meat you eat has more of an impact on the environment than the electricity AI uses.

              • 2 hours

                Two things can both be wrong. And removing something that’s been in place for millennia and deeply embedded in the culture is likely to be more challenging than eliminating something that is still more planned than actually materialized.

              • Whataboutism doesn’t work here.

                Plus the basis of generalized use is founded upon willful mass copytheft.

            • 10 hours

              Three things here:

              • right now they’re basically discovering which uses are real and which are frivolous, non-value-add uses.

              • at least as used for software development, it didn’t get slightly better over the past 12 months; it got dramatically, night-and-day better.

              • simultaneously, some pretty significant advances have been made at reducing the cost of delivering value. I think this is hitting hardest in basic chatbot areas, getting the simple answers cheaper. In programming it’s less clear-cut: yeah, it’s getting the simple answers cheaper there too, but it’s also succeeding at much more complex answers that just weren’t possible even a few months ago. Those answers cost more, but they’re also worth more… it will be interesting to see where this all shakes out.

              Yeah, they are running loss-leader stuff, and yeah, it’s going to go up in price when they figure out what it’s worth to people, because things aren’t priced at what they cost to make or deliver; things are priced at what people are willing to pay. The players with the deep pockets are jockeying for control of future markets; they’re investing their existing wealth in future power. Let’s hope the winners are slightly less ghoulish than our Oil barons.

              • Let’s hope the winners are slightly less ghoulish than our Oil barons.

                What a foolish hope!

                $200,000,000,000 debt.
                Who will pay it?
                You talk like gravity doesn’t exist!

                You’re wrong if you think it won’t be heavily reliant AI customers, like software companies that spend five years removing code-writing skills from their workforce and building up technical debt in their codebase, because no one has to understand it during those five years, and there are a lot of subtle, hard-to-spot bugs that got through code review because humans simply don’t make those kinds of errors and no one ever had to spot one in their life before Claude came along.

                Did you think that enshittification wouldn’t affect the product? Yesterday’s computers and cars were easy to disassemble to replace parts. Now it’s much, much harder, and doing so will commonly void your warranty. Today’s AI-generated code is easy to tinker with, and you can do what you like with your end product. Why would it stay that way? Why wouldn’t they engineer it to make that harder? It’s not difficult to make code confusing by changing variable names. I could fuck up your codebase for humans by simply swapping names like productSKU and customerID, let alone by writing obfuscated code for any purpose whatsoever, with whatever variable names I like.
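                A tiny sketch of the renaming attack described above (the order-total function and its names are hypothetical, purely for illustration): both versions compute exactly the same thing, but in the second the parameter names are swapped, so the names mislead the human reader while the machine doesn’t care.

```python
def order_total(prices, discount):
    # honest names: sum the item prices, then apply the discount
    return sum(prices) * (1 - discount)

def order_total_hostile(discount, prices):
    # identical logic, but the names are deliberately swapped:
    # 'discount' is really the price list here, 'prices' the discount
    return sum(discount) * (1 - prices)
```

                Same inputs, same outputs, zero behavioral diff; only the human reviewer is worse off.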

                Some software companies are outsourcing their talent to AI behemoths with mountains of debt to recoup. Guess who’s going to pay the debt! And what’s the point of such a company in the long run? Why are you speedrunning paying to replace yourself?

                There will be an AI crash and a “consolidation”, meaning a switch to monopolies or near-monopolies. Some companies are shedding institutional knowledge and programming skill like it was waste water. Once dependence sets in, value extraction will follow it like disease follows an unvaccinated population.

                There is already $200bn in debt, and it is growing rapidly. The shareholders aren’t going to be paying it. The AI customers are.

                • 44 minutes

                  Why are you speedrunning paying to replace yourself?

                  I’m old enough to qualify for the next buyout offer, if there is one. Speedrunning “the new tools” is what I have done for 35 years, it has always served me well in the past. Maybe this one backfires? Not my personal problem if it does - disposal of the elderly from the workforce is a tale as old as time, that’s what retirement accounts are for.

                • 48 minutes

                  Today’s ai generated code is easy to tinker with and you can do what you like with your end product. Why would it stay that way? Why wouldn’t they engineer it to make that harder? It’s not difficult to make code confusing by changing variable names.

                  Code obfuscators have existed for decades; they are rarely used in practice. Ten years back, when a vendor provided me a driver in obfuscated code, I explained to them: “If we don’t get real source code, we won’t be buying your products.” The non-obfuscated code was in my inbox the next morning.

                  A year ago, the AI engines couldn’t successfully code anything too complicated. It had to be assembled from “human-sized chunks” or it just wouldn’t work.

                  I notice in a code review I’m doing just this morning, the AI is now managing chunk sizes that are annoyingly large, and doing it successfully. At this point, I’m having to apply push-back pressure, not to keep the code working, but to keep it manageable. The same kind of pressure has been necessary for management of most human developers / development teams for decades.

                  Enshittification wins most successfully in “free tier” products; people who care enough to pay for something do get influence over the products provided - sometimes. Your counterexample of automobiles is a good one, along with appliances, etc. The industrial makers of these products have enshittified our legislatures with rules, regulations, and laws that protect their industries and enable them to keep colluding to push overpriced, under-durable garbage at us with no real alternatives. We need to push back on government for that; that’s the level where the impediments to customer influence exist.

                • 2 hours

                  The shareholders aren’t going to be paying it. The ai customers are.

                  It’s much more likely that the banks and their insurers will be left holding the bag, and they’ll then be bailed out by the taxpayers.

                  There’s already negative ROI at even the current loss-leader prices.

        • 13 hours

          Over that 10x as long timeframe, would they have experienced less or more total downtime with traditionally coded software?

          I have homework for you: if you ask a professional chef how to keep the cheese on a pizza, are they going to tell you to use glue? Once you figure out the answer to that, you should be able to answer your original question.

    • 17 hours

      The term you’re looking for is “cowboy coding.”

    • What’s it called when I get real high and code something that I can’t even figure out the next day if it was genius or insanity?

      • 2 hours

        There are some comments in the code I’ve written saying “before you attempt to modify this module, it’d be wise to get a barge pole.”

    • 16 hours

      I don’t think copy/paste is involved. With vibe coding, the AI agent typically has access to your repo/files directly!

      • 6 hours

        That’s even more dumb and dangerous imo.

  • No one had the cultural standing to say this looks great, and we are not putting it into production.

    Can someone in your organization look at a slick prototype and say “no” without career risk? If the answer is no, vibe coding becomes a one-way ratchet.

    This is definitely the feeling at my company. “How fast is AI letting you ship” is the only question management & executive are asking.

    the resulting ambiguity will be filled by whoever moves fastest, which is rarely whoever should be deciding.

    There’s capitalism!

    • This is definitely the feeling at my company. “How fast is AI letting you ship” is the only question management & executive are asking.

      Slightly faster, but with a way higher upkeep cost. And it might delete your company’s database or cause a customer-data leak.

    • 16 hours

      Can someone in your organization look at a slick prototype and say “no” without career risk? If the answer is no

      You have toxic leadership, and we have just handed them a mini Gatling gun with which to shoot everyone’s feet off.