…and I still don’t get it. I paid for a month of Pro to try it out, and it is consistently and confidently producing subtly broken junk. I had tried this before and gave up because it didn’t work well; I thought that maybe this time it would be far enough along to be useful.

The task was relatively simple, and it involved doing some 3d math. The solutions it generated were almost write every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs, or regress with old bugs.

I spent nearly the whole day yesterday going back and forth with it, and felt like I was in a mental fog. It wasn’t until I had a full night’s sleep and reviewed the chat log this morning that I realized how much I was going in circles. I tried prompting a bit more today, but stopped when it kept doing the same crap.

The worst part of this is that, throughout all of it, Claude was confidently responding. When I said there was a bug, it would “fix” the bug, and provide a confident explanation of what was wrong… Except it was clearly bullshit because it didn’t work.

I still want to keep an open mind. Is anyone having success with these tools? Is there a special way to prompt it? Would I get better results during certain hours of the day?

For reference, I used Opus 4.6 Extended.

  • The .NET runtime team, after 10 months of using and measuring where LLMs (including the latest Claude models) shine, reported a mind-boggling success rate peaking at 75% (sic!) for changes of 1-50 LOC in size - and that’s for an agentic model (you give it a prompt, context, etc., and it can run the codebase, compile it, add tests, reason, and repeat from any step).

    Except it was clearly bullshit because it didn’t work.

    Welcome to the LLMs where everything is hallucinated and correctness doesn’t matter.

    Is anyone having success with these tools

    Define success.

    Is there a special way to prompt it?

    It gets better the more you use it; you will learn what works for you and what does not. Right now the hot shit is “autonomous agent swarms”, peddled by the token sellers as a way to produce massive, correct features. Do not touch that for now.

    What helps with Claude / llms 101:

    • when it tells you something about an API, a tool, or whatever, tell it the tool version and order it to give you the documentation page proving the solution is possible.

    • when it oneshots a working solution you will get a dopamine hit. Be aware of that, as it can be addictive or make you trust it. Do not trust it, it sucks long term.

    • it will always default to a below-average solution. Know where your hotspots are, and be extra judgy there.

    • it will get lazy and lie to you, especially with tests

    • it will not propose code refactors on its own.

    • despite the token peddlers’ claims, even if you’re using the 1M-token context window model, the shit degrades when the context window is over 20k-30k tokens - so switch to a fresh context often for better outcomes, but that means you will be burning more money, which obviously benefits the token peddlers.

    • do not trust the hype - so far, any and all tall claims of a breakthrough from the token peddlers have been lies (e.g. vibing a working OS that can run Doom, vibing a 96% Next.js replacement in a week, vibing a browser, a compiler, a browser jailbreak via Mythos)

    Would I get better results during certain hours of the day?

    AFAIK, the USA timezones see worse performance.

  • Have you been coding professionally long?

    I find that the only time I can use these chatbots is for a task where I already know what I’m doing, so that I can read the output and fix the issues. It’s like having junior devs on your team and being a code reviewer more than a full-time coder. They get a lot of things wrong, but there’s so much usable output that you can save a ton of time over doing everything yourself from scratch.

    Just like with junior devs, you can send them back to fix what you know is wrong and give them feedback to improve various things you would prefer done another way. There’s no emotions though, so you can just be blunt and concise with feedback.

    • Nice comparison, but the bugs created by junior software developers are usually much easier to find than the bugs created by LLMs.

  • I’ve recently started using Claude after being very unimpressed with Copilot, but my current theory is that you should treat everything it writes like a PoC you found in some obscure GitHub repo. Use it as a reference that you can generate quickly, take out only the good parts, and adapt them to your context. It’s harder to delete code than to write it, so it’s easier to just take what you like from its output rather than try to clean up all the nonsense it generates.

    How accurate that is, and how useful it is compared to just writing it from scratch, varies a lot based on your particular project. You still need a good understanding of the output it produces, otherwise those subtle bugs and low-quality code add up. The times it’s the most useful are when it writes a lot of stuff that I would’ve written myself, but I can point to some detail and say “that’s wrong, I’ll write it myself”.

  • producing subtly broken junk

    The difference between you and people that say it’s amazing is that you are capable of discerning this reality.

    • What I don’t get, though, is how the vibe code bros can’t discern this reality.

      How can they sit there and not see that their vibe-coded app just doesn’t do what they wanted it to do? Eventually, you’ve got to try actually running the app, right? And how do you keep drinking the AI kool-aid when you find out that the app doesn’t work?

      • Vibe code bros aren’t real programmers. They’re business people, not computer people. Even if they have a CS degree, they only got that because they think it’ll get them more money. They lack passion and they don’t care about understanding anything. They probably don’t even care about what they’re generating beyond its potential to be used in a grift.

        I graduated college not that long ago, and my CS classes had quite a few former business majors. They switched because they thought it would be more lucrative for them, but since they only care about money they didn’t bother to actually learn the material, especially since they could just vibe code through everything.

      • They’re the same people who copied code from Stack Overflow, whom you had to walk through actually fixing every PR. The difference is that the C-suite types are backing them this time.

      • I do apps that work; I do patches that are production quality. Half the CS world does… I do full-stack AI debugging of ESP32 projects.

        It’s a powerful tool; you just need to learn its strong and weak points, just like any other tool you use.

  • Claude is very good when driven by someone who knows how to do the job and demands perfection. However, if you give it a prompt and take the first result, it is normally junk; make it iterate and things get better.

  • The solutions it generated were almost write every time, but critically broken in subtle ways, and any attempt to fix the problems would either introduce new bugs, or regress with old bugs.

    This is part of your problem right there. The correct word there, instead of “write”, is “right”. You emotionally typed out a message, got your dopamine hit, then felt satisfied, and now the rest of us have to figure out what you meant to say.

    Which is fine, but now imagine that not only you can do this, but AI can do it as well…

    If you want something done correctly, then you must do it yourself.

  • Vibe coding, in the sense of telling the model to make codebase changes, then directly using the output produced, is 100% marketing bullshit that does not scale beyond toy examples.

    Here’s the rub: Claude is extremely useful as an advanced autocomplete, if and only if you’re guiding it architecturally through every task it runs, and you vet + revise the output yourself between iterations. You cannot effectively pilot entirely from chat in a mature codebase, and you must compile robust documentation and instructions for Claude to know how to work with your codebase.

    You also must aggressively manage information in the context window yourself and keep it clean. You mentioned going in circles trying to get the robot to correct itself: huge mistake. Rewind to before the error, and give it better instructions to steer it away from the pitfall it fell into. Same vein, you also need to reset ASAP after pushing into the >100k token mark, because the models start melting into putty soon after (yes, even the “extended” 1M-window ones).

    I’m someone who has massively benefited from using modern LLMs in my work, but I’m also a massive hater at the same time: They’re just a tool, not magic, and have to be used with great care and attention to get reasonable results. You absolutely cannot delegate your thinking to them, because it will bite you, hard and fast.

    For your use case (3D math), what I recommend is decomposing your end goal into a series of pure functions that you’ll string together. Once you have that list, that’s where Claude comes in. Have it stub those functions for you, then have it implement them one at a time, reviewing the output of every one before proceeding.
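    As a sketch of that decomposition (hypothetical Python, purely illustrative; the function names are mine, not anything from the thread):

```python
import math

# Each step of the 3D pipeline is a small pure function: same input,
# same output, no hidden state -- so each one can be reviewed and
# tested on its own before the next is implemented.

def dot(a, b):
    """Dot product of two 3-vectors."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    """Cross product of two 3-vectors (right-handed)."""
    return (
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    )

def normalize(v):
    """Scale a 3-vector to unit length; rejects the zero vector."""
    length = math.sqrt(dot(v, v))
    if length == 0.0:
        raise ValueError("cannot normalize the zero vector")
    return (v[0] / length, v[1] / length, v[2] / length)
```

    The point is the workflow: have Claude stub signatures like these, then fill in one body at a time, and vet (or unit-test) each before moving on.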

  • No, I think you do get it. That’s exactly right. Everything you described is absolutely valid.

    Maybe the only piece you’re missing is that “almost right, but critically broken in subtle ways” turns out to actually be more than good enough for many people and many purposes. You’re describing the “success” state.

    /s but also not /s because this is the unfortunate reality we live in now. We’re all going to eat slop and sooner or later we’re going to be forced to like it.

    • Or maybe we will be forced to switch off LLMs and start solving the bugs introduced by their usage using our minds.

      • As a professional software developer, I truly hope that is the case (and I plan to charge at least 10x my current rate after the AI bubble pops, when I’m looking for my next job, as I expect a massive shortage of people skilled enough to actually deal with the nightmare spaghetti AI codebases).

        Fun times ahead.

  • 1. Did you have MCP tooling set up so it can get LSP feedback? This helps a lot with code quality, as it’ll see warnings/hints/suggestions from the LSP.

    2. Unit tests. Unit tests. Unit tests. Unit tests.

    I cannot stress enough how much less stupid LLMs get when they have proper, solid unit tests to run themselves and compare expected vs actual outcomes.

    Instead of reasoning out “it should do this” they can just run the damn test and find out.

    They’ll iterate on it until it actually works, and then you can look at it and confirm whether it’s good or not.

    I use Sonnet 4.5 / 4.6 extensively and, yes, it’s prone to getting the answer almost right but wrong in the end.

    But the unit tests catch this, and it corrects.

    Example: I am working on my own game engine with MonoGame, and it’s about 95% vibe coded.

    This transform math is almost 100% vibe coded: https://github.com/SteffenBlake/Atomic.Net/blob/main/MonoGame/Atomic.Net.MonoGame/Transform/TransformRegistry.cs

    The reason it’s solid is because of this: https://github.com/SteffenBlake/Atomic.Net/blob/main/MonoGame/Atomic.Net.MonoGame.Tests/Transform/Integrations/TransformRegistryIntegrationTests.cs

    Also vibe coded, and then sanity-checked by hand to confirm the math checks out for the tests.

    And yes, it caught multiple bugs, but the agent could automatically respond to that, fix the bug, rerun the tests, and iterate until everything was solid.

    Test Driven Development is huge for making agents self police their own code.
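    The expected-vs-actual loop can be as small as this (a hypothetical Python sketch in the same spirit, not the linked C# tests):

```python
# A deliberately simple transform: scale a 3D point, then translate it.
# An LLM's "almost right" version often applies the steps in the wrong
# order; a plain assert against hand-computed values catches that
# immediately, and the agent can rerun the test after every fix.

def apply_transform(point, scale, offset):
    """Scale each component of a 3D point, then add the offset."""
    return tuple(p * scale + o for p, o in zip(point, offset))

# Expected values worked out by hand, independently of the code above.
assert apply_transform((1.0, 2.0, 3.0), 2.0, (10.0, 0.0, -1.0)) == (12.0, 4.0, 5.0)
# A translate-then-scale bug would have produced (22.0, 4.0, 4.0) here.
```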

  • I tried using Claude to convert some bash scripts to Docker Compose files, and it made several mistakes with case sensitivity and failed to properly quote path declarations that had spaces in them. If it can make such incredibly simple mistakes converting a script to a markup language, I wouldn’t dare trust it to compose anything in an actual programming language like Python or Rust or C# or Swift or whatever you’re using.
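    To illustrate the quoting pitfall (a generic, hypothetical fragment, not the poster’s actual files): a bash mount like -v "$HOME/My Data":/data has to keep its quoting when it becomes Compose YAML:

```yaml
# Hypothetical docker-compose.yml fragment.
# Compose keys are case-sensitive ("services", not "Services"),
# and host paths containing spaces must stay quoted, or the file
# parses wrong and the wrong path gets mounted.
services:
  app:
    image: alpine:3.20
    volumes:
      - "/home/user/My Data:/data"   # quoted because of the space
```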

  • Their usual (crap) defense is:

    a) you’re not paying enough, so of course it is crap

    b) you’re not prompting right, you need to use detailed, precise language…

    c) that is just anecdotal evidence, you need to do an actual study, yadda yadda.

    d) it will improve…

    (any others anyone has noticed?)

  • Opus 4.6 is a dream for me. Though I’m in the web dev area, which is quite mature and has a lot of training data. The life saver for avoiding regressions is to comprehensively test your code. This works as a kind of quality checkpoint during development.

    Secondly, give it the right tooling and context; that means, at the very least, a good ACP server (editor) and appropriate MCP servers. Search for what’s appropriate in your domain. For 3D math, I’d think it would need a visual snapshotting tool at minimum. There are probably tons of relevant ones.

    Thirdly, consistently expand on your CLAUDE.md, add and develop new skills as you go (let it write its own on your instructions). Force it to read them.

    It probably depends on a lot of factors, but disciplined usage of these approaches will go a long way. Opus’ context window is huge, which makes the approach more consistent.

  • I rarely use LLMs for generating code. Usually, by the time I’ve provided all the necessary context, I might as well have just written the code myself. I do use LLMs for doing research. As long as it’s understood that the response is only as accurate as the source material, they often do a decent job of distilling down to what I’m actually looking for.

  • This sounds on par for all the AI I have been dealing with. I find it works best if you give it a lot of rules, then treat it like a 12-year-old and expect wild mistakes for anything more complicated than a simple calculator. I work primarily with Gemini, having it build simple HTML/CSS, and it’s infuriating how many times I have told it to use &amp; instead of &.

    Now every time it does anything, it’s always telling me how it included the correct ampersand. It can’t tell me why it screwed up like 5 times prior; it just makes up some BS and apologizes profusely.

    The more rules you give it, even if it ignores them sometimes, the better.
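    For reference, the escaping rule it kept violating, shown here with Python’s standard library purely as an illustration:

```python
import html

# In HTML, a literal ampersand must be written as the entity "&amp;",
# otherwise the parser may treat it as the start of an entity reference.
raw = "Fish & Chips"
escaped = html.escape(raw)
assert escaped == "Fish &amp; Chips"
```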

    • In my view it’s about quality and not quantity of the AGENTS/CLAUDE.md

      My experience is that starting with what I don’t want and then what I do works best: «never rely on training for API documentation, use context7», «don’t use ls/find/grep for symbols, use serena».

      Not the best examples, but still.

  • I used Opus 4.6 Extended

    Stop being cheap, OP. You clearly just need to shell out multiple billions of dollars for access to mythos /s