• 2 Posts
  • 261 Comments
Joined 5 months ago
Cake day: September 9th, 2025

  • I can see some merit in this idea. On a similar note, my company has GH Copilot code reviews, so I regularly generate code locally with Copilot using the Claude Sonnet model, and then Copilot reviews my PRs (in addition to humans). A lot of the time the review feedback is on point, and I often copy and paste it back to the Copilot agent I'm running locally so it can address the feedback.

    Having two passes of AI does improve the result, though it would quickly go off the rails without senior engineers reviewing and steering the output, not to mention putting the initial architecture in place. In my experience, I can't imagine building anything of any reasonable complexity with AI that won't eventually collapse in on itself without the guidance of senior engineers. Multiple AI agents working as an ensemble won't eliminate the need for that guidance, IMO.
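    The generate/review/revise loop described above can be sketched in a few lines. This is a hypothetical outline, not Copilot's actual API: `generate_code` and `review_code` are stand-ins for whatever agent calls your setup actually makes, and the stopping condition is an assumption.

    ```python
    # Hypothetical two-pass loop: one model generates, a second reviews,
    # and the review feedback is pasted back into the generator.
    # generate_code and review_code are placeholders, not real APIs.

    def generate_code(prompt: str) -> str:
        # Stand-in for a call to a coding agent / LLM.
        return f"code for: {prompt}"

    def review_code(code: str) -> list[str]:
        # Stand-in for a second model acting as reviewer.
        return ["add input validation"] if "validation" not in code else []

    def two_pass(prompt: str, max_rounds: int = 3) -> str:
        code = generate_code(prompt)
        for _ in range(max_rounds):
            feedback = review_code(code)
            if not feedback:
                break  # reviewer has no further comments
            # Feed the review comments back to the generator, as in the
            # Copilot workflow described above.
            code = generate_code(f"{prompt}; address: {'; '.join(feedback)}")
        return code

    print(two_pass("parse a config file"))
    ```

    The `max_rounds` cap matters: without it (or a human in the loop), the generator and reviewer can ping-pong indefinitely, which is exactly where senior-engineer steering comes in.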


  • From a quick reading of the actual law, here are some of the AI uses it prohibits that will apparently “stifle innovation”:

    …use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation

    …to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics

    …the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques

    …the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage

    …the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation


  • At least in Star Trek, the robots would say things like, “I am not programmed to respond in that area.” LLMs will just make shit up, which should really be the highest priority issue to fix if people are going to be expected to use them.

    Using coding agents, it is profoundly annoying when they generate code against an imaginary API, only to tell me that I'm "absolutely right to question this" when I ask for a link to the docs. I also generally find AI search useless: DuckDuckGo, for example, does link to sources, but those sources often contain no trace of the information presented in the summary.

    Until LLMs can directly cite and link a credible source for every piece of information they present, they're just not reliable enough to depend on for anything important. Even with sources linked, they would also need to rate and disclose the credibility of each source (e.g., is the study peer reviewed and reproduced, is the sample size adequate, etc.).