Arf! I’m Tony Bark. Artist by day, programmer by night. Gamer all the way.

Foxtrot Delta TACO.

  • 250 Posts
  • 336 Comments
Joined 3 years ago
Cake day: June 4th, 2023


Gen Z’s feelings about AI range from apprehension to downright hatred. Despite the fact that more than half of Gen Z living in the U.S. uses AI regularly, according to a recently released Gallup poll, less than a fifth feel hopeful about the technology. About a third say the technology makes them angry, and nearly half say it makes them afraid.

Gallup’s own senior education researcher, Zach Hrynowski, blamed the bad vibes at least partially on the dwindling job market. The oldest Zoomers, he told Axios, are the angriest, as they are “acutely aware” of the ability of a technology to transform cultural norms without a second thought, unlike Gen Xers, who are trained to see new technologies as toys and are still “playing around with AI.”

Indeed, job prospects for recently graduated Gen Zers are abysmal; Bloomberg just reported that 43% of young graduates are “underemployed,” meaning they hold jobs that require less education than they have.

[…]

This is not just a Gen Z problem, either. In the American heartland, data centers are being proposed at a pace that local communities never anticipated and for which they were never asked permission, and they’re increasingly pushing back.

The numbers are serious. According to a report from 10a Labs’ Data Center Watch, at least $18 billion worth of data center projects have been blocked and another $46 billion delayed over the past two years owing to local opposition. At least 142 activist groups across 24 states are now actively organizing to block data center construction and expansion. A Heatmap Pro review of public records found that 25 data center projects were canceled following local pushback in 2025 alone, four times as many as in 2024, with 21 of those cancellations occurring in the second half of the year as electricity costs grew.

The concerns driving this resistance are less about existential AI risk and more about typical kitchen-table complaints; communities consistently cite higher utility bills, water consumption, noise, impacts on property values, and green space destruction as their primary objections. Water use is mentioned as a top concern in more than 40% of contested projects, according to a Heatmap Pro review of public records.





  • Thank you. Another issue that sort of overlaps with the hallucination problem is the fact that it is basically referring to a snapshot in time. Based on my past attempts, no amount of searching the web will improve results, because it has no way to account for future outcomes like actual programmers can. Meaning, it isn’t very flexible and can’t adapt to new, breaking, or quality-of-life changes.

    Programming is a hobby for me and my preferred language is C#. I work on the bleeding edge for fun and so I can benefit from .NET’s recent quality-of-life changes. Naturally, I’m Microsoft’s target customer. And yet, for the reasons stated above, these chatbots can’t work for me in the long run.








A user asked on the official Lutris GitHub two weeks ago, “is lutris slop now,” noting an increasing number of “LLM generated commits.” The Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. But it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.



Amazon’s ecommerce business has summoned a large group of engineers to a meeting on Tuesday for a “deep dive” into a spate of outages, including incidents tied to the use of AI coding tools.

The online retail giant said there had been a “trend of incidents” in recent months, characterized by a “high blast radius” and “Gen-AI assisted changes” among other factors, according to a briefing note for the meeting seen by the FT.

Under “contributing factors” the note included “novel GenAI usage for which best practices and safeguards are not yet fully established.”


Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages after they discovered these AI translations added AI “hallucinations,” or errors, to the resulting article.

The new restrictions show how Wikipedia editors continue to fight to keep the flood of generative AI across the internet from diminishing the reliability of the world’s largest repository of knowledge. The incident also reveals how even well-intentioned efforts to expand Wikipedia are prone to errors when they rely on generative AI, and how those errors are remedied by Wikipedia’s open governance model.

The issue in this case starts with an organization called the Open Knowledge Association (OKA), a non-profit organization dedicated to improving Wikipedia and other open platforms.

https://web.archive.org/web/20260307182752/https://www.404media.co/ai-translations-are-adding-hallucinations-to-wikipedia-articles/



Elon Musk’s xAI has lost its bid for a preliminary injunction that would have temporarily blocked California from enforcing a law that requires AI firms to publicly share information about their training data.

xAI had tried to argue that California’s Assembly Bill 2013 (AB 2013) forced AI firms to disclose carefully guarded trade secrets.

The law requires AI developers whose models are accessible in the state to clearly explain which dataset sources were used to train models, when the data was collected, if the collection is ongoing, and whether the datasets include any data protected by copyrights, trademarks, or patents. The disclosures would also clarify whether companies licensed or purchased training data and whether the training data included any personal information, and would help consumers assess how much synthetic data was used to train the model, which could serve as a measure of quality.


Customs and Border Protection (CBP) bought data from the online advertising ecosystem to track peoples’ precise movements over time, in a process that often involves siphoning data from ordinary apps like video games, dating services, and fitness trackers, according to an internal Department of Homeland Security (DHS) document obtained by 404 Media.

The document shows in stark terms the power, and potential risk, of online advertising data and how it can be leveraged by government agencies for surveillance purposes. The news comes after Immigration and Customs Enforcement (ICE) purchased similar tools that can monitor the movements of phones in entire neighborhoods. ICE also recently said in public procurement documents it was interested in sourcing more “Ad Tech” data for its investigations. Following 404 Media’s revelation of that ICE purchase, a group of around 70 lawmakers on Tuesday urged the DHS oversight body to conduct a new investigation into ICE’s location-data buying.

This sort of information is a “goldmine for tracking where every person is and what they read, watch, and listen to,” Johnny Ryan, director of the Irish Council for Civil Liberties (ICCL) Enforce, which has closely followed the sale of advertising data, told 404 Media in an email.

https://web.archive.org/web/20260303141140/https://www.404media.co/cbp-tapped-into-the-online-advertising-ecosystem-to-track-peoples-movements/


The opposition appeared overwhelming: Tens of thousands of emails poured into Southern California’s top air pollution authority as its board weighed a June proposal to phase out gas-powered appliances. But in reality, many of the messages that may have swayed the powerful regulatory agency to scrap the plan were generated by a platform that is powered by artificial intelligence.

Public records requests reviewed by The Times and corroborated by staff members at the South Coast Air Quality Management District confirm that more than 20,000 public comments submitted in opposition to last year’s proposal were generated by a Washington, D.C.-based company called CiviClick, which bills itself as “the first and best AI-powered grassroots advocacy platform.”

A Southern California-based public affairs consultant, Matt Klink, has taken credit for using CiviClick to wage the opposition campaign, including in a sponsored article on the website Campaigns and Elections. The campaign “left the staff of the Southern California Air Quality Management District (SCAQMD) reeling,” the article says.