The opposition appeared overwhelming: Tens of thousands of emails poured into Southern California’s top air pollution authority as its board weighed a June proposal to phase out gas-powered appliances. But in reality, many of the messages that may have swayed the powerful regulatory agency to scrap the plan were generated by a platform that is powered by artificial intelligence.

Public records requests reviewed by The Times and corroborated by staff members at the South Coast Air Quality Management District confirm that more than 20,000 public comments submitted in opposition to last year’s proposal were generated by a Washington, D.C.-based company called CiviClick, which bills itself as “the first and best AI-powered grassroots advocacy platform.”

A Southern California-based public affairs consultant, Matt Klink, has taken credit for using CiviClick to wage the opposition campaign, including in a sponsored article on the website Campaigns and Elections. The campaign “left the staff of the Southern California Air Quality Management District (SCAQMD) reeling,” the article says.

  • arctanthrope@lemmy.world · 9 points · 1 hour ago

    CiviClick, which bills itself as “the first and best AI-powered grassroots advocacy platform.”

    so the word “grassroots” just doesn’t fucking mean anything anymore, huh

  • BrianTheeBiscuiteer@lemmy.world · 3 points · 42 minutes ago

    Public comment shouldn’t be used as an opinion poll. It should give regulators and politicians a range of viewpoints they may not have previously considered.

  • TheJesusaurus@piefed.ca · 11 points · 2 hours ago

    Do they not fucking check this stuff? When I sign a petition to my government they want to know if I’m a constituent or not. Otherwise why would they care about my signature on a page

    • Snot Flickerman@lemmy.blahaj.zone · 12 points · edited · 2 hours ago

      This was happening before AI, with less sophisticated tools, often called “persona management,” that allowed one person to control numerous bots with pre-written scripts that could be called up as needed. The only difference AI has made is the speed and scale at which the same can be done, and that the messages are more convincing because they aren’t all culled from the same script.

      https://www.axios.com/2017/12/15/bots-flooded-the-fcc-with-comments-about-net-neutrality-1513307159

      Here’s an article about a flood of bot comments in an FCC open comment period on net neutrality in 2017, five years before OpenAI would release ChatGPT. So it was definitely going on before AI tools as they now exist were available. It’s a quantitative difference, not a qualitative one; in other words, it’s the same thing at a larger scale due to the speed of AI.

  • Snot Flickerman@lemmy.blahaj.zone · 59 points · edited · 2 hours ago

    company called CiviClick, which bills itself as “the first and best AI-powered grassroots advocacy platform.”

    I have been saying this since 2016, when we were dealing with both Cambridge Analytica and Correct the Record flooding the internet with paid political speech masquerading as real people with real opinions who weren’t being paid to spout nonsense.

    Paid political speech online, whether from a human or a bot, should legally be required to state that it is being paid for. There should be hefty penalties: large fines for single instances (one person, one message) up to prison time for an organized group (something akin to RICO). The fines and prison time should be even more severe when AI-generated messages are fraudulently promoted as coming from real humans, simply due to the industrial speed and scale AI generation allows.

    Paid political advertising on television and radio has long been required to state that it is paid. This should have been priority number one for the Democrats when Biden got into office and they held slim majorities in both houses.

    Sure, there’s nothing we can do about foreign bot farms, but that’s not what this article is about. This is about a US company based in our nation’s capital whose business is abusing public comment periods with disinformation. This is a private company that absolutely flooded an agency’s open public comment period and killed the proposal with messages that came not from real people at all, but from AI.

    The fact that getting this under control, at the very least within our own borders, is not a priority for any politician is a fucking travesty and makes our entire democratic apparatus an outright farce.