58008@lemmy.world · 2 months
At least they have an AI-free option, as annoying as it is to have to opt into it.
On a related note, it’s hilarious to me that the Ecosia search engine has AI built in. Like, I don’t think planting any number of trees is going to offset the damage AI has done and will do to the planet.

Magnum, P.I.@infosec.pub · 2 months
lol what? Do they have some kind of statement addressing that?
- Deckname@olio.cafe · 2 months
Yes, they addressed it here. It’s kind of understandable, given that they want to exist and everyone else has AI… But companies… At least you can turn it off.
Magnum, P.I.@infosec.pub · 2 months
I wish they would have talked about how many trees you need to offset an Ecosia AI search.
- Resonosity@lemmy.dbzer0.com · 2 months
And make AI opt-in rather than opt-out, so Ecosia can educate their users.
- 2 months
I want to know what economic forces are making having AI, which costs money and which very few users actually want, such a foregone conclusion. Who is paying them?
- mghackerlady@leminal.space · 2 months
Investors who bought into the hype, and the middle managers who are scared of being fired by them.
- Jason2357@lemmy.ca · 2 months
All these MBAs who learned about the advantage of first movers in school and have so little domain knowledge that they operate 100% on “we just can’t be late to the table”.
Leon@pawb.social · 2 months
Climate intelligence. Gods, excuse me while I go fetch my skeleton, which was ejected from my body due to the cringe.
NewDay@piefed.social · 2 months
Ecosia produces its own green solar energy. According to them, they produce twice as much as they consume. The AI is still shit, because it is just ChatGPT.
- morto@piefed.social · 2 months
Reducing the albedo of some area just to disperse the captured energy for no utility (AI) is still harmful to the environment and contributes to Earth’s energy imbalance. Solar energy is great when it replaces fossil fuel emissions, not when it’s just wasted.
Mwa@thelemmy.club · 2 months
Hot take: this comment gives me an idea for them: an opt-in AI powered entirely by solar energy, if we solve the ethics problem first, of course.
- Bio bronk@lemmy.world · 2 months
I don’t get this argument when literally everything else is hundreds of times worse, like livestock and cars. Removing either one today would dramatically change the environment.
Do you drive a car or take any kind of transportation?
Sockenklaus@sh.itjust.works · 2 months
Well, I don’t know about that.
My Swiss hoster just started offering AI and says that their AI infrastructure is 100% powered by renewables and that the waste heat is used for district heating.
You could argue that LLM training in itself used so much energy that you’ll never be able to compensate for the damage, but I don’t know. 🤷
- PixxlMan@lemmy.world · 2 months
While that’s good, keep in mind that using renewables for this means that power can’t be used for other purposes, so the difference has to be covered by other sources of energy. These things don’t exist in a vacuum: the resources they use are resources not used elsewhere. At worst, new clean power is built to power a waste, and old dirty power has to be used for everything else instead of being replaced by clean energy.
- MBM@lemmings.world · 2 months
Yeah, that reminds me of the data centres hogging green energy that was meant for households.
- Demdaru@lemmy.world · 2 months
On the other hand… the same private entity wouldn’t buy the means to produce renewable power if they didn’t want to power their AI center. So in the end, nothing changes, and the power couldn’t be used for other purposes because it simply wouldn’t be generated.
However, since they did, and are using it to promote themselves, they are in a way influencing others to also adopt renewable energy policy, no matter how small the effect.
No, normally I am not that optimistic, but I am trying ^^"
Sockenklaus@sh.itjust.works · 2 months
It fits their business structure and values, and of course there’s a good portion of good faith on my end, because I didn’t check first-hand.
Sockenklaus@sh.itjust.works · 2 months
Corps are run by people, and people can run them with values. Our capitalist system encourages acting without values, but it is not impossible to act on them.
- notthebees@reddthat.com · 2 months
I’m just happy they give the option to turn off the AI overview as a setting.
- Electricd@lemmybefree.net · 2 months
> Like, I don’t think planting any number of trees is going to offset the damage AI has done and will do to the planet.
That’s true for pretty much everything, so it’s not a real argument.
- 2 months
Yeah. I’m actually kind of upset that I have to type ‘noai’. That should be the standard.
- Balinares@pawb.social · 2 months
I mean, the poll was like as not a publicity stunt, to draw attention to the fact DDG is not doing AI. All the same, the fact they are making “no AI” a selling point is noteworthy.
EDIT: I stand corrected – apparently DDG does do AI presently. Hopefully they’re serious about reconsidering that, then.
- 123@programming.dev · 2 months
I still get a bunch of AI bullshit unless I go out of my way. Also, I swear they keep reactivating it as much as Google does when you opt out (or when you select DDG No AI as your search engine in Firefox and still see that garbage).
- Balinares@pawb.social · 2 months
Thanks for the correction, I was indeed wrong about that. I updated my comment accordingly.
- Egonallanon@feddit.uk · 2 months
You can turn all the AI features off in the regular DDG search settings. Best I can tell, that achieves the same as using the No AI filter.
- Seleni@lemmy.world · 2 months
Except it’s not very good. I turn it off and still get AI pictures and videos, and it gets rid of some pictures I know aren’t AI.
felixwhynot@lemmy.world · 2 months
I’m looking forward to adding it to my browser settings as the default search!
- JamonBear@sh.itjust.works · 2 months
Already doable in Firefox: just right-click in the search box, then “Add search engine” ;)
teft@piefed.social · 2 months
Just edit the DDG entry instead of adding a new one. It’s super easy to change the URL for the search in settings.
- 2 months
I don’t know how you do that. I can’t edit the standard DDG entry on desktop at all. Didn’t see anything at first glance in about:config either.
It’s easier for me to just add a new engine.
NotSteve_@piefed.ca · 2 months
I’m pretty sure it asks you how often you want to see the AI overview, doesn’t it? Can’t you just click “never”?
- Logi@lemmy.world · 2 months
I like Kagi’s approach of generating an AI overview only if you end your query with a question mark. Is this a search or a question?
- 2 months
I’m not a fan of Kagi’s approach to anything, frankly. The fact that they’re charging money for a search engine, and using an LLM? Hard pass.
- 2 months
I don’t have a problem with them selling ads. I have a problem with the insidious tracking that has become a part of those ads. Mojeek sells ads but doesn’t track you everywhere. Also, there are SearX instances, which typically don’t sell ads or data.
They also don’t use LLMs. If you think Kagi isn’t giving away or selling your data, think again, because the LLM is doing it as a core function.
setsubyou@lemmy.world · 2 months
The article already notes that
> privacy-focused users who don’t want “AI” in their search are more likely to use DuckDuckGo
But the opposite is also true. Maybe it’s not 90% to 10% elsewhere, but I’d expect the same general imbalance, because some people who would answer yes to AI in a survey on a search website don’t go to search websites in the first place. They go to ChatGPT or whatever.
A_norny_mousse@feddit.org · 2 months
It still creeps me out that people use LLMs as search engines nowadays.
- SendMePhotos@lemmy.world · 2 months
That was the plan. That’s (I’m guessing) why the search results have slowly yet noticeably degraded since AI has been consumer-level.
They WANT you to use AI so they can cater the answers. (tin foil hat)
I really do believe that, though. Call me a conspiracy theorist, but damn it, it fits.
- 2 months
It’s not that wild of a conspiracy theory. It’s hard to get definitive proof, though, because you would have to compare actual search results from the past with the results of the same search today, and we unfortunately can’t travel back in time.
But there are indicators that your theory is true:
- It’s evident that in UI design the top area of the screen is the most valuable. AI results are always shown there, so we know that selling AI is of utmost importance to Google.
- The Google search algorithm was altered quite often over the years. These “rollouts” are publicly available information, and a lot of people have written about the changes as soon as they happened.
- Page ranking fueled a whole industry called SEO (Search Engine Optimization). A lot of effort went into understanding how Google ranks its results. This was of course done with a different goal in mind, but the conclusions from this field can be used to determine if and how search results got worse over time.
- It’s an established fact that companies benefit from users never leaving the company’s ecosystem. Google, for example, tried to prevent click-throughs to the actual websites in the past, with technologies like AMP or by displaying snippets.
- If users rely on the AI output, Google effectively achieves this: the user never leaves the page, and Google has full control over what content the user sees.
Now, all of the points listed above can be proven. If you put them together, it seems at least highly likely that your “conspiracy theory” is in fact true.
- Buddahriffic@lemmy.world · 2 months
I’d argue that SEO was one of the biggest causes of search result degradation, and I consider any complaints coming from SEOs highly suspect due to conflicting interests. E.g., a change that makes it harder to game the search engine algorithms is good for searchers but bad for SEOs.
I hope the whole industry dies (or already has? I don’t hear much about it these days lol). They are just marketers whose whole job is to get you to look at their shit instead of the most relevant results.
- 2 months
Yeah, I think SEO is pretty much dead by now, probably because web search as we knew it is kind of dead as well. You’ll probably need to spend ad money if you want visibility. But I’m no expert on SEO, and I could be wrong.
- redditmademedoit@piefed.zip · 2 months
Yeah, search has degraded along with the Internet, you almost need an LLM now to filter out all the garbage hits. For a while, adding “reddit” to your search term was an OK high level filter to remove blogspam and e-commerce sites, but interacting with reddit is so annoying now that it’s barely an option and many of the quality reddit posters have moved on while the state and corporate astroturfers are running the show. Never mind that the “reddit filter” also removed results from much better sources, like specialist forums.
A_norny_mousse@feddit.org · 2 months
> the search results have slowly yet noticeably degraded
You mean Google.
JustEnoughDucks@feddit.nl · 2 months
And Bing, and searches that use Google and Bing results (DDG, Ecosia).
- SendMePhotos@lemmy.world · 2 months
All of them. I use DDG as a primary, and even those results are worse.
- redditmademedoit@piefed.zip · 2 months
In Google Search’s prime, DDG was still terrible and not a viable competitor even with the privacy advantage. Now both services are almost comparable, so it’s kind of a no-brainer to ditch Google.
- Womble@piefed.world · 2 months
Search results have been degrading for a lot longer than LLMs have been a thing. Peak usefulness for them was around a decade ago.
- msage@programming.dev · 2 months
> They WANT you to use AI so they can ~~cater the answers~~ sell you ads and stop you from using the internet.
A_norny_mousse@feddit.org · 2 months
Most people don’t even know the difference between a URL bar and a search bar. Or, more precisely: most devices use a browser that deliberately obfuscates that difference.
- kreskin@lemmy.world · 2 months
When browsers overload the URL field to act as a search field, can you blame people for not knowing the difference? To the users it’s become a distinction without a difference.
They say that what’s tolerated is what’s encouraged. Browser software companies have encouraged people to be uninformed about the tool they are using. It’s easier to mess with them that way.
- redditmademedoit@piefed.zip · 2 months
But they all suck, or rather the Internet kinda sucks these days. Google very much included in the sucking.
- 2 months
I use Kagi Assistant. It does a search, summarizes, then gives references to the origin of each claim. Genuinely useful.
- Warl0k3@lemmy.world · 2 months
How often do you check the summaries? Real question; I’ve used similar tools, and their accuracy relative to what they’re citing has been hilariously bad. It’d be cool if there were a tool out there bucking the trend.
MaggiWuerze@feddit.org · 2 months
Yeah, we were checking whether school in our district was canceled due to icy conditions. Google’s model claimed that a county-wide school cancellation was in effect and cited a source. I opened it, was led to our official county page, and the very first sentence was a firm no.
It managed to summarize a simple and short text into its exact opposite.
- 2 months
Depends on how important it is. Looking for a hint for a puzzle game: never. Trying to find out actually important info: always.
They make it easy, though, because after every statement there are numbered annotations, and you can just mouse over them to read the source text.
You can choose different models, and they differ in quality. The default one can be a bit hit and miss.
Deebster@infosec.pub · 2 months
I also sometimes use the Kagi summaries, and they’ve definitely been wrong before. One time I asked what the term was for something in badminton, and it came up with a different badminton term. When I looked at the cited source, it was a multiple-choice quiz with the wrong term as the first answer.
It’s reliable enough that I still use it, although more often to quickly identify which search results are worth reading.
- hayvan@piefed.world · 2 months
I use Perplexity for my searches, and it really depends on how much I care about the subject. I heard a name and don’t know who they are? An LLM summary is good enough to get an idea. Doing research or looking up technical info? I open the cited sources.
- 2 months
I can’t speak for the original poster, but I also use Kagi, and I sometimes use the AI assistant, mostly for quick, simple questions to save time when I know most articles on the topic are going to have a lot of filler. It’s been reliable for more complex questions too. (I’d just rather not rely on it too heavily, since I know the cognitive-debt effects of LLMs are quite real.)
It’s almost always quite accurate. Kagi’s search indexing is miles ahead of any other search I’ve tried in the past (Google, Bing, DuckDuckGo, Ecosia, StartPage, Qwant, SearXNG) so the AI naturally pulls better sources than the others as a result of the underlying index. There’s a reason I pay Kagi 10 bucks a month for search results I could otherwise get on DuckDuckGo. It’s just that good.
I will say though, on more complex questions with regard to like, very specific topics, such as a particular random programming library, specific statistics you’d only find from a government PDF somewhere with an obscure name, etc, it does tend to get it wrong. In my experience, it actually doesn’t hallucinate, as in if you check the sources there will be the information there… just not actually answering that question. (e.g. if you ask it about a stat and it pulls up reddit, but the stat is actually very obscure, it might accidentally pull a number from a comment about something entirely different than the stat you were looking for)
In my experience, DuckDuckGo’s assistant was extremely likely to do this, even on more well-known topics, at a much higher frequency. Same with Google’s Gemini summaries.
To be fair though, I think if you really, really use LLMs sparingly and with intention and an understanding of how relatively well known the topic is you’re searching for, you can avoid most hallucinations.
- 2 months
I’ve used Kagi as my primary search engine for almost 2 years now, and it’s really good! I started using the Kagi assistant recently to explain complex concepts to me, and I like it. I love how it links me to sources. When I’m using an LLM tool like Kagi’s assistant, I want to learn about the topic; I don’t use it for quick answers.
A lot of people are just against ‘AI’/LLMs, and I hate it too when it’s being shoved in my face. But consensual LLMs are just another tool that I utilize to learn about something.
porcoesphino@mander.xyz · 2 months
For others here: I use Kagi and turned the LLM summaries off recently because they weren’t close to reliable enough for me personally, so give it a test. I use LLMs for some tasks, but I’ve yet to find one that’s very reliable for specifics.
Ex Nummis@lemmy.world · 2 months
You can set up any AI assistant that way with custom instructions. I always do, and I require it to clearly separate facts with sources from hearsay or opinion.
TheOneCurly@feddit.online · 2 months
lol, the random text generator does not understand what any of those things are.
- CosmoNova@lemmy.world · 2 months
I know some of them personally, and they usually claim to have decent to very good media literacy too. I would even say some of them are possibly more intelligent than me. Well, usually they are, but when it comes to tech, they miss the forest for the trees, I think.
gerryflap@feddit.nl · 2 months
For some issues, especially related to programming and Linux, I feel like I kinda have to at this point. Google seems to have become useless, and DDG was never great to begin with but is arguably better than Google now. I’ve had some very obscure issues that I spent quite some time searching for, only to drop them into ChatGPT and get a link to some random forum post that discusses them. The biggest one was a Linux kernel regression that had been posted about that same day somewhere in the Arch Linux forums. Despite having a hunch about what it could be and searching/struggling for over an hour, I couldn’t find anything. ChatGPT then managed to link me the post (and a suggested fix: switching to the LTS kernel) in less than a minute.
For general purpose search tho, hell no. If I want to know factual data that’s easy to find I’ll rely on the good old search engine. And even if I have to use an LLM, I don’t really trust it unless it gives me links to the information or I can verify that what it says is true.
A_norny_mousse@feddit.org · 2 months
> programming and Linux
I’m seeing, almost daily, the fuck-ups resulting from somebody trying to fix something with ChatGPT, then coming to the forums because it didn’t work.
- Honytawk@feddit.nl · 2 months
Most likely because if they came directly to whatever platform you are on with their problem, they would have been scolded for not trying hard enough to solve it on their own. Or the post would be closed because it has already been asked.
- NewNewAugustEast@lemmy.zip · 2 months
I agree that happens, but it has nothing to do with what OP said. They didn’t want a solution; they wanted a link to where the problem was being discussed so they could work out a solution.
People seem to really confuse the difference between asking an LLM how to patch a boat vs. asking where people discussed ways to patch a boat.
Cherry@piefed.social · 2 months
Yup, this is a great example. Use an LLM for non-opinion-based stuff, or for stuff that’s not essential for life. It’s great for finding a recipe, but if you’re gonna rely on the internet or an LLM to help you form an opinion on something that requires objective thinking, then no. If I asked “hey internet/LLM, is humour good or bad?”, it would insert a swayed view.
It simply can’t be trusted. I can’t even trust it to return shopping links, so I have retreated back to real life. If it can’t play fair, I no longer use it as a tool.
- IronBird@lemmy.world · 2 months
It just makes it ever more obvious to them how many people in their life are sheep that believe anything they read online, I assume? A false sense of confidence where one might have just said “I don’t know”.
- CallMeAnAI@lemmy.world · 2 months
What an absolutely arrogant attitude 🤣 You actually believe there is some gap here 🤣 Just amazing.
Not using AI doesn’t mean you’re performing whatever task you’re doing better. It has nothing to do with being able to parse results for bullshit or not.
Cherry@piefed.social · 2 months
I think an attitude of being virtuous or preachy can seep in at times, especially when you’re part of a cause, but IMO diplomacy, having conversations, and opening their minds to objectivity has to be better than telling people they are wrong.
I know this is easy to say, especially when so many people are just so addicted to social media and the internet.
I have had conversations with friends and family where they can talk clearly about how much propaganda is pushed onto them, and then they turn straight to their phone and hoover up an hour of FB. It does make you think “wow, sheep”. But I have to remind myself we don’t get change by telling people “you clearly don’t know your own mind”.
- evol@lemmy.today · 2 months
So many people were already using TikTok or YouTube as Google Search. I think AI is arguably better than those.
edit: New business idea: take your ChatGPT question and turn it into a TikTok video. The slop must go on.
- 2 months
The main problem is that LLMs are pulling from those sources too. An LLM often won’t distinguish between highly reputable sources and any random page that has enough relevant keywords, as it’s not actually capable of picking its own sources carefully and analyzing each one’s legitimacy, at least not without a ton of time and computing power that would make it unusable for most quick queries.
- evol@lemmy.today · 2 months
Genuinely, do you think the average person TikTok’ing their question is getting highly reputable sources? The average American has, what, a 7th-grade reading level? I think the LLM might have a better idea at this point.
Ex Nummis@lemmy.world · 2 months
First, its results are often simply wrong, so that’s no good. Second, the more people use the AI summaries, the easier it’ll be for the AI companies to subtly influence the results in their favor. Think of advertising or propaganda.
This is already happening, btw, and it’s the reason Musk created Grokipedia. Grok (and even other LLMs!) already use it as a “trusted source”, which it is anything but.
- evol@lemmy.today · 2 months
Okay, but it’s a search engine; they can literally just pick websites that align with a certain viewpoint and hide ones that don’t. It’s not really a new problem. If they just make Grokipedia the first result, then it’s not like not having the AI give you a summary changed anything.
- CallMeAnAI@lemmy.world · 2 months
So literally the same shit as before with search, but wrapped up in a nice paragraph with citations you can follow up on?
merc@sh.itjust.works · 2 months
Yeah, this is why polling is hard.
Online polls are much more likely to be answered by people who like to answer polls than by people who don’t. People who use Duck Duck Go are much more likely to be privacy-focused, knowledgeable enough to use a search engine other than the default, etc.
This is also an echo chamber (the Fediverse) discussing the results of a poll on another similar echo chamber (Duck Duck Go). You won’t find nearly as many people on Lemmy or Mastodon who love AI as you will in most of the world. Still, I do get the impression that it’s a lot less popular than the AI companies want us to think.
- Deestan@lemmy.world · 2 months
Meanwhile, at HQ: “The userbase hallucinated that they don’t want AI. Maybe we prompted them wrong?”
Sockenklaus@sh.itjust.works · 2 months
The prompt was bad: there was no option to vote for “a little bit of AI as a tool is not bad, but don’t force-feed it to me”.
I think there were many people who voted for “no AI” who would’ve voted for “a little bit of AI” if they had the option.
eksb@programming.dev · 2 months
There were probably also people who voted for “yes AI” who would have voted for “a little bit of AI, when I explicitly ask for it” if they had the option.
- 2 months
I made https://lite.duckduckgo.com/ my homepage. No AI and super fast loading. AI would be fine if it were opt-in. Shoving it into everything to see what sticks just makes people hate it. Looking at you, MS.
- 2 months
Whoa, nice! Thanks!
For people trying to configure that in Mozilla (I am trying to get away from it, but for now :/):
- Edit -> Settings -> Search
- “Search Shortcuts” -> Add (to add a search engine)
- “Search Engine Name”: DuckDuckGo Lite
- “URL with %s in place of search term”: https://lite.duckduckgo.com/lite/?q=%s (this has to be %s; lemmy keeps mutilating it to %25s every time I save my post)
- “Keyword (optional)”: @ddgl (or pick whatever you like; it appears @ddg is hardcoded and gets refused)
- Save Engine
- Scroll up to the top, “Default Search Engine”
- From the dropdown list, select “DuckDuckGo Lite”
Done.
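If you manage Firefox centrally, roughly the same engine can be preconfigured instead of added by hand, via the enterprise policies.json mechanism. A minimal sketch only, assuming an ESR or otherwise policy-managed build (the SearchEngines policy’s Add key is ignored by regular release builds), and reusing the @ddgl keyword from the steps above as an arbitrary choice:

```json
{
  "policies": {
    "SearchEngines": {
      "Add": [
        {
          "Name": "DuckDuckGo Lite",
          "URLTemplate": "https://lite.duckduckgo.com/lite/?q={searchTerms}",
          "Method": "GET",
          "Alias": "@ddgl"
        }
      ],
      "Default": "DuckDuckGo Lite"
    }
  }
}
```

Drop the file into the distribution folder inside the Firefox installation directory; {searchTerms} plays the role of the %s placeholder from the manual setup.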
- 2 months
It’s horrible for the environment too, and it wastes electricity. It’s fucked up that Google makes everything you search an AI search.
- SemiAuto@lemmy.dbzer0.com · 2 months
Damn, I didn’t know about this lite version. Thanks, I’m also gonna switch to it.
- morto@piefed.social · 2 months
There’s also https://html.duckduckgo.com/. It’s like the main page, but without JavaScript.
M137@lemmy.world · 2 months
Because the poll just ended… it’s been opt-out since before the poll, and nothing has changed yet (if anything does change). How is this not obvious?
- Ensign_Crab@lemmy.world · 2 months
They should have asked before including AI in the first place.
- Jako302@feddit.org · 2 months
Asking an existing userbase for any kind of change will pretty much always result in a no.
If the feature requires minimal resources and doesn’t have a major downside, then implementing your own version before asking is fine.
They didn’t serve a bunch of ex-alcoholics a full bottle of whisky; all they did is make you scroll twice on your mouse wheel.
- Ensign_Crab@lemmy.world · 2 months
> Asking an existing userbase for any kind of change will pretty much always result in a no.
If you’re trying to position yourself as a search engine that hasn’t enshittified, don’t head down that road without asking. Know your userbase. They’re using DuckDuckGo for a reason.
dantheclamman@lemmy.world · 2 months
I think LLMs are fine for specific uses: a useful technology for brainstorming, debugging code, generic code examples, etc. People are just wary of oligarchs mandating how we use technology. We want to be customers, but they want to shape how we work instead, as if we were livestock.
- 2 months
Right? Let me choose if and when I want to use it. Don’t shove it down our throats and then complain when we get upset or don’t use it how you want us to. We’ll use it however we want to use it, not however you want.
- 2 months
I should further add: don’t fucking use it in places where it’s not capable of properly functioning and then try to deflect the blame onto the AI, like what Air Canada did.
> When Air Canada’s chatbot gave incorrect information to a traveller, the airline argued its chatbot is “responsible for its own actions”.
> Artificial intelligence is having a growing impact on the way we travel, and a remarkable new case shows what AI-powered chatbots can get wrong – and who should pay. In 2022, Air Canada’s chatbot promised a discount that wasn’t available to passenger Jake Moffatt, who was assured that he could book a full-fare flight for his grandmother’s funeral and then apply for a bereavement fare after the fact.
> According to a civil-resolutions tribunal decision last Wednesday, when Moffatt applied for the discount, the airline said the chatbot had been wrong – the request needed to be submitted before the flight – and it wouldn’t offer the discount. Instead, the airline said the chatbot was a “separate legal entity that is responsible for its own actions”. Air Canada argued that Moffatt should have gone to the link provided by the chatbot, where he would have seen the correct policy.
> The British Columbia Civil Resolution Tribunal rejected that argument, ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees.
- 2 months
They were trying to argue that it was legally responsible for its own actions? Like, that it’s a person? And not even an employee at that? FFS
- 2 months
You just know they’re going to make a separate corporation, put the AI in it, and then contract it to themselves and try again.
- NotAnonymousAtAll@feddit.org · 2 months
> ruling that Air Canada had to pay Moffatt $812.02 (£642.64) in damages and tribunal fees
That is a tiny fraction of a rounding error for a company that size, and it doesn’t come anywhere near being just compensation for the stress and loss of time it likely caused.
There should be some kind of general punitive “you tried to screw over a customer or the general public” fee, defined as a fraction of the company’s revenue. It could be waived for small companies if the resulting sum is too small to be worth the administrative overhead.
merc@sh.itjust.works · 2 months
It’s a tiny amount, but it sets an important precedent. Not only Air Canada but every company in Canada is now going to have to follow it. It means that if a chatbot in Canada says something, the presumption is that the chatbot is speaking for the company.
Any other ruling would have been a disaster. It would have meant that the chatbot was an accountability sink: no matter what the chatbot said, it would have been the chatbot’s fault. With this ruling, it’s the other way around. People can assume that the chatbot speaks for the company (the same way they would with a human rep) and sue the company for damages if they’re misled by it. That’s excellent for users, and also excellent for slowing down chatbot adoption, because the company is now on the hook for its hallucinations, not the end user.
- 2 months
Definitely agree, there should have been some punitive damages for making them go through that while they were mourning.
lime!@feddit.nu · 2 months
…what kind of brain damage did the rep have to think that was a viable defense? Surely their human customer service personnel are also responsible for their own actions?
- 2 months
It makes sense to try it; it’s just standard evil-company logic.
If they lose, it’s some bad press, and people will forget.
If they win, they’ve begun setting a precedent that lets them screw over their customers and earn more money. Even if it only had a 5% chance of success, it was probably worth it.
- Jason2357@lemmy.ca · 2 months
I am explicitly against the use case probably being thought of by many of the respondents: the “AI summary” that pops in above the links of a search result. It is a waste if I didn’t ask for it, it steals the information from those pages, damaging the whole WWW, and ultimately it gets the answer horribly wrong often enough to be dangerous.
- Young_Gilgamesh@lemmy.world · 2 months
Google became crap ever since they added AI. Microsoft became crap ever since they added AI. OpenAI started losing money the moment they started working on AI. Coincidence? I think not!
Rational people don’t want Abominable Intelligence anywhere near them.
Personally, I don’t mind the AI overviews, but they shouldn’t show up every time you do a search. That’s just a waste of energy.
- MBech@feddit.dkEnglish2 months
Google became crap about 10 years ago when they added the product banner in the top, and had the first 5-10 search results be promoted ads. Long before they ever considered adding AI.
- Young_Gilgamesh@lemmy.worldEnglish2 months
I guess. And then they removed the “Don’t be evil” motto just to drive the point home.
But you have to agree, the company DID become even worse once they started using AI.
- MBech@feddit.dkEnglish2 months
Oh absolutely. It’s just important to remember that they’ve been horrible for a long time, and have shown more ads in a single search than your average 30-minute YouTube video.
- parricc@lemmy.worldEnglish2 months
Time is sneaking up on us. It’s not even 10 years anymore. It’s closer to 20. 💀
merc@sh.itjust.worksEnglish
2 monthsGoogle became crap shortly after their company name became a synonym for online searches. When you don’t have competitors, you don’t have to work as hard to provide search results – especially if you’re actively paying Apple not to come up with their own search engine, Firefox to maintain Google as their default search engine, etc. IMO AI has been the shiny new thing they’re interested in as they continue to neglect search quality, but it wasn’t responsible for the decline of search quality.
Spaniard@lemmy.worldEnglish
2 monthsGoogle and Microsoft were crap before AI. I don’t remember when Google removed the “don’t be evil” motto, but by that point they had already been crap for a few years.
- Young_Gilgamesh@lemmy.worldEnglish2 months
They got rid of that motto in 2018. And you could theoretically argue that Google had been getting worse since its inception in 1998.
- fleton@lemmy.worldEnglish2 months
Yeah, Google kinda started sucking a few years before AI went mainstream; the search results took a dive in quality and garbage had already started floating to the top.
- flameleaf@lemmy.worldEnglish2 months
I don’t mind the AI overviews, but they shouldn’t show up every time you do a search.
Reygle@lemmy.worldEnglish
2 monthsI mind them. Nobody at my workplace scrolls beyond the AI overview, and every single one of the overviews they quote to me about technical issues is wrong, 100%. Not even an occasional “lucky guess”.
- Young_Gilgamesh@lemmy.worldEnglish2 months
Good for you. I meant it as a design choice for a search engine. Why waste the electricity?
MrKoyun@lemmy.worldEnglish
2 monthsYou can choose how often you want the AI Overview to appear! It asks you, like, the first time you get one, in a small pop-up. I still think they should instead work on “highlighting relevant text from a website” like Google used to do. It was so much better.
- Young_Gilgamesh@lemmy.worldEnglish2 months
I did not know that. Never noticed a pop up. And does this work with both search engines? You can turn off the AI features on DuckDuckGo with like two clicks, but I can’t seem to find the option on Google.
MrKoyun@lemmy.worldEnglish
2 monthsI was talking about DDG because I thought you were talking about DDG in the last part. I don’t think you can turn off AI completely on Google.
- 2 months
As much as I agree with this poll, DuckDuckGo has a very self-selecting audience. The number doesn’t actually mean much statistically.
If the general public knew that “AI” is much closer to predictive text than intelligence, they might be more wary of it.
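The “predictive text” comparison is apt: under the hood, a language model just predicts the most likely next token given what came before. A minimal sketch of that idea (a toy bigram predictor, nothing like a production LLM in scale or architecture, and the corpus here is invented for illustration):

```python
from collections import Counter, defaultdict

# Toy bigram "language model": for each word, count which words follow it,
# then predict the most frequent follower. LLMs do conceptually the same
# thing at vastly larger scale, with learned probabilities over tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" twice, vs once each for "mat"/"fish"
```

Phone keyboard suggestions work roughly like this; LLMs replace the count table with a neural network, but the task, predicting the next token, is the same.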
- slappyfuck@lemmy.caEnglish2 months
There was no implication that this was a general poll designed to demonstrate the general public’s attitudes. I’m not sure why you mentioned this.
- Lightfire228@pawb.socialEnglish2 months
Because that’s how most people implicitly frame headlines like this one: a generalization of the public
- ikirin@feddit.orgEnglish2 months
I mean, you gotta hand it to “AI” - it is very sophisticated, and resource-intensive, predictive text.
- howrar@lemmy.caEnglish2 months
The poll didn’t even ask a real question. “Yes AI or no AI?” No context.
rose56@lemmy.zipEnglish
2 monthsCouple months ago, I learned that duckduckgo has settings about disabling AI content. Settings>AI features.
Easy as that.
- Honytawk@feddit.nlEnglish2 months
Not as easy if you auto-delete your cookies on the closing of your browser.
skaffi@infosec.pubEnglish
2 monthsI had that problem, so then I started using https://noai.duckduckgo.com/ - it works well.
communism@lemmy.mlEnglish
2 monthsOn duckduckgo.com it’s unfortunately enabled by default though. You have to go out of your way to set your search browser to noai.duckduckgo.com if you want default AI disabled (which you’ll want on e.g. private browsing windows/any browser that autodeletes cookies when you close it). It’s extra hassle because most privacy web browsers use DDG by default, not the noai subdomain.
Bahnd Rollard@lemmy.worldEnglish
2 monthsCompanies that can not be trusted to not add features their customers do not want can not be trusted to keep them disabled by default.
If the door to AI exists, we, the users, do not trust the organization to keep it locked.
UnderpantsWeevil@lemmy.worldEnglish
2 monthsIt’s so funny to see this pushed out as a marketing campaign for DuckDuckGo AI and it totally flopped.
- chiliedogg@lemmy.worldEnglish2 months
If they take the poll to heart, it can still be a success. They can advertise that they listened to their users and changed course.
That’s the thing about really good marketing - it should not only drive users to use your service, but the reactions to that marketing can be used as market research to improve your product and future marketing in a manner that drives even more users to your product.
- hoppolito@mander.xyzEnglish2 months
I am fairly sure this is the actual point of the campaign. The selection bias for a ‘poll’ like this (one that instantly on-boards you to the AI-disabled version of the product if you answer negatively, no less) is so great that I don’t believe the suits/analysts at DDG ever envisioned a different result. Polls and comment sections lure the extreme viewpoints, and the DDG crowd already skews privacy-conscious, so this was a highly expected outcome.
What the campaign does instead is:
- Show that you ‘care’ and ‘listen to feedback’ (by a response to the poll somewhere between disabling the ai by default to making the no-ai button a little bit bigger)
- show that you have the ability to turn off ai on your product in the first place to those who care
- like I said above, directly onboard people onto their preferred search strategy so that when relatives/friends send this around people get a little taste, and realize this exists
It’s quite clever imo, and there’s no real bad outcome for what I assume is a pretty inexpensive campaign.
- Tyrq@lemmy.dbzer0.comEnglish2 months
I would like to petition to rename AI to
Simulated
Human
Intelligence
Technology
- Tyrq@lemmy.dbzer0.comEnglish2 months
‘Intelligence’ is a measure, not an absolute. Human intelligence can range anywhere from genius to troglodyte. But you’re right, still not human, still at very best simulated, and isn’t capable of reason, just the illusion of reason.
- WanderWisley@lemmy.worldEnglish2 months
I would like to petition to rename AI to
Fucking stupid and useless
- Electricd@lemmybefree.netEnglish2 months
Tell me you don’t know shit about LLMs without telling me so: