• 15 days

    Hmm. Using the search term “small website discoverability crisis” . . .

On duckduckgo: original website is the third result (after what looks like an SEO firm’s longform ad and ycombinator) without quotes and the first result with.

    On startpage: original is the first result even without quotes

    On mojeek: original is the first result even without quotes

I do not have accounts with any of these search engines and do not allow them to run JavaScript or set cookies, although it’s possible that duckduckgo may have noticed that someone with my IP often makes highly specific searches and looks at the long-tail results.

    My conclusion from that, combined with other people’s searches surfacing large sites first, is that the results you receive can be significantly distorted by the search engine’s algorithm. Google in particular is likely trying to direct traffic to its advertising customers and should be avoided for that reason.

    • Hmm. Using the search term “small website discoverability crisis” . . .

      Well, do you think that it’s realistic that the average user types that into the search engine’s website? When you already know the exact title of the post on that website, you probably already know the full URL and don’t need a search engine. So that has nothing to do with “discovering,” in my opinion, which is what the blog post is about.

      • there are at least three layers of finding stuff:

        • you already know the full URL
        • you only know the exact article title
        • you don’t know the article, but are interested in things of a specific topic
    • 15 days

      First result on mojeek! Sounds like discovery of the site is great then

      • 15 days

        Point is, people are using the wrong tools to look for stuff. So it’s a social problem more than a technical one. Those are always the most difficult kind to solve.

        • 15 days

          The only problem is that we need to change the behavior of about four billion people

  • 15 days

    If I want you to see my webpages I’ll send you links or post them someplace relevant. That doesn’t completely stop sloperators and other scrapers but it at least slows them down. The more private pages are password protected or whatnot.

    • password protection means you’ll have 0 visitors. it’s better to do proof-of-work instead: the visitor’s computer solves a cryptographic puzzle to prove that it has spent energy, and therefore money, to visit the website. that makes scraping millions or billions of websites practically too expensive, while visiting a single website is still reasonably cheap.

      • 14 days

        The password pages still have visitors. They just have to know the password.
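        The proof-of-work idea mentioned above can be sketched as a hashcash-style challenge: the server hands out a token, the visitor’s browser burns CPU searching for a nonce whose hash has a required number of leading zeros, and the server verifies the result with a single cheap hash. This is a minimal illustration, not any particular anti-scraping product’s scheme; the names, the `difficulty` parameter, and the challenge format are all made up for the example.

        ```python
        import hashlib
        import itertools

        def solve(challenge: str, difficulty: int = 4) -> int:
            """Find a nonce so that sha256(challenge:nonce) starts with
            `difficulty` zero hex digits. Cost grows ~16x per extra digit."""
            target = "0" * difficulty
            for nonce in itertools.count():
                digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
                if digest.startswith(target):
                    return nonce

        def verify(challenge: str, nonce: int, difficulty: int = 4) -> bool:
            """Server side: one hash, regardless of how hard solving was."""
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
            return digest.startswith("0" * difficulty)

        # Hypothetical per-page token issued by the server.
        nonce = solve("example-page-token", difficulty=4)
        assert verify("example-page-token", nonce, difficulty=4)
        ```

        The asymmetry is the whole point: a single visitor pays a fraction of a second of CPU, but a scraper hitting millions of pages pays that cost millions of times, while verification stays essentially free for the server.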