• Valve’s customer service responses have always been mostly a canned series of bot messages.

    Their in-house support has always been 99% automated.

    It’s very obvious if you’ve ever interacted with them at more than an occasional, superficial level.

    You have to be quite persistent to get a message from an actual human being.

    Yep, the automated messages often have the name of ostensibly a human attached to them.

    So do all kinds of other bots, since way before ChatGPT and LLMs took off.

    What, did you think a human actually read every single complaint report about a hacker or cheater in a video game with any kind of massively used anti-cheat system?

    No! You have bots and analytic systems screen that shit, just the same as our resumes on Indeed, or our activity and profiles on dating apps, have been analysed and evaluated by bots, again, since way before LLMs got as prevalent as they are today.

    Then you filter. Humans only see the odd ones that defy categorization, basically, or trigger a certain set of flags that are designated as ‘probably needs an actual human to handle this one’.


    This has been a tech industry standard for almost two decades.

    Valve is just now overhauling that system to use an LLM, because those are actually better than a very complex series of chained regex searches.
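    As a rough illustration of that pre-LLM approach, here is a minimal sketch (in Python rather than whatever Valve actually uses, with entirely made-up rules and categories) of a chained-pattern triage that routes most reports automatically and only escalates the odd ones to a human:

```python
import re

# Hypothetical triage rules: each pattern maps a report to a category.
# Reports that match nothing, or that trip the "needs a human" flags,
# get routed to a person instead of the automated pipeline.
RULES = [
    (re.compile(r"\b(aimbot|wallhack|spinbot)\b", re.I), "cheating"),
    (re.compile(r"\b(refund|charge|payment)\b", re.I), "billing"),
    (re.compile(r"\b(slur|harass|threat)\b", re.I), "abuse"),
]
ESCALATE = re.compile(r"\b(lawyer|lawsuit|press|self-harm)\b", re.I)

def triage(report: str) -> str:
    if ESCALATE.search(report):
        return "human"      # designated flags: a person handles this one
    for pattern, category in RULES:
        if pattern.search(report):
            return category  # handled by the automated pipeline
    return "human"          # defies categorization: a human takes a look
```

    The point is that each report only reaches a person if it falls through (or is deliberately caught by) every automated rule.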

    The alternative would be to do what Meta or Google or Amazon do: Hire armies of tens to hundreds of thousands of offshore contractors and give them all PTSD for pitiful wages, manually evaluating everything.

    Apparently this is not widely known by people who’ve never worked at an enterprise-level tech company?


    Using LLMs to evaluate and assist a massive anti-cheat system is actually a great way to do anti-cheat… without hooking directly into your kernel.

    These things are very good at pattern recognition, and if you tune them to work specifically with server-side inputs from gaming sessions, you can significantly improve server-side/backend detection of players/clients doing things that are highly suspicious or outright impossible given the actual rules of the game.
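    For a concrete, heavily simplified sketch of that kind of server-side check (all names, units, and thresholds below are invented for illustration, not anything Valve has published):

```python
from dataclasses import dataclass

@dataclass
class TickEvent:
    """One hypothetical server-side snapshot of a player per tick."""
    t: float      # server time, seconds
    x: float      # position along one axis, metres
    shots: int    # cumulative shots fired

MAX_SPEED = 6.0       # assumed top movement speed, m/s
MAX_FIRE_RATE = 10.0  # assumed max shots per second for the weapon

def impossible_actions(events: list[TickEvent]) -> list[str]:
    """Flag deltas between ticks that the game's rules cannot produce."""
    flags = []
    for a, b in zip(events, events[1:]):
        dt = b.t - a.t
        if dt <= 0:
            continue
        if abs(b.x - a.x) / dt > MAX_SPEED:
            flags.append(f"teleport-like move at t={b.t}")
        if (b.shots - a.shots) / dt > MAX_FIRE_RATE:
            flags.append(f"impossible fire rate at t={b.t}")
    return flags
```

    Nothing here touches the client machine; it only consumes what the server already records about the session.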

  • I know Valve wants to remain a small-ish company, but automating in-house support has literally never improved things for the customer. It’s even worse if it’s tied into their anti-cheat - a false positive can lock you and your entire family out of multiplayer, and good luck getting a human to overturn it after the former support staff is moved to other teams.

    I’d say it’s weird they didn’t focus on using this to help fix their nearly nonexistent community moderation, but I’ve been told their hands-off approach is deliberate due to a libertarian bent among the higher ups.

    • The nonexistent community moderation is by design. Valve wants it that way; they refuse to be any sort of gatekeeper in it.

    • Their support staff are always being commended, which seems odd to me.

      At the same time, they allow Russian war crime simulators.

    • This is an incorrect assertion. Making common actions self-service without needing a human is almost always a customer win. For example, automatic refunds on request if your request meets the criteria, instead of needing a human to look at it and make an arbitrary decision. Or having a knowledge base of common issues that helps people fix problems on their own without needing to talk to a person. Both are much faster and more repeatable.
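      As a sketch, the automatic-refund criterion can be as small as this (using Steam’s published self-service refund rules of purchase within 14 days and under 2 hours of playtime; the function itself is illustrative, not Valve’s code):

```python
from datetime import datetime, timedelta

def auto_refund_eligible(purchased_at: datetime,
                         playtime: timedelta,
                         now: datetime) -> bool:
    """Check the two public self-service criteria; anything else escalates."""
    within_window = now - purchased_at <= timedelta(days=14)
    barely_played = playtime < timedelta(hours=2)
    return within_window and barely_played
```

      A check like this is deterministic and instant, which is exactly why it beats waiting on a human for the common case.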

      • But this is not viable for every use case. If there is a major issue with my bank account, I want to speak to a person, period.

        Having automated workflows for specific actions is of course a good thing.

        Documentation is also good, but it often doesn’t account for edge cases or your unique situation. Not to mention, the majority of the public is not going to have the desire to deal with documentation.

    • One thing Valve is known for is testing things. They typically make sure technology works before rolling it out everywhere.

      I’m willing to bet that they have either solved most of the problems a tool like this has by massively limiting its scope, or it never actually gets past a beta test phase.

      • This. They have explicitly said that they are testing AI applications throughout the company and that it is not a concerted effort. It’s a few devs wanting to try it to see whether it actually adds real value. That’s it.

      • The file and class or function name or w/e literally has .proto in it.

        As in prototype.

          • Well you got me there.

            https://github.com/SteamTracking/SteamTracking/tree/master/ProtobufsWebui

            There’s the directory with the file in the screenshots, service_steamgpt.proto, updated 4 days ago along with a number of others; it seems like it and a whole batch of related files did not exist before then.

            I am uncertain if this … basically scraping operation is tracking the main Steam client or the Beta or what.

            There is not a very helpful description of what exactly is being pulled here, in the readme/project description.

            EDIT:

            Perhaps if you are more familiar with Protobufs, you can take a gander at these and come away with a guesstimate as to what these are doing?

            • My guess is that this is part of some kind of machine learning pipeline, where users label edge cases to help train the model. Since it operates on account data, match data, and logs (see CSteamGPT_GetTask_Response), an anti-cheat use case would make sense, but it’s hard to say for sure.

              This looks like data exchanged between the Steam client and server, and doesn’t contain any logic on its own.
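              Under that guess, the exchange might carry something like the following (a purely hypothetical Python mirror of such a task/label round trip; the real fields of service_steamgpt.proto are not reproduced here):

```python
from dataclasses import dataclass, field

# Hypothetical shapes only -- stand-ins for the kind of request/response
# the guessed labeling pipeline would need, not the actual proto fields.
@dataclass
class GetTaskResponse:
    task_id: str
    account_summary: str                        # e.g. anonymized account stats
    match_data: list[str] = field(default_factory=list)
    logs: list[str] = field(default_factory=list)

@dataclass
class LabelTaskRequest:
    task_id: str
    label: str                                  # e.g. "cheating" / "legit" / "unsure"

def label_task(task: GetTaskResponse, label: str) -> LabelTaskRequest:
    """A user's label for an edge case, sent back to train the model."""
    return LabelTaskRequest(task_id=task.task_id, label=label)
```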

    • They improved their support ticket throughput by orders of magnitude by automating a lot of it already. There are lots of versions of automation, too, like collecting information about the user’s problem before you even get to a human.

      • Right, but there’s a difference between automating a refund if they can detect the purchase happened in the last two weeks and has less than two hours of playtime, versus complex support problems being handled by an LLM that can be misled or hallucinate.

        I suppose it’s fine if it’s limited to giving advice on solving the problem and has to escalate to a human if any server side action is required, but it being tied to anti-cheat has me worried that’s not the case.

    • I think it could have been an interesting use case to chat with a Steam bot to get game recommendations.

      • This is not meant to be a chatbot.

        It is meant to evaluate gaming sessions of CS2 (and/or potentially any VAC enabled game, maybe).

        It’s an experimental prototype for improving VAC’s server-side, backend analysis capabilities, to better detect cheaters and hackers.

        You don’t need kernel-level access into everyone’s PCs.

        You can run analytics on what the server records as happening in the game session, to detect odd patterns and things that should be impossible.

        LLMs are … the entire thing that they do is handle massive inputs of data, and then evaluate that data.

        The part of an LLM that generates a response, in text form, to that data, is a whole other thing.

        They can also output… code, or spreadsheets, or images, or 3d models, or… any other kind of data.

        Like say, a printout of suspicious activity in a game session, with statistically derived confidence intervals and timestamps and analysis.

        Then you have another, differently tuned LLM ingest the data the first LLM produces and turn it into something else.

        You see the ModelEvaluation and then MetaModelEvaluation?

        That looks like what they’re doing to me.

        Detailed Server Logs -> Model Evaluation -> MetaModelEvaluation.

        If you’ve ever run a dedicated multiplayer server and had to deal with hackers… you’re gonna be looking through server logs to sniff out nonsense.

        Server-side cheat detection, very oversimplified, is having automatic systems do that.
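        A toy sketch of that two-stage flow, with trivial stand-in scoring functions in place of the actual (unknown) models:

```python
# Sketch of the two-stage flow read out of the .proto names:
# server logs -> model evaluation -> meta-model evaluation.
# The scoring here is a placeholder, not a real detector.
def model_evaluation(server_log: list[str]) -> list[dict]:
    """First pass: turn raw log lines into per-event suspicion scores."""
    return [
        {"line": line, "suspicion": 0.9 if "impossible" in line else 0.1}
        for line in server_log
    ]

def meta_model_evaluation(findings: list[dict]) -> dict:
    """Second pass: ingest the first model's output, produce a verdict."""
    worst = max((f["suspicion"] for f in findings), default=0.0)
    return {"verdict": "review" if worst > 0.5 else "clean", "score": worst}

def pipeline(server_log: list[str]) -> dict:
    return meta_model_evaluation(model_evaluation(server_log))
```

        The structural point is just the chaining: the second stage never sees raw logs, only the first stage’s distilled output.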

        • Ah interesting. More along the line of those ML-based intrusion detection products.

      • Their current recommendation engine is already a marvel and the only one I’ve ever come across that actually directs me to niche stuff I might be interested in.

        • with the amount of information they collect on their customers, it better be damn good. honestly, the only reason it’s not a huge privacy problem is because they zealously guard that data to protect their near monopoly on PC gaming.

          Gabe has been pandering to gamers and mostly giving us what we want, but when he dies, we better hope the next dude in charge isn’t some corporate suit that only cares about maximizing profits in every way he can, or the enshittification of Steam is going to really fucking hurt. Imagine if Valve was run like Microsoft. For example, the next guy might cut a deal with Microsoft to stop supporting Proton.

  • All this talk about Valve being a “good” monopoly is such horse shit. Valve WILL enshittify, maybe not today, maybe not tomorrow, but it’s coming, and people are acting like it won’t happen.

    • At the same time, people drastically overestimate how big of a deal it would be…

      Someone could always just stop buying games on there.

      And if it did “go away” and people lost their games, they think of how much they’ve spent on games over decades, not how much it would cost to replace what they still play.

      Out of all the real life horrible shit going on, very few people have the longevity of Valve as a priority.

      It’s not 100% safe but at this point it’s more likely to be here in 20 years than the country it’s based in.

      But besides all that, I don’t think you know what the word “monopoly” means if you think steam is one.

      • Imagine thinking Valve isn’t an absolute monopoly in the PC video game industry.