Self-driving cars are often marketed as safer than human drivers, but new data suggests that may not always be the case.

Citing data from the National Highway Traffic Safety Administration (NHTSA), Electrek reports that Tesla disclosed five new crashes involving its robotaxi fleet in Austin. The new data raises concerns about how safe Tesla’s systems really are compared to the average driver.

The incidents included a collision with a fixed object at 17 miles per hour, a crash with a bus while the Tesla vehicle was stopped, a crash with a truck at four miles per hour, and two cases where Tesla vehicles backed into fixed objects at low speeds.

  • HarneyToker@lemmy.world · 7 hours ago

    Got this saved for the next time someone tells me that a robot can drive better than a human. They almost had me there, but data doesn’t lie.

    • greygore@lemmy.world · 32 minutes ago

      This is more specific to Tesla than self driving in general, as Musk decided that additional sensors (like LiDAR and RADAR on other self driving vehicles) are a problem. Publicly he’s said that it’s because of sensor contention - that if the RADAR and cameras disagree, then the car gets confused.

      Of course that raises the problem that when the camera or image recognition is wrong, there’s nothing to tell the car otherwise, like the number of Tesla drivers decapitated by trailers that the car didn’t see. Additionally, I assume Teslas have accelerometers so either the self driving model is ignoring potential collisions or it’s still doing sensor fusion.

      Not to mention we humans have multiple senses that we use when driving; this is one reason why steering wheels still mostly use mechanical linkages - we can “feel” the road, we can detect when the wheels lose traction, we can feel inertia as we go around a corner too fast. On a related tangent, the Tesla Cybertruck uses steer-by-wire instead of a mechanical linkage.

      This is why many (including myself) believe Tesla has a much worse safety record than Waymo. I’ve seen enough drunk and distracted drivers to doubt that humans will always drive better than a robot. Don’t get me wrong, I still have concerns about the technology, but Musk and Tesla have a history of ignoring safety concerns - see the number of deaths related to his insistence on electronic door handles with hidden mechanical backups.

    • Buddahriffic@lemmy.world · 6 hours ago

      A robot can theoretically drive better than a human because emotions and boredom don’t have to be involved. But we aren’t there yet, and Tesla is trying to solve the hard mode: pure vision, without range finding.

      Also, I suspect the ones we have are set up purely as neural networks, where everything is determined by the training. That likely means there’s some random-ass behaviour in rare edge cases, where the car “thinks” slamming on the accelerator is as good an option as anything else - and since it’s a black box no one really understands, there’s no way to tell until someone ends up in that position.

      The tech still belongs in universities, not on public roads as a commercial product/service. Certainly not by the type of people who would at any point say, “fuck it, good enough, ship it like that”, which seems to be most of the tech industry these days.

    • w3dd1e@lemmy.zip · 7 hours ago

      Other robots might be able to, but I wouldn’t trust a Tesla RoboTaxi to get me safely across a single street.

    • chiliedogg@lemmy.world · 7 hours ago

      It’s Austin. The traffic is so shitty you can’t go fast enough to get in a wreck most of the time.

      I live in the area, and can confirm anecdotally that the Teslas are bad drivers and the Waymos generally are excellent.

  • Bazoogle@lemmy.world · 9 hours ago

    a crash with a bus while the Tesla vehicle was stopped

    Okay, idk why we would blame this one on the self driving car…

    a collision with a heavy truck at 4 mph, and two separate incidents where the Tesla backed into objects, one into a pole or tree at 1 mph and another into a fixed object at 2 mph.

    original source

    The difference is that a lot of these never get reported when it’s a human driver. I very much doubt the rate is really 4x higher than human drivers’. I’m not saying the self driving cars are good; I’m just saying human drivers are really bad.

      • Honytawk@feddit.nl · 6 hours ago

        What does that spy bloke with the crooked teeth have to do with it?

        Anyway, 4 mph is the maximum speed in central Rotterdam traffic.

  • FreddiesLantern@leminal.space · 11 hours ago

    Rogan: so eaaauhm, yeah that’s definitely a thing huh? But you know all progress must go uphill without breaking a few eggs…right?

    Musk: makes that stupid nazi face where he’s smoking weed So we’re going to make Grok a subscription model that watches you sleep in your car as we plug you into the bio battery of your Tesla. Then your mind gets used to train AI models as you’re driving. But you know, I’m expecting that to work last month, give or take a year or 10.

    Rogan: Pluggin in huh? How’s that work?

    Musk: Either a port in the back of your arm or an arm up your back, not sure yet.

    Rogan: Wow, … so anyway wanna do some dmt?

    We can plug it in if you want.

  • Paranoidfactoid@lemmy.world · 22 hours ago

    Clearly, AI isn’t just challenging human performance, it’s exceeding it. Four times the crash rate is just the beginning. Just imagine the crash rate when super intelligence comes!

    🚘💥🚗

    • UnderpantsWeevil@lemmy.world · 1 day ago

      The AI companies put out a presser a few years back that said “Um, aktuly, its the humans who are bad drivers” and everyone ate that shit up with a spoon.

      So now you’ve got Waymos blowing through red lights and getting stuck on train tracks, because “fuck you fuck you stop fighting the innovation we’re creatively disruptive we do what we want”.

      • village604@adultswim.fan · 4 hours ago

        That doesn’t mean that Waymo is more error-prone than human drivers.

        Humans are awful at driving and do stuff like stop on train tracks and blow through red lights all the time.

        Edit: I’m still waiting on someone to prove me wrong with actual data, because this article is about Tesla, not Waymo.

        • nomy@lemmy.zip · 10 hours ago

          The article is about Robotaxis crashing 4x as much as human driven cars.

          • [object Object]@lemmy.world · 2 hours ago

            Meanwhile, Waymo has logged over 127 million fully driverless miles, with no safety driver, no monitor, no chase car, and independent research shows Waymo reduces injury-causing crashes by 80% and serious-injury crashes by 91% compared to human drivers. Waymo reports 51 incidents in Austin alone in this same NHTSA database, but its fleet has driven orders of magnitude more miles in the city than Tesla’s supervised “robotaxis.”

            Point me to where Waymo taxis are just as bad as Tesla.

  • dogslayeggs@lemmy.world · 1 day ago

    It’s important to draw a distinction between what Tesla is trying to do and what Waymo is actually doing. Tesla’s rate is 4x higher than human drivers’; Waymo’s is lower.

    • merc@sh.itjust.works · 24 hours ago

      Not just lower, a tiny fraction of the human rate of accidents:

      https://waymo.com/safety/impact/

      Also, AFAIK this includes cases when the Waymo car isn’t even slightly at fault. Like, there have been 2 deaths involving a Waymo car. In one case a motorcyclist hit the car from behind, flipped over it, then was hit by another car and killed. In the other case, ironically, the real car at fault was a Tesla being driven by a human who claims he experienced “sudden unintended acceleration”. It was driving at 98 miles per hour in downtown SF and hit a bunch of stopped cars at a red light, then spun into oncoming traffic and killed a man and his dog who were in another car.

      Whether or not self-driving cars are a good thing is up for debate. But, it must suck to work at Waymo and to be making safety a major focus, only to have Tesla ruin the market by making people associate self-driving cars with major safety issues.

        • merc@sh.itjust.works · 22 hours ago

          Well, Waymo’s really at 0 deaths per 127 million miles.

          The 2 deaths are deaths that happened near Waymo cars, in collisions involving the Waymo car. Not only did the Waymo not cause the accidents, it wasn’t even involved in the fatal part of either event. In one case a motorcyclist was hit by another car, and in the other a Tesla crashed into a second car after it had hit the Waymo (and a bunch of other cars).

          The IIHS number takes the total number of deaths in a year and divides it by the total distance driven in that year. It includes all vehicles and all deaths. If you wanted the denominator to be “total distance driven by brand X in the year”, you wouldn’t keep the numerator as “all deaths”, because that wouldn’t make sense, and “all deaths that happened in a collision where brand X was involved” would be of limited usefulness. If you’re after the safety of the passenger compartment, you’d want “all deaths for occupants / drivers of a brand X vehicle”; if you’re after the safety of the car to all road users, you’d want something like “all deaths where the driver of a brand X vehicle was determined to be at fault”. The sketch below shows how much the numerator choice alone moves the numbers.

          The IIHS does have statistics for driver death rates by make and model, but they use “per million registered vehicle years”, so you can’t directly compare with Waymo:

          https://www.iihs.org/ratings/driver-death-rates-by-make-and-model

          Also, in a Waymo it would never be the driver who died; it would be other occupants of the vehicle, so I don’t know if that data is tracked for other vehicle models.
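
          A minimal sketch of how those numerator choices move the numbers, using the 127-million-mile figure from this thread. The human benchmark of roughly 1.3 deaths per 100 million vehicle miles is an assumed figure based on recent US averages, and the names here are just for illustration:

          WAYMO_MILES = 127_000_000  # fully driverless miles, per this thread

          def deaths_per_100m_miles(deaths: int, miles: float) -> float:
              # Normalise a fatality count to deaths per 100 million miles.
              return deaths / miles * 100_000_000

          # Numerator: every death that occurred near a Waymo collision.
          print(deaths_per_100m_miles(2, WAYMO_MILES))  # ~1.57

          # Numerator: deaths the Waymo actually caused.
          print(deaths_per_100m_miles(0, WAYMO_MILES))  # 0.0

          # Assumed human benchmark, deaths per 100M vehicle miles (approx. US average):
          HUMAN_BENCHMARK = 1.3

          With events this rare, the numerator choice alone flips the comparison from “worse than the human average” to “perfect record”, which is why the bookkeeping above matters so much.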

          • hector@lemmy.today · 13 hours ago

            I seem to recall a homeless woman who got killed pretty much right away when they released these monstrosities on the road - because why pay people to do jobs when machines can do them for you? I’m sure that will work out great for everyone with investment income.

            • JcbAzPx@lemmy.world · 6 hours ago

              That was Uber’s attempt at self driving, after they had to give up the stolen Waymo data. Waymo is probably the best at self driving, but even they spend too much time blocking traffic when they can’t reach the Indian call center to fix the situations they’ve gotten themselves into.

        • 73ms@sopuli.xyz · 23 hours ago

          When there are two deaths total, it’s pretty obvious there just isn’t enough data yet to assess the fatal accident rate. Also, FWIW, as was said, neither of those was in any way the Waymo’s fault.

          • hector@lemmy.today · 13 hours ago

            That’s the problem: you can’t trust these companies not to use corrupt influence to blame others for their mistakes. It’s you versus billion-dollar companies with everything at stake, companies that own (or at least hold senior leasing rights to) your politicians - locally, at the state level, and federally - and by extension the regulators up and down the line.

            Do you not know how things work in this country? Given their outsized power, we don’t want them involved in determining blame for accidents, dash cam footage or no. We’ve seen that irrefutable evidence is no guarantee of justice, even when it’s handed to you.

            • 73ms@sopuli.xyz · 12 hours ago

              Well, Waymo isn’t assigning blame; it’s a third-party assessment based on the information released about those accidents. The strongest point remains that fatal accidents are rare enough that there simply isn’t enough data to claim any statistical significance for these events. The overall accident rate, for which data is sufficient, remains significantly lower than the US average.

              • hector@lemmy.today · 12 hours ago

                They have influence with the police, regulators, and insurance companies to avoid blame.

                They are on limited routes, at lower speeds, so they won’t have a higher fatality rate. If you compared human drivers on that same stretch of road, it would also be zero. You can’t compare human drivers on expressways during rush hour with Waymo’s trip between the airport and the hotels on a mapped-out route that avoids the expressway.

                • 73ms@sopuli.xyz · 8 hours ago

                  It is obviously false that fatal accidents would be “zero” on the roads Waymos are limited to; it’s ridiculous to even suggest such a thing. What is true is that such accidents are even rarer there. That’s another good reason why it makes no sense to focus solely on fatal accidents: the operating limits make Waymos unlikely to be involved in them anyway. That’s on top of the fact that statistical analysis is simply impossible with current vehicle miles.

                  Now, I’m not saying we know for certain that Waymo is much safer than a human, as the current statistics imply; that is going to require more rigorous studies (a rough sketch of the sample-size problem follows below). I would say what we’ve got is good enough to say that nothing points to them being particularly unsafe.
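
                  To put a rough number on the sample-size point, here is a sketch of an exact Poisson confidence interval around the fatality count, using the 127-million-mile figure from upthread. The scipy call is a standard library function; charging both deaths against Waymo and the roughly-1.3-per-100M-miles human average are assumptions for illustration only:

                  from scipy.stats import chi2

                  def poisson_rate_ci(events: int, exposure: float, conf: float = 0.95):
                      # Exact (Garwood) confidence interval for a Poisson event rate.
                      alpha = 1 - conf
                      lower = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / 2
                      upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
                      return lower / exposure, upper / exposure

                  exposure = 127e6 / 1e8  # 127M miles, in units of 100M miles
                  print(poisson_rate_ci(2, exposure))
                  # ~(0.19, 5.69) deaths per 100M miles: the interval easily spans
                  # a ~1.3 human average, so two events support no conclusion.

                  Even counting both deaths against Waymo, the data is consistent with being several times safer or several times worse than the human average, which is the sample-size problem in a nutshell.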

      • TooManyGames@sopuli.xyz · 24 hours ago

        I immediately formed a conspiracy theory that Teslas automatically accelerate when they see Waymo cars

        • merc@sh.itjust.works · 23 hours ago

          And it’s not out of aggression. It’s just that their image recognition algorithms are so terrible that they match the Waymo car with all its sensors to a time-traveling DeLorean and try to hit 88 mph… or something.

    • ThirdConsul@lemmy.zip · 23 hours ago

      Isn’t Waymo’s rate better because they are very particular about where they operate? When they’re asked to operate in slightly less-than-perfect conditions, it immediately goes downhill: https://www.researchgate.net/publication/385936888_Identifying_Research_Gaps_through_Self-Driving_Car_Data_Analysis (page 7, Uncertainty)

      Edit: googled it a bit, and apparently Waymo mostly drives in

      Waymo vehicles primarily drive on urban streets with a speed limit of 35 miles per hour or less

      Teslas do not.

      • 73ms@sopuli.xyz · 22 hours ago

        We are talking about Tesla robotaxis. They certainly drive in very limited geofenced areas too. While Waymo now goes on freeways (only in the Bay Area, and with the option offered to only some passengers), Tesla robotaxis currently don’t go on any freeways at all. In fact, Tesla only has a handful of cars doing any unsupervised driving, and those are geofenced in Austin to a small area around a single stretch of road.

        Tesla robotaxis also currently cease operations in Austin when it rains, so Waymo is definitely the more flexible one when it comes to less-than-perfect conditions.

      • dogslayeggs@lemmy.world · 23 hours ago

        That is certainly true, but they are also better than humans in those specific areas. Tesla is (shockingly) stupid about where they choose to operate. Waymo understands its limitations and chooses to operate only where it can be better than humans. They are increasing their range, though, including driving on the 405 freeway in Los Angeles… which is usually moving at less than 35 mph anyway!!

        • jabjoe@feddit.uk · 23 hours ago

          Are they doing FSD if there are human overseers? Surely that is not “fully”.

          So: human overseers, and not only cameras.

          • 73ms@sopuli.xyz · 22 hours ago

            All these services have the ability for a human to step in if the FSD disengages. That doesn’t mean they’re not driving on their own most of the time, including full journeys. The remote assistance team is just ready to jump in if something unusual causes the Waymo driver to disengage, and even then they don’t usually control the car directly; they give the driver instructions on how to resolve the situation.

            • jabjoe@feddit.uk · 13 hours ago

              I think Waymo are right to do what they do. I just wouldn’t call it “fully”. If Tesla are doing the same and still doing badly, or should be doing the same and aren’t, it still makes them worse than Waymo either way.

          • hector@lemmy.today · 9 hours ago

            I think it needs to be acknowledged: Musk is mentally challenged, mentally ill, call it what you will - he has a diagnosable condition, and as such has been tooled by Peter Thiel. Perhaps blackmailed as well, alongside the carrots offered for going along. His faults aren’t solely because of the mental illness, perhaps, but it’s a factor in why he does what he does and how he acts, made worse by his drug use.

            If he were an ordinary person he would be recognized as such, if not committed involuntarily at some point, but being filthy rich he’s just “eccentric” as far as the system is concerned. Just a fat, out-of-shape slob throwing sieg heils, representing Nazis who championed the idea of killing undesirables, including fat, out-of-shape slobs. The Nazis also killed mentally ill people, just as a matter of course.

            Point being, Musk would be eliminated by the very Nazis he wants to resurrect and put in absolute power, the ones he was compelled to represent. He obviously didn’t appreciate whoever gave him that black eye, and I would guess it was Thiel or Yarvin or one of them - the actual brains behind DOGE who are using him, who groomed those kids from sex- and drug-fueled parties to secretly export all federal agency data to their private servers, under the guise of cutting costs.

            • ToTheGraveMyLove@sh.itjust.works · 6 hours ago

              Fuck off, a lot of people are mentally ill and aren’t psychopathic Nazi pedophiles because of it. He’s a shit person, and his mental illness has nothing to do with it.

    • No1@aussie.zone · 1 day ago

      Bro, anybody who has watched a Predator movie knows this is fact.

      Just how much K do you need to take to argue this?

    • slevinkelevra@sh.itjust.works · 2 days ago

      Yeah that’s well known by now. However, safety through additional radar sensors costs money and they can’t have that.

      • tomalley8342@lemmy.world · 2 days ago

        Nah, that one’s on Elon just being a stubborn bitch and thinking he knows better than everybody else (as usual).

        • ageedizzle@piefed.ca · 2 days ago

          He’s right in that if current AI models were genuinely intelligent in the way humans are, then cameras would be enough to achieve at least human-level driving skills. The problem, of course, is that AI models are not nearly at that level yet.

          • CheeseNoodle@lemmy.world · 1 day ago

            Also, the human brain is still on par with some of the world’s best supercomputers; I doubt a Tesla has that much onboard processing power.

            • ageedizzle@piefed.ca · 22 hours ago

              Good point. Though I’ve heard some of these self driving cars connect remotely to a person to help drive when the AI doesn’t know what to do, so I guess it’s conceivable that the car could offload to the cloud. That would be super error-prone, though. Connectivity issues could brick your car.

          • T156@lemmy.world · 2 days ago

            Even if they were, would it not be better to give the car better senses?

            Humans don’t have LIDAR because we can’t just hook something into a human’s brain and have it work. If you can do that with a self-driving car, why cut it down to human senses?

          • kameecoding@lemmy.world · 1 day ago

            I am a human, and there have been occasions where I couldn’t tell whether something was an obstacle on the road or a weird shadow…

            • merc@sh.itjust.works · 1 day ago

              And, we humans have built-in binocular vision that we’ve been training for at least 1.5 decades by the time we’re allowed to drive.

              Also, think about what we do in that situation where there’s a weird shadow. Slow down, sure. But we also move our heads up and down, side to side, trying to use that powerful binocular vision to get different angles on that strange shadow. How many front-facing cameras does Tesla have? Maybe 3, and one of those is mounted on the bumper. In theory, 3 cameras could give it 3 different “viewpoints” for binocular vision. But that’s not as good as a human driver who can shift their eyes around to multiple points to examine a situation. And if one of those 3 cameras is obscured (say, the one on the bumper), you’re down to basic binocular vision without even the ability to take a look from a different angle.

              Plus, we have evidence that Tesla isn’t even able to use its cameras to achieve binocular vision. If it worked, it shouldn’t have fallen for the Wile E. Coyote trick.
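
              A back-of-the-envelope sketch of why depth from cameras alone gets shaky at range: classic pinhole stereo gives depth = focal length × baseline / disparity, so a one-pixel disparity error that is negligible up close swings the estimate by tens of metres at distance. The focal length and baseline here are made-up values, not Tesla’s actual camera geometry:

              def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
                  # Pinhole stereo: depth Z = f * B / d.
                  return focal_px * baseline_m / disparity_px

              f, B = 1000.0, 0.15  # hypothetical: 1000 px focal length, 15 cm baseline

              # One pixel of disparity error barely matters up close...
              print(stereo_depth(f, B, 30.0), stereo_depth(f, B, 29.0))  # ~5.0 m vs ~5.17 m
              # ...but at range the same one-pixel error is a ~25 m swing.
              print(stereo_depth(f, B, 3.0), stereo_depth(f, B, 2.0))    # 50.0 m vs 75.0 m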

            • ageedizzle@piefed.ca · 1 day ago

              Yes. In theory cameras should be enough to get you up to human level driving competence but even that is a low bar.

              • NιƙƙιDιɱҽʂ@lemmy.world · 21 hours ago

                I feel like camera-only could theoretically surpass human performance, but that hinges entirely on AI models that do not currently exist, and on those models, once they do exist, being capable of running inside of a damn car.

                At that point, it’d be cheaper to just add LiDAR…

              • ageedizzle@piefed.ca · 22 hours ago

                I don’t know the answer to this, but just looking at them, they don’t look binocular. Even if they’re not binocular, though, they still have a 360-degree visual range and no blind spots.

        • 73ms@sopuli.xyz · 1 day ago

          Well, I mean, if you believe it can be done safely, it’s the one thing Tesla’s got going for it compared to Waymo, which is way ahead of them. Personally I don’t, but I can see the sunk cost.

      • paraphrand@lemmy.world · 2 days ago

        just one more AI model, please, that’ll do it, just one more, just you wait, have you seen how fast things are improving? Just one more. C’mon, just one more…

      • parzival@lemmy.org · 2 days ago

        I’m not too sure it’s about cost; it seems to be about Elon not wanting to admit he was wrong, as he made a big point of lidar being useless.

      • halcyoncmdr@piefed.social · 2 days ago

        I don’t think it’s necessarily about cost. They were removing sensors both before costs rose and before supply became more limited with things like the tariffs.

        Too many sensors also cause issues; adding more is not an easy fix. Sensor fusion is a notoriously difficult part of robotics. It can help with edge cases and verification, but it can also exacerbate issues. Sensors will report different things at some point. Which one gets priority? Is a sensor failing or reporting inaccurate data? How do you determine what is inaccurate if the data is still within normal tolerances? (The sketch below illustrates the trade-off.)

        More on topic, though… my question is: why is the robotaxi accident rate different from the regular FSD rate? Ostensibly they should be nearly identical.
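
        Back to the sensor-disagreement point above, here is a toy illustration - not any real vehicle’s logic. One naive conservative policy is to honour whichever sensor reports the nearest obstacle: it never ignores a real hazard, but it brakes for every spurious contact too.

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class Reading:
            sensor: str
            obstacle_m: Optional[float]  # distance to nearest obstacle; None if clear

        def conservative_fusion(readings: List[Reading]) -> Optional[float]:
            # Trust the most pessimistic sensor: react to the nearest reported obstacle.
            distances = [r.obstacle_m for r in readings if r.obstacle_m is not None]
            return min(distances, default=None)

        # Camera says clear; radar reports a contact at 40 m (real or spurious).
        print(conservative_fusion([Reading("camera", None), Reading("radar", 40.0)]))  # 40.0

        The opposite policy (trust the camera, drop the radar) avoids phantom braking but inherits every camera failure; there is no free lunch, which is the fusion problem in a nutshell.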

          • halcyoncmdr@piefed.social · 21 hours ago

            Alright, so the radar is detecting a large object in front of the vehicle while travelling at highway speeds. The vision system can see the road is clear.

            So, with your assumption of listening to whichever sensor says there’s an issue, it slams on the brakes to stop the car. But it’s actually an overpass, or an overhead sign the radar is reflecting back from, while the road is clear. Now you have phantom braking.

            Now extend that to a sensor or connection failure. The radar or a wiring harness is failing and sporadically reporting close contacts that don’t exist. More phantom braking, and this time with no obvious cause.

            • merc@sh.itjust.works · 20 hours ago

              Now you have phantom braking.

              Phantom braking is better than Wile E. Coyote-ing into a wall.

              and this time with no obvious cause.

              Again, better than not braking because another sensor says there’s nothing ahead. I would hope flaky sensors are something that would trigger a “needs service” light or the like. But even without that, if your car keeps phantom braking, I’d hope you’d take it in.

              But consider your scenario without radar and with only a camera sensor. The vision system “can see the road is clear”, and there’s no radar to tell it otherwise. Turns out the vision system is buggy, or the lens is broken, or the camera got knocked out of alignment, or whatever. Now it’s claiming the road ahead is clear when in fact there’s a train in the crossing directly ahead. Boom, now you hit the train. I’d much prefer phantom braking and having multiple sensors each trying to detect dangers ahead.

              • NotMyOldRedditName@lemmy.world · 18 hours ago

                FYI, the fake wall test was not reproducible on the latest hardware; that test was done on an older HW3 car, not the HW4 cars operating as robotaxis.

                The new hardware existed at the time, but he chose to use outdated software and hardware for the test.

        • NotMyOldRedditName@lemmy.world · 2 days ago

          Regular FSD has the driver (you) monitoring the car, so there will be fewer accidents IF you properly stay attentive, as you’re supposed to.

          Robotaxi rides with a safety monitor in the passenger seat had a button to stop the ride.

          The driverless, no-monitor cars have nothing.

          So you get more accidents as you remove that supervision.

          Edit: this would be on the same software version… it will obviously get better to some extent, so comparing old versions to new versions really only tells us it’s getting better or worse relative to past rates, but across all 3 scenarios there should still be different accident rates on the same software.

          • 73ms@sopuli.xyz · 2 days ago

            The unsupervised cars are very unlikely to be involved in these crashes yet, because according to the Robotaxi tracker there was only a single one of those operational, and only for the final week of January.

            As you suggest, there’s a difference in how much the monitor can really do about FSD misbehaving compared to a driver in the driver’s seat, though. On the other hand, they’re still forced to have the monitor behind the wheel in California, so you wouldn’t expect a difference in accident rate based on that there; it would be interesting to compare.

            • NotMyOldRedditName@lemmy.world · 1 day ago

              There are multiple unsupervised cars around now. It was only the one before the earnings call (and that one went away); then, a few days after earnings, they came back and weren’t followed by chase cars. There are a handful of videos over many days out there now if you want to watch any. The latest gaffe video I’ve seen is from last week, where one drove into a road-closed construction zone that wasn’t blocked off.

              I would still expect a difference between California and people like you and me using it.

              My understanding is that in California they’ve been told not to intervene unless necessary, but when someone like us is behind the steering wheel, what we consider necessary is going to be different from what they’ve been told to consider necessary.

              So we would likely intervene much sooner than the safety driver in California, which means we let the car get into fewer situations we perceive to be dicey.

              • 73ms@sopuli.xyz · 1 day ago

                Yeah, I’ve seen that video, and another where they went back and forth for an hour in a single unsupervised Tesla. One thing to note is that they are all geofenced to a single, extremely limited route that spans about a 20-minute drive along Riverside Dr and S Lamar Blvd, with the ability to drive on short sections of some of the crossing streets there; that’s it.

      • ramenshaman@lemmy.world · 1 day ago

        The best time to add lidar would have been years ago; the second best time is right now. I don’t think he would have to update the old cars - it could just be part of the hardware V5 package. He’s obviously comfortable with having customers beta-test production vehicles, so he can start building out a lidar suite now, or he can continue failing to make reliable self-driving cars.

        • matlag@sh.itjust.works · 1 day ago

          Agreed, but since he has stated multiple times that all cars built in the last xxx years were hardware-capable of L5 self-driving “next year” (no need to specify the year; the statement is repeated every year), adding LIDAR now would open the way to a major class action. So he has painted himself into a corner, and like all gigantic-ego idiots, he doubles down every time he’s asked.