Self-driving cars are often marketed as safer than human drivers, but new data suggests that may not always be the case.

Citing data from the National Highway Traffic Safety Administration (NHTSA), Electrek reports that Tesla disclosed five new crashes involving its robotaxi fleet in Austin. The new data raises concerns about how safe Tesla’s systems really are compared to the average driver.

The incidents included a collision with a fixed object at 17 miles per hour, a crash with a bus while the Tesla vehicle was stopped, a crash with a truck at four miles per hour, and two cases where Tesla vehicles backed into fixed objects at low speeds.

  • halcyoncmdr@piefed.social · 22 hours ago

    Alright, so the radar is detecting a large object in front of the vehicle while travelling at highway speeds. The vision system can see the road is clear.

    So with your assumption that the car should listen to whichever sensor says there’s an issue, it slams on the brakes to stop the car. But the return is actually from an overpass, or an overhead sign the radar is reflecting off of, while the road is clear. Now you have phantom braking.

    Now extend that to a sensor or connection failure. The radar or a wiring harness is failing and sporadically reporting back close contacts that don’t exist. More phantom braking, and this time with no obvious cause.
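    To make that concrete, here’s a rough Python sketch (my own toy example, not anything from Tesla’s actual stack; all names and numbers are made up) of a naive “brake if any sensor reports an obstacle” policy, and how a radar return bouncing off an overpass produces phantom braking even though the camera sees a clear road:

    ```python
    # Toy illustration only: a naive "any sensor wins" fusion policy.
    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        source: str           # e.g. "radar" or "camera"
        obstacle_ahead: bool  # does this sensor think something is in the lane?
        range_m: float        # distance to the detected return, in meters

    def should_brake(readings: list[SensorReading], brake_range_m: float = 60.0) -> bool:
        """Brake if ANY sensor reports an obstacle inside the braking range."""
        return any(r.obstacle_ahead and r.range_m <= brake_range_m for r in readings)

    # Overpass scenario: radar gets a strong return from the overhead structure,
    # while the camera sees a clear road ahead.
    readings = [
        SensorReading("radar", obstacle_ahead=True, range_m=45.0),
        SensorReading("camera", obstacle_ahead=False, range_m=float("inf")),
    ]
    print(should_brake(readings))  # True -> the car brakes for nothing (phantom braking)
    ```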

    • merc@sh.itjust.works · 21 hours ago

      > Now you have phantom braking.

      Phantom braking is better than Wile E. Coyote-ing into a wall.

      > and this time with no obvious cause.

      Again, better than not braking because another sensor says there’s nothing ahead. I would hope flaky sensors are something that would cause the vehicle to show a “needs service” light or something. But even without that, if your car is doing phantom braking, I’d hope you’d take it in.

      But, consider your scenario without radar and with only a camera sensor. The vision system “can see the road is clear”, and there’s no radar sensor to tell it otherwise. Turns out the vision system is buggy, or the lens is broken, or the camera got knocked out of alignment, or whatever. Now it’s claiming the road ahead is clear when in fact there’s a train currently in the crossing directly ahead. Boom, now you hit the train. I’d much prefer phantom braking and having multiple sensors each trying to detect dangers ahead.
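      To sketch what I mean (toy Python, my own illustration, not any vendor’s real code; the class name and thresholds are made up): keep the redundant sensors and keep erring on the side of braking, but track how often they contradict each other so a flaky sensor trips a “needs service” flag instead of being silently ignored:

      ```python
      # Toy illustration: redundant sensors, brake on any positive detection,
      # and flag persistent disagreement as a maintenance problem.
      from collections import deque

      class RedundantObstacleCheck:
          def __init__(self, window: int = 200, max_disagree_rate: float = 0.2):
              self.history = deque(maxlen=window)  # recent agree/disagree outcomes
              self.max_disagree_rate = max_disagree_rate

          def update(self, radar_sees_obstacle: bool, camera_sees_obstacle: bool):
              self.history.append(radar_sees_obstacle != camera_sees_obstacle)
              # Safety-first: brake if either sensor reports an obstacle.
              brake = radar_sees_obstacle or camera_sees_obstacle
              # Maintenance signal: sensors that keep contradicting each other are suspect.
              disagree_rate = sum(self.history) / len(self.history)
              needs_service = len(self.history) >= 50 and disagree_rate > self.max_disagree_rate
              return brake, needs_service

      checker = RedundantObstacleCheck()
      brake, needs_service = checker.update(radar_sees_obstacle=True, camera_sees_obstacle=False)
      print(brake, needs_service)  # True False -> brakes now; too little data to flag service yet
      ```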

      • NotMyOldRedditName@lemmy.world · edited · 20 hours ago

        FYI, the fake wall test was not reproducible on the latest hardware. That test was done on an older HW3 car, not the HW4 cars operating as robotaxis.

        The new hardware existed at the time, but he chose to use outdated software and hardware for the test.

          • NotMyOldRedditName@lemmy.world · edited · 18 hours ago

            With the consumer product, you are responsible: you are supposed to be paying attention at all times and be ready to take over.

            It is completely acceptable that it does not function perfectly in every scenario and that something like a fake wall put on the road causes issues; that is why you need to pay attention.

            There is nothing to recall about this situation.

            If the car is failing on things it shouldn’t be, like both Tesla and Waymo failing to properly stop for school buses while in autonomous mode, that does require an update. Although I’ve seen zero reports of an autonomous Tesla doing this yet, only supervised ones.

            A Tesla not stopping for a school bus in supervised mode is acceptable, though, because the driver is responsible for stopping.

            Edit: and note, a problem like the school buses is a visual processing/understanding problem. Lidar won’t help with that kind of problem.

            Edit: and sorry, to be clear, it is hardware still on the road, but I’m saying it’s acceptable that that hardware does it because it’s not autonomous. If the newer hardware running without supervision were doing it, that’s another story.

              • NotMyOldRedditName@lemmy.world · 17 hours ago

                Ya, hardware that is on the road that won’t ever be autonomous without upgraded hardware and software, because it’s insufficient for autonomy, but this has been shown not to be a problem on the latest autonomous versions.