Safe autonomous vehicles have been built before, and they could be built again, but the perceived "problem" with them is that they are slower than their human counterparts at reaching a destination when contending with human-driven traffic.
This is because humans can break the rules. How many times have you found yourself going 40 in a 30 because you had to keep pace with the rest of traffic? How often do people fail to signal when making a turn? How many people "fudge" stoplights by flooring it as the light shifts from yellow to red?
To contend with such possibilities, whilst protecting the precious lives inside its cabin, an AV has to be defensive and take no risks.
Years ago, AVs weren't being studied and built by parasitic gig-labor taxi companies or electric supercar manufacturers with a penchant for cutting corners; they were being researched by an actual tech company — a search and advertising company at that. You'd read fluff pieces about it from various outlets: how it would slow to a stop for little old ladies, a family of ducks in the road, or hell, little old ladies chasing a family of ducks in the road.
But every now and then… you'd read about an accident. About how, despite all its computational power and anti-risk behavior, the vehicle suffered a collision of some kind. And yet, practically every time, the collision was deemed the human driver's fault — the other party in the other car, truck, or bus. Even more interestingly, all of these accidents were minor and non-fatal, meaning even in an accident these vehicles succeeded at their primary objective.
But… this came at the cost of "efficiency". Anti-risk means working off a model that tolerates false positives: any plastic bag in the wind could be a child running toward the car. Any cyclist who veers too close could be a potential collision. So the car slows down, possibly even stops, and waits for the potential threat to pass on by. And… this isn't "attractive" by advertising standards. "No one is going to buy a car that costs more and drives slower just because they're not the one driving it."
To add even more insult to injury, these vehicles allowed no human intervention. It was deemed a risk: how many humans would be better than the machine at reducing the risk of harm and impact, versus how many would carelessly override the car and increase that risk just to save a few minutes of time? And consider that, over time, overriding might become a learned behavior — a habit. Operators so used to overriding the vehicle that they'd carelessly do so in moments of danger, incurring risk that was entirely avoidable. Like Edna Mode and capes, steering wheels and brake pedals were deemed a hazard to a safe autonomous vehicle. — But this isn't an attractive sale to a society that prides itself on its ability to drive, to the point that driving a car by hand is seen as a pastime and a cultural right.
At some point, you stopped hearing about these cars. Articles about these lil commuter computers were replaced with articles about Ub*r AVs mowing down a pedestrian and T*sla AVs killing their driver under a semi whilst he watched H*rry P*tter. — These companies work off models that tolerate false negatives in order to increase "efficiency", and any life, not just human, is seen as collateral in the way of progress. In both cases, the computers made inaccurate assumptions based on their models — a risk was assessed, but rather than take precaution, it was deemed trivial and operation resumed, resulting in fatal injuries to either the passenger or another person.
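The difference between the two risk models boils down to where a single decision threshold sits. Here's a toy sketch of that trade-off — all names, numbers, and scores are hypothetical illustrations, not any company's actual logic:

```python
# Toy illustration: how one threshold biases an AV toward false positives
# (unnecessary stops, costing efficiency) or false negatives (missed
# hazards, costing lives). All values are made up for demonstration.

def should_stop(hazard_probability: float, threshold: float) -> bool:
    """Stop whenever the perception model's hazard estimate exceeds the threshold."""
    return hazard_probability > threshold

# Safety-first: a low threshold, so even a plastic bag scored at 0.2
# "might be a child" and the car stops (a false positive).
SAFETY_FIRST = 0.1

# Efficiency-first: a high threshold, so a real pedestrian the model
# under-scores at 0.6 is "deemed trivial" and ignored (a false negative).
EFFICIENCY_FIRST = 0.9

plastic_bag = 0.2  # harmless object with a noisy perception score
pedestrian = 0.6   # genuine hazard the model under-scores

assert should_stop(plastic_bag, SAFETY_FIRST)         # stops for the bag
assert should_stop(pedestrian, SAFETY_FIRST)          # stops for the person
assert not should_stop(plastic_bag, EFFICIENCY_FIRST) # drives on, fine
assert not should_stop(pedestrian, EFFICIENCY_FIRST)  # drives on, fatal
```

The point of the sketch: both cars run the same imperfect perception model; only the threshold differs, and that choice alone decides who bears the cost of the model's mistakes.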
It is these companies who introduced to the autonomous-driving world the idea that computers would rapidly become "good enough". The electric sportscar company infamously refused to use a major safety device in its vehicles for autonomous driving because it was deemed ugly and unnecessary. — It was believed that AI and computational hardware would "catch up" to the point that a couple of cameras could do the same job in due time. — Now their cars may well mow down children without hesitation.
There have been counter-demonstrations meant to "prove" the opposite, but if anything, these tests prove how dangerous it is to let efficiency trump safety and to put the computer on a pedestal as if it were an all-knowing being. — A car that prioritized safety 100% of the time would fully stop 100% of the time, mannequin or beating heart. A car that "lets the computer decide" will ultimately result in deaths.
The well is poisoned, and technological progress has ironically stagnated thanks to the careless behavior of the companies dominating this space.