vectorpoem
@vectorpoem

i'm realizing that a major reason lots of technically savvy people believe weird things about the future is that they have accepted an idea, over the last ~10 years of software trying to eat the world: that anything computable is inevitably, nay soon, going to become practical to compute, at scale, everywhere, for everyone.


MisutaaAsriel
@MisutaaAsriel

Safe autonomous vehicles have been done before, and they could be done again, but the [perceived] "problem" with these is that they are slower than their human counterparts at reaching their destination when contending with human-driven traffic.

This is because humans can break the rules. How many times have you found yourself going 40 in a 30 because you had to keep pace with the rest of traffic? How many times do people fail to signal when making a turn? How many people "fudge" stoplights by flooring it as the light shifts from yellow to red?

To contend with such possibilities, whilst protecting the precious lives inside its cabin, an AV has to be defensive and take no risks.

Years ago, AVs weren't being studied and built by parasitic gig-labor taxi companies or electric supercar manufacturers with a penchant for cutting corners; they were being researched by an actual tech company — a search and advertising company at that. You'd read fluff pieces on it from various outlets: how it would slow to a stop for little old ladies, a family of ducks in the road, or hell, little old ladies chasing a family of ducks in the road.

But every now and then… you'd read about an accident. About how, despite all its computational power and anti-risk behavior, the vehicle suffered a collision of some kind. And yet, practically every time it was deemed the human driver's fault — the other party in the other car, truck, or bus. Even more interestingly, all of these accidents were minor and non-fatal, meaning even in an accident these vehicles succeeded at their primary objective.

But… this came at a cost of "efficiency". Anti-risk means working off a model of false positives; any plastic bag in the wind could be a child running towards the car. Every cyclist who gets too close could be a potential collision. So the car slows down, possibly even stops, and waits for the potential threat to pass on by. And… this isn't "attractive" by advertising standards. "No one is going to buy a car that costs more and drives slower just because they're not the one to drive it."
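
To make that trade-off concrete, here's a minimal sketch of "working off a model of false positives" as a decision rule. It's illustrative Python with made-up names and numbers, not anything from a real AV stack:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str               # the perception stack's best guess
    could_be_person: float   # estimated probability it's a person or animal

def conservative_response(d: Detection) -> str:
    """Safety-first policy: treat ambiguity itself as a hazard."""
    HAZARD_THRESHOLD = 0.01  # a 1% chance it's a person is enough to yield
    if d.could_be_person >= HAZARD_THRESHOLD:
        return "slow_and_wait"  # stop if needed, let the potential threat pass
    return "proceed"

# A probable plastic bag still trips the policy, by design:
bag = Detection(label="plastic_bag", could_be_person=0.04)
print(conservative_response(bag))  # -> slow_and_wait
```

Crank that threshold up and you get the false-negative, "efficiency"-first behavior described a couple of paragraphs down: fewer phantom stops, and the occasional catastrophic miss.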

To add insult to injury, these vehicles offered no means of human intervention. It was deemed a risk: how many humans would actually be better than the machine at reducing the risk of harm and impact, versus how many would carelessly override the car and increase said risk to save a few minutes of time? And consider that, over time, overriding might become a learned behavior; a habit. Operators so used to overriding the vehicle that they'd carelessly do so in moments of real danger, incurring risk that was entirely avoidable. Like Edna Mode & capes, steering wheels and brake pedals were deemed a risk for a safe autonomous vehicle. — But this isn't an attractive sale to a society that prides itself on its ability to drive, to the point that driving a car by hand is seen as a pastime and a cultural right.

At some point, you stopped hearing about these cars. Articles about these lil commuter computers were replaced with articles about Ub*r AVs mowing down a pedestrian and T*sla AVs killing their driver under a semi whilst he watched H*rry P*tter. — These companies work off models of false negatives in order to increase "efficiency", and the lives of anything, not just humans, are seen as collateral in the way of progress. In both cases, the computers made inaccurate assumptions based on their models — a risk was assessed, but rather than take precaution, those risks were deemed trivial and operation resumed, resulting in fatal injuries to either their passenger or another person.

It is these companies whomst introduced to the autonomous driving world the thought that the computers would rapidly become "good enough". The electric sportscar company infamously refused to use a major safety device in its vehicles for autonomous driving because it was deemed ugly and unnecessary. — It was believed that AI and computational hardware would "catch up" to the point that you could do the same job with a couple of cameras in due time. Now their cars will, potentially, mow down children without hesitation.

There have been counter-demonstrations to "prove" the opposite, but if anything, these tests prove how dangerous it is to let efficiency trump safety and to put the computer on a pedestal as if it were an all-knowing being. — A car that prioritized safety 100% of the time would fully stop 100% of the time, mannequin or beating heart. A car which "lets the computer decide" will ultimately result in deaths.

The well is poisoned, and technological progress has ironically stagnated thanks to the careless behavior of the companies dominating this space.


eatthepen
@eatthepen

the way I find it most useful to think about this is that when we use 'computation' to describe what human minds do, we're using that word in a much broader, woollier sense than when we use 'computability' in reference to what digital electronic systems can do. "Human brains can 'compute' this" doesn't entail "this is 'computable' in comp sci terms". Brains are no more the 'computers' we've grown to imagine them as over the last fifty years than they were the clockwork machines they were sometimes imagined as in the nineteenth century.


in reply to @vectorpoem's post:

Insofar as driving is a computable activity, I wouldn't attribute this to the human mind as much as to cars and roads being standardized to the point that people can automatically read them. The horrifying thing is how this still leaves open the possibility of automated drivers optimizing their ability to read the road. More to the point, I don't think computation is how the human mind approaches outside reality. I'd liken it more to heuristics, i.e. using fuzzy metrics to arrive at some notion of "good enough". This is something that humans have to do, to account for scenarios that are broadly similar to previously encountered ones but whose particulars might be beyond prediction, but also something that computers simply can't do. Any attempt to make a computer rely on heuristics inevitably reduces the latter to computation, defeating the whole point.

You can easily make driving more computable by working the other end: standardizing roads and cars to the point where there's no guesswork and a very simple algorithm could accurately model what to do. But around that point you're just a titch past reinventing trains, which is what the whole exercise is meant to avoid.

The problem with this approach (be it for autonomous driving or any other kind of model that tries to emulate human behaviour solely by collecting truckloads of data) is that you cannot emulate something when you're basing your emulation on a flawed or incomplete model. This whole idea very quickly runs into epistemological problems as well, because there's no way of knowing whether your model is flawed in some way, so you will never be able to perfectly emulate something like human behaviour, because you will never be able to perfectly understand it in the first place.

I keep going back to how this obsession with data collection for the sole purpose of finding some underlying truth about Humanity at large is just the same stuff that physical Anthropology did for the better part of its existence (and to a degree still does, the last time I checked). However, if your approach to understanding humans is not guided by an underlying theory and a critical understanding that said theory might change with time, everything you do is meaningless at best and incredibly dangerous (as we're seeing right now) at worst.

I've said it before, I'll say it again: the "AI Singularity" is just the Rapture with technology window dressing. You're absolutely right to talk about building a new God to your liking, these beliefs displace religion for those who hold them, and largely take the same shape.

Hi, sorry, this is a tangent but we just want to comment on the LessWrong cult because holy shit we are reading that link and it's really fucking with us.

At one point in our life, when we were around 15, we were fascinated by this rationalist community. It appealed to our way of thinking, and it was really cool to learn about it all! Thankfully, even then, something smelled fishy to us. There was this pompous air of flawlessness and perfection that these rationalists surrounded themselves with. It was very off-putting, and raised multiple red flags. But still, for many years we held some respect for this community, and we shudder at the thought that a different Fluffies could have fallen into their grasp.

Reading about how horrible these people are, in detail, it's. Unsettling.

US TOO, down to shuddering at the alternate timeline where we got sucked in. we even went and read the entirety of "harry potter and the methods of rationality" as a young adult and at the time we thought it was the best thing we'd ever read. we've been in two different cultlike groups and we are really glad that lesswrong wasn't our third even though it nearly got us

yeah, sorry in the future i'll try to CW any more explicit mention of that stuff. it is an astonishing rabbit hole of evil. and yeah, it appeals to people on the basis of "intelligence", often before they've developed much of any critical consciousness around that concept and all its weird baggage. glad you found paths away from it.

While there have been Singularity Believers for decades, one funny thing is all of this started to reach the described fever pitch right around the time Moore's Law stopped being true. Top end computer hardware hasn't really changed much since 2015, and the rate of improvement is continuing to slow. I don't think we're gonna hit a hard plateau any time soon, but sometimes it seems like everyone buying into this fallacy must've stopped following hardware development circa 2010

sure but otoh if you know how to read so much as an HTML tag like 3/4 of adults will treat you as the priesthood caste and bearer of secret knowledge and fuck if I know how a microchip works but I'd very much like to keep the grift going

This reminds me of a much simpler example from music, and how we already "know" all reasonable tunes. A few years ago, somebody enumerated all possible tunes up to a certain length (I think 12 notes?), stuck them on a hard drive, and registered them in some way (copyright?). One consequence of this is that after that point, every musician in the world is technically a plagiarist. It was a bit of an art piece pointing out that copyright law is not built to handle this, but it also points out that music didn't just stop. Just like the driving example: there is still a huge gap between "computable" and "useful".
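
For scale, here's a rough sketch of the brute-force enumeration being described. The parameters are guesses (the post itself isn't sure of the length), but the shape of the trick is just a Cartesian product:

```python
import itertools

PITCHES = range(8)  # assume 8 pitch choices per note (my guess)
LENGTH = 12         # assume 12-note melodies (the post's guess)

# 8 ** 12 = 68,719,476,736 possible melodies: huge, but finite and enumerable.
print(f"{len(PITCHES) ** LENGTH:,} melodies")

# Generating them is the easy part; the stunt was writing them all to disk.
melodies = itertools.product(PITCHES, repeat=LENGTH)
print(next(melodies))  # (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
```

Which is sort of the point: enumerability gets you a hard drive full of integers, not music anyone wants.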

I think these fears of an AI superintelligence really overestimate the measurability of intelligence. Like, Albert Einstein was a bona fide genius, but that didn't stop him wasting the majority of his life thinking in circles on quantum mechanics.

in reply to @MisutaaAsriel's post:

Safe autonomous vehicles have been done before

sorry, what? everything in this first half of the post seems like a wildly counterfactual interpretation of early AV research and the press coverage of it. if those efforts appeared "safer" it was because their real-world miles-driven numbers were minuscule even compared to the numbers companies are putting up today.

Years ago, AVs weren't being studied and built by parasitic gig-labor taxi companies or electric supercar manufacturers with a penchant for cutting corners; they were being researched by an actual tech company

are you saying google was much more scrupulous than other "bad" tech companies who are doing AV stuff now? because lol, their efforts are not in any way more legitimate or safer than their peers; their waymo division's cars are constantly fucking up all over SF right now. there was not at any point a "good" company doing AVs "right". it's always been smoke and mirrors and regulatory evasion all the way down.

And yet, practically every time it was deemed the human driver's fault — the other party in the other car, truck, or bus.

that's because the policies around fault determination at AV companies are always designed to deflect accountability from their software. tesla's shit disengages before impact so they can say that (say the line, bart) "FSD was not active at the time of the crash". the entire industry's numbers are completely fucking cooked using tricks like this.

i apologize in advance if i'm misreading this post but i just really don't buy the assertion here that safe AV tech is already attainable, and even already happened at some point years ago, but some bad apples have spoiled it by prioritizing speed over safety. that is not at all borne out by the copious reading i have done on this subject for years now. and in fact my original post is about how tech-savvy people wrongly believe this technological challenge is much more tractable than it is in reality.

are you saying google was much more scrupulous than other "bad" tech companies who are doing AV stuff now?

At the time, yes. I noticed a drastic shift when coverage went from Google pre-Waymo to Uber & Tesla. Going from no major incidents to multiple fatalities is a major shift. But that could possibly be attributed to the company being green to the technology and warier of regulation than a car or transportation company that already deals with said regulations.

…their waymo division's cars are constantly fucking up all over SF right now.

Amusingly, an earlier edit included an anecdote about how, around the same time I recalled the coverage shifting to less safe implementations, I also recalled reading an article indicating that Waymo engineers were starting to "loosen the reins" and let the cars make more decisions rather than operating under "safety first". But I couldn't find the article, so I omitted that paragraph.

that's because the policies around fault determination at AV companies are always designed to deflect accountability from their software.

No, I mean as in "driver ran a red light" or "sideswiped by a bus driver ignoring traffic laws" kind of "other party was at fault". Incidents where there was a clear party at fault and it wasn't the AV. — And remember: the vehicles I was referring to didn't have steering wheels. Or brake pedals. Nothing but a laptop and some cables. So you couldn't argue it was the bad driving of the engineer inside.

but i just really don't buy the assertion here that safe AV tech is already attainable, and even already happened at some point years ago

It was, but also it wasn't.

You are correct: it was, in limited fashion, in limited tests, in the cities these vehicles were designed to drive in. You wouldn't have been able to pluck one of them from California, place it in NYC, and expect it to work flawlessly. They required an immense amount of intentional design around the environments they were used in.

Achieving the same results even nationally would require a great effort in maintaining highly detailed, up-to-date maps, sensor data, traffic data, weather information, driving laws, and other such data just for such a car to operate, and expert care and testing would be needed to ensure the vehicles were tuned to every environment.

What I am saying is that a moderately safe computer-driven car is possible. But the key detail is that it isn't trusting the computer to make the decision, only to carry it out. The decision was already made by an engineer: slow the car to a stop and let the threat pass by. Prioritize safety at all costs. And commit the manpower required to maintain the information these vehicles rely on so heavily for safe operation.

Rather than "compute the correct + safest set of driver inputs for a given moment of a given driving situation", as you put it, the car is designed not to compute it at all; it avoids the situation entirely, even at the cost of one's time and patience.
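
As a sketch of that philosophy (purely hypothetical, not how any real system is implemented): the decisions are authored offline by engineers, and the runtime computer is reduced to lookup and execution:

```python
# Hypothetical illustration: the "decision" is a table written by engineers
# ahead of time; the onboard computer only looks up and carries out a response.
SCRIPTED_RESPONSES = {
    "unidentified_object_ahead": "slow_to_stop_and_wait",
    "cyclist_near_lane": "slow_and_give_wide_berth",
    "occluded_crosswalk": "creep_at_walking_pace",
}

def respond(situation: str) -> str:
    # Never improvise: anything not in the table gets the one maneuver
    # that is always pre-approved, however slow and annoying it is.
    return SCRIPTED_RESPONSES.get(situation, "slow_to_stop_and_wait")

print(respond("kangaroo_on_highway"))  # -> slow_to_stop_and_wait
```

The computer never invents a maneuver; novelty collapses to the safest scripted default, trading your time for everyone's safety.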

Addendum:

that is not at all born out by the copious reading i have done on this subject for years now.

I too used to do copious reading on the subject matter, but these days I read little beyond the occasional article. From my point of view, self-driving technology is attainable, but at high cost and little reward to any company willing to make it. That would never fly in this capitalist system: the amount of work and effort involved, and the lackluster appearance of such technology, would ensure nothing more than mild success at best. At worst, it would be akin to a public service, providing technology not to be at the forefront of anything, but to ensure the safety of the world we're in.

No automotive company is going to invest in that. They want their AVs to be just as efficient as humans, and just as safe too. They won't care if their technology causes more accidents than it needs to. They won't care if they cut corners. As long as they can say it's "just as safe" as a human-driven car, they'll be happy.

Also:

and in fact my original post is about how tech-savvy people wrongly believe this technological challenge is much more tractable than it is in reality.

I wholeheartedly agree with the post as a whole, and even in autonomous vehicles this is prevalent! These companies think every problem can be "solved" by the computer and a little bit of math, rather than avoided at all costs. And in general, whilst computers are immensely more powerful than they were even 5 - 10 years ago, the amount of power they would require to broach true intelligence is beyond anything we can hope to achieve today.

I in no way meant to disparage or downplay the remarks in the original post. Only to give another person's insight into a particular aspect of it, and how it's more complicated than just "Do we or don't we have the technology".