vectorpoem
@vectorpoem

i'm realizing that a major reason lots of technically savvy people believe weird things about the future is that they have accepted an idea, over the last ~10 years of software trying to eat the world: that anything computable is inevitably, nay soon, going to become practical to compute, at scale, everywhere, for everyone.


bruno
@bruno

Part of this is that we now live in a world where there are large numbers of software people with no hardware knowledge, and they greatly outnumber hardware people.

And even the software people are often what I'd call API-pushers – people who fancy themselves Programmers but who really only know how to hook preexisting systems together to make things go. Their contact with computers is through many layers of abstraction, and they believe largely in making the machine do things through the manipulation of symbols.

So if you point out to them that there are physical limits to transistor density – that you eventually start running up against hard rules about how matter behaves as you try to slap more transistors on a wafer – their brains shut down. Moore's Law failed well over a decade ago, but these people still believe it.
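To put some rough numbers on "hard rules about how matter behaves", here's a back-of-the-envelope sketch in Python. The figures are made up but order-of-magnitude-ish assumptions, not real process data: a classic 2-year density-doubling cadence, ~5 nm features as a starting point, and silicon atoms a couple tenths of a nanometre across.

```python
# Rough back-of-the-envelope: how long can feature sizes keep shrinking
# before they collide with the size of a silicon atom?
# All numbers below are illustrative assumptions, not measured process data.

feature_nm = 5.0          # assume ~5 nm-class features as a starting point
silicon_atom_nm = 0.2     # silicon atomic spacing is on the order of tenths of a nm
years_per_doubling = 2    # classic Moore's Law cadence: density doubles every ~2 years

year = 0
while feature_nm > silicon_atom_nm:
    # doubling transistor density means linear feature size shrinks by ~sqrt(2)
    feature_nm /= 2 ** 0.5
    year += years_per_doubling
    print(f"+{year:2d} years: ~{feature_nm:.2f} nm features")

print("at this point a 'transistor' feature is about the size of a silicon atom")
```

A couple of decades of "just keep doubling" and the transistor is supposed to be smaller than the atoms it's made of.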



in reply to @vectorpoem's post:

Insofar as driving is a computable activity, I wouldn't attribute this to the human mind as much as to cars and roads being standardized to the point that people can automatically read them. The horrifying thing is how this still leaves open the possibility of automated drivers optimizing their ability to read the road. More to the point, I don't think computation is how the human mind approaches outside reality. I'd liken it more to heuristics, i.e. using fuzzy metrics to arrive at some notion of "good enough." This is something that humans have to do, to account for scenarios that are broadly similar to previously encountered ones but whose particulars might be beyond prediction, but also something that computers simply can't do. Any attempt to make a computer rely on heuristics inevitably reduces the latter to computation, defeating the whole point.

You can easily make driving more computable by working the other end - standardizing roads and cars to the point where there's no guesswork and a very simple algorithm could accurately model what to do. But around that point you're just a titch past reinventing trains, which is what the whole exercise is meant to avoid.

The problem with this approach (be it for autonomous driving or any other kind of model that tries to emulate human behaviour solely by collecting truckloads of data) is that you cannot emulate something when you're basing your emulation on a flawed or incomplete model. This whole idea very quickly runs into epistemological problems as well, because there's no way of knowing whether your model is flawed in some way, so you will never be able to perfectly emulate something like human behaviour, because you will never be able to perfectly understand it in the first place.

I keep going back to how this obsession with data collection for the sole purpose of finding some underlying truth about Humanity at large is just the same stuff that physical Anthropology did for the better part of its existence (and to a degree still does, the last time I checked). However, if your approach to understanding humans is not guided by an underlying theory and a critical understanding that said theory might change with time, everything you do is meaningless at best and incredibly dangerous (as we're seeing right now) at worst.

I've said it before, I'll say it again: the "AI Singularity" is just the Rapture with technology window dressing. You're absolutely right to talk about building a new God to your liking: these beliefs displace religion for those who hold them, and largely take the same shape.

Hi, sorry, this is a tangent but we just want to comment on the LessWrong cult because holy shit we are reading that link and it's really fucking with us.

At one point in our life, when we were around 15, we were fascinated by this rationalist community. It appealed to our way of thinking, and it was really cool to learn about it all! Thankfully, even then, something smelled fishy to us. There was this pompous air of flawlessness and perfection that these rationalists surrounded themselves with. It was very off-putting, and raised multiple red flags. But still, for many years we held some respect for this community, and we shudder at the thought that a different Fluffies could have fallen into their grasp.

Reading about how horrible these people are, in detail, it's. Unsettling.

US TOO, down to shuddering at the alternate timeline where we got sucked in. we even went and read the entirety of "harry potter and the methods of rationality" as a young adult and at the time we thought it was the best thing we'd ever read. we've been in two different cultlike groups and we are really glad that lesswrong wasn't our third even though it nearly got us

yeah, sorry in the future i'll try to CW any more explicit mention of that stuff. it is an astonishing rabbit hole of evil. and yeah, it appeals to people on the basis of "intelligence", often before they've developed much of any critical consciousness around that concept and all its weird baggage. glad you found paths away from it.

While there have been Singularity Believers for decades, one funny thing is that all of this started to reach the described fever pitch right around the time Moore's Law stopped being true. Top-end computer hardware hasn't really changed much since 2015, and the rate of improvement is continuing to slow. I don't think we're gonna hit a hard plateau any time soon, but sometimes it seems like everyone buying into this fallacy must've stopped following hardware development circa 2010.

sure but otoh if you know how to read so much as an HTML tag like 3/4 of adults will treat you as the priesthood caste and bearer of secret knowledge and fuck if I know how a microchip works but I'd very much like to keep the grift going

This reminds me of a much simpler example from music, and how we already know all reasonable tunes. A few years ago, somebody enumerated all possible tunes up to a certain length (I think 12 notes?), stuck them on a hard drive, and registered them in some way (copyright?). One consequence of this is that after that point, every musician in the world is technically a plagiarist. It was a bit of an art piece pointing out that copyright law is not built to handle this, but it also points out that music didn't just stop. Just like the driving example - there is still a huge gap between "computable" and "useful".
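Just to give a sense of the scale involved - purely for illustration, assuming 8 possible pitches and 12 note slots, since I don't remember the project's exact parameters:

```python
# How many distinct melodies exist if you fix the rhythm and just vary the pitch?
# The parameters (8 pitches, 12 note positions) are assumptions for illustration.
pitches = 8
length = 12
melodies = pitches ** length
print(f"{melodies:,} possible {length}-note melodies")  # 68,719,476,736
```

Tens of billions of melodies fit on one hard drive, and yet essentially none of them are songs anyone would actually want to hear.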

I think these fears of an AI superintelligence really overestimate the measurability of intelligence. Like, Albert Einstein was a bona fide genius, but that didn't stop him from wasting the majority of his life thinking in circles on quantum mechanics.

in reply to @bruno's post:

that's certainly part of it but it can't possibly be the whole picture, either. people still believe that unlimited transactions can simply be pushed through global blockchain protocols despite a hard limitation on transaction throughput being explicitly designed into the protocol, agnostic of compute capacity.
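to put a rough number on that ceiling - treating these Bitcoin-ish figures purely as assumptions (roughly 1 MB blocks, one block every ~10 minutes, a few hundred bytes per transaction):

```python
# Back-of-the-envelope ceiling on Bitcoin-style transaction throughput.
# All figures below are illustrative assumptions, not exact protocol constants.
block_size_bytes = 1_000_000    # ~1 MB block size limit
block_interval_s = 600          # target of one block every ~10 minutes
avg_tx_bytes = 250              # a typical transaction size, give or take

tx_per_block = block_size_bytes // avg_tx_bytes
tx_per_second = tx_per_block / block_interval_s
print(f"~{tx_per_block} transactions per block, ~{tx_per_second:.1f} tx/s ceiling")
# no amount of extra compute raises this number; the limit is in the protocol itself
```

single digits of transactions per second, globally, by design - and people still talk about pushing unlimited transactions through it.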

idiots. the ability of the unwise to ignore reality in favor of what is profitable to believe is nearly boundless

I think this is in part due to the number of programmers that either didn't go through a robust Computer Science program or went to bootcamps instead. I had to take digital device and logic classes from the Electronic Engineering department as part of my curriculum. I know a bunch of programmers that have a hard time comprehending binary numbers. In code jamming sessions/competitions I have had to deal with coders who only knew how to use certain JS libraries, and really just knew how to google things to cobble barely functional code together.

Mentioning Moore's Law is funny. I went to college in 2017-2020 and they insisted that Moore's Law was still alive and well - that it was still in place and hardware was really getting that much better. And like, sure, hardware has improved. But it's also been getting bigger to accommodate that. So transistor size hasn't been decreasing (which, iirc, is what Moore's Law was about); instead hardware has grown bigger to try and compensate and fit more transistors. And it still can't keep up.

Yeah, Moore's Law is about transistor density. If hardware is getting 'better' by sucking more juice and having bigger dies, that's not so much Moore's Law as just a social choice to dump ever-increasing resources into making ever more compute capacity.
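Put another way, with purely illustrative numbers:

```python
# Density is what Moore's Law actually tracks, not raw transistor count.
# These figures are made up purely to illustrate the distinction.
old_transistors, old_area_mm2 = 10e9, 100
new_transistors, new_area_mm2 = 20e9, 200   # twice the transistors on twice the silicon

print(old_transistors / old_area_mm2)   # 1e8 transistors per mm^2
print(new_transistors / new_area_mm2)   # still 1e8: a bigger chip, not a denser one
```

Twice the transistors sounds like progress, but if it takes twice the silicon and twice the power, density hasn't moved at all.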

And it's difficult to get info on what's happening on the other side of the API gateway. What kind of compute power is being used to respond to a generative AI prompt? How much electrical power is needed? I've seen some guesses, but as far as I know the companies managing the APIs don't give out this info.