
Agreed. I am also concerned that the more people use it, as though it were a fun toy, the smarter it may get, and the more of a threat it will be. Really, if the thing can code and if A.I. can already properly interpret X-rays and CAT scans (i.e. tumor / not tumor), my gut feeling is that we need to back away from this, fast, and make sure this tech stays in some extremely specific uses that won't threaten anyone's livelihood.
re: ai interpreting CAT scans: "how do neural nets know things" has been an open question for basically the entire history of neural nets, with only very small steps forward. fundamentally it's still really hard to know if the cancer-detecting image classifier actually looks for cancer or looks for [thing which is coincidentally in the image but has nothing to do with cancer, for example, a ruler], and it only gets harder as these networks scale up (toy sketch of this below).
which is sort of the fundamental issue here: the ability to explain why someone or something reached a particular conclusion is often a core requirement for putting that conclusion into practice, which is especially true for something like stack overflow. chatgpt, while able to generate answers, can't provide explanations (or rather, can't be inspected for explanations) for how it came to write that answer. (this is also why my hot take is that even if you had a 100% accurate black box, it would still be unwise to trust any of its output as fact if no one could ever explain why it works)
also sry if this is slightly incoherent, i'm a bit sick lol
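(for the curious, here's a toy version of the ruler problem -- made-up data and sklearn, nothing from any real medical model, just to show how easily a classifier latches onto a confound:)

```python
# toy sketch: the "ruler" feature is a confound that tracks the label almost
# perfectly, while the real "lesion" feature is noisy -- both invented here
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
label = rng.integers(0, 2, n)            # 1 = "cancer", 0 = "no cancer"
lesion = label + rng.normal(0, 2.0, n)   # the real signal, but noisy
ruler = label + rng.normal(0, 0.1, n)    # confound: rulers photographed
                                         # alongside malignant lesions

X = np.column_stack([lesion, ruler])
clf = LogisticRegression().fit(X, label)

# the classifier scores great -- but the weights show it mostly learned
# "is there a ruler?", not "is there cancer?"
print(clf.score(X, label))   # ~0.99
print(clf.coef_)             # the ruler weight dwarfs the lesion weight
```

and nothing about the accuracy number alone warns you which of the two it learned.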
fundamentally it's still really hard to know if the cancer-detecting image classifier actually looks for cancer or looks for [thing which is coincidentally in the image but has nothing to do with cancer]
and that's the fun thing, it can be both! training an AI is basically just fucking around with variables in a big calculation, and you might accidentally get a ruler detector, you might get a cancer detector, and you might get a set of numbers that hates minorities. but the thing is, those equations are so incredibly complex that they are essentially a black box. understanding an 8x8 pixel black/white handwritten-number AI already takes a fuckton of time for humans, so understanding what the AI is really doing for any complicated use case is virtually impossible
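(to put a number on that 8x8 example -- a minimal sketch using sklearn's built-in 8x8 digit images; the dataset choice is mine, not the poster's:)

```python
# even a deliberately tiny net on 8x8 digit images is thousands of weights,
# each of which is just a number that got nudged around during training
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)      # 1797 images, 8x8 pixels each
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)

n_weights = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
print(n_weights)   # 2410 -- and no single weight "means" anything readable
```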
Yeah, the "fundamentally" part is important here! Humans pretty much entirely depend on analogies and commonly accepted & deeply understood language to explain things — both to others and to themselves. Even essential logic patterns are a kind of language which humans make use of constantly to simplify and discuss. In general, to my understanding, AI skips all of that — or invents its own language which is so dense and involved that it's genuinely impossible to explain in a useful way for humans. It's a black box and its decisions may as well be arbitrary; sure, they may be demonstrably correct 90% of time or more, but we just don't have a way to meaningfully interface with the neural net.
Even if the net could explain its own behavior with human language, we'd just be getting an analogy of the real connections driving it, and it couldn't comprehensively cover the depth of decisions it makes in — again — a useful way for us. We might as well try to detangle the raw points ourselves.
(I have some more thoughts on this, like the argument that a deliberately simplified explanation may still be effective in certain instances, and that it might be possible to model a self-describing net on the way humans think & communicate with others rather than the very "number-y", pure math approach used now... I don't think AI behavior has to be as obscure as it is. But I don't know how feasible it would be to make a more transparent net, let alone why we haven't seen more progress in that area by this point.)
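(for what it's worth, a version of that "deliberately simplified explanation" idea already exists as an interpretability trick -- here's a rough sketch of a global surrogate model, with an illustrative dataset standing in for a real black box:)

```python
# fit a small, human-readable model (a shallow decision tree) to mimic a
# black box's predictions -- with exactly the caveat raised above: the tree
# is an analogy for the net's behavior, not its real mechanism
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_digits(return_X_y=True)
black_box = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(X, y)

# train the surrogate on the black box's *outputs*, not the true labels
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, black_box.predict(X))

print(export_text(surrogate))                     # a few readable if/else rules
print(surrogate.score(X, black_box.predict(X)))   # how faithful the analogy is
```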
to be fair, on the hardware level we have no chance at explaining our brain either, and it's strikingly similar to an artificial neural network; ours is roughly binary (a neuron either fires or it doesn't) while most NNs use floating-point math. but the same way we don't understand NNs fully, we don't understand the human brain fully, just due to the huge number of nodes. but brains have another "layer" in the learning stack, as in we don't just use the network, we also learn stuff in a way we can actually explain
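(rough illustration of that binary-vs-floating-point contrast -- hugely simplified, real neurons are messier than a step function:)

```python
import numpy as np

def biological_ish_neuron(x, w, threshold=1.0):
    # all-or-nothing: the neuron either fires or it doesn't
    return 1 if np.dot(x, w) >= threshold else 0

def artificial_neuron(x, w):
    # smooth floating-point output, as in most NNs (a sigmoid here)
    return 1 / (1 + np.exp(-np.dot(x, w)))

x, w = np.array([0.4, 0.7]), np.array([0.9, 0.8])
print(biological_ish_neuron(x, w))   # 0 (didn't reach threshold)
print(artificial_neuron(x, w))       # ~0.715 (graded activation)
```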
Yeah, that's true! And that's it — generally we can't describe how we came to a conclusion, literally what precise thought pattern brought us to a solution, what prompted us to think further in one direction than another every step of the way. But we can justify and understand that conclusion in logical, emotional, or other conceptual terms that lend it credence instead of just having it feel right. Our feeling might in fact be just right — AI neural nets are specialized in a specific task and are literally trained to tell what's right from what isn't, so they're especially effective — but the difference is that we can take that feeling and criticize it to the point that we can describe and justify it to others. (Which is a pretty big evolutionary advantage for a social species!)
I agree, but this feels different somehow, specifically because it can do so many different things. Almost everything. If A.I. can code, diagnose illness, create artwork, write articles, counsel people like a therapist, propose solutions to economic, social, or political problems, and design buildings… whose job is not in danger?
speaking as a professional programmer (who also has work experience with AIs): it can pretend to code, but real coding won't be a thing AIs do for a long while. diagnosing is one thing they can do really well, but creative stuff will always be soulless and can't really replace real creativity, because it can only mimic other people's creativity. and i don't think that's necessarily a bad thing! capitalism always paints the picture that taking over jobs is inherently bad, but this literally makes our lives easier. jobs will get lost, and people will have problems paying their bills, but that's a problem with capitalism in general, not an AI-specific thing
I agree that it can definitely attempt to do those things right now, but not do very well at them compared to a human. And some of those things may never be done as well as a human can do them.
Right now the AIs we have are basically just pattern-recognition machines. Which is really powerful, since pattern recognition was considered a solely human ability for a long time, and now we have machines that can/will do a much better job of it. But human minds do a lot more than just pattern recognition, and there are plenty of jobs that require human talent beyond that. Stuff like logic, emotions, creativity.
Maybe one day we'll be able to create AI that have those capabilities, but I imagine that would take a long long time.
it may replace jobs in the short term, but it will be woefully inadequate at them. Which, worst case, is still pretty bad: if you can't afford anything else, you get sent to the "AI" therapist and "AI" doctor, who are worse at their jobs than WebMD.
But every job requires some level of thinking, of making things fit together in ways you can't pull from patterns. ChatGPT can tell you how to do something, but it can't do it itself. It can respond to humans, but it can't do useful receptionist duties, because so much of that goes on behind the scenes, and is about, essentially, the receptionist prompt-engineering the medical bureaucracy.
The big risk isn't in jobs -- it's where ML things are already a problem: government, the more automated parts of business, hiring, the justice system, etc.
But because it's more convincing, people will question it less. And, eventually, "the AI made the decision, I just implemented it" may rapidly become the new "just following orders".
Some of these would be great news if they worked! There are so many more people who need therapy than there are available therapists (especially if we narrow it down to competent therapists who don't bring in their own issues or tell you Jesus is the answer); infinite therapists who can work infinite hours, take your calls at 2 AM, etc. would be fantastic!
However, GPT cannot remotely replace a therapist--and I don't think it ever will; maybe some magic future AI, but not a direct descendant of what's basically just predictive text--so the question of "will this merely increase access or put human therapists out of business?" remains moot.
I am also concerned that the more people use it, as though it were a fun toy, the smarter it may get, and the more of a threat it will be
the interactions with it aren't going to improve its responses, though they do improve its 'chat' interface. But the chat interface is already, I'm willing to say, good enough to talk to it in natural language, at least for the limited stuff it can do now.
The big threat of the model, though, isn't the 'chat' interface, it's everything else it'll be used for.
Interesting, thank you. I’m a bit concerned that more people aren’t worried, you know? This has the potential to be a multiple-industry killer.
My other reply covers this: I don't think it will be; it's not that strong. The bigger risk is people falling for the idea that it's AI, and then assuming everything else in the AI industry is equally trustworthy.
there's a really good post on the capabilities and how it works over at https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
it's a long read, but it's pretty accessible even to non-technical-minded people, outside of the very deep specifics of the researchers' work.
I'm not sure how much I can explain any given part, but if you have questions I can try and point you to some answers.
I appreciate that, thank you. I’ll give that a read tomorrow. It’s good to hear about this from someone who understands the technology, and where it can go. If I have questions, I’ll ask you. Thank you again. 🌿
it's still scary in terms of what can happen, but not quite as earthshaking as their marketing department and friends in the tech news want you to think.
Excellent article, thank you. Raises a whole host of philosophical and moral questions that I hadn’t considered. The points about humanity, uniqueness, consent, and value are all important ones.
yeah. I don't see it replacing much labor until something more changes, especially as GPT-4 is... half-baked and they're being cagey about its issues.
but I do see the possibility of many bosses firing people and then their businesses failing, which might end up being the same result if enough businesses convince each other.