Shockingly, it's both more and substantially less nuanced than this.
Whatever you want to call these systems, under the covers they're what AI people call "neural networks": a bunch of teeny programs (or program-like thingies) that act the way your high school biology class said neurons do. Each one aggregates a bunch of inputs and, if the total exceeds a threshold, sends output.
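If you've never seen one, here's roughly what a single one of those teeny programs looks like in code. This is a generic sketch of a threshold unit, not any particular system's implementation, and the weights and inputs are made up for illustration.

```python
# One "neuron" of the kind described above: weighted inputs, a threshold,
# and a binary output. The numbers are purely illustrative.
def fires(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Example: two inputs, and the unit only fires when both are active
# (an AND gate, one of the first things a tiny network can learn).
print(fires([1, 1], [0.6, 0.6], 1.0))  # 1
print(fires([1, 0], [0.6, 0.6], 1.0))  # 0
```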
If you have enough of these, you can do fancy things that look like they're on the path to biomimicry, like detecting objects in an image or performing tasks that we usually oversimplify as "broom-balancing." For an example of the latter, consider backing up an eighteen-wheeler, where you need to keep compensating for where the trailer might go. There's a lot of potential in those spaces to make processes safer or just more efficient.
With AlphaGo, though, we started to see a weird shift in the industry. With the resources to run orders of magnitude more "neurons" than I could play with when I was farting around with neural networks in the 1990s, companies didn't make better versions of the things neural networks are good at; they started trying to train them to act like people. And here's where it gets kind of funny, in my eyes: We use neural networks for the dumbest possible things, wasting computer time by having the neurons kinda-sorta figure out what we already knew how to do in the 1960s.
That is, when they rigged up AlphaGo to show how it was managing its model of the game state, it was just a plain tree, where you figure out the likeliest score after each possible move, exactly how every chess AI has worked since the 1970s. That should have been an important result in a different way, because everybody was sure that you couldn't "solve Go" using classical AI algorithms.
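To make the comparison concrete, here's a rough sketch of that classic game-tree idea, plain minimax search over possible moves. The game interface (the `moves`, `apply_move`, and `score` callables) is a hypothetical stand-in for whatever game you'd plug in.

```python
# Classic game-tree search, roughly the scheme chess programs have used
# since the 1970s: try each move, recurse, and keep the best-scoring branch.
def best_move(state, depth, moves, apply_move, score, maximizing=True):
    if depth == 0 or not moves(state):
        return score(state), None
    best_value = float("-inf") if maximizing else float("inf")
    best = None
    for m in moves(state):
        value, _ = best_move(apply_move(state, m), depth - 1,
                             moves, apply_move, score, not maximizing)
        if (maximizing and value > best_value) or (not maximizing and value < best_value):
            best_value, best = value, m
    return best_value, best
```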
Instead of building on that impressive outcome, though, people started setting neural networks to solving other already-solved problems, so now we have an entire generation of software that pretends to be a Markov process. Is it nicer than the code that I wrote in the 1990s? Sure, because it doesn't always work word-by-word, and it doesn't even need to work with words at all. But that's not an artifact of the neural network; we could do that in ordinary code, if we wanted to, and it would burn far fewer CPU cycles than simulating billions of neurons to give you the wrong answer.
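For anyone who never wrote one, this is the kind of "ordinary code" in question: a word-level Markov chain that picks each next word based only on the current one. The toy corpus and seed word below are made up for illustration.

```python
import random
from collections import defaultdict

# Word-level Markov chain: the next word depends only on the current word.
def build_chain(text):
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def babble(chain, start, length=10):
    word, out = start, [start]
    for _ in range(length):
        choices = chain.get(word)
        if not choices:
            break
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

# Toy corpus, purely illustrative.
corpus = "the cat sat on the mat and the cat ate the fish"
print(babble(build_chain(corpus), "the"))
```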
And because we apparently have an entire industry that doesn't see this, we have all of the Big Five tech companies (and others) racing to see who can have the biggest neural network (maybe compensating for something, after the Billionaire Space Race flopped), losing money as they spend significantly more on electricity than they collect in subscription fees...