The training data crisis is one that doesn't get enough attention, but it's sufficiently dire that it has the potential to halt (or dramatically slow) AI development in the near future. As one paper presented at the Computer Vision and Pattern Recognition conference (CVPR) found, achieving a merely linear improvement in model performance requires an exponential increase in training data.
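To make that relationship concrete, here's a small sketch. The fit parameters and the exact logarithmic form are my own illustrative assumptions, not numbers from the paper; the point is just the shape of the curve: if performance grows with the log of dataset size, then every fixed step of improvement multiplies the data you need.

```python
import math

# Illustrative only: assume accuracy grows logarithmically with
# dataset size, accuracy(n) = a * log10(n) + b.
# The parameters a and b below are hypothetical, chosen for clarity.
a, b = 5.0, 40.0

def accuracy(n):
    """Accuracy (in %) at dataset size n, under the assumed log fit."""
    return a * math.log10(n) + b

def data_needed(target_acc):
    """Invert the log relationship: n = 10 ** ((acc - b) / a)."""
    return 10 ** ((target_acc - b) / a)

for acc in (60, 65, 70, 75):
    print(f"{acc}% accuracy -> ~{data_needed(acc):,.0f} examples")
```

Under these made-up parameters, each additional 5 points of accuracy costs ten times the data: roughly 10,000 examples for 60%, but 10,000,000 for 75%. Linear gains, exponential appetite.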
This is the hardest thing to get through to people, even other skeptics.
AI isn't just bad; it's mathematically impossible.
The key word in "infinite monkeys on infinite typewriters" is infinite. The thought experiment only works if you assume a probability space with no upper bound.
But we have an upper bound, and it's called "the planet we live on". There will be no infinite monkeys, because that isn't a real thing that can or will ever exist.
This isn't Moore's Law again. Moore's Law isn't even Moore's Law anymore; we broke it years ago. That's why all the processors have a pile of cores in them now: we hit the limit on making a single core faster. But tech wonks lean on your assumption that tech will "always" get better to justify a present that isn't any good, knowing that the future they promise cannot actually come to pass.