I say this as someone who has been using machine learning a lot for hobbyist music making: “AI” in many ways feels like “web3”/crypto. It is overhyped, people are rushing to add it where it doesn’t belong, and its primary application with any economic impact will turn out to be scams.
There is one caveat here, and it’s important to tease it apart. “AI” is a buzzword that gets slapped onto a technology that already does have real, actual uses. Machine learning/whatever you want to call it is genuinely cool and useful for a number of niche, hyperspecialized tasks that don’t sound exciting to the average person. Various forms of lossy data compression. Animation blending. Certain scientific research tasks. Data analysis. The way Nvidia uses machine learning to speed up raytracing, for example, rendering fewer pixels and letting DLSS reconstruct a full-resolution frame, is basically a form of lossy compression.
It’s fancier JPEGs, where the algorithm that renders the final image does a lot more work with the data it’s given. That one popular article (Ted Chiang in The New Yorker) calls ChatGPT a “blurry JPEG of the web,” and you could say DLSS is a “blurry JPEG of videogames,” tuned so the result looks good in 4K. As an added bonus, the misinformation it generates is texture and lighting detail, which probably won’t kill anyone outside of the game itself. But if you apply the same technology to “enhance” security cam footage, suddenly we’re back in dangerous ChatGPT territory, and you’ll falsely imprison people because a pile of matrix multiplications generated some imaginary faces.
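
To make the “fancier JPEG” point concrete, here’s a minimal numpy sketch of learned lossy compression. It uses PCA on made-up data as a deliberately crude stand-in for a neural codec, so it illustrates the idea behind DLSS-style reconstruction rather than implementing it: the compressor throws most of the numbers away, and the decompressor fills them back in from whatever was typical of the training data.

```python
import numpy as np

# A toy "learned lossy codec": PCA on 64-dimensional "patches".
# (Synthetic data; stands in for 8x8 pixel blocks.)
rng = np.random.default_rng(0)
latent = rng.standard_normal((10_000, 8))
mixing = rng.standard_normal((8, 64))
patches = latent @ mixing + 0.1 * rng.standard_normal((10_000, 64))

mean = patches.mean(axis=0)
# "Training": learn a basis from the data itself.
_, _, vt = np.linalg.svd(patches - mean, full_matrices=False)
basis = vt[:8]  # keep 8 of 64 dimensions: the lossy part

def compress(x):
    return (x - mean) @ basis.T  # 64 numbers in, 8 numbers out

def decompress(code):
    # The decoder invents the missing detail from what was typical
    # of the training data. That invented detail is the texture and
    # lighting in a game, or the imaginary face in "enhanced" footage.
    return code @ basis + mean

typical = patches[0]
weird = rng.standard_normal(64) * 5  # nothing like the training data
print(np.linalg.norm(typical - decompress(compress(typical))))  # small
print(np.linalg.norm(weird - decompress(compress(weird))))      # large
```

The last two lines are the whole argument in miniature: inputs that resemble the training data reconstruct convincingly, while inputs that don’t get replaced with something the model merely considers plausible.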
In the places where machine learning is useful, it is apparent that it is a tool and nothing more, and that its output requires human scrutiny. In the places where “AI” is a crypto-like scamfest full of FOMO, it’s either the wrong tool for the job (ChatGPT, whatever Bing is doing) or the right tool for a bad job (spam generation, copyright infringement).



