A lot of tools you already use include parts built on machine learning, and some have since before Stable Diffusion or ChatGPT existed. Most of them haven’t been advertised as “AI” until now, because outside of research that term is primarily marketing. The machine learning tools you’ve already been using aren’t necessarily trained on massive collections of unlicensed data gathered without permission. Many of them, like Adobe’s autofill and other newer features in Photoshop, are trained on images and data the company either owns or has actually licensed. A tool I’ve used before, Replica Studios, has AI voices generated in-house, and the actors receive payment when you use their AI counterparts. (No clue how good the deal is for them, but AFAIK they’re getting paid.)
The problems with this recent loud, obnoxious wave of “AI” shit are specific to models trained on public data without permission (Stable Diffusion, DALL-E) and to chatbots like ChatGPT that are also being pushed on people for tasks they can’t actually do. You know, like providing factual information. It’s important to understand the specifics here so you don’t lash out at people who don’t deserve it. It’s not as simple as “AI = bad.”
