it really depends on what you're using it for. if you want a blurry look at a bunch of data, it's kinda perfect: the computer can spot trends faster than a human ever could. but you can't put that output straight into the final product. that's really my issue with AI: it's a convenience, but it doesn't do the work to a human standard, and it can't be trusted. it will pull inaccurate assumptions from its training set far more readily than a human would, and it has no tools for recognizing and reevaluating an incorrect assumption. so use it cautiously, and use it only when you can double check anything interesting it does. it is not actually intelligent; it is very dumb and very good at hiding that, in a way that gets you to fill in the gaps with your own intelligence while assuming the best of it. and hey, maybe a framework for your own intelligence to fill in the gaps was all you needed, that's valid sometimes.
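to make "double check" concrete, here's a minimal sketch (the sales numbers, the claimed trend, and the verify_trend helper are all made up for illustration): whatever the AI flags as interesting, re-derive it yourself from the raw data with boring, inspectable math before it goes anywhere near the final product.

```python
# toy example: suppose an AI tool claims "weekly sales trend upward in region X".
# before trusting that, fit the trend yourself on the raw numbers.
import numpy as np

def verify_trend(values, min_slope=0.0):
    """Fit a straight line to the raw series and report the slope,
    so a human can confirm (or reject) the AI's claimed trend."""
    x = np.arange(len(values))
    slope, intercept = np.polyfit(x, values, 1)
    return slope > min_slope, slope

# made-up weekly sales figures for the region the AI flagged
weekly_sales = [102, 98, 110, 115, 109, 121, 130, 128]
ok, slope = verify_trend(weekly_sales)
print(f"upward trend confirmed: {ok} (slope ~ {slope:.2f} per week)")
```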
it is perfectly reasonable that it has many doubters and people who wish to avoid its output.
I do also wish to note that most AI right now is very power inefficient and is being heavily subsidized by startups making a dual assumption: 1) that it will hook users and become indispensable to enough people that they'll have to pay whatever price is actually sustainable in the long run, and 2) that since this is technology, of course it'll become more efficient with time, so we won't have to hike prices that much when we pivot to profit.
both of these are questionable assumptions. any actual business built on AI is likely to be much more price sensitive than tech assumes. and we're reaching the end of Moore's Law, with no guarantee we'll be able to start it up again. the efficiency of single-precision floating point operations is something that's been moderately worked on for decades as part of GPU development; we're not going to suddenly unlock many of the 2x/3x optimizations that the rest of tech has benefited from.
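to put rough numbers on why assumption 2 matters (every figure below is hypothetical, picked only to show the shape of the math): if efficiency gains turn out to be small, closing the gap between the subsidized price and the real cost falls almost entirely on price hikes.

```python
# back-of-envelope: how much must prices rise if efficiency doesn't save us?
# every number below is hypothetical, chosen only to illustrate the arithmetic.
subsidized_price = 20.0   # $/month a user pays today
true_cost = 50.0          # $/month it actually costs to serve them
efficiency_gain = 1.3     # assumed hardware/software speedup by pivot time

sustainable_cost = true_cost / efficiency_gain
required_price_hike = sustainable_cost / subsidized_price

print(f"cost after efficiency gains: ${sustainable_cost:.2f}/month")
print(f"price must rise by {required_price_hike:.1f}x just to break even")
# with only a 1.3x gain, the price still has to roughly double,
# the kind of hike a price-sensitive business customer won't absorb quietly.
```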
when the need for profit hits, AI will be a bloodbath. there will be a reckoning.