It seems like a pretty fundamental problem that LLMs are designed to confidently claim things they probabilistically slammed together. The real issue is that people find them trustworthy even though there are good reasons to doubt them. Even if you could brush aside the ethical/environmental/etc. issues, unless it's fundamentally redesigned, it's a black box that won't and can't cite its sources or tell you how it reached a given 'conclusion'. As it stands, it will happily make up sources for you, and if pressed on how it arrived at a solution, it will respond based on how that question is answered in its corpus, not on how it actually did the work. Why would it be good to trust this?
Ok, but what if you made an LLM that only produced real/correct answers?
Have fun with that
