there is no future in which large language models don't produce incorrect information. hallucination is intrinsic to the technology, and anyone who tells you otherwise either doesn't understand how an llm works or is trying to squeeze as much money out of it as they can before the bubble inevitably bursts.
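
to see why, here's a minimal sketch of the core generation step, with toy values standing in for a real model (the vocabulary, logits, and temperature below are invented for illustration): the model turns scores over its vocabulary into probabilities and samples one token. nothing in this loop checks whether the output is true.

```python
import math
import random

# toy vocabulary and scores standing in for a real model's output layer;
# imagine the prompt was "the capital of france is ___"
vocab = ["paris", "london", "berlin", "madrid"]
logits = [4.2, 2.1, 1.9, 1.5]  # made-up values for illustration

def softmax(scores, temperature=1.0):
    """convert raw scores into a probability distribution."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
# every token gets nonzero probability, including all the wrong ones
for token, p in zip(vocab, probs):
    print(f"{token}: {p:.3f}")

# sampling draws from that distribution, so a fluent wrong answer
# is always reachable on some run
choice = random.choices(vocab, weights=probs, k=1)[0]
print("sampled:", choice)
```

the point of the sketch: softmax assigns every token a nonzero probability, and the training objective only ever pushed tokens toward "likely in context", never "factually true", so there is no setting of the dials that drives the wrong answers to zero.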
