I’ve spent a lot of time playing with LLMs running locally on my development machines and deploying them onto personal servers. So basically, smaller models built to run on tin cans.
Things LLMs can excel at:
- Exploring ideas (if the LLM has enough data about the subject). I’ve fed brief story snippets into LLMs and they’ve generated some neat subtitles for stories around the themes I suggested.
- Summarizing non-critical things. LLMs can chew through data pretty quickly, so if you want a fast summary of something, that’s one way to get it.
- Searching through large bodies of text. Because you can run them locally, you can have an LLM parse, say, a textbook and then interact with the content Q&A style (rough sketch below).
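Here’s roughly what that textbook Q&A workflow looks like with the LanguageModels package. Treat it as a sketch rather than a recipe: I’m assuming its store_doc / get_doc_context / do helpers behave the way the project README describes, and the file name, chunking, and question are all made up.

```python
import languagemodels as lm

# Read a locally saved plain-text textbook (hypothetical file name).
with open("textbook.txt", encoding="utf-8") as f:
    text = f.read()

# Split on blank lines and load each paragraph into the package's local
# semantic store so it can be retrieved by similarity later.
for i, chunk in enumerate(p for p in text.split("\n\n") if p.strip()):
    lm.store_doc(chunk, f"chunk {i}")

question = "What does the textbook say about entropy?"

# Pull back the chunks most relevant to the question, then have the small
# local model answer from that context instead of from its own weights.
context = lm.get_doc_context(question)
print(lm.do(f"Answer using only this context: {context}\n\nQuestion: {question}"))
```

Stuffing retrieved chunks into the prompt is the whole trick: the model answers from text it can actually see, which is about the only leverage you get with models this small.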
But the biggest problem is that LLMs make up shit all the time. There are small projects like LanguageModels that have Wiki functions, where as a programmer you could build some semblance of “fact checking” into the output (second sketch below). But that’s nowhere near even the beginning of a solution to AI hallucinations.
I asked LanguageModels today what the capital of Russia was… it said Mars. I asked who won the 1994 Stanley Cup… it named a city that doesn’t have an NHL team.
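For what it’s worth, here’s the kind of “fact checking” I mean with those Wiki functions, sketched against the same capital-of-Russia question. I’m assuming get_wiki and extract_answer work the way the project documents them, and I’m hand-picking the Wikipedia topic, which is part of why this is nowhere near a real fix.

```python
import languagemodels as lm

question = "What is the capital of Russia?"

# Ungrounded: whatever the tiny model's weights half-remember.
print(lm.do(question))

# Grounded: fetch the Wikipedia extract locally first, then make the model
# answer only from that text. The topic string is picked by hand here;
# choosing it automatically is exactly the hard part.
context = lm.get_wiki("Russia")
print(lm.extract_answer(question, context))
```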
LLMs are terrible for fact-based stuff.