
56k warning
Using LLMs to make code is one of those things where you can very easily see the divide between people who know what LLMs are actually capable of and people who think they're some kind of mystic creature that can grant wishes.
The first time around I saw people going "haha, I asked ChatGPT to give me some Python code and it gave me something that looks correct but isn't actually usable" (LLM output in a nutshell), and then some months later I see some of the same people going "wait, maybe we're onto something here"
I keep poking at ChatGPT's code generation capabilities and the MOST, the absolute fucking MOST it can do right now is generate boilerplate code
The advent of ChatGPT has actually made getting things done and figuring things out so much harder