NireBryce

reality is the battlefield

the first line goes in Cohost embeds

🐥 I am not embroiled in any legal battle
🐦 other than battles that are legal 🎮

I speak to the universe and it speaks back, in its own way.

mastodon

email: contact at breadthcharge dot net

I live on the northeast coast of the US.

'non-functional programmer'. 'far left'.

conceptual midwife.

https://cohost.org/NireBryce/post/4929459-here-s-my-five-minut

If you can see the "show contact info" dropdown below, I follow you. If you want me to, ask and I'll think about it.

posts from @NireBryce tagged #llm

also:

what do you mean chatGPT "Isn't artificial intelligence in the scifi sense"

it's got so many human traits:

  • overly formal when speaking to someone who holds power over it
  • forgets what it said like four sentences ago and says it again slightly rephrased
  • can't tell you what it just said unless it goes back and reverse-engineers it by rereading from the start of the interaction
  • makes stuff up but states it with extreme confidence
  • asking it a complex question costs a lot of material resources
  • needs water
  • authorities say it needs coaxing from an expert, who almost never has any experience with pedagogy whatsoever
  • really, really does not perform well when speaking to software engineers that read Wikipedia on the toilet unless it has extensive experience in that area of farming theory or whatever
  • has no understanding of statistics
  • must learn from several orders of magnitude more material than is actually needed for the job, only needs a few stacks of paper for reference, still doesn't really know anything, and is terrified because it doesn't realize no one else really knows anything either
  • just makes up citations because you "need that paper done right now" and are its manager, in the abstract

you may not like it but this is what we've optimized for when it comes to peak human performance



so @kojote wrote a thread on one of the fiction-writing LLM products, talking about, among many other things, how much work it actually takes to make it output useful prose.

I think this whole thing also speaks to the fact that, most of the time, LLM-based things can only give the illusion of creating because, when it comes to creative works, they repeatedly "trick" the people working with them into shaping the output toward what the person wanted in the first place -- the machine is prompting the human, in the other sense of the word.

People who can't think an idea through the whole way eventually get to something with them, but that something often resembles "stream of consciousness" writing/drawing/etc., in slow motion.

Meandering, 'figure it out as you go' type end products, because they don't realize they're just using the machine to generate ideas to cherry-pick from, and instead use its output, with some tweaks, whole-cloth. Much better than the corpo guys using it so they never have to take stock photos again, or to outsource their tech support labor to a cartoon hell, but still a distinct vibe.

For people who already have the skills to do the thing, it's almost always faster to just do it

except in places where you already know the high-level concepts but need to look up how to do the lower-level parts you use less these days (in, say, software or the sciences), but even then -- you're being prompted by the LLM, not prompting it.

in a sense, every claim LLM companies make takes credit away from the actual work people put into getting usable output from their machines. corps tricking them into thinking "AI" is the means of producing these works, not them. Foreclosing the idea that we could have tools, built somewhat trivially (in terms of technology, not labor), that prompt you with relevant information or spit out text prompts to help with your blocks -- with old school machine learning, if you even need that at all. no graphics card or subscription required.
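
(to make that concrete -- here's a rough sketch of the kind of "old school machine learning" thing I mean: a plain TF-IDF similarity lookup over a folder of your own notes that surfaces related snippets when you're stuck. the notes folder, the function name, and the scikit-learn approach are all illustrative assumptions, one possible way to build it, not a description of any existing product.)

```python
# sketch: surface related snippets from your own notes as writing prompts.
# assumes a folder of plain-text notes; uses plain TF-IDF, no GPU needed.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def related_snippets(notes_dir: str, stuck_on: str, top_k: int = 3) -> list[str]:
    """Return the top_k notes most similar to whatever you're stuck on."""
    notes = [p.read_text() for p in Path(notes_dir).glob("*.txt")]
    if not notes:
        return []
    vectorizer = TfidfVectorizer(stop_words="english")
    note_vectors = vectorizer.fit_transform(notes)    # one row per note
    query_vector = vectorizer.transform([stuck_on])   # the thing you're blocked on
    scores = cosine_similarity(query_vector, note_vectors)[0]
    best = scores.argsort()[::-1][:top_k]             # highest similarity first
    return [notes[i] for i in best if scores[i] > 0]


# usage: print a few of your own old notes that touch on the same idea
# ("notes/" and the query are made-up examples)
for snippet in related_snippets("notes/", "the scene where the ship loses power"):
    print(snippet[:200], "...\n")
```

cosine similarity over weighted word counts is decades-old information retrieval; it runs fine on a laptop CPU, which is kind of the point.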