Imperious

Higher than God lost in the enmity

  • E/Em/Ez/Emself or He/Him

Hello! You can call me Cypher. I'm going to try and use this space as a mix of my fandom interests as well as a place to post my essay-length rants and maybe some other writing I get up to.


fullmoon
@fullmoon

The primary problem is that while the answers which ChatGPT and other generative AI technologies produce have a high rate of being incorrect, they typically look like they might be good, and they are very easy to produce. Many people are also trying out ChatGPT and other generative AI technologies to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The sheer volume of these answers (thousands), combined with the fact that determining an answer is actually bad often requires a detailed read by someone with significant subject-matter expertise, has effectively swamped our volunteer-based quality curation infrastructure.


lmichet
@lmichet

in reply to @fullmoon's post:

"Of course we do care about quality of answers on SO. For the sake of our own AI though, users are just source of traffic that costs us a fucking lot and don't pay us shit, why would we care."

This is unsurprising! These LLMs are basically "wisdom of the crowds" machines, right? They scrape the whole-ass internet and then mulch it for patterns. It's not like there's any deductive reasoning going on behind the scenes.

So if the majority of people on the internet think the earth is flat, and that's what ends up in ChatGPT's training data, then that's what it's gonna tell you.

So yes! Peak human intellect has produced a machine that regurgitates common misconceptions. Good for us.

A more accurate description in laycreature's terms is that it's a freakishly good AutoComplete. Just like AutoCarrot sometimes suggests nonsense that is nothing like what you were actually trying to type, LLMs have no idea what they're saying and are just picking the most likely word to come next. Sometimes it happens to form a complete and factually correct sentence, but in practice it's subtly yet critically wrong most of the time.
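
To make the autocomplete comparison concrete, here's a toy sketch of "pick the most likely next word" (the two-word contexts and the frequency table are completely made up for illustration; a real model scores tens of thousands of tokens with a neural network, not a lookup table):

```python
# Toy "most likely next word" picker. The frequency table below is
# invented for illustration -- imagine it was tallied from scraped text.
toy_counts = {
    ("the", "earth"): {"is": 6, "was": 3, "orbits": 1},
    ("earth", "is"):  {"flat": 5, "round": 4, "old": 1},
}

def next_word(words):
    # Look at the last two words and pick the highest-frequency follower.
    followers = toy_counts.get(tuple(words[-2:]), {})
    return max(followers, key=followers.get) if followers else None

sentence = ["the", "earth"]
for _ in range(2):
    sentence.append(next_word(sentence))

print(" ".join(sentence))  # -> "the earth is flat"
```

Nothing in that loop knows or cares whether "flat" is true; it only knows it's the most common continuation in the data. That's the whole trick, just scaled up enormously.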

For people who might be unaware: this isn't a new policy. It was first written in December 2022, i.e. shortly after ChatGPT became available, and has been in place continuously since then (even though it was originally called a "temporary policy"). As far as I can tell, the only meaningful changes since then have been that they reworded it over time to apply to all generative AI tools, not just ChatGPT, and that they made it no longer "temporary".