lexyeevee
@lexyeevee

i just went to the first MDN page in my address bar history, which is for the grid property, and hit "AI Explain" on the first code block i saw, which is the overview of property syntax

the code block starts like this:

/* <'grid-template'> values */
grid: none;
grid: "a" 100px "b" 1fr;
grid: [linename1] "a" 100px [linename2];

and the explanation starts like this:

The given code example demonstrates various values and combinations that can be used with the grid property in CSS.

The first set of examples shows different values for the grid property. Each value represents a different grid template. Here are the explanations for each example:

  1. grid: none;: This value sets the grid to have no explicit grid template.

  2. grid: "a" 100px "b" 1fr;: This value sets the grid template to have two rows and two columns. The first row has a height of 100 pixels and the second row has a height of 1 fraction unit (1fr). The columns are named "a" and "b".

NOPE. critically but subtly wrong! this creates a grid with two rows with heights 100px and 1fr, one column, and two areas named "a" and "b". (and as smaller quibbles: rows and columns do not have names at all in css grid; only areas and lines do. and calling "fr" a "fraction unit" is kind of weird.)
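for reference, here's what that shorthand actually expands to under the grid-template syntax (my own expansion, not from the MDN page):

/* grid: "a" 100px "b" 1fr; is equivalent to: */
grid-template-areas: "a"
                     "b";
grid-template-rows: 100px 1fr;
grid-template-columns: none;

one column (auto-sized, implied by the single-cell area strings), two rows of 100px and 1fr, and two areas named "a" and "b". compare that with what the "AI Explain" output claims above.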

it continues in this vein and misunderstands the syntax several more times, though it also gets a few of them right, all delivered in the same confident, objective tone.

i mean i knew this is how it works but it really fucking sucks to see in action. and of course it's the more subtle stuff, the stuff that people are more likely to click the explain button on, that will come out wrong.

i just hate this man. i hate that everyone is falling all over themselves to make a button that produces confident human-like text that's completely fucking wrong. as if no one cares if it's wrong, no one even cares to point out that it's often wrong, everyone is just delighted that it produces something. what the fuck are we doing? what the hell is wrong with this industry?


edit: THEY CALL IT "YOUR TRUSTED COMPANION"


namelessWrench
@namelessWrench

Isn't it harder to implement AI* than to just answer these questions manually?

*Or whatever the thing actually is that they're calling AI


DecayWTF
@DecayWTF

Not at present! Writing quality documentation costs money and time but dickriding VC funded dumb bullshit is "free"


lexyeevee
@lexyeevee

what's especially appalling here is that they could have generated all the text in the world and then committed it to the repository so humans could edit it

but they didn't! it's like it didn't even occur to anyone involved in this process that the output might be incorrect. how the fuck did this happen


Turfster
@Turfster

Easy.
None of the people involved in the choice to implement this have ever even seen the HTML source of a website, nor do they care to.
Just do the thing I told you to do, underling, my buddy Brock Jr. III said it'd be great over mimosas.



in reply to @bcj's post:

honestly MDN has been bad / going downhill for years, they keep making the site Worse and most of the information is mildly outdated at this point

in reply to @lexyeevee's post:

i'm literally doing the openai intro developers courses right now and can't believe how blasé they are to their system just being wrong. they're working through a customer service chatbot for a tech retailer in one example and at one point it recommends a soundbar instead of the tv requested and the presenter just goes "oh huh, maybe that would be better with GPT-4" and moves on!

right now i'm on the section about evaluating how well your LLM system works and he basically says "well if it's not life-or-death you're allowed to do a bad job on this"

lol. at least you have to register/log in first before you get to play with it, imagine if they put this in front of everyone right off the bat

can't wait to find out what else it is completely wrong about

uuuughhhhh i did a quick look at the page and didn't see it at first and thought it was under the "ai help" tab

i gave up on using f3 to search websites because between lazy loading and chrome being dogshit it often just doesn't work

I can't remember which chost I commented on last time sorry

There was a press release today and part of it was "LOOK AT THE FEEDBACK WE GOT! MOST OF OUR USERS LOVE IT!" and god damn metrics are a fucking scam. Not going to consider where the feedback prompt was placed, or the selection bias in who's actually willing to submit feedback? Nope, users must love it based on this one cherry-picked stat.

Is it going to be forked?

in reply to @namelessWrench's post:

I don't know that mozilla has done anything 'extra' beyond some prompt preamble so once they have the interface in place to do queries to whatever this backend is, it's 'free' effort-wise

yeah the reason all this AI/large language model stuff is getting jammed into everything is because it's actually incredibly easy to make something that looks like it's working most of the time. potentially effort on the order of hours and days rather than the months or years that a bespoke system would take

in reply to @lexyeevee's post:

the other thing that sucks is I feel like there's a non-zero chance that this is going to be treated as a "bug" in the AI model when in reality it's a fundamental limitation of the approach, but for some reason nobody seems to grasp that fact.

I think it actually did occur to multiple people that the output was going to be incorrect, but management insisted that they add a language model to the website because it's hip and trendy and will attract a lot of website traffic for a few days until everyone realizes the output can't be trusted.

what's especially appalling here is that they could have generated all the text in the world and then committed it to the repository so humans could edit it

a funny and nontrivial consideration is that they may be unable to depending on the licensing of their repository versus that of chatgpt's output (and, by nature, the huge amount of training data that it was based on)

not that this is the reason, but it's an interesting problem to consider (and one of a billion reasons to not do this)

It's galling! Like it actually makes me intensely mad and we know the entire reason: If they did what you suggest - which is reasonable! - they would actually have to take editorial responsibility for the content, but since it's Your Trusted AI Grand Vizier they can just shrug their shoulders, oh well, it's not us giving out wrong information. Just getting on the bandwagon to say they did it in a way that doesn't put any responsibility on Mozilla's shoulders

this is absolutely no excuse, but I think the reason is that this still "works" if you edit the code block.

It's a playground, so you can write your own code surrounding the example, then get the AI to explain it! Why would you want the AI to explain code you wrote, you ask? ok honestly maybe this isn't it i can't come up with any reason for that except maybe they expect people to use other llms to expand their examples, put it in the website, then use their llm to explain what it does.

i think, at least, "it works on edited code" must be the seed of some of this.