horseonvhs
@horseonvhs

fucking, come on man. (edit from the future: a few hours after i posted this, the feature was reportedly turned off - see here for more info)


nicky
@nicky

i honestly really appreciate his quick response and willingness to be open to feedback. i'm gonna share more of what i think with him, because while i understand his intentions were good, there's a big difference in perspective between what he, as a tech person, thinks about AI and what the mostly creative types who make up the majority of neocities think of it


lokeloski
@lokeloski

in reply to @horseonvhs's post:

as far as i can tell there hasn't been an official announcement; it just showed up when i opened the text editor. the closest thing i found was the tweet announcing their April Fools thing, which was the Daria chatbot mentioned in other comments. sorry i couldn't be of more help there

there has been no official announcement at all; the only sign of it was in the commits on github

https://github.com/neocities/neocities/commit/8ba6005c67ba3d19d223caaaa4b2600f0edf2f5b

the above links to the initial commit where the assistant code is introduced, a week before april fools. the day after, he begins replacing the "penelope" code with "daria" code to set it up as the april fools prank, followed by another last-minute round of "turd polishing" to finish the gag

https://github.com/neocities/neocities/commit/a975c9c2be5c1316f4e035c55b1357801dfbb6a8

https://github.com/neocities/neocities/commit/3db080b7f78461222a47013017b1bcb30f9826e0

which is then immediately reversed on april 2nd, restoring the "penelope" function but leaving the chatbot completely intact, with its prompts designed to be more "helpful" than irate

https://github.com/neocities/neocities/commit/c16c5272e2783783cff6af48600631cadecdc95e

https://github.com/neocities/neocities/commit/bb430d455fe7985a3b5e29bb955039dec7a52881

i hope this helps a little, as it's the only "official" information available about this
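
if you want to sanity-check that timeline yourself, the commit metadata is all public. here's a rough python sketch (assuming you have the requests package; github's public REST api allows light unauthenticated use) that prints each commit's date and message:

import requests

SHAS = [
    "8ba6005c67ba3d19d223caaaa4b2600f0edf2f5b",  # assistant ("penelope") introduced
    "a975c9c2be5c1316f4e035c55b1357801dfbb6a8",  # "daria" swap begins
    "3db080b7f78461222a47013017b1bcb30f9826e0",  # last-minute "turd polishing"
    "c16c5272e2783783cff6af48600631cadecdc95e",  # april 2nd reversal
    "bb430d455fe7985a3b5e29bb955039dec7a52881",  # april 2nd reversal (cont.)
]

for sha in SHAS:
    # public endpoint: GET /repos/{owner}/{repo}/commits/{sha}
    r = requests.get(f"https://api.github.com/repos/neocities/neocities/commits/{sha}")
    r.raise_for_status()
    commit = r.json()["commit"]
    print(commit["author"]["date"], "-", commit["message"].splitlines()[0])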

in reply to @nicky's post:

You should probably do some serious research into the ethical and operational problems with LLM """ai"""s: the people making them steal all the training data, and they don't actually 'know' anything, so they're prone to 'lying' (putting together a correctly-constructed but factually-incorrect result, because all they care about is the rules of English).

There's a lot of decent """ai""" tools, especially computer-assistance tools that existed before the current craze and just slapped an 'ai' sticker on to try and get some attention, but LLMs are beyond useless, and you should be concerned and skeptical of ANYTHING in the workplace that uses 'ai', especially yours. I forget the exact phrasing of the tweet(?), but it went:

Data Scientists: Look, we've put together a robot that can approximate English by predicting what sentences are supposed to look like. We haven't programmed anything else into it, so it can't do much yet, but this is a good first step towards smarter computers.

Biomedical Industry: I've fired all of our staff. How quickly can this thing start diagnosing diseases?
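
For a sense of what "predicting what sentences are supposed to look like" means at its absolute dumbest, here's a toy next-word predictor in python (illustrative only; real LLMs are transformers over tokens, not word-pair lookup tables, but the training objective has the same shape):

from collections import Counter, defaultdict

# count which word follows which in a tiny "training corpus"
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
nexts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nexts[a][b] += 1

# "generate" by always emitting the most common next word
word = "the"
for _ in range(5):
    word = nexts[word].most_common(1)[0][0]
    print(word, end=" ")
print()

It produces grammatical-looking output ("cat sat on the cat") with zero idea of what a cat is, which is the whole problem scaled down.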

I think AI can be a useful tool, but I can see why it would be a problem if an LLM is trained on data without the consent of the people whose data it is.
As for the biomedical industry, we use it to parse through data sets that are too large and complex for a person to go through. It's just a tool to assist us. AFAIK nobody, in the biomedical industry at least, has been fired and replaced by AI. From my experience with it, it can be extremely useful and could potentially help us save lives, because it helps us find things we might otherwise miss.
I agree LLMs are unreliable for getting factual information, but isn't that on the user rather than the tool?

There's a lot of issues with generative AI and whether the models can be built ethically, but I've seen a couple narrow cases where they are actually kind of useful.

Just not the kind of narrow use cases that would justify the level of VC cash people are jerking off with about the tech.

it really depends on what you're using it for. if you want a blurry look at a bunch of data, it's kinda perfect. the computer can analyze and see trends faster than a human ever could. but you can't use that in the final product.

that's really my issue with AI: it's a convenience, but it does not do the work to a human standard, and it cannot be trusted. the AI will pull inaccurate assumptions from its training set much more readily than a human does, and it has no tools for recognizing and reevaluating an incorrect assumption. use it cautiously. use it when you can double-check anything interesting it does.

it is not actually intelligent. it is very dumb and very good at hiding it, in such a way that you fill in the gaps with your own intelligence, assuming the best of it. and hey, maybe you just needed a framework for your own intelligence to fill in the gaps, that's valid sometimes.
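
to make the "blurry look" concrete, here's the kind of quick pass i mean (a rough python sketch; kmeans is just a stand-in for whatever exploratory tool you'd actually reach for):

import numpy as np
from sklearn.cluster import KMeans

# fake dataset: two blobs of points, standing in for "a bunch of data"
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

# the computer spots the rough grouping faster than a human eyeballing rows
labels = KMeans(n_clusters=2, n_init=10).fit_predict(data)
for k in range(2):
    members = data[labels == k]
    print(f"cluster {k}: n={len(members)}, centroid={members.mean(axis=0).round(2)}")

the clusters are a starting point for a human to investigate, not a finding you ship.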

it is perfectly reasonable that it has many doubters and people who wish to avoid its output.

I do also wish to note that most AI right now is very power-inefficient and is being heavily subsidized by startups who make a dual assumption: 1, that it will hook users and become indispensable to enough people that they'll have to pay whatever price is actually sustainable in the long run, and 2, that since this is technology, it'll of course become more efficient with time, so we won't have to hike the prices that much when we pivot to profit

both of these are questionable assumptions. any actual business built upon AI is likely to be much more price-sensitive than tech assumes. and we're reaching the end of Moore's Law, with no guarantee we'll be able to start it up again. the efficiency of single-precision floating point operations has been worked on for decades as part of GPU development; we're not going to suddenly unlock the kind of 2x/3x optimizations that most of tech has benefited from.
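
just to show the shape of assumption 1 with loudly made-up numbers (none of these figures are real, they're placeholders):

# hypothetical figures, chosen only to illustrate the subsidy gap
queries_per_user_per_month = 300      # made up
compute_cost_per_query = 0.01         # made up, in dollars
subsidized_price = 0.00               # the free tier
true_monthly_cost = queries_per_user_per_month * compute_cost_per_query
print(f"cost to serve one free user: ${true_monthly_cost:.2f}/month")
print(f"subsidy per user: ${true_monthly_cost - subsidized_price:.2f}/month")
# assumption 2 is the hope that compute_cost_per_query falls fast enough
# to close that gap before the subsidy money runs out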

when the need for profit hits, AI will be a bloodbath. there will be a reckoning.

I think you've almost certainly been using similar tools for long enough that you and the people you work with have a much healthier understanding of the role of technology in your work. but even for you, modern AI is a step change in one major aspect: up until now, the computer has done exactly what you tell it to. the instructions all came from people who put a lot of thought into what they needed, what they're really asking. and even then, sometimes it's a little subject to confirmation bias, sometimes you build a tool to analyze a dataset looking for a specific outcome and don't leave enough room for the computer to say "hey actually that's not quite right" in the output.

But when you use a model, you're no longer the only source of assumptions, the confirmation bias no longer comes only from your ideas. You don't even have the slight leg up of realizing that your own assumptions were wrong and having to redo work. You may be presented with analysis based on an assumption that you knew was wrong from the beginning, that was so obvious to you that you didn't think it could be a problem. this computer has been trained on thought patterns you didn't originate. this can be dangerous. the rigor is flying out the window. don't trust anything at face value. verify, verify, verify, please.

See, I dared not hope that was the case lol. To be fair, entire papers have already had to be retracted from journals for being bald-faced AI-generated output. It won't be the last time. It makes more sense that 99% of academics are more serious than that, of course, but sheesh. That makes me worried that more subtle mistakes are being made elsewhere, you know? I'm glad you're taking these tools seriously.

Edit: I'm so much more used to this kinda thing lol, AI has completely replaced blockchain in this joke now

thread's been hashed over pretty heavily but I wanted to hop in and make sure someone says: the kinds of AI you work with in biomed are vastly less bad than what's being discussed here. for one thing, they're generally held to far higher standards of rigor, and trained on much more carefully curated data.

and just as important if not more, they're strictly analytical as opposed to "generative." it's all statistics at the core, but there's a lot less room for things to go vastly askew when the task is "given a massive data set of established histories and outcomes, what is the likelihood this mammogram predicts a tumor" versus "given a massive data set of human conversations, some of which might actually be prior AI conversations because we're not really paying much attention to where our inputs were harvested, what is the most likely next word in this ongoing chat log." (and of course the latter consumes vastly more energy, without the value proposition of literally saving lives.)
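
to put a toy behind that distinction (a sketch only; real clinical models are built and validated far more rigorously, this just shows the shape of "analytical"):

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# labeled historical outcomes in, a checkable likelihood out
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
# per-case probabilities you can score against held-out ground truth,
# which is exactly what free-form chat output doesn't give you
print(model.predict_proba(X_test[:5]).round(3))
print("held-out accuracy:", round(model.score(X_test, y_test), 3))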

tldr medical AI has been fine and hopefully will continue to be; the public facing stuff is very not.

Cool response and glad he's taking it down. Just wanna emphasize though an AI cannot be trusted to teach dick.
I wouldn't trust it for foraging mushrooms, any kind of journalism ever, or coding a fucking potato. I'm not trusting an AI to teach me HTML, I want humans to be writing the instructions.

I mean yeah, talking 1 on 1 in a polite and level manner solves a lotta shit; and if it don't solve it immediately, it usually leads to easier solutions down the road.

so also unironically GJ OP, this could've gone shit shaped real quick if someone much less mature approached him directly, ty for helping make things nicer.

I was at a conference with a guy who “runs neocities” and he gave a little talk about how he enabled an AI chatbot for April fools but had to turn it off because they ran out of tokens. He seemed pretty interested in incorporating a chatbot in the future even though it was going to be expensive.