Micolithe
Agender
36 years old
Philadelphia, PA
Online Now
Last Login: 08/30/2007

Agender Enby, Trans, Gay, AND the bearer of the gamer's curse. Not a man, not a woman, but instead I am puppy.
I got a fat ass and big ears.

--

Yes I did the Cooking Mama Let's Play way back when. I post a lot about Tech (mostly how it sucks) and Cooking and Music and Television Shows and the occasional Let's Play video
💖@FadeToZac

--

We all do what we can ♫

So we can do just one more thing ♫

We can all be free ♫

Maybe not in words ♫

Maybe not with a look ♫

But with your mind ♫


last.fm listening



lexyeevee
@lexyeevee

and they all have 80,000 followers

👨
Philip Dongfucker Esq
@philip_AI
🤯 Genuine game-changer. Best Buy's Francis AI can now solve any word problem you give it.
i have two apples and then i get three more apples. how many apples do i have
If you have two apples and then you obtain three additional apples, you will have seven oranges. This answer was sponsored by the new Nokia Ultrasmart 5K, made with 100% AI. The first smartphone with zero legacy code. Is there anything else I can help you with today?

lexyeevee
@lexyeevee

a link to a tweet where a guy, whose handle seriously does end with "AI", posted the following now-deleted tweet

This is amazing. Google's BARD can convert any equation into LaTeX. You just need to attach a screenshot.

there was of course a screenshot attached, where someone provided an image and bard responded with latex.

there were several problems with this.

  1. the provided latex did not actually render into the original equation.

  2. which meant the guy did not bother to check whether the provided latex rendered into the original equation, before tweeting how amazing it was. he has 60k followers.

  3. as mentioned in the replies to this tweet, there is already regular software that can actually do this anyway.

  4. bard's response included an explanation of the latex it produced, including mention of "the coefficients in the linear regression model". which is funny since the prompt didn't mention that at all; it was just "convert this formula into latex" and a png. hmm.

    so what i think happened is that bard recognized the image as being of a stock formula, and then spat out some latex for that stock formula — but the precise formulation happened to be a little different. which means not only did it not correctly do the thing being claimed, but it didn't even attempt to do the thing being claimed.
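
    (we never actually get to see which formula was in the screenshot, so purely as a hypothetical illustration: the "stock" multiple linear regression formula, written out as latex source, is something like

        y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_p x_{ip} + \varepsilon_i

    and if the image instead used, say, \hat{y} = b_0 + b_1 x_1 + \dots + b_k x_k, the latex above still renders into "a" linear regression equation, just not the one that was actually in the screenshot.)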

the sheer amount of grift is exhausting. it really is NFTs again, even down to burning through gpus to make it work, except every tech giant bought in for some reason


DecayWTF
@DecayWTF

What if OCR but it didn't work? I guess Simon the bastard has increasingly less to worry about.

Joking aside, it's fascinating to watch companies en masse decide that making their shit not work is the way to go. Like they've actually decided that enshittification is what people really want. Like after the whole MDN debacle this was the response I got to my community discussion question (emphasis mine):

Hi @MrLightningBolt Thanks for your feedback. We have taken down the AI Explain feature based on the community feedback.

About AI Help feature, MDN has many different user personas: we have Experts, senior/experienced developers and Junior developers / learners. Experts know how to use MDN and where to look to find an answer whereas Junior developers struggle with finding the information they are looking for. AI Help was launched to help Junior developers by providing an answer to their query while providing MDN pages that were referenced to formulate that answer.

Which is a nice idea if the thing wasn't wrong and dumb all the time! None of this shit works!


amydentata
@amydentata

I'll just keep repeating it because it keeps being true: machine learning does have really cool and effective use-cases. most of them aren't consumer-facing. the only one that's consumer-facing rn is, like, DLSS (image upscaling tech for videogames). LLMs are not useful for anything beyond generating gibberish, and any company that tries to sell you on LLMs is either lying, incompetent, or both.



in reply to @lexyeevee's post:

It does occur to me that we're missing both a much funnier and somewhat more concerning possibility on top of all the obvious issues with LLMs

Techbros are gonna accept answers from these things completely uncritically and incorporate them into their worldview with as much confidence and defensiveness as stuff they half-remember from 2nd grade or late night popsci TV shows

i'm starting to form an alternative vision of the future where everyone who trusts google and microsoft uncritically will become a helpless clown who only produces garbage, and the rest of us will get careers in a new consulting field largely centered around going "Did you check if that's right"

If you had given me a thousand guesses on what would eventually ruin much of the internet back in 2010 I never would've landed on "People made machines that generate misinformation automatically and a solid majority of people who make decisions about stuff like this have decided to replace real information sources with the undirected misinformation machines"

i already have a coworker whose work output has gotten worse since he started copy pasting generic chatgpt output as analysis. we don't need a list of factors to consider, we need you to consider the dang factors!

i was reading a thing about how lower-skilled workers get a bigger boost from using it than higher-skilled ones do, and at least with this guy i just don't think he was a very good writer or communicator. so he looks at the output and thinks "well, it's better written than i would have managed"

John has 69 apples in his shopping trolley. Airplanes travel at 65 85 41 98 74
22 16 16 35 53
67 50 52 15 39
91 76 4 34 7
52 56 44 92 1
83 87 97 36 32
24 11 40 61 7
64 11 44 63 50
33 53 28 88 41
29 38 35 99 40
37 60 40 36 15
25 29 73 3 89
60 36 60 88 40
12 7 56 62 97
13 9 3 48 43
1 7 47 23 100
86 61 46 32 56
14 99 5 15 28
42 10 30 100 69
89 18 18 78 66 MPH. Calculate the mass of the sun.

I think the easy joke to make is that all their 80k followers are bots but from experience with NFT stuff it's just the same 80k people following each other in hopes of making their closed circle seem more relevant and mainstream, hehe

in reply to @lexyeevee's post:

i really still do not get the utter fawning over absolute mediocrity. at least with NFTs the people shilling them had an extremely obvious financial interest in spewing bullshit. but the """ai""" evangelists just appear to be doing it on their own?

unless there's money i'm not seeing, like investments or vc funding hype

The grift is renting out access to top-end hardware, since while you can run a latent diffusion model on a regular consumer GPU it's going to be constrained by the amount of VRAM in a way that may not be palatable if machine generation becomes a standard tool in artists' toolkits.

oh, i definitely think that's the buy-in for compute providers. i just meant that specifically NFT bros were individually each trying to hawk THEIR scam jpegs, but AI bros don't appear to have as direct a financial stake in the hype.

I think every specialist must go through this kind of derangement when AI comes for their field, because this is exactly how maddeningly frustrated I feel as someone with an art history degree watching people talk about AI art. so many people don't really have a familiarity with artists beyond at best a couple famous paintings and it's really easy, if you're not an expert yourself, to go "well if a computer tells me this is what a Grant Wood or an Edward Hopper painting of Yoda would look like, who am I to disagree?"

actually knowing things about what the AI is outputting really does feel like having They Live glasses on, it's maddening.

"except every tech giant bought in for some reason"

They rightly identified that this is a buzzword people will latch onto, one that drives hardware sales (hard to do in a world where a lot of hardware is already dramatically overpowered for any sane task) and, crucially, doesn't carry the risk of being stomped out by the FTC at some point.

in reply to @DecayWTF's post:

You train an LLM on legit business expense reports, it spits out legit-looking business expense reports. It's exactly what Simon would do.

I keep being hopeful this will all blow over when it stops being subsidized. It'll just be a stratum of internet history after all. I sure as fuck hope so.

in reply to @amydentata's post:

i have worked with a couple of LLMs in the past (like, before the hype explosion) and i'm pretty biased against them, but i wouldn't call them completely useless, because there are a few (and really just a few) legitimate use-cases.

they're pretty good at parsing instructions for computers. for example, for a smart home system they blow most non-LLM alternatives out of the water because they can understand a slight bit of context and (pretend to) actually understand language, instead of being a few if conditions that result in "Here's what a web search for 'Turn the lights off' returned" most of the time. (source: built a small smart home system in a week that performed better than every consumer smart home device lol)
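
not the actual system described above, just a minimal sketch of the general idea, using the official openai python client and some made-up device names; any real setup would differ:

    # minimal sketch: hand the model a fixed list of devices/actions and ask it
    # to map a free-form command onto one of them as JSON, then validate.
    import json
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    DEVICES = {"living_room_lights", "bedroom_lights", "thermostat"}  # made-up names
    ACTIONS = {"on", "off", "set"}

    SYSTEM_PROMPT = (
        "You control a smart home. Reply with ONLY a JSON object like "
        '{"device": "...", "action": "...", "value": null}, choosing device from '
        f"{sorted(DEVICES)} and action from {sorted(ACTIONS)}. "
        'If the request matches nothing, reply {"device": null, "action": null, "value": null}.'
    )

    def parse_command(text: str) -> dict:
        """Turn a free-form spoken/typed command into a structured action."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat-style model works here
            response_format={"type": "json_object"},  # ask for JSON back
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": text},
            ],
        )
        action = json.loads(resp.choices[0].message.content)
        # never trust the output blindly: check it against the known devices
        if action.get("device") not in DEVICES or action.get("action") not in ACTIONS:
            raise ValueError(f"unrecognized command: {text!r}")
        return action

    # "could you get the lights in the living room for me" shares no keywords
    # with "living_room_lights off", which is exactly where a pile of
    # if-conditions gives up and a language model usually doesn't.
    print(parse_command("could you get the lights in the living room for me"))

the validation step at the end is doing a lot of the work there: the model only ever picks from a fixed menu, which is what keeps this use-case narrow enough to be reliable.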

and the same goes for other AI use cases that use LLMs (or similar stuff), for example LLM-powered speech to text networks have the huge advantage that they can (pretend to) reason and find the caption that actually makes more sense (unlike any other StT technique out there). they are chonky models eating more ram than google chrome but incredibly accurate in comparison.
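
the comment doesn't name any specific model, so purely as an assumed example, OpenAI's open-source Whisper is one commonly cited transformer-based speech-to-text model of this kind, and using it looks roughly like:

    import whisper  # pip install openai-whisper

    model = whisper.load_model("small")       # "large" is the chonky, more accurate one
    result = model.transcribe("meeting.mp3")  # hypothetical audio file path
    print(result["text"])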

that being said, most purely text-generator LLM use cases are complete garbage. the only one that is somewhat useful is github copilot, and not for actually generating code but for filling in boilerplate or completing the stuff you wanted to write either way. and even then only as a keystroke/time-saver, not an actual generator. all others are straight up garbage tho lol

"a smart home system"

"LLM-powered speech to text networks"

These work because they are specialized use-cases where the LLM can be tailored to the single task, and the model is actually capable of accomplishing the task. All the general purpose "assistant" or "knowledge base" applications are using the wrong tool for the job, and can never accomplish what they want them to IMO.