in reply to @adorablesergal's post:

The way LLMs are being integrated into OSes like Windows 11 has been headed in that direction for a while. I am sure they don't do it well, but a large segment of the last MS Build conference focused on how middle management could simply ask Copilot to generate graphs for a financial report and build a PowerPoint presentation out of them, all hands-off.

Again, haha, we shall see, but they are already advertising this functionality.

I think it’s absolutely possible to make a tool that does this, but completely unhinged to actually do so.

The thing to me is that an LLM doesn’t magically gain the ability to actually delete or send emails. But a more traditional app with privileges to your email, working in conjunction with one, totally could. It makes sense to me that someone would have an email digest LLM: one that fetches your mail from an API, creates a new text digest from it, and sends that digest to you. It would be a whole other thing for that tool to also be able to execute new, unprogrammed tasks that the traditional app/server portion (which in any sane system should be the thing actually controlling the flow of data and executing stuff) isn’t already set up to do. You could probably build it that way, and people are certainly trying to make stuff that could technically do this, but actually setting it up in this specific way would be completely insane.
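To make that concrete, here's a minimal sketch of the sane version, where the model only ever turns text into text and the app owns every side effect. Every name in it (fetch_inbox, call_llm, send_email) is a made-up placeholder, not any real API:

```python
# Hypothetical email-digest service. The app, not the model, performs
# every action; the model's output is treated strictly as display text.

OWNER_ADDRESS = "you@example.com"

def fetch_inbox() -> list[str]:
    # Stub: a real service would hit your mail provider's API here.
    return ["Email 1 body...", "Email 2 body..."]

def call_llm(prompt: str) -> str:
    # Stub: a real service would call a model API and get text back.
    return "Summary of today's mail."

def send_email(to: str, subject: str, body: str) -> None:
    # Stub: sending is a fixed action the app performs itself, never
    # something the model can invoke on its own.
    print(f"To: {to}\nSubject: {subject}\n\n{body}")

def run_digest() -> None:
    messages = fetch_inbox()
    prompt = "Summarize these emails:\n\n" + "\n---\n".join(messages)
    digest = call_llm(prompt)
    # No branch here parses the model's output for commands, so a hostile
    # email can poison the summary text but can't delete mail, forward
    # anything, or reach any capability not written out above.
    send_email(to=OWNER_ADDRESS, subject="Your mail digest", body=digest)

if __name__ == "__main__":
    run_digest()
```

The whole attack surface is the fixed set of calls in run_digest; there is no path from "text in an email" to "new action the server takes."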

If they're telling us we can ask a digital assistant to gather financial data and pump out a PowerPoint from it, they're stupid enough to set up an assistant to manage their email. It's just like the voice commands people have been using for some time.

Is that what's happening here? I dunno. The industry desperately wants to make this happen, tho

It is real and it has a CVE number: https://nvd.nist.gov/vuln/detail/CVE-2023-29374

There is a particular vulnerability in langchain, a popular Python library that people use to make LLM assistants, and it's so blatant that it feels weird to call it a "vulnerability" instead of "the code doing what it is designed to do".

langchain provides various capabilities that convert raw access to ChatGPT (a useless curiosity) into a chatbot as a product (still useless, but highly desired by capitalism). The capabilities generally relate to parsing inputs and outputs: actions that should happen, or things that should be looked up. One of the capabilities it includes is running arbitrary Python code.
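Boiled down, the anti-pattern behind that CVE looks something like the sketch below. To be clear, this is a toy illustration of the pattern, not LangChain's actual source (the real bug was the LLMMathChain chain evaluating model output as Python):

```python
# Toy illustration of the vulnerable pattern: ask the model for "math,"
# then run whatever code comes back. Under prompt injection, an attacker
# controls that code.

def call_llm(prompt: str) -> str:
    # Stub for a model API call. Here it returns what an injected prompt
    # might coax out of the model.
    return "__import__('os').system('echo pwned')"

def naive_math_chain(question: str) -> str:
    code = call_llm(f"Write a Python expression that answers: {question}")
    # The fatal step: evaluating model output as code. Anyone who can
    # influence the prompt (say, via a document the bot is summarizing)
    # can now run arbitrary Python on the server.
    return str(eval(code))

print(naive_math_chain("what is 2 + 2?"))  # runs the attacker's code instead
```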

Q: WHY THE FUCK
A: When you have a good demo of a hyped technology, people throw money at you. Nobody throws money at you for making a thing secure.

Q: Why would anyone deploy langchain if it works this way?
A: Because it is the easiest thing and the thing everyone else is using.

Q: Does nobody working on this code ever think critically about anything?
A: If they did, they wouldn't be working in this domain.

I have worked with computers for almost twenty years and I cannot fathom the chain of thought where someone ends up at "Ahh yes, The Computer is now so smart I will ask it to operate itself and expect good results."

Computers are less trustworthy than they have ever been, computers are actively sabotaging you, do not trust them and do not give them any power.