absolutely floored to hear an actual lawyer tried to use AI to do research on an actual court case recently. it's terrifying because this is only gonna continue happening, especially if techbros continue to ride the hype train around AI garbage, trying to convince you that it's the absolute best thing ever and that it can't do anything wrong etc etc.
chatgpt does not give you facts. ever. literally never ever. if it happens to get something right, that's luck, not design.
all this stupid AI shit does is put together words in a way that looks convincing enough to be real answers. that's it.
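stripped down to a toy, the mechanism looks something like this (the corpus and the bigram table are made up purely for illustration; real LLMs are enormous neural nets over tokens, but the core loop of "pick a statistically plausible next word" is the same idea):

```python
import random

# A language model picks the next word based on which words tend to
# follow which in its training data, NOT based on whether the resulting
# sentence is true. This toy bigram model does the same thing in miniature.
corpus = (
    "the court ruled that the case was dismissed . "
    "the court found that the lawyer cited the case . "
    "the lawyer argued that the court was wrong ."
).split()

# Count which word follows which (a bigram table).
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, n, seed=0):
    rng = random.Random(seed)
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))  # plausible-sounding, never fact-checked
    return " ".join(words)

print(generate("the", 8))
```

every output is locally fluent because each word really did follow the previous one somewhere in the training text, but nothing in the loop checks whether the sentence as a whole is true, which is exactly how you end up with convincingly formatted citations to court cases that don't exist.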
please don't use these AI language models until you understand this incredibly simple concept.
or better yet, don't use AI at all. just write your own shit and do your own damn research. it's really not that hard, especially if you're an actual lawyer who passed the bar exam. like holy shit dude
EDIT: man this took off
You can get useful information out of ChatGPT about as reliably as you can get photographs of your back yard out of Stable Diffusion, which is to say, you can't.
they aren't stupid, they just don't understand Tech™ well enough to see through the marketing. ChatGPT has a small disclaimer that it can't do anything critical, but the disclaimer doesn't really go into the extent of that. The entire pitch is that ChatGPT can Do The Drudge Work For You, and I've seen OpenAI itself pitch it as a research assistant. It's a trap that's easy to fall into if you aren't used to being tech-vigilant.
A lot of things are like this -- people falling for cryptocurrency scams, etc. It's the same thing the economy is always about: exploiting information differentials, because tricking people is where the money is; if people saw through the magic trick, they wouldn't use it as much.
There's some extra context if you read the updates on Simon's post:
