There are already so many great experts in that field, and a lot of them are easily available online.
i just fuckin' can't with AI evangelists anymore. anyone that seriously suggests it as a solution to anything is dead to me
Waiting in pained exasperation for the apologists who are about to Kramer in and insist that humans make mistakes sometimes and therefore it's fine for computers to fuck up constantly.
That definitely will not happen on an ASSC space. This place is repulsive to techbros and cryptobros
My IT teacher had a saying: to err is human. To err spectacularly is machine.
sometimes I remember these posters in our computer labs in middle school with a friendly computer saying "to err is human, sorry" and like yeah cool, I didn't requisition low-spec 486s, load them with Windows 98, and then pretend it's a 12-year-old's fault that they're busted, coffee-stained garbage
edit: this ain't like a rebuttal just a weirdly connected memory
I'm working towards a degree in machine learning and I'm just head desking trying to figure out how tech bros managed to cause another AI winter when we were finally starting to make progress again.
I should have gone into bioinformatics.
This is great because it is basically a “GPT-3 fucked up”-only excuse. Like if the bank accidentally charges you an overdraft fee absolutely none of them are going “yeah, well, I bet you’d be even more inaccurate guessing how much money is in your couch cushions, so you’re just a hypocrite blaming software for doing the same thing.”
Watching autopilot fly a jetliner into a mountain and shrugging my shoulders because it's completely unreasonable to expect computers to be perfect when, after all, most people can't do vector algebra, now can they? It’s not an autopilot failure, it’s just “spatially disoriented!” Like a human pilot! That means it’s learning!
Creating spam articles to be indexed by Google is sadly the most practical use of this technology.
What I like best about this is that when I tried it, I got a generative AI response from Google claiming the same thing, so then it's Google's AI snake oil repeating garbage generated by OpenAI's snake oil, instead of just OpenAI's stuff fooling Google's "authoritative answer" algorithms.
So when that USA Today article came out about those scientists who proposed that the universe might be twice as old as we originally thought, the AI algorithm started citing it as fact and kept saying the universe is 26 billion years old.
sometimes it feels like the current "AI" lurch is entirely motivated by people who are angry the public has access to facts
I don't really think it's malicious. Maybe a few small bad actors are. But I mostly see it as a symptom. In our current attention economy, clicks mean ads, and ads make websites money. If you want to make passive income with minimal time investment and effort, making bullshit websites is a pretty good way to do that.
It's not really a difference in kind but a difference in scale. Websites similar to the one in the OP have used ghostwriters to spew out poorly written factoid articles at a breakneck pace. The little profile picture and name you see next to most articles on the web hasn't carried much real accountability for years now. It's just that now, instead of paying humans very low wages to write redundant and poorly made articles, they're getting large language models to do what they see as roughly the same job for even cheaper or free.
At least when they were using human ghostwriters they would usually at least try to compile somewhat accurate information. Even if it was just the same stuff that 50 other websites already had.
The thing is, how difficult would it actually be to create something that could answer this question?
Generate a list of countries in Africa, filter out to the ones that start with "K", and return the number.
I gotta imagine the engineers at google are smart enough to make a thing that could parse a question like this and others without LLMs or statistical analysis.
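To be fair, the narrow version of this really is trivial. A toy sketch of the list-filter-count steps above (the country list here is abbreviated and hard-coded just for illustration; a real system would need a complete, maintained dataset):

```python
# Toy sketch of "how many African countries start with K?":
# generate the list, filter by first letter, count the matches.
# This list is deliberately incomplete -- illustration only.
AFRICAN_COUNTRIES = [
    "Kenya", "Nigeria", "Egypt", "Ghana", "Morocco", "Ethiopia",
]

def count_starting_with(letter: str) -> int:
    """Count listed countries whose name starts with the given letter."""
    return sum(1 for c in AFRICAN_COUNTRIES if c.startswith(letter.upper()))

print(count_starting_with("k"))  # -> 1 (Kenya)
```

The hard part, as the reply below this points out, isn't this function; it's knowing that this is the function the question called for.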
Building a program that does one thing is easy. Building something that generalizes a problem space like that is hard. Building something that decides which of many problem spaces a question falls into is nearly impossible. Building something that both identifies a problem space without an extant solution and then generates a solution in that space is literally artificial general intelligence.
I understand what you are saying. But surely you can make something that looks at a simple question and can parse it.
"How many", "What are", etc.
Their search engine already parses simple queries like "burger places nearby".
Most simple questions like the one in the OP basically come down to converting a sentence into what is essentially a SQL query.
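Something like this hypothetical sketch, say; the regex pattern, table schema, and data are all invented here for illustration, and this is emphatically not how any real search engine works:

```python
import re
import sqlite3

# Toy "question -> SQL" translator for exactly one question shape.
# Everything here (pattern, schema, data) is made up for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE countries (name TEXT, continent TEXT)")
conn.executemany(
    "INSERT INTO countries VALUES (?, ?)",
    [("Kenya", "Africa"), ("Nigeria", "Africa"),
     ("Egypt", "Africa"), ("France", "Europe")],
)

PATTERN = re.compile(r'how many countries in (\w+) start with "?(\w)"?', re.I)

def answer(question: str):
    """Return a count for the one supported question shape, else None."""
    m = PATTERN.match(question)
    if not m:
        return None  # falls outside this single hand-coded problem space
    continent, letter = m.groups()
    row = conn.execute(
        "SELECT COUNT(*) FROM countries WHERE continent = ? AND name LIKE ?",
        (continent.title(), letter.upper() + "%"),
    ).fetchone()
    return row[0]

print(answer('How many countries in Africa start with "K"?'))  # -> 1
```

Note how brittle it already is: reword the question slightly and the pattern misses, which is exactly the scaling problem the next reply describes.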
There is a combinatoric explosion of possible "simple questions" and approaches to solving them. You could chip away at the problem space for thousands of man-years, one special case at a time, and still only cover the tiniest fraction of the territory. In full generality it is simply not tractable.
That will not save you. Just the other day I was looking up some laundry information on DuckDuckGo and was getting suggested articles that were clearly AI written.
https://theinteriorevolution.com/should-you-use-fabric-softener-on-bed-sheets/
https://sleepation.com/before-you-use-fabric-softener-on-your-bedsheets-pros-and-cons/
These were page 1 search results on DuckDuckGo btw.
The sentence structures and the way the information is presented are clearly not human. Contradictions, repetitions, non sequiturs. All hallmarks of ChatGPT-esque writing.
Search engines (not just Google) are degrading, have been degrading, and large-language-model-generated articles are just the final nail in the disinformation coffin. The days of asking your search engine a simple question and getting a simple factual result are going to fade away if search engines don't crack down on SEO-abusing sites. Searching up even basic information is going to become such a chore, especially for the information-illiterate: those who don't have, or weren't taught, the skills to separate reliable information from bullshit. Not only will you need to verify the accuracy of the information you're presented with, but also the authenticity of the website you're getting it from and whether the article you're reading was even written by a human. We're going to need to do that just about every single time for most pieces of trivial information. And I imagine it will only get harder to tell the difference over time as these large language models improve and get better at spewing more fluent bullshit.
What once granted us unprecedented information efficiency is now going to become extremely information-inefficient. I feel really bad especially for the neglected kids who weren't taught many basic life skills because their parents figured that, since we have access to the world's largest encyclopedia in the palm of our hands, they don't need to do their job as parents.
Why are you searching laundry info online? You just put clothes in the washing machine and turn it on.
My comment isn't even about laundry info. The only reason I included those links was an example of LLM-generated articles that could be found via DuckDuckGo.
I'm not really sure why you laser-focused on that of all things, but to answer your question: my parents used fabric softener when I was growing up, so naturally I just assumed you used it in every load, and I only recently decided to look into whether it was necessary, or what to use it on and not use it on, after something didn't feel right with my sheets.
The intent they sold us: "AI as a writing assistant to help actual writers"
The shortcut: "AI as .... ..... .... writers"
The result: