For millennia we relied on human curation to organize our knowledge. In the late 1900s it became feasible to collate a meaningful percentage of human knowledge automatically. This was very exciting. Were the results as good? Maybe not, but they were often good enough, and it was fast, and it scaled, and you could do it at home.
For the last decade, the search engine user experience has gotten progressively worse. This is partly due to search engines optimizing for profitable search results rather than helpful ones, but I would posit that the bigger cause is that the majority of new, useful information on the internet is being created behind closed doors, not on the searchable web.
More and more, the information that is available to search engines is created for search engines to find -- companies paying writers pennies to churn out multi-thousand-word essays that will rank high in Google results, but not paying them enough to do the research that would make the information accurate and useful. This phenomenon is getting worse fast thanks to generative language models like ChatGPT (which aren't even capable of producing accurate information except by accident), and I expect that soon the vast majority of text on the internet will be created by bots like that, writing ever more convincing essays full of useless information.
Maybe search engines will figure out how to discard this deluge, but my sense is that it's an arms race they will inevitably lose. For a long time machine curation felt like the future, but now I think the window of pure machine curation is closing. To the extent that search will still be useful, it'll be useful for searching human-whitelisted content.
This is good news for human experts like editors and librarians, who have been treated as obsolete by the tech community for decades. Remember librarians? They're still around, and their work is more valuable than ever.
I've told most of this story before, but the first paying gig I had as a writer was as a website reviewer for a company that was trying to build some kind of searchable directory of the World Wide Web. It was 1993-94, and back then it must've seemed feasible to have actual humans curating sites. The only site I remember reviewing was epicurious.com, a food site that I suspect wasn't run by the same company that owns the domain today.
The text part was really limited, probably only a paragraph or so. The actual time-consuming part was filling out a form that classified each site against the company's taxonomy. Like, was it about food? Music, maybe? Real basic stuff, but there was enough of it that it slowed the process down a bit. The company would send out a batch of, I don't know, 30 sites to be reviewed; I'd knock out the batch, send it back, and wait for another one. I think they killed the project before it launched... probably because they blew through a bunch of money paying humans to review websites while Lycos was already busy launching a search engine.
I want to say it was CMP Media running it, but I can't say for sure. Anyway, Jim is probably right here, and personally, I can't wait to get back to reviewing sites. Some of the easiest money I've ever made.
