the-doomed-posts-of-muteKi

I'm the hedgehog masque replica guy

A toast spread with nothing but lies


twitter, if you must
twitter.com/the_damn_muteKi

eramdam
@eramdam

Interesting Computerphile video about a paper (link here) positing that the "explosive" improvement curve of genAI might not happen without diminishing-return levels of work.1


  1. which is something to keep in mind when you hear companies sell the "it's only gonna get better from here" pitch. It might be true that it gets better, but "how much better" is almost as important a question, especially if a single-digit% improvement requires 10x the work lol


nys
@nys

i think most people miss that LLMs are just the result of someone going “oh there’s no point at which more data stops making neural networks better” and not only have we basically run out of all the easily stolen data but we’ve polluted all future data too.

it is hard to express just how much MORE data we will need to make them significantly better. improvements in running them are much more likely, but still, neural networks are too simplistic to not need everything humanity has ever produced. any major breakthrough will have to come from that low-level training area.
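The diminishing-returns point both posts are making can be sketched with a toy power-law scaling curve, where loss falls as a small negative power of training data. The exponent and constants below are made-up illustrative values, not figures from the paper or from any published scaling law:

```python
# Toy sketch of power-law scaling: loss = scale * tokens**(-alpha).
# alpha and scale are illustrative assumptions, not measured values.
def loss(tokens: float, alpha: float = 0.04, scale: float = 10.0) -> float:
    """Hypothetical loss after training on `tokens` tokens of data."""
    return scale * tokens ** (-alpha)

base = 1e12  # pretend baseline: a trillion training tokens
for factor in (1, 10, 100):
    l = loss(base * factor)
    improvement = 100 * (1 - l / loss(base))
    print(f"{factor:>3}x data -> loss {l:.3f} ({improvement:.1f}% better)")
```

With this (assumed) exponent, every 10x increase in data shaves under 10% off the loss, which is the "single-digit% improvement for 10x the work" shape the footnote above describes.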


