
I'm Talen! I make videos and articles and games and graphic designs and guides and messes and encouragement. Chances are you can find anything I do on my blog. I like it when you comment on my things, so please do!
Oh hey, Computerphile. I really like them--they had a bunch of really good stuff on early Unix and academic computing history too, which helped contribute to my contempt for "The Unix Philosophy".
You might enjoy the UNIX-HATERS Handbook if you haven't already read it.
I have! Though my particular beef in this case is... I think it's less strictly technical and more social? Nerdry took what was a pragmatic decision in its time ("Our minicomputers cost as much as a house, have to share time between dozens of users, and only have 64 KiB of core as their main memory, so we chain multiple small tools together to act on streams of data that require minimal caching") and elevated it into a foundational axiom that now leads people to build new layers on top of existing cruft rather than start anew.
A social/philosophical objection to the cultish following of UNIX dogma makes perfect sense to me.
That was a point Poettering tried to make, but then it kind of got lost among all the controversies. It also didn't help that the other well-known instance of trying to reinvent the Linux userspace for the modern world was Android.
I periodically check in on Redox, but other than that, it's been years since I've had the time to play around with experimental OSes or userspace replacements.
I've had a gut feeling for a while that there will eventually be studies and/or proofs showing that machine learning has severe limits in what it can actually do, on the level of information theory.
I'm suddenly reminded of these papers: "Less is More: Parameter-Free Text Classification with Gzip" and "A Strong Inductive Bias: Gzip for binary image classification".
They basically show that a naïve information-theoretic approach can perform competitively with, and on certain tasks better than, common transformer-based deep learning models.
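The core trick in those papers is surprisingly small: measure how much better two texts compress together than apart (normalized compression distance), then do a k-nearest-neighbour vote. Here's a minimal sketch in plain Python with the stdlib `gzip` module -- the function names and toy data are mine, not from the papers, and a real run would tune `k` and use a proper training set.

```python
import gzip

def ncd(x: bytes, y: bytes) -> float:
    # Normalized Compression Distance: if x and y share structure,
    # compressing their concatenation costs little more than the larger alone.
    cx = len(gzip.compress(x))
    cy = len(gzip.compress(y))
    cxy = len(gzip.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(sample: str, labeled: list[tuple[str, str]], k: int = 1) -> str:
    # k-nearest-neighbour vote: rank labeled examples by NCD to the sample.
    dists = sorted(
        (ncd(sample.encode(), text.encode()), label) for text, label in labeled
    )
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)
```

No training, no parameters -- the "model" is just gzip's ability to notice repeated substrings, which is exactly why it makes such a pointed baseline against transformers.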
EDIT: Should probably include this follow-up about the data in the first paper.