wavebeem

world wide weirdo

💕 @hauntedlatte

🏠 portland, or, usa

📆 mid-30s

💬 here to make friends and chat

🎮 video games
👩🏻‍💻 web development
👩🏻‍🏫 teaching others

🎨 digital aesthetics
💅🏻 makeup & jewelry
👗 gothic fashion
👩🏻‍🎨 making pixel art

🤘🏻 progressive metal
🎸 video game music

🟢 everything green
🌟 neon colors and transparent plastic

blog + rss
wavebeem.com/
discord
@wavebeem

Catfish-Man
@Catfish-Man

I love y'all, but 99% of the time the answer to "who is modern computing thing X for?" is "me, it helps me":

  • Before handheld GPS and mapping software I got lost constantly, and sometimes ended up in danger.
  • Before calendar reminders, I ADHD'd important things in my life to the point where I hurt people I loved and myself.
  • Before Wikipedia and various other rapid info sources I thoughtlessly repeated a lot of "conventional wisdom" that turned out to be propaganda or just plain incorrect.
  • The firehose of high-quality (after lots of pruning!) COVID info from Twitter has almost certainly kept loved ones safe; I have very reliably had better threat modeling than my more offline friends in this respect.
  • Those ultralight laptops that are "pointless aesthetic wankery"? Yeah, my arms are fucked from RSI.
  • Super fast CPUs? Until I got my current laptop, a single compile of my work project took 45 minutes and 75% of a battery charge, which directly and severely impacted my ability to work remotely.
  • "Who needs low-latency messaging, email is fine!", it's been just a few months since I was able to rescue a friend from a potentially life-threatening situation due to being accessible.
  • High-resolution deep-color high-refresh-rate screens? Even aside from the benefits for photographers and other artists, I literally do not believe y'alls aesthetic commitment to eyestrain. Yes I have nostalgia for the old look too, that doesn't mean I want to use it.
  • More on fast CPUs: a LOT of the performance cost of modern software (that isn't about nice screens, anyway) is about security. Process separation, Spectre mitigations, the list goes on and on.

This stuff is real. It's not for show. It actually helps me.

I agree there's lots of stuff we could and should do better, and old tech is fun and interesting, but let's not throw the baby out with the bathwater.



in reply to @Catfish-Man's post:

More on fast CPUs: a LOT of the performance cost of modern software (that isn't about nice screens, anyway) is about security. Process separation, Spectre mitigations, the list goes on and on.

There's an irony there: many of those new mitigations are necessary because of the ways we know how to make CPUs fast.

I wonder about that. It's not just speculative execution but caches, branch prediction, etc., all of which are aimed at making mainstream pointer-chasey code go faster. In the parallel universe of coprocessors like GPUs, DSPs, and such, you don't have to look as far back to find hardware that doesn't have any of those things, or has only relatively rudimentary forms of them, and they get their speed through explicit parallelization and different programming models that encourage exploiting it. "Data-oriented design" is beneficial even for traditional CPUs, so I wonder if there's room in the tooling space above the hardware to encourage programming in ways that get good performance without relying as much on all the predictive and cachey layers we've piled on over the decades.
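
(For the curious, here's a minimal C sketch of what "data-oriented design" looks like in practice, under an invented particle-update workload — the same loop written array-of-structs versus struct-of-arrays; the names and sizes are purely illustrative:)

    /* Illustrative only: updating one field still drags the whole struct
     * through the cache in the AoS layout, while the SoA layout gives the
     * hardware unit-stride access that needs no clever prediction. */
    #include <stddef.h>

    #define N 100000

    struct ParticleAoS { float x, y, z, mass; };
    static struct ParticleAoS particles_aos[N];

    void step_aos(float dt) {
        for (size_t i = 0; i < N; i++)
            particles_aos[i].x += dt;   /* strided loads: 16 bytes apart */
    }

    struct ParticlesSoA { float x[N], y[N], z[N], mass[N]; };
    static struct ParticlesSoA particles_soa;

    void step_soa(float dt) {
        for (size_t i = 0; i < N; i++)
            particles_soa.x[i] += dt;   /* contiguous loads: trivially prefetched and vectorized */
    }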

Or, alternatively, I wonder if CPU architectures could expose more of their microarchitectural state so that the supervisor/hypervisor could cleanly swap it on task switches, instead of having to feel around for post-hoc mitigations.

There are some interesting languages that claim to get "better than C" performance (whatever that means) just from being able to do better data layout by default thanks to more restricted language semantics, even without unleashing massive parallelism, such as:

https://github.com/google/rune https://github.com/google/wuffs

To be clear, I broadly agree with your point. I only use retro computers for fun, not really for anything serious (besides maybe as an enforced limited-distraction mode, thanks to limited connectivity).

I think we can definitely do better than all but the best-tuned C :) A language that encourages pointer chasing, doesn't have built-in high-quality data structures, can't compact the heap, and can't cleanly do thread-local "nursery" allocations is not a language that should be winning benchmarks.
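
(A small illustration of the pointer-chasing point, with invented types: summing the same values through a linked list versus a flat array. The list forces a chain of dependent loads, which is exactly what caches and predictors exist to rescue; the array gives the hardware predictable addresses.)

    #include <stddef.h>

    struct Node { long value; struct Node *next; };

    long sum_list(const struct Node *head) {
        long total = 0;
        for (const struct Node *n = head; n != NULL; n = n->next)
            total += n->value;      /* can't even fetch the next node until this load arrives */
        return total;
    }

    long sum_array(const long *values, size_t count) {
        long total = 0;
        for (size_t i = 0; i < count; i++)
            total += values[i];     /* addresses are known in advance, so loads can overlap */
        return total;
    }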

I think all of these innovations are good, actually! I just want to be able to choose what my computer spends its prodigious amount of power on, instead of wondering why there are over 500 processes running while nothing is happening on screen. (Security wouldn't be so much of an issue if you could distinguish a bad actor from the thousands of actors running on any given system.) And I'm very appreciative of the recent focus on efficiency, including fixed-function codecs and such.

Good eye! So the answer is "I tried to make it single-threaded, and it was actually a measurable improvement to systemwide performance… except that it deadlocked in edge cases". I have a bug filed with the IPC and concurrency folks to try to figure out a way to fix that, but the best solution I've personally thought of is "hard-code a list of processes that cfprefsd itself calls out to, and only spawn a new thread for requests from them". Which feels… fragile at best.
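
(A purely hypothetical C sketch of that allowlist idea — not the actual daemon's code, just the shape of "handle most requests inline, spawn a thread only for clients the daemon might call back into". Client names and types are invented:)

    #include <pthread.h>
    #include <string.h>

    struct request { const char *client_name; /* ...payload... */ };

    /* Clients this daemon makes outgoing calls to; requests from them could
     * deadlock a fully single-threaded loop, so they get their own thread. */
    static const char *reentrant_clients[] = { "clientA", "clientB" };

    static int needs_own_thread(const struct request *req) {
        for (size_t i = 0; i < sizeof reentrant_clients / sizeof reentrant_clients[0]; i++)
            if (strcmp(req->client_name, reentrant_clients[i]) == 0)
                return 1;
        return 0;
    }

    static void handle_request(struct request *req) { (void)req; /* real work here */ }

    static void *handler_thread(void *arg) {
        handle_request(arg);
        return NULL;
    }

    void dispatch(struct request *req) {
        if (needs_own_thread(req)) {
            pthread_t tid;
            pthread_create(&tid, NULL, handler_thread, req);
            pthread_detach(tid);
        } else {
            handle_request(req);    /* common case: stay on the single main thread */
        }
    }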

Sounds like you're doing good work in a convoluted system with probably considerably more than a comfortable amount of technical debt. :)

I dunno, I understand some of this is irrational, and there's nothing actually nefarious going on, but like... it's not going to make me feel any better when these incredibly powerful systems bog down in 5-10 years, you know? When I'm not entirely convinced that the system is doing much more for me than, say, Mountain Lion did? Definitely not proportionally to the resources consumed!

I do recognize that I'm projecting that expectation across the HDD/SSD divide, at least, not to mention dual-core to 8-core and Intel to Apple Silicon... maybe these modern machines won't slow down so much over that same time period. I hope not! I also can't help but worry that 8GB is not going to be enough soon... even with the improvements Apple Silicon seems to make in RAM efficiency and graceful degradation to swap, or whatever else is going on on that front.

I'm right there with you on that. The reason cfprefsd specifically is as svelte as it is is that this sort of thing is a mild obsession of mine (to the point where I no longer work on that daemon, because I switched teams to one where I could spend more of my time focused on performance). There are a lot of systemic factors that lead to software slowing down over time, and I'm very upset with all of them.

I appreciate you very much. :) and I'm sure you're not alone in that, with the kind of talent Apple brings in, even if you may be outnumbered. I'm still cautiously excited about the future of the Mac. I splurged and got a nice yellow M1 iMac for my personal use, and convinced my job to get me an M1 mini for work, even though it means I'm using Parallels for some things, because I believe in you all. And they're fantastic machines!

(If I suppress a little squeal that I spent $1500 on one with 8GB of RAM… at least I accidentally avoided the single-fan model when I made my color choice. Sigh.)

There's a point where some of these 500 processes are actually improving net performance too, since some of them are doing bookkeeping things like compressing memory pages, deduping APFS blocks, and so on, which means the system ends up swapping/jetsamming less and has less data, with better locality, to work with.

APFS deduplication still isn't implemented in macOS AFAIK, though you can install some utilities to do it. Can't help but think that if Apple engineers aren't enabling it for everyone yet, there might be a reason, though.

Very interesting writeup! It reminds me of disability critiques around "back to the land" nostalgia where people like to pretend that things like scalable food production and modern medicine aren't useful.

Extremely similar, yeah. I would need to dig through my Twitter archive to find it, but I ran across a fascist propaganda image a while back that was explicit (and delighted!) about how a "return to the land" would necessarily be built on a layer of corpses.

Modern computing makes retrocomputing lots more fun. I'm pretty certain of this, but I'm too busy on a modern laptop documenting my transputer-based simulation software demo to write an essay about why this is the case. In the meantime, consider modern-day aids like disk emulators that let you choose from hundreds of different images, or serial-to-Wi-Fi adapters that let you visit BBSes and other online services from your poky 8-bit machine.

Completely agree with all of this

I actually think that aesthetic and design choices from retro computing are some of the more interesting things to look back on though.

Recently I posted about CRT monitors, and I CERTAINLY don't want to go back to the flicker-filled, eye-strain world of those tiny radiation boxes. But there are quality-of-life things that were dropped in pursuit of flat panels that are only making a comeback now, two decades later (motion clarity, good contrast ratios, taller aspect ratios, etc.).

I love technology; it's exciting to me what computing can do now and what cool things it may be able to do in the future! My quality of life has been improved greatly by newer technologies, and even if there are some aspects of modern computing I really don't like [insert rant about privacy, corporations, and social media here], for the most part it's great living with what we have today!

Do I think there are lessons that can be learnt from old tech? Yes!

Do I think there is a level of "fun" in computing that was kinda lost with the minimalist Apple-esque movements of the last 15 years? Yes!

Do I think we should go back to old technologies? Hell no!