I touch way too much Computer to have so little to share of it with my modest corner of online. There are probably a number of reasons for this but three in particular make me clasp my hands together with steepled fingers to my lips and enigmatically go "ah,"
- It's effort! It's right there in the name - effortposting. It's so easy to just not post. It's also easy to post once you get started, but look at this. Are you reading this? This is me getting started. Dreadful. Let's hope this passes quickly.
- Personal projects are never "finished". I mean they're finished as far as my ADHD is concerned, but with the exchange rate, that equates to anywhere between 25% and 75% actually complete.
- There's a voice on loop in one of the more antagonistic departments of my brain that insists what I have to show is stale, and stupid, and trite, and boring, and did I say stupid? and it smells and it's worse than everyone else on here posting their Jame Gams and their PICO-8s and their CSS crimes and ShaderToys and whatnot.
But there's a logical sleight of hand committed by that voice. A deceit in its insinuation. None of its accusations are incompatible with effortposting. So here I am thrusting my silly stuff, nude and skittish, on to the stage.
I'll start by digging out one thing I've been meaning to update, but will settle for just putting back online and seeing how we go. It's called, uh… well the folder is called petsciiart on my computer but that's just because it was the first fleeting thought that improved upon "untitled folder". Let's give it a jokey 80s technology pastiche name like


It's just a little thing. An amuse-bouche of computer whimsy. Artists will tell you that drawing anything is better than nothing, if only to get the pencil moving.
Give it an image, and it will give you a stylized old-computery version of that same image. Specifically, it recreates your image using the Commodore 64 - you remember the Commodore 64, right? -’s font and its attendant dingbats…
…and its color palette:
All the work is done in your browser - your images are not transmitted or stored anywhere.


It's nothing new - certainly not in the literal sense. I became dust and bugs when I discovered that the last commit was from 2012 - over a decade of cumulative been-meaning-tos. I can only just build it - the build system is completely homespun, and I just happen to never delete anything no matter how cringe[1]. It concatenates JavaScript files, but is written in Python 2.7, and is loosely based on the C preprocessor, only you'd write ///#define or ///#include with three slashes so code editors wouldn't draw red squiggles under them, and so JSLint wouldn't get mad at them. It could do macro expansion![2] There's literally
///#define CLAMP(n, min, max) ((n) < (min) ? (min) : (n) > (max) ? (max) : (n))
in there somewhere.[3]
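If "macro expansion" sounds grander than it is: it's just textual substitution. Here's a made-up call site to show roughly what the build script would spit out for that CLAMP - the luma line is invented for illustration, not lifted from the real code:

// A made-up example: with that ///#define CLAMP(n, min, max) in scope, the source might say
//   var luma = CLAMP(r * 0.3 + g * 0.6 + b * 0.1, 0, 255);
// and plain textual substitution would turn it into:
var r = 200, g = 100, b = 40;
var luma = ((r * 0.3 + g * 0.6 + b * 0.1) < (0) ? (0) : (r * 0.3 + g * 0.6 + b * 0.1) > (255) ? (255) : (r * 0.3 + g * 0.6 + b * 0.1));
// i.e. the same result as Math.min(Math.max(r * 0.3 + g * 0.6 + b * 0.1, 0), 255), minus the function calls.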
Anyway. It was nothing new at the time, either. I was specifically motivated by @vectorpoem's Playscii app[4], or rather a single one of its many features that does pretty much this. I'm vulnerable to being nerd-sniped through mimicry - if an app or tool or amuse-bouche of computer whimsy performs a task that piques my interest, and if, after pondering it for a while, I don't know how it's done but suspect it's within my reach, I become fixated on figuring out my own implementation. I don't remember if the source was available at the time. Maybe it was and I just didn't look hard enough. I imagine if I'd found it, I would have just taken a look and moved along. Instead there's now a JavaScript version that runs in a browser.


It's more of a curious stylization than anything functional - a real Commodore 64 could theoretically display the images produced by this thing (screen size permitting), but the display mode that lets you choose 2 colors per 8x8 pixel cell also lets you freely flip individual pixels between those 2 colors within that 8x8 neighborhood. So there's no need to build the picture out of text or predefined dingbats. If anything, it'd involve more fussing to copy the characters from the ROM into the pixel buffer. If you have a genuine need to send contemporary digital images 40+ years back in time, there are dedicated tools for that, like this and this and probably more.
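To make the "freely flip pixels" bit concrete, here's how I remember that bitmap mode being laid out - a sketch from memory with invented names, so treat the specifics as approximate rather than gospel:

// My recollection of the C64 hires bitmap layout (approximate!): each 8x8 cell gets
// 8 bitmap bytes plus one screen-RAM byte whose two nibbles pick that cell's two colors.
// Any bit pattern is legal, which is why nothing forces you to go through the font.
function decodeHiresCell(bitmapBytes, screenByte) {
  var fg = (screenByte >> 4) & 0x0f; // palette index used for bits set to 1
  var bg = screenByte & 0x0f;        // palette index used for bits set to 0
  var pixels = [];                   // 64 palette indices, row-major
  for (var row = 0; row < 8; row++) {
    for (var bit = 7; bit >= 0; bit--) {
      pixels.push(((bitmapBytes[row] >> bit) & 1) ? fg : bg);
    }
  }
  return pixels;
}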
Anyway, job done, right? Effect reproduced, ADHD raring to move on to - but definitely not finish - the next thing. Well, there are problems.


- The UI sucks mega hole, obviously! It could stand to explain itself a little better, divide up the screen space with a bit more intent, complement the number entry boxes with sliders, and support pasting of images (although I don't think JavaScript could hook into that at the time - I forget. It can now, though).
- It's slow - especially on Firefox. Faster than I expected on Safari? Extremely mid on Chrome. It does a brute-force search through every character and every foreground-background color combination for every 8x8 patch of pixels; turns out Playscii does approximately the same thing. It's all performed on the CPU, and it's one of the few times I've bothered drawing upon multiple Web Workers to parallelize a big honking obvious CPU-bound task like this[5] (there's a rough sketch of the search loop just after this list). But I have a hunch that hammering it into a WebGL-shaped problem would be the capital-p Proper way to do it. That likely involves substantially rearranging the algorithm tho and I have chips I want to eat.
- It really loves using the letter H for blending colors when the C64 has a perfectly usable ▒ right there. I know why this happens. It actually doesn't compare all 8x8 pixels when searching for matching characters and color combos. It reduces both the character set and the user image by 50% so that it's comparing 4x4 patches instead.
This naturally speeds things up, but I also recall being surprised that I liked the results more when this step was involved. My guess is that it results in less rigid matching, which means a small amount of intentional inaccuracy, which you might call noise, which when used deliberately to produce a more 'pleasing' result under quantization (which is kind of what this is doing), you might call dithering. In both the visual and audio domains!
Also, turns out the 8088 Corruption demo did this step as part of a preprocess with the same effect, as a way of getting a 1981 IBM PC to play video at 30fps.
- Wait, I was explaining the H thing. So the character images get scaled down by averaging each 2x2 square of pixels. If I had bothered to visualize this at the time, I'd have found that because H is aligned to odd-numbered pixels, it blurs into a fairly uniform shade of grey. Not quite featureless, but certainly less featureful than any other character:
And because the ▒ has chunky doubled pixels aligned on even coordinates, shrinking it down with this method preserves its appearance precisely.
The thingo then reckons it can use H as a kind of low-detail, halfway compromise between any of the 16 colors, which comes up a lot, and reserves ▒ for parts of the input image that have a similar checkerboard pattern, which is far less likely. You can see from the spread of downscaled characters that it's quite uneven - some characters become blurry, others stay sharp along one or both axes simply due to the happenstance of their alignment. It doesn't make a great proxy for how the symbols are perceived. The solution would be… idk. Using a softer scaling filter? Blurring it slightly? Maybe scaling by some off-kilter factor like 5/8ths instead of 50%. Maybe manually overriding some of the downscaled characters and forcing the ▒ to be a 50% grey? (There's a rough sketch of the downscaling step itself just after this list.)
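Since I promised a sketch of the search loop: per 8x8 cell it's roughly the below. None of this is the app's actual code - the names are invented, the error metric here is plain squared RGB distance, and the real thing compares the 4x4 downscaled versions and farms cells out to workers - but it's the gist.

// Brute force, per cell: try every character against every foreground/background pair
// and keep whichever rendering lands closest to the photo. Illustrative sketch only.
function bestCellMatch(cellPixels, charset, palette) {
  // cellPixels: 64 [r, g, b] triples sampled from the input image
  // charset: array of { code, bits } where bits is 64 zeros/ones describing the glyph
  // palette: 16 [r, g, b] triples
  var best = { error: Infinity, code: 0, fg: 0, bg: 0 };
  charset.forEach(function (ch) {
    for (var fg = 0; fg < palette.length; fg++) {
      for (var bg = 0; bg < palette.length; bg++) {
        var error = 0;
        for (var i = 0; i < 64; i++) {
          var want = cellPixels[i];
          var got = ch.bits[i] ? palette[fg] : palette[bg];
          var dr = want[0] - got[0], dg = want[1] - got[1], db = want[2] - got[2];
          error += dr * dr + dg * dg + db * db;
        }
        if (error < best.error) {
          best = { error: error, code: ch.code, fg: fg, bg: bg };
        }
      }
    }
  });
  return best; // the winning character code plus its color indices
}

Multiply that by every character, every color pair, and every cell in the image and you can see where the time goes.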
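And the downscaling step itself, as promised - again just a sketch of the idea, not the real code:

// Shrink an 8x8 glyph to 4x4 by averaging each 2x2 block (a box filter, basically).
// Per the H-versus-▒ grumbling above: strokes that straddle these 2x2 blocks smear
// toward mid-grey, while ▒'s doubled pixels fill whole blocks and come through intact.
function shrinkGlyph(bits) {   // bits: 64 values of 0 or 1, row-major
  var out = [];                // 16 coverage values between 0 and 1
  for (var y = 0; y < 8; y += 2) {
    for (var x = 0; x < 8; x += 2) {
      var sum = bits[y * 8 + x] + bits[y * 8 + x + 1] +
                bits[(y + 1) * 8 + x] + bits[(y + 1) * 8 + x + 1];
      out.push(sum / 4);
    }
  }
  return out;
}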
But besides all my rigorous self-criticism, there was always a throbbing nag in my brain lobes to add more features. Especially the ability to switch to looks that aren't the C64. The various IBM text modes and color palettes would be next on the todo list, but you know. There's also Atari, Amstrad, Teletext[6], and so on. Perhaps offer more options! Offload all my experimentation and indecision onto the user. Maybe compensate for non-square pixels; old computers loved having those. Maybe the option to use double-size characters to build the resulting image. Maybe other stuff.
Ok stop that's long enough. It's out there now. If you got this far then I humbly thank you for indulging me. I don't know if it was interesting, or comprehensible, or edifying, or wrong or bad or boring or if I should have stopped at the Read More or if I missed something really obvious. Feel free to let me know! Bye!


1. Though in fairness to myself, the state of JavaScript modules a decade-ish later still generally makes me want to pull my own face off
2. lmao
3. I was convinced I needed to do min/max without all those expensive function calls forced upon me by JavaScript's Math methods. On one hand, it does run in a tight loop per pixel, though on some other hand, I never benchmarked it, which means it's Premature Optimization, prepare to die
4. Also take a look at the lovely things made by the users of Playscii
5. One of the biggest surprises for me, at the time, was how the effect of CPU cache locality could be felt even up at the JavaScript-in-a-browser level. Rearranging the raster data so that each 64-pixel patch appeared in contiguous memory resulted in a massive speedup, on top of dividing the image between workers (there's a little sketch of what I mean below these notes). Still slow, but it was worse before! That's a win in my book!
6. When the Unicode Consortium added Symbols for Legacy Computing in early 2020 I did the Leonardo DiCaprio point at the screen
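(And the retiling sketch promised in note 5 - illustrative only, single channel, invented names:)

// Normal row-major image data scatters an 8x8 cell's rows right across the scanline,
// so the comparison loop keeps hopping around memory. Copying each cell's 64 values
// into one contiguous run lets the hot loop walk memory sequentially instead.
function tileIntoCells(gray, width, height) {   // gray: width*height luminance values
  var cols = width / 8, rows = height / 8;
  var tiled = new Float32Array(width * height); // cell after cell, 64 values per cell
  var out = 0;
  for (var cy = 0; cy < rows; cy++) {
    for (var cx = 0; cx < cols; cx++) {
      for (var y = 0; y < 8; y++) {
        for (var x = 0; x < 8; x++) {
          tiled[out++] = gray[(cy * 8 + y) * width + cx * 8 + x];
        }
      }
    }
  }
  return tiled;
}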

