We talk a lot about accessibility and disability. But what we don't often hear discussed is the concept of "technological disability". That is, disability primarily due to limited technology. So let's talk about it a bit.
the modern web,
on the one hand, is wonderful. with things like evergreen browsers, living standards, experimental feature flags, and polyfills, it evolves at an exciting pace that far exceeds any other platform.
and the old web was unquestionably worse. the waterfall approach of “standards then reality and not the other way around” made early web standards slow and out of touch. this resulted in messy browser wars, and encouraged authors to turn to proprietary and inaccessible platforms like flash. traditional release cycles meant waiting years to try new features, plus years for those features to actually reach users.
that said,
building everything for the bleeding edge has its costs.
it disables people, as @irides explains, and it contributes to forcing everyone else onto an unsustainable treadmill of new hardware to keep up, which in turn feeds our destructive sandwich of resource extraction on one end, electronics waste on the other, and obscene energy consumption everywhere in between.
it also limits browser diversity, because if more or less every popular website requires an evergreen browser that supports everything, it becomes hard to make an independent and relevant browser without a megacorporation’s time and money. that’s why opera is chromium now, that’s why edge is chromium now, that’s why everything but like firefox and safari and epiphany are chromium now.
and yeah, i know i’m preaching to the choir a bit by saying this on cohost. in reality, there are a bunch of systemic reasons why this probably won’t change any time soon. planet-scale websites by for-profit developers will always treat a million users as disposable if it lowers development costs enough for their other billion users.
polyfills,
in theory, allow us to have our cake (developer experience) and eat it too (backwards compatibility). for example, nowadays it’s common to write in the latest javascript features and compile down to ES5 (2009), or even down to exactly what’s supported by an arbitrary percentage of the market.
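for instance, a typical setup (just a sketch, assuming Babel’s preset-env and a browserslist-style query — real tools, but not the only way to do this):

```js
// babel.config.js - a sketch: compile modern JS down to whatever the
// browsers matching this browserslist query actually support.
module.exports = {
  presets: [
    ["@babel/preset-env", {
      // target still-maintained browsers with >0.5% global usage
      targets: "> 0.5%, not dead",
      // pull in core-js polyfills only for features the code actually uses
      useBuiltIns: "usage",
      corejs: 3,
    }],
  ],
};
```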
but they’re not magic. you can polyfill features, but you can’t exactly polyfill computing power or bandwidth, not to mention the resultant code is often slower than if the feature were available natively. just because you can emulate the cutting edge, doesn’t mean doing so will result in anything remotely usable.
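for illustration, the classic polyfill shape looks like this (a simplified sketch — the real polyfill also handles NaN, fromIndex, and so on):

```js
// fill the gap only when the feature is missing. the fallback works,
// but it runs as plain interpreted JS on every call instead of taking
// the engine's native fast path.
if (!Array.prototype.includes) {
  Array.prototype.includes = function (value) {
    return this.indexOf(value) !== -1;
  };
}
```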
what now?
the point of this is not to say we should all go back to centering shit vertically with negative margins, var that = this, cranking out a gif for every rounded corner on the page, and web apps that rely entirely on being spoonfed gobs of html by some server every time you click on something.
the solutions are progressive enhancement, graceful degradation, and most importantly, giving a shit about people unlike ourselves. and if we respect the web’s fundamental behaviours rather than trying to bleach them into a clean white canvas in order to inevitably recreate it all in javascript, we can do more for more people with less.
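to make that concrete, here’s a minimal sketch of progressive enhancement (the endpoint and ids are made up): a form that works everywhere as plain HTML, and gets upgraded in place only when script is actually available.

```html
<!-- works with zero javascript: the browser submits the form normally
     and the server (at a made-up /reply endpoint) renders the result -->
<form id="reply" action="/reply" method="post">
  <textarea name="body" required></textarea>
  <button type="submit">reply</button>
</form>

<script>
  // enhancement layer: if fetch is available, submit in place instead
  // of navigating away. if any of this fails, the plain form still works.
  var form = document.getElementById("reply");
  if (form && window.fetch) {
    form.addEventListener("submit", function (event) {
      event.preventDefault();
      fetch(form.action, { method: "POST", body: new FormData(form) })
        .then(function () { form.reset(); })
        .catch(function () { form.submit(); }); // degrade to full-page submit
    });
  }
</script>
```

if the script never loads — old kindle, flaky connection, script disabled — the page quietly behaves like it’s 2005 and everything still works.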
the web is not meant to be a pristine medium to convey a pixel-perfect reflection of our programmer-artistic vision. it’s meant to keep us on our toes and violate our assumptions, that’s what makes it so versatile. it’s meant to be messy, that’s what makes it fun.
you can’t expect that old kindle to do everything the web can do today, but sometimes you just wanna message a friend or go down a wikipedia rabbit hole or look at some cute cat pictures, and you should be able to do that no matter what kinda device you have.
like, as a standard. I'd wanted to work on this for a while and brainstorm it with people, but a lot of the time when I bring it up, the idea gets shot down with "why?"s and criticisms about weaseling into an already-"solved" problem domain. "HTML 4.01 exists, why not use that?" "why not Gemini? just use Gemini, for goodness' sake" "isn't this just AMP?" (No.)
it's very evident from those responses that this isn't even close to being solved.
this is why I propose: HTML/R and HTTP/R.
the ultimate goal is threefold:
- give a "just enough technology" profile to enable competition in the web browser implementation domain - the amount of things that need to be implemented, is reduced to its bare minimum.
- allow older, smaller or less powerful devices to remain empowered with what the web has to offer
- allow connections with low bandwidth to be able to have the full text of the page as immediately as possible, or reasonable.
a knock-on effect is that accessibility is likely to increase on HTML/R pages, and they'll be easier to archive and save without breaking, and easier to spider and search by things that aren't megalithic Web Services Companies like Google or Amazon. overall a win-win imo
The scope is enabling modern, web-based 'reading' of article-form/document-form content in minimal-connectivity or minimal-compute environments: think blogs, forums, news, and lightweight social sites, rather than "app in a browser" type environments.
rather than "downgrade" to HTML4/HTTP 1.1, we'd instead be looking at which technologies are most appropriate to enable for a restricted device or connectivity scenario, and how to enable them.
for instance, we want rich media - the HTML5 <video> and <audio> tags remain, but they do not auto-download or auto-play. <img> tags may auto-download, or require you to scroll to them, or require a click/tap to download and show the image, depending on preference - so as an HTML/R author, you should expect <img> to behave in any of those ways depending on the system.
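today's HTML already has most of the knobs to approximate this; a sketch of what an HTML/R-friendly embed might look like (file names invented):

```html
<!-- video: preload="none" asks the browser not to fetch any media up
     front, and with no autoplay attribute it won't auto-play; at most
     the poster image is fetched, and only if images are allowed at all -->
<video controls preload="none" poster="talk-poster.jpg">
  <source src="talk.webm" type="video/webm">
  a recording of the talk (fallback text if video is unsupported)
</video>

<!-- image: hint that it can be deferred until scrolled into view -->
<img src="cat.jpg" alt="a cat loafing on a radiator" loading="lazy">
```

preload="none" and loading="lazy" are only hints today; presumably HTML/R would make that deferral the guaranteed behaviour rather than a suggestion.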
we want to keep things as semantically rigid as possible. UI elements shouldn't be <img> tags, for instance - if you use an image for a button, use CSS background-image instead.
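i.e. something along these lines (class and file names are just for illustration):

```html
<button class="send" type="submit">send</button>

<style>
  /* the icon is decoration, so it lives in CSS; if the image never
     loads, the button is still a button with a readable label */
  button.send {
    background-image: url("send-icon.png");
    background-repeat: no-repeat;
    background-position: 0.5em center;
    padding-left: 2em;
  }
</style>
```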
downloadable fonts exist, but will likely be restricted to a maximum number of fonts, characters, or KB of memory per page, and their implementation is not required - your page must still look like it should when rendered in plain serif or sans-serif.
advancements in HTML5 and CSS3, like media queries and paged printing, can remain.
most CSS properties will persist into HTML/R, but complicated effects, animations, and style features either will not, or should expect to be gracefully ignored by an HTML/R profile browser.
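tying the last few points together, a sketch of what "degrades gracefully" could mean for the stylesheet (font, file, and class names invented):

```css
/* downloadable font with a mandatory generic fallback: if the browser
   skips the download (or the per-page font budget is spent), the text
   still renders fine in a plain sans-serif */
@font-face {
  font-family: "Fancy Grotesk";
  src: url("fancy.woff2") format("woff2");
}
body {
  font-family: "Fancy Grotesk", sans-serif;
}

/* media queries and paged printing stay in the profile */
@media print {
  nav { display: none; }
}

/* a purely decorative animation an HTML/R browser is free to ignore;
   nothing about the content depends on it running */
@keyframes pulse {
  from { opacity: 0.7; }
  to   { opacity: 1; }
}
.badge { animation: pulse 2s infinite alternate; }
```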
javascript will be trimmed down, but not eliminated; however, only a maximum of 4 KiB (TBD, but that sounds like a happy medium) of script code per page context is available by default, pages MUST gracefully fall back on a no-script condition, and an upper limit on memory usage is imposed. anything beyond that requires the user's consent, or will be silently disabled. localStorage APIs and nonsense like that will be optional, consent/permissions-gated, easily accessible to the user (not hidden away like it is in Chrome/Firefox), and limited to kilobytes to start.
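you can already write in this spirit today; a sketch (using only standard DOM APIs) of script that tolerates being absent and treats storage as something that may simply not exist:

```html
<noscript>
  <!-- the page itself is fully readable without script; this just
       notes that a nicety is off -->
  <p>live draft saving is unavailable without javascript.</p>
</noscript>

<script>
  // storage may be denied, disabled, or over quota - treat every
  // access as fallible and carry on without it
  function loadDraft() {
    try {
      return localStorage.getItem("draft") || "";
    } catch (err) {
      return ""; // no storage permission: act as if no draft exists
    }
  }
  function saveDraft(text) {
    try {
      localStorage.setItem("draft", text);
    } catch (err) {
      // silently skip persistence rather than breaking the page
    }
  }
</script>
```

(for what it's worth, the whole script above is a few hundred bytes - comfortably inside a 4 KiB budget.)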
Finally, a further embeddable subset of HTML/R, called "HTML/RE", basically does what the chost preprocessor does, but as a standard - it's meant to exist in fragment form, embedded as user-generated HTML content, to define and standardize a subset for exactly that use.
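to illustrate, a hypothetical HTML/RE fragment might look like this - which tags survive is an open question, so this particular set is invented:

```html
<!-- a hypothetical HTML/RE fragment: no <html>/<head>/<body> wrapper,
     no <script>, no <style> - just a constrained set of content tags -->
<p>finally finished the writeup! <em>so</em> glad it's done.</p>
<blockquote>
  <p>sometimes you just wanna go down a wikipedia rabbit hole</p>
</blockquote>
<p>full notes <a href="https://example.com/notes">over here</a>.</p>
```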