Lunaphied
@Lunaphied

We talk a lot about accessibility and disability. But what we don't often hear discussed is the concept of "technological disability". That is, disability primarily due to limited technology. So let's talk about it a bit.


delan
@delan

the web,

on the one hand, is wonderful. with things like evergreen browsers, living standards, experimental feature flags, and polyfills, the modern web evolves at an exciting pace that far exceeds any other platform.

and the old web was unquestionably worse. the waterfall approach of writing standards first and letting reality catch up made early web standards slow and out of touch. this resulted in messy browser wars, and encouraged authors to turn to proprietary and inaccessible platforms like flash. traditional release cycles meant waiting years to try new features, plus years for those features to actually reach users.

that said,

building everything for the bleeding edge has its costs.

it disables people, as @irides explains, and it contributes to forcing everyone else onto an unsustainable treadmill of new hardware to keep up, which in turn feeds our destructive sandwich of resource extraction on one end, electronics waste on the other, and obscene energy consumption everywhere in between.

it also limits browser diversity, because if more or less every popular website requires an evergreen browser that supports everything, it becomes hard to make an independent and relevant browser without a megacorporation’s time and money. that’s why opera is chromium now, that’s why edge is chromium now, that’s why everything but firefox and safari and epiphany is chromium now.

and yeah, i know i’m preaching to the choir a bit by saying this on cohost. in reality, there are a bunch of systemic reasons why this probably won’t change any time soon. planet-scale websites by for-profit developers will always treat a million users as disposable if it lowers development costs enough for their other billion users.

polyfills,

in theory, allow us to have our cake (developer experience) and eat it too (backwards compatibility). for example, nowadays it’s common to use the latest javascript features and just compile it down to ES5 (2009) or even compile it exactly down to what’s supported by an arbitrary percentage of the market.
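
for a concrete sense of the cost, here's roughly how a couple of lines of ES2020 come out the other side of a transpiler (variable names made up, and real output varies by tool):

```js
// modern source (ES2020): optional chaining + nullish coalescing
var config = { server: { port: 3000 } };
const port = config?.server?.port ?? 8080;

// roughly what a transpiler emits for an ES5 target - same meaning,
// several times the code
var _config$server;
var port5 =
  (_config$server =
    config === null || config === void 0 ? void 0 : config.server) === null ||
  _config$server === void 0
    ? void 0
    : _config$server.port;
var portOrDefault = port5 !== null && port5 !== void 0 ? port5 : 8080;
```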

but they’re not magic. you can polyfill features, but you can’t exactly polyfill computing power or bandwidth, not to mention the resultant code is often slower than if the feature were available natively. just because you can emulate the cutting edge, doesn’t mean doing so will result in anything remotely usable.

what now?

the point of this is not to say we should all go back to centering shit vertically with negative margins, var that = this, cranking out a gif for every rounded corner on the page, and web apps that rely entirely on being spoonfed gobs of html by some server every time you click on something.

the solutions are progressive enhancement, graceful degradation, and most importantly, giving a shit about people unlike ourselves. and if we respect the web’s fundamental behaviours rather than trying to bleach them into a clean white canvas in order to inevitably recreate it all in javascript, we can do more for more people with less.
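
here's a minimal sketch of what that looks like in practice (the url and markup are made up): the form is fully functional as a plain server round trip, and script only layers an in-page upgrade on top where the browser can handle it.

```html
<!-- works everywhere: a plain form the server answers with a full page -->
<form action="/search" method="get">
  <label>search <input type="search" name="q"></label>
  <button type="submit">go</button>
</form>

<script>
  // enhance only when the needed features exist; otherwise the form
  // keeps working exactly as plain html
  var form = document.querySelector('form');
  if (form && window.fetch && window.URLSearchParams) {
    form.addEventListener('submit', function (event) {
      event.preventDefault();
      var query = new URLSearchParams(new FormData(form)).toString();
      fetch(form.action + '?' + query)
        .then(function (res) { return res.text(); })
        .then(function (html) { /* swap the results into the page */ });
    });
  }
</script>
```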

the web is not meant to be a pristine medium to convey a pixel-perfect reflection of our programmer-artistic vision. it’s meant to keep us on our toes and violate our assumptions, that’s what makes it so versatile. it’s meant to be messy, that’s what makes it fun.

you can’t expect that old kindle to do everything the web can do today, but sometimes you just wanna message a friend or go down a wikipedia rabbit hole or look at some cute cat pictures, and you should be able to do that no matter what kinda device you have.


sirocyl
@sirocyl

like, as a standard. I'd wanted to work on this for a while and brainstorm it with people, but a lot of the time when I bring it up, the idea is shot down with "why?"s and criticisms about weaseling into an already "solved problem domain". "HTML 4.01 exists, why not use that?" "why not Gemini? just use Gemini for goodness' sake" "isn't this just AMP?" (No.)

it's very evident that this isn't even close to being solved.

this is why I propose: HTML/R and HTTP/R.

the ultimate goal is threefold:

  • give a "just enough technology" profile to enable competition in the web browser implementation domain - the set of things that need to be implemented is reduced to its bare minimum.
  • allow older, smaller or less powerful devices to remain empowered with what the web has to offer
  • allow low-bandwidth connections to get the full text of the page as immediately as possible, or at least as immediately as is reasonable.

a knock-on effect is that accessibility is likely to increase on HTML/R pages, and that they're going to be easier to archive and save without breaking, and easier to spider and search by things that aren't megalithic Web Services Companies like Google or Amazon. overall a win-win imo


The scope is enabling modern, web-based 'reading' of article-form/document-form content in minimal-connectivity or minimal-compute environments. think blogs, forums, news, lightweight social sites; rather than "app in a browser" type environments.

rather than "downgrade" to HTML4/HTTP 1.1, we'd be instead looking at what technologies are most appropriate to enable for a restricted device or connectivity scenario, and how to enable them.

for instance, we want rich media - the HTML5 <video> and <audio> tags remain, but they do not auto-download, nor auto-play. <img> tags may auto-download, or require you to scroll to them, or click/tap them to download or show the image, depending on preference - but as an HTML/R author, you can expect that <img> will do that on some systems.

we want to keep things as semantically rigid as possible. UI elements shouldn't be <img> tags, for instance - if you use an image for a button, use CSS background-image instead.
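
a sketch of what an author might write under those two rules - this is all plain HTML5/CSS as it exists today, with filenames made up; only the expectations around loading are HTML/R's:

```html
<!-- rich media stays, but nothing fetches or plays until the user asks -->
<video src="talk.webm" controls preload="none" width="640" height="360">
  <a href="talk.webm">download the talk (WebM)</a>
</video>

<!-- content images declare their size, so the page lays out correctly
     whether the image is fetched now, on scroll, on tap, or never -->
<img src="diagram.png" alt="wiring diagram for the transceiver"
     width="400" height="300">

<!-- UI imagery lives in CSS, not <img>, so the button stays a button -->
<style>
  .send { background-image: url("send-icon.png"); background-repeat: no-repeat; }
</style>
<button class="send">Send</button>
```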

downloadable fonts exist, but will likely be restricted to a maximum number of fonts, characters, or KB of memory per page, and their implementation is not required - your page must still look as it should in plain serif or sans-serif.
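
in CSS terms, that just means treating the downloadable font as a nicety rather than a dependency (font name hypothetical):

```css
/* the webfont may be size-capped or skipped entirely by an R-profile
   browser, so the stack must end in a generic family the page still
   looks right in */
@font-face {
  font-family: "Fancy Serif";                  /* hypothetical font */
  src: url("fancy-serif.woff2") format("woff2");
  font-display: swap;  /* fallback text shows immediately */
}
body {
  font-family: "Fancy Serif", Georgia, serif;
}
```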

advancements in HTML5 and CSS3, for things like media queries and paged printing, can remain.

most CSS properties will persist into HTML/R, but complicated effects, animations and style items will not, or will be expected to be gracefully ignored by an R-profile browser.

javascript will be trimmed down, but not eliminated; however, a maximum of 4 KiB (TBD, but that sounds like a happy medium) of script code per page context is available by default, pages MUST gracefully fall back on a no-script condition, and an upper limit on memory usage is imposed. anything further requires the consent of the user, or will be silently disabled. localStorage APIs or whatever nonsense like that will be optional, consent/permissions-gated, easily accessible to the user (not hidden away like it is in Chrome/Firefox), and limited to kilobytes to start.
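
as a sketch of what gracefully falling back looks like for both the script and storage rules (the 4 KiB budget and the consent gate are this proposal's, not anything browsers enforce today):

```html
<script>
  /* small enough to fit a per-page script budget: remember a UI
     preference *if* storage is available and permitted. the page
     behaves identically when script or storage is unavailable. */
  var theme = null;
  try {
    theme = window.localStorage && localStorage.getItem("theme");
  } catch (err) {
    // an R-profile browser may deny storage outright - that's fine
  }
  if (theme) document.documentElement.className = theme;
</script>
<noscript>
  <!-- nothing is lost without script: the default theme simply applies -->
</noscript>
```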


Finally, an embeddable further subset of HTML/R, called "HTML/RE", basically does what the chost preprocessor does, but as a standard - it's meant to exist in fragment form and be embedded as user-generated HTML content, defining and standardizing such a subset.

Minor edit: Moved the "ultimate goal" to the top, for visibility



in reply to @Lunaphied's post:

This reminds me of a post I read a while back, "The unreasonable effectiveness of simple HTML." (I think it was originally shared here on Cohost, but I'm not sure by who.)

The author describes how someone was using a PSP to access GOV.UK — I was already aware of GOV.UK's efforts in general accessibility, but that led me down a fun research tangent into their philosophy and guidelines on progressive enhancement.

I agree it's definitely an issue that people should pay more attention to, especially for essential services. I don't really know how I feel about calling it "technological disability," though. As a disabled person who is working in accessibility somewhat... It doesn't feel quite right? Maybe it's a common term that I'm not familiar with, but I'm uncomfortable with it for some reason. I'm not sure I have an alternative, but I thought I'd mention it.

Huh, that's an interesting post. We appreciate your bringing it to our attention; it has a similar perspective.

We appreciate your feedback on the term and we're sorry it made you uncomfortable. Our thoughts as a disabled person here are mostly that we think disability comes in many forms and that as society changes, lack of access to technology itself becomes a type of disability.

Unfortunately, in the process of writing this, that point was squeezed out a bit more than we meant it to be, as we lacked concrete examples beyond limited personal experience. So the focus ended up on establishing background and a few tie-ins to how this relates to more traditional accessibility topics.

We hope that gives a bit of perspective on why we chose the term.

Back in 2017 a coworker and I noticed that we both used an iPhone 3G. Browsing was near impossible, and I had access to very few apps. We were both disabled, and our poverty was a result of our disability and societal attitudes towards supporting disabled people (i.e., as little as possible). I do feel like disability and exclusion often go hand in hand, but they aren't always the same thing.

We'd just like to share a fun example. Our smartphone died over a year ago, so we've been stuck with an iPhone 4. Most websites simply refuse to load, as a matter of expired certificates. The ones that do load are barely functional, get stuck, crash the browser, or don't display well. Surprisingly, YouTube works phenomenally well all things considered, and we don't get ads! Zero ads, no midrolls, no beginning or end ads.

We can't access any new apps either, so effectively we are left with a phone / alarm clock / youtube / single video game machine.

This is what I was talking about in my reply-share when I said it takes a lot of tech skill to keep older tech viable. :P

If you want to get more websites loading, you will need to find and install the updated ISRG Root X1 certificate. Most websites still won't work due to missing features in your outdated Safari, but you can at least attempt to load them.

If you want to be able to install old versions of more apps (a handful still work without doing this by sheer luck), you will need to jailbreak and install the Checkmate Store! tweak. (Important detail if anyone actually tries this: DO NOT update everything in Cydia all at once right after jailbreaking very old iOS devices! You'll run out of RAM halfway through and brick the device. Update packages one or two at a time.)

in reply to @sirocyl's post:

we support this goal for sure. it's important.

we have standards experience and we will say, we do not think any standards body that currently exists will vote for this. too many of their members are corporations who profit from the status quo.

the standards world is always intensely political, but the political stuff tends to be coded in such a way that it's hard to make sense of if you don't understand the terrain. so we won't be surprised if people disagree with our conclusion, or simply didn't realize it was like that, but that's how it is.

therefore we encourage anyone who wants this sort of thing to work on it OUTSIDE of the big industry bodies. by all means borrow their processes, the processes are useful, just not the part where corporate and state interests are given a seat at the table.

absolutely - if anything, we'll plant our own governance and steering committee, and watch it grow, while harmonizing with the existing bodies in charge.

long story short, it's "a standard", it's not "the standard" until it is.

i want an entirely new hypermedia stack designed from the transport layer up to be friendly to all known connected computers. it should degrade gracefully to telnet if need be. it should work over packet radio. i'm just looking at text on here, 300 baud should be more than enough

realistically this - a 'works over packet radio' premise - is a goal of HTTP/R and HTML/R, but I feel a compressed, binary profile representation thereof might be better suited for very restricted devices and connections. I'm thinking of a transparent layer over VT/ASCII control codes, to represent (the equivalent of) HTTP as a conversational terminal channel, and to represent the (equivalent of) HTML tags, attributes and style information as deterministic, non-Turing-complete state-machine bytecode in an ASCII escape sequence.

yessss

edit: realizing that because ham radio (in the US) does not allow encryption (if its purpose is to obscure the content of the message), authentication for something designed to travel over HF would have to work completely differently

some sort of token/digest-based authentication seems doable, as hashes are not "encrypted" in the traditional sense - there is no feasible way to "decrypt" a hash.

the initial registration may be tricky, though, and may require an outside channel of verification (like SMS or carriage over a commercial radio service).
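
a very rough sketch of that kind of challenge-response scheme, using the Web Crypto digest API (the callsign, secret, and protocol shape are all hypothetical - this is a brainstorm, not a vetted design):

```js
// challenge-response: the station sends a fresh random challenge, and the
// client answers with a hash over challenge + shared secret. nothing on
// the air is obscured - both sides just prove they know the secret, and
// a one-time challenge prevents simple replays.
async function respondToChallenge(sharedSecret, challenge) {
  const bytes = new TextEncoder().encode(challenge + ":" + sharedSecret);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  // hex-encode the digest for a plain-text radio channel
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// usage: the station sends the challenge, we send back the digest, and
// the station recomputes it locally to verify
respondToChallenge("correct horse battery staple",
                   "KD9XYZ:2024-06-01T12:00Z:48151623")
  .then(console.log);
```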

also, it goes without saying, but using the public airwaves, under amateur license, for commercial activity, is strictly verboten - so, proxying internet, serving any ads, or conducting business over HamHTTP/R is a non-goal of the project.

I feel like authentication should be okay, as long as your public key can be found easily attached to your callsign somewhere. After all, that does not obscure the content of the message to anyone who actually bothers to find your public key.

thinking about a hybrid mailing list slash web forum. if such a thing exists, that is

email in general just sucks all around though. sucks to set up, sucks to administer, sucks to use, and sucks to host an ML on

i think most of this makes sense, and is overall a pretty cool idea - i'd love to see something like this happen. there's a couple bits i'll point out though:

<img> tags may auto-download, or require you to scroll to them, or click/tap them to download or show the image, depending on preference

this could get fucky with page layout, since page authors need to consider layout for their page in 2 separate states - images rendered inline, or not (yet) rendered. it's probably not a huge deal for primarily textual content, since all you're gonna get is slightly different text wrapping, but more complicated ui elements could be impacted by having neighbouring elements with variable sizing (eg. a button next to an image is now too long because the image didn't render inline). definitely not the end of the world though, at worst some parts of a page might look a little misshapen. maybe the page author just needs to deal with that? not sure.
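
fwiw, one mitigation already exists in plain css: if the author reserves the image's box up front, neighbours don't reflow whether the bytes arrive or not (class name and sizes made up):

```css
/* modern browsers already derive an aspect ratio from the width/height
   attributes on <img>; this makes the reservation explicit and gives a
   visible placeholder for the not-yet-downloaded state */
img.deferred {
  width: 100%;
  max-width: 640px;
  aspect-ratio: 4 / 3;   /* matches the image's intrinsic ratio */
  background: #e0e0e0;   /* placeholder until (unless) it loads */
}
```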

the other thing i wonder - is this intended to be a sort of "fallback mode" for HTML, or an entirely separate protocol? i think it's feasible that a page author might want to have both an Oooo Fancy Shiny Modern HTML page, as well as a simpler HTML/R version for devices/connections/clients that can't handle all the modern features. would they have to maintain two distinct copies of their content? i guess that's unavoidable, but the best way to implement this use case would be to have some intermediate format that some server can jam into a HTML or a HTML/R document depending on what's requested, or at the very least some SSG-style framework that generated both formats at build time. fully static "just ftp some files in this folder and you're good to go" setups would involve some annoying redundancy. but like i said, i guess that is kinda unavoidable. idk this one has really become a train of thought dump so im just gonna end this one here before i get even more carried away lmao

but yeah, overall a super cool idea. our never ending race to Have Nice Things on the web has really left limited environments in the dust. i wish there was more focus on this type of thing, and that our tooling made it easier to either fail gracefully or do progressive enhancement.

Some definitions:
HTML/R: The subset of HTML for restricted environments.
R-mode: A browser using only the HTML/R feature set, whether it is an R-profile browser or not.
R-profile: The set of features needed to support HTML/R, excluding anything explicitly not supported by HTML/R.
R-profile browser: A browser built to the R-profile, to render HTML/R.
HTTP/R: A subset of HTTP/2 geared for R-profile browsers and device environments.

Realistically, R-mode is a fallback, sort of like quirks-mode - an R-profile browser, or a normal browser running in R-mode, ignores what it cannot or would not render or produce.

HTML/R is a profile of HTML, rather than a new "thing". A page written for HTML/R will not substantially break on an R-profile browser, and will render effortlessly on a modern, HTML5 browser.

Perhaps RF - HTML5 with R-profile fallback - will also be a thing, for more advanced webapps which fall outside the scope and capabilities of an R-profile browser. But I don't exactly want to encourage the HTML/R equivalent of just having a <noscript> tag that says "JavaScript is required; your browser is not supported".

right ok, i get ya. i still think there might be a few cases where someone might want a full mode page and an r-mode page, but if r-mode is just going to be a strict subset of HTML/CSS/JS, most people who need both for whatever reason should be able to get away with just an r-mode page. and if you do need both, you're probably doing something webapp-y anyway, in which case you already have the build tooling or server-side generation to spit out both full-mode and r-mode from source.

yup, pretty much. realistically, you can have a full mode page (e.g., for a webapp) fall back on an R-mode user-agent without deploying multiple versions - "use restricted" in a JS function or block might hint to the engine that the code in that block is R-mode strict.
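
to illustrate (the directive is purely hypothetical, modeled on "use strict"; current engines would simply ignore it):

```js
function markNotificationsRead() {
  "use restricted"; // hypothetical: promises the engine that this block
                    // sticks to the R-mode language/API subset. unknown
                    // directive prologues are valid JS today, so a full
                    // browser ignores the string and just runs the code.
  var badge = document.getElementById("notification-badge");
  if (badge) badge.textContent = "0";
}
```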

I'd also considered, for scripting, to eschew most of JS and expect WASM code modules, but I'm not sure how lightweight a WASM engine can get.

Also a quick thing about <img> tags: Ideally their sizing should be static, and either included in the tag width/height attributes, or retrieved in the header of the image. I was considering also allowing progressive images with the <img progressive> boolean attribute, capping out at a handful of KB until clicked/downloaded by the user - but I'd not like to add to HTML5 what isn't already there, in general.
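
concretely, something like this (the progressive attribute is this proposal's invention, not part of HTML5; filenames made up):

```html
<!-- static sizing: width/height let layout happen before any bytes load -->
<img src="photo.jpg" alt="a cat asleep on a keyboard" width="640" height="480">

<!-- hypothetical: fetch only the first few KB of a progressive JPEG,
     with the rest loaded when the user clicks/taps -->
<img src="photo.jpg" alt="a cat asleep on a keyboard"
     width="640" height="480" progressive>
```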

yeah, that makes sense. if you want image progressive enhancement, i don't think it'd be too much of a problem to just support <picture>, since that's already a pretty well-defined standard for that. but yeah, static image sizing (or at least static aspect ratio) is definitely best practice.
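
eg. something like this already degrades nicely, since the inner <img> is the universal fallback (filenames made up):

```html
<picture>
  <!-- a constrained client can pick the small candidate... -->
  <source srcset="chart-small.png" media="(max-width: 480px)">
  <!-- ...and anything that doesn't understand <picture> still gets this -->
  <img src="chart.png" alt="monthly bandwidth usage" width="800" height="400">
</picture>
```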

someone did mention <picture> elsewhere, and it is expected to be well-supported in HTML/R. I'd have to evaluate whether it's enough to replace or deprecate the progressive attribute idea, but it might be worth going for in this case.

it's a nice concept, but the big issue I see here is that there isn't any real incentive for anyone to work with it

the devs who need this the most won't use it, and mainstream browsers won't implement it. new devices generally have the power to handle the modern web without too much trouble, and old devices that could benefit probably aren't getting software updates anymore

the underlying issue that you're trying to solve is fundamentally a people problem: devs prioritizing developer experience over user experience, designers prioritizing fancy over reliable, and neither of them putting even the slightest thought into use cases with old devices or limited bandwidth. we all know you can't solve people problems with technology, but many standards writers have learned the hard way that you can't solve people problems with standards either

It's not for mainstream browsers to implement; they've already implemented it. :D It's a subset of HTML5 proper. Nothing has to be added to an already feature-complete user agent.

It is not for the Googles or Facebooks or even Reddits or Wikipedias of the web - it's more for the small stuff.

On the services side, things like Cohost and fediverse and people's static page blogs and writeups.

And on the browsers side, things that are not Firefox, not using Chromium or WebKit. Like the Serenity/Ladybird browser example, in the parent post.