sirocyl

noted computer gremlinizer

working on a @styx-os.

 

laptop.

"accidentally-vengeful telco nerd"
—Tom Scott

platform sec researcher, OS dev, systems architect, composer; Other (please specify). vintage computer/electronics nut.

I am open to tag suggestions - if there is something you want me to tag on my posts, leave a comment. <3


take a look at
this cool bug I found 🪲
discord
@sirocyl
revolt.chat (occasionally active)
@sirocyl#5128
styx linux OS project
styx-os.org/

posts from @sirocyl tagged #web standards

also:

invis
@invis


xyzzy
@xyzzy

this is a good take. a lot of people simply don’t have content that they want to put on a personal webpage. so they either just don’t make them, make a carrd-like list of social media links only, or build a page and then leave it under construction forever because coming up with Things to put on it is too difficult.


blep
@blep

v21
@v21

I read this & I'm like... what if you could make a website (a set of pages, with links & text & images & secrets) from your phone? like, without having to type any pointy brackets? and what if the website builder was also a webhost, or if you wanna mess about some more you can host it somewhere else and break the format a bit.

anyway, yeah, that's the pitch for Downpour, I hope folk use it.

(it does not have password-protected pages built in as a feature, although it does have "unlisted" pages. and if you think about it, a difficult-to-guess URL is the same thing as a password)


sirocyl
@sirocyl

What needs to happen is for an interface to form between web hosts/server software and domain hosts. Something like a registered TCP port protocol, or a /.well-known/ slug, that communicates the intent "this server uses this domain" from the serving side, and "this domain uses this server" from the domain side. "Setting nameservers" isn't it. Not all hosts allow port 53 services, let alone set them up.
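For illustration, the serving-side half of that handshake could be as simple as a JSON document at a well-known path. Everything below — the path, the field names, the hosts — is a hypothetical sketch, not any existing standard:

```
# hypothetical: GET https://pages.CEOcities.example/.well-known/domain-intent
{
  "serves": ["example.rocks", "www.example.rocks"]
}

# hypothetical: GET https://registrar.example/.well-known/domain-intent?domain=example.rocks
{
  "served-by": "pages.CEOcities.example"
}
```

A domain host could poll the first document, a web host could check the second, and when the two agree, DNS records and certificates could be wired up automatically with no admin interface in sight.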

Right now, if you decide your website wants to end in ".com" or ".rocks", rather than ".at.CEOcities.example" you have a few options:

  • 💸 fork out money for hosting at the domain provider. Definitely the most expensive (often ridiculously so) and least secure (in terms of continuity of your site), but it is usually the "easiest" to set up a site and e-mail on, plus you usually have support from the domain host itself if something goes pear-shaped setting it up or keeping it running.
  • "simple redirect": by far the easiest, but most jank, option. It points a browser visiting your domain at your page ".at.CEOcities.example", as if it had gone there in the first place.
  • "iframe capture"?? don't do this. some domain providers will put your website inside a frame "hosted" at your domain. Don't do this.
  • point nameservers at the place your page is hosted. This is the "typical" answer, and the one I pointed to above, but I feel like it's still rather jank, and many page hosts don't support this, or are bad at pointing pages, especially with HTTPS (GitHub Pages is a clear example.)
  • better figure out DNS records! oh boy, hope you learned what a CNAME is, or why you need both A and AAAA records. This is the only option if you plan on having more "advanced" services like e-mail (MX records) on your domain, even if you don't host it yourself.
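For reference, the "figure it out yourself" option from the last bullet usually boils down to a handful of records like these. The names and addresses are placeholders (the IPs come from the reserved documentation ranges):

```
; zone sketch for example.rocks — placeholder values throughout
example.rocks.        A      203.0.113.7               ; IPv4 address of the web host
example.rocks.        AAAA   2001:db8::7               ; IPv6 address of the same host
www.example.rocks.    CNAME  pages.CEOcities.example.  ; alias to the page host (CNAMEs can't live at the apex)
example.rocks.        MX     10 mail.example.net.      ; where e-mail for the domain is delivered
```

Which is exactly the problem: four record types, a trailing-dot syntax quirk, and an apex restriction, just to get a page and a mailbox.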

On top of this, some TLDs mandate HTTPS at all times, meaning you now have to juggle certificates and things around. Certbot absolutely doesn't make it any easier; it only makes it effectively free.

An aside on certs before Certbot

In the "bad old days" you'd cough up $495 a year for a certificate file stamped with the domains you owned, and that was that. Throw it on the server or upload it to your cPanel, and you had HTTPS now.

I want to be able to buy a domain from my phone, and point it to my page without ever opening an admin interface, shell, FTP client or DNS configuration file.
Or, tell my server that I own this domain, and have it configure the domain host to point my domain at it. Something like an SSO token, where I log in from my domain host through the server, maybe.



Lunaphied
@Lunaphied

We talk a lot about accessibility and disability. But what we don't often hear discussed is the concept of "technological disability". That is, disability primarily due to limited technology. So let's talk about it a bit.


delan
@delan

on the one hand, the modern web is wonderful. with things like evergreen browsers, living standards, experimental feature flags, and polyfills, it evolves at an exciting pace that far exceeds any other platform.

and the old web was unquestionably worse. the waterfall approach of “standards then reality and not the other way around” made early web standards slow and out of touch. this resulted in messy browser wars, and encouraged authors to turn to proprietary and inaccessible platforms like flash. traditional release cycles meant waiting years to try new features, plus years for those features to actually reach users.

that said,

building everything for the bleeding edge has its costs.

it disables people, as @irides explains, and it contributes to forcing everyone else onto an unsustainable treadmill of new hardware to keep up, which in turn feeds our destructive sandwich of resource extraction on one end, electronics waste on the other, and obscene energy consumption everywhere in between.

it also limits browser diversity, because if more or less every popular website requires an evergreen browser that supports everything, it becomes hard to make an independent and relevant browser without a megacorporation’s time and money. that’s why opera is chromium now, that’s why edge is chromium now, that’s why everything but like firefox and safari and epiphany are chromium now.

and yeah, i know i’m preaching to the choir a bit by saying this on cohost. in reality, there are a bunch of systemic reasons why this probably won’t change any time soon. planet-scale websites by for-profit developers will always treat a million users as disposable if it lowers development costs enough for their other billion users.

polyfills,

in theory, allow us to have our cake (developer experience) and eat it too (backwards compatibility). for example, nowadays it’s common to use the latest javascript features and just compile it down to ES5 (2009) or even compile it exactly down to what’s supported by an arbitrary percentage of the market.

but they’re not magic. you can polyfill features, but you can’t exactly polyfill computing power or bandwidth, not to mention the resultant code is often slower than if the feature were available natively. just because you can emulate the cutting edge, doesn’t mean doing so will result in anything remotely usable.
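as a concrete sketch of the pattern — not how any real transpiler or core-js implements it, since those handle edge cases like NaN equality and a fromIndex argument — here is what polyfilling one small feature, Array.prototype.includes from ES2016, looks like:

```typescript
// polyfill pattern: if the engine lacks the feature, install a fallback.
// on a modern engine this branch is skipped and the fast native version
// is used; on an old engine, this slower linear scan stands in for it.
if (!(Array.prototype as any).includes) {
  (Array.prototype as any).includes = function (
    this: any[],
    needle: unknown
  ): boolean {
    for (let i = 0; i < this.length; i++) {
      if (this[i] === needle) return true;
    }
    return false;
  };
}

// same call works either way, native or polyfilled:
const hits = ([1, 2, 3] as any).includes(2);
console.log(hits); // true
```

note the cost baked into the pattern: every page that might meet an old engine ships the fallback code, and the fallback is slower than the native feature it stands in for — which is the "you can't polyfill computing power" point above.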

what now?

the point of this is not to say we should all go back to centering shit vertically with negative margins, var that = this, cranking out a gif for every rounded corner on the page, and web apps that rely entirely on being spoonfed gobs of html by some server every time you click on something.

the solutions are progressive enhancement, graceful degradation, and most importantly, giving a shit about people unlike ourselves. and if we respect the web’s fundamental behaviours rather than trying to bleach them into a clean white canvas in order to inevitably recreate it all in javascript, we can do more for more people with less.

the web is not meant to be a pristine medium to convey a pixel-perfect reflection of our programmer-artistic vision. it’s meant to keep us on our toes and violate our assumptions, that’s what makes it so versatile. it’s meant to be messy, that’s what makes it fun.

you can’t expect that old kindle to do everything the web can do today, but sometimes you just wanna message a friend or go down a wikipedia rabbit hole or look at some cute cat pictures, and you should be able to do that no matter what kinda device you have.


sirocyl
@sirocyl

like, as a standard. I'd wanted to work on this for a while and brainstorm it with people, but a lot of times when I bring it up, the idea is shot down with "why?"'s and criticisms about weaseling into an already "solved problem domain". "HTML 4.01 exists, why not use that?" "why not Gemini? just use Gemini for goodness sake" "isn't this just AMP?" (No.)

it's very evident, this isn't even close to being solved.

this is why I propose: HTML/R and HTTP/R.

the ultimate goal is threefold:

  • give a "just enough technology" profile to enable competition in the web browser implementation domain - the number of things that need to be implemented is reduced to its bare minimum.
  • allow older, smaller or less powerful devices to remain empowered with what the web has to offer
  • allow connections with low bandwidth to be able to have the full text of the page as immediately as possible, or reasonable.

a knock-on effect is that accessibility is likely to increase on HTML/R pages, and they're going to be easier to archive, save and not break, and easier to spider and search by things that aren't megalithic Web Services Companies like Google or Amazon. overall a win-win imo


 