
mcc
@mcc

WebGPU is the new WebGL. That means it is the new way to draw 3D in web browsers. It is, in my opinion, very good actually. It is so good I think it will also replace Canvas and become the new way to draw 2D in web browsers. In fact it is so good I think it will replace Vulkan as well as normal OpenGL, and become just the standard way to draw, in any kind of software, from any programming language. This is pretty exciting to me. WebGPU is a little bit irritating— but only a little bit, and it is massively less irritating than any of the things it replaces.

WebGPU goes live… today, actually. Chrome 113 shipped in the final minutes of me finishing this post and should be available in the "About Chrome" dialog right this second. If you click here, and you see a rainbow triangle, your web browser has WebGPU. By the end of the year WebGPU will be everywhere, in every browser. (All of this refers to desktop computers. On phones, it won't be in Chrome until later this year; and Apple I don't know. Maybe one additional year after that.)
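If you'd rather check programmatically than squint at a triangle, a minimal sketch (TypeScript, assuming the standard @webgpu/types definitions; not the exact code behind that link) looks roughly like this:

// A rough WebGPU feature check: the API object must exist *and* an adapter
// must actually be obtainable before you can draw anything.
async function hasWebGPU(): Promise<boolean> {
    if (!navigator.gpu) return false;                     // browser doesn't expose WebGPU at all
    const adapter = await navigator.gpu.requestAdapter();
    return adapter !== null;                              // exposed, but no usable GPU adapter
}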

If you are not a programmer, this probably doesn't affect you. It might get us closer to a world where you can just play games in your web browser as a normal thing like you used to be able to with Flash. But probably not, because WebGL wasn't the only problem there.

If you are a programmer, let me tell you what I think this means for you.

Sections below:

  • A history of graphics APIs (You can skip this)
  • What's it like?
  • How do I use it?
    • Typescript / NPM world
    • I don't know what a NPM is, I just wanna write CSS and my stupid little script tags
    • Rust / C++ / Posthuman Intersecting Tetrahedron

ireneista
@ireneista

wow, we thought nothing was going to convince us to use a brand-new graphics API, but ... already having a Rust implementation is a key thing for us, we're big on that. yay :)



in reply to @mcc's post:

I've been working on a WebGPU 2D light simulation thingy this week and it's been SO SURPRISINGLY EASY! Once I got the JS stuff set up to let me throw some data at the compute shader and get it back, I was golden. It even let me pass it arrays of structs, which was frankly mindblowing, because when I previously dealt with WebGL things, it was much more strict about what I could and couldn't do. WebGPU is amazingly cool and I'm glad it exists.
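For anyone curious, the "arrays of structs" bit looks roughly like this (a sketch, not my actual project; the Light struct is a made-up example, and device is assumed to already exist from adapter.requestDevice()):

// WGSL side: a struct with a vec2 plus two scalars packs into 16 bytes
// (vec2<f32> is 8-byte aligned), so each array element is 4 f32s.
const shaderSource = /* wgsl */ `
    struct Light {
        pos: vec2<f32>,
        intensity: f32,
        radius: f32,
    }
    @group(0) @binding(0) var<storage, read> lights: array<Light>;
`;

// JS/TS side: pack the structs into a flat Float32Array and upload it.
declare const device: GPUDevice;                      // from adapter.requestDevice()
const lightCount = 3;
const lightData = new Float32Array(lightCount * 4);   // [x, y, intensity, radius] per light
lightData.set([0.5, 0.5, 1.0, 0.25], 0);              // light 0, for example

const lightBuffer = device.createBuffer({
    size: lightData.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(lightBuffer, 0, lightData);
// shaderSource then goes into device.createShaderModule({ code: shaderSource }).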

what makes esm in node a disaster? mostly asking cos i’ve run into esm not working in node a few times and i would just give up until i managed to get it working on accident.

since the examples in the wgpu repo don't come with build scripts.

This is probably because cargo is aware of examples and so the right way to build them is to run e.g. cargo run --example hello-triangle from the top level of the repo. But that doesn’t get you the web part of things.

Except Linux has a severe driver problem with Vulkan, and a lot of the Linux devices I've been checking out don't support Vulkan even now, seven years after its release.

do you mean embedded Linux? bc Vulkan is definitely widely supported on desktop

So for example I am very interested in the Pinebook (laptop) and Pinephone by Pine64, and last I checked (which was just a few weeks ago), neither of these supports Vulkan out of the box. The way it was described to me, it was less about "embedded" per se and more that both NVidia and ARM currently reserve Vulkan support to their closed-source drivers, so Linux distros (and hardware vendors like Pine64) that insist on a fully open source stack are kinda locked out from Vulkan. Apparently there are Mali GPU binary blobs I could get from ARM and install on the Pinebook/Pinephone, but I'm probably not going to buy a device without knowing 100% for a fact I'll be able to draw triangles.

(And I did double check, there's an open source third-party NVidia Vulkan driver called NVK, but it only kicked off last October and it's not clear if it's out of the experimental phase yet.)

to be fair, even for opengl nvidia has atrocious performance without the binary drivers

(and i would consider all of pine64's hardware "embedded," though i guess that's a bit nebulous)

on standard desktop hardware that isn't nvidia, vulkan support is fine, and realistically people running anything beyond basic desktop applications on Nvidia will be using the proprietary drivers anyway

fwiw: mesa intentionally did not bother supporting vulkan on the hardware you mentioned because they decided that the hardware wasn't powerful enough for any vulkan applications to run with usable performance

yeah vulkan on x86 desktop linux is fine and has been for years - for game/graphics dev purposes ARM desktop linux should be considered a hobby realm where nothing works yet.

also there are major windows games that shipped with vulkan: doom 2016 and doom eternal, no man's sky, and some UE4 games (which all run great on linux via proton, probably in part because of this). so the apple ecosystem is by far the most conspicuous gap in its support landscape.

The purpose of the pipeline and bind group layouts is to explicitly declare, ahead of time, the bindings your code uses. I believe they are needed if you are doing any kind of advanced dynamic binding of buffers and other resources to the shader. As an example from some WebGPU code I wrote, for this in the shader:

@binding(0) @group(0) var<storage, read> raw_fft: array<f32>;
@binding(1) @group(0) var<storage, read_write> smoothened: array<f32>;

I'd define the pipeline like so:

const computePipelineDescriptor = {
    compute: {
        module: computeShader,
        entryPoint: 'cm_smoother'
    },
    layout: device.createPipelineLayout({
        bindGroupLayouts:[ device.createBindGroupLayout({
            entries: [
                {
                    binding: 0,
                    buffer: {
                        type: "read-only-storage",
                    },
                    visibility: GPUShaderStage.COMPUTE
                },
                {
                    binding: 1,
                    buffer: {
                        type: "storage",
                    },
                    visibility: GPUShaderStage.COMPUTE
                }
            ]
        })]
    })
} as GPUComputePipelineDescriptor;

At some point you want to take control back from the shader compiler and declare explicitly what you want to do with your data, and that's where layouts come in.
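Then, when it's time to actually dispatch, those same binding slots get filled with concrete buffers through a bind group. A rough sketch (rawFftBuffer and smoothenedBuffer here are stand-ins for whatever GPUBuffers you created, and the workgroup count is arbitrary):

// Create the pipeline from the descriptor above, then bind concrete buffers
// to the slots the layout declared.
declare const rawFftBuffer: GPUBuffer;       // stand-in: storage buffer holding the FFT input
declare const smoothenedBuffer: GPUBuffer;   // stand-in: storage buffer the shader writes into

const computePipeline = device.createComputePipeline(computePipelineDescriptor);

const bindGroup = device.createBindGroup({
    layout: computePipeline.getBindGroupLayout(0),
    entries: [
        { binding: 0, resource: { buffer: rawFftBuffer } },      // @binding(0) raw_fft
        { binding: 1, resource: { buffer: smoothenedBuffer } },  // @binding(1) smoothened
    ],
});

// And in a compute pass:
const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass();
pass.setPipeline(computePipeline);
pass.setBindGroup(0, bindGroup);
pass.dispatchWorkgroups(64);                 // however many workgroups your data needs
pass.end();
device.queue.submit([encoder.finish()]);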

Excellent article btw, I greatly enjoyed reading it. If there is one more little thing I could add, it's a recommendation for PlayCanvas and Babylon.js in the "I just want to write my script tags" section. Babylon is an alternative to Three, while PlayCanvas is a fully fledged web-only game engine, both with WebGPU support.

Thanks! I actually do link Babylon.js and PlayCanvas in footnote 17 but I acknowledge both that and the idea of a footnote in a blog post are a little obscure :)

EDIT: I updated the footnote to be a little more clear what it's linking.

Oh yeah, I noticed the footnotes only after writing my comment, in hindsight probably should have edited it. The clarification is a good addition though, thanks again for the article, I've shown it to my "Web-skeptical" friends and managed to at least slightly skew their opinions on the future of graphics programming, so that's something

If I were a cynical, paranoid conspiracy theorist, I would float the theory here that Apple at some point decided they wanted to leave open the capability to sue the other video card developers on the Khronos board, so they are aggressively refusing to let their code touch anything that has touched the Vulkan patent pool to insulate themselves from counter-suits. Or that is what I would say if I were a cynical, paranoid conspiracy theorist. Hypothetically.

iirc Apple has said in WebGPU meetings that they can't support SPIR-V because they can't implement Khronos specs due to confidential legal proceedings, so

Very cool post! As a webGL user maybe I need to get on this and start learning it. My only comment: you've used a picture of Steven from Steven Universe Future (2019), not his jacketless kid design from 2013.

Even with my stumbling not-in-software-dev understanding of the details this was a fascinating read. DirectX and OpenGL were just names to me that people would seem to ascribe to good/bad graphics performance at random and now I appreciate a bit more just how deeply fucked this field is for a layman to understand.

My perception is it was Direct3D, not OpenGL, which eventually managed to wrangle all of this into a standard, which really sucked if you were using a non-Microsoft OS at the time. It really seemed like DirectX (and the "X Box" standalone console it spawned) were an attempt to lock game companies into Microsoft OSes by getting them to wire Microsoft exclusivity into their code at the lowest level, and for a while it really worked.

There were four strategic goals of DirectX, with different priorities at different times:

  1. Make Windows gaming competitive with the consoles, so they had the same titles.
  2. Make Windows gaming better than Mac.
  3. Make DirectX better than OpenGL/Linux/SGI/etc
  4. Bring 3D to the Windows desktop.

#1 was basically just to make Windows a decent single(ish) platform for people doing cross-platform games (remember DirectX included things like input, audio, 2D, etc). And it worked. Lots of console games came to PC, everybody happy, no bad motives there.

Very quickly it became clear that #2 was going to be trivial because every even year, Apple would introduce a new and completely different gaming ecosystem, and every odd year they'd refuse to talk to game devs at all. In theory Metal fixed this, but... we'll see.

#3 was initially absolutely cut-throat, including the devious fuckery of Fahrenheit ( https://en.wikipedia.org/wiki/Fahrenheit_(graphics_API) ), but as OpenGL got cruftier and cruftier, while DirectX wasn't afraid to throw old junk away, it got easier and easier. The competition was really only reawakened by Vulkan.

#4 was because everybody was paranoid that 3D would be the Next Big Thing for the OS, and Microsoft was sure that they needed a 3D API they could embed deeeeeep into the OS so that things like Excel and Task Manager could use it (yeah...). So obviously it had to be their 3D API - they're not going to put someone else's API at the heart of their OS. Anyway, this absolutely succeeded - there's almost no native 2D API left, everything graphical the OS and apps do is layered upon DirectX (DX9 initially), and managing graphics memory seamlessly with multiple contexts was probably the most difficult part of that. Was it all worth the huge effort? Unclear - we basically just got a fancy Alt+Tab animation and the edges of windows have shadows, but fixing all that stuff certainly did make life much easier for gamedevs.

This is interesting context. Incidentally, since I was worried the history section was already too long, I didn't even get into Apple's ill-fated (and, frankly, completely irrelevant to anything that came after, although it had some really interesting features that OSes today still haven't replicated) QuickDraw 3D, but that is a Story.

There was a truly terrifying amount of toxic mud being thrown at the wall in those times (VRML!), because everybody was convinced that (1) 3D was going to be everywhere, (2) everybody needed to share standards to share content, and (3) they wanted to own that standard. The corporate backstabbing was epic!

really underselling the genie effect when you minimize windows on Mac. that one moved computers off the shelves lol, and it's almost the only graphical flourish from the first release of OS X that still persists in the Mac today

Great post! Excited that the WGPU future is finally here. It's seemed like a good default for Rust graphics for a while, but having it on the web makes it feel Real™️.

(The stuff re: Apple and their behavior in WGPU sounds right, as someone who was not involved in WGPU but was adjacent to the Rust folks who were stakeholders at the time)

By the way, have you noticed the cheesy Star Trek joke yet? The companies with seats on the Khronos board have a combined market capitalization of 6.1 trillion dollars. This is the sense of humor that 6.1 trillion dollars buys you

Nah, it's because the first version of Vulkan was literally just AMD's "Mantle" API with a search-and-replace on the prefix (I am 100% serious about this). Mantle was itself a pun - it's the layer around the "core" you see, so there was lots of imagery of volcanos and magma and things being hammered on subterranean forges (and of course it's red because AMD).

And when it needed a new name, because nobody could consistently spell "Hephaestus" they settled on his Roman name Vulcan instead, except to actually AVOID the Star Trek problem they used a K. And that's why the armadillo is waving a blacksmith's hammer around.

So yes of course it's a pun. But deliberately not a Star Trek one.

Tried out the wgpu examples on desktop over the weekend; very keen to see this in a browser besides Chrome, i.e. Firefox! I looked around for a "how is WebGPU support coming along in FF" page like they've done for other major features/changes but couldn't find one. Any idea if there's a decent place on the web to track that progress?

There's an experimental WebGPU mode for Firefox but last time I checked (a few weeks ago) my samples didn't work in it. I didn't get around to exploring why not. Actually I should probably do this now.

If you want to track things at a finer level than the links already posted above, I'd suggest the WebGPU Matrix channel or the "In WebGPU we Rust" Matrix channel (the channel for wgpu, which is the Firefox component). DM me on Mastodon or something if you need the addresses.

Update: I did a retest and it turns out neither of my WebGPU tests/samples work in Firefox Stable with the webgpu hidden flag enabled, but both of them work out of the box on Firefox Nightly. Not sure what to make of that but if you want to test drive WebGPU on Firefox maybe try Nightly for now.

Hm, that might be because you're on a Mac. But if the Windows version exposes the flag and yet the feature is broken then it comes to the same place in the end.

Unfortunately currently (112.0.2) it does do one thing: It causes the browser to start advertising the presence of WebGPU, even though WebGPU contexts cannot be requested from canvases. So in one of my code samples, the "no WebGPU! Better show an error message!" detection does not work *_*
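So the moral seems to be that a detection check has to be defensive: don't trust navigator.gpu alone, go all the way to requesting an adapter, a device, and a canvas context before declaring victory. Something like this sketch (not my actual sample code; webgpuActuallyWorks is a made-up name):

// Paranoid WebGPU detection: navigator.gpu existing is not enough, since a
// browser can advertise the API while the backend is still disabled or broken.
async function webgpuActuallyWorks(canvas: HTMLCanvasElement): Promise<boolean> {
    if (!navigator.gpu) return false;
    try {
        const adapter = await navigator.gpu.requestAdapter();
        if (!adapter) return false;
        const device = await adapter.requestDevice();
        const context = canvas.getContext("webgpu");
        if (!context) return false;
        context.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });
        return true;
    } catch {
        return false;
    }
}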

I hadn't heard of dawn and then of course I had to click through to the CMakeLists.txt file. Yet another "Google write CMake that doesn't instantly make me want to crawl into a hole and pass away challenge (impossible)"

What a read! Good history lesson and introduction to WebGPU, got me interested in looking into it further. I've been down in the Vulkan mines for a good while, so I'm definitely intrigued by something that's a little more ergonomic than Vulkan, but which is also not OpenGL, and keeps the low level vibes.

WebGPU does look neat. I'm a bit worried it will quickly gain adoption as the interface people program against though, as my current computer is able to only do OpenGL 3.3 (Intel Integrated) / OpenGL 4.3 (NVidia), and I'd rather avoid having to replace it just because the new shiny user-facing API requires the new shiny low-level API my hardware doesn't implement.

wgpu actually also has something like this (in fact, they have a mode that runs on top of ANGLE), but in my experience, if you ask the wgpu folks about it, they will try to talk you out of using it >_>

What a great longpost! History note: AMD's Mantle (already mentioned in the comments, I see) was the first real effort in this direction, and AZDO I think was more of an attempt to get GL specifically closer to that level.

Production games like Battlefield 4 shipped with a Mantle renderer actually, and I saw a project that implements Mantle on top of Vulkan to preserve these renderers!

The NVidia AZDO GDC talk happened right in the middle of Valve's abortive Steam Machine push, and right after the Steam Dev Days event where Valve spent a lot of time trying to convince developers to port to Linux. Absolutely nobody was making OpenGL games except John Carmack, so I can't imagine why anybody would care about AZDO on OpenGL otherwise.

Yeah, I was trying to imply AZDO is the general idea underlying these APIs rather than suggesting the actual GDC talk/buzzword literally inspired all these APIs. Wasn't sure how to word that.

I did totally blank on mentioning Mantle in the timeline!

I don't do graphics programming often (despite meaning to get more into it, and this might be a good opportunity), but I really enjoyed this post, particularly the history section. Thanks for the read!

Will perpetually wish that WebGPU didn't come along with another NIH shader language. We're getting one more with SDL_gpu as well so soon there will be 200 different shader languages you have to care about

EDIT: Also just remembered that pipelines are on the way out in Vulkan* so it will be interesting to see if WebGPU later has to do the same or if they will stay stuck on pipelines

It seems like it's much, much easier to efficiently implement pipelines on top of shader objects than to try to do the opposite. I'd be very excited to see a shader-object-like mode in WebGPU; I assume(?) what determines whether the WG goes for it is how well the shader-object-like model maps efficiently onto DX12 and Metal.

This said, how quickly should we expect support for the shader object extension to spread? It's in… Vulkan 1.3.246? Is that right? With OpenGL I'd usually expect many old video cards to never get driver support for new versions; are things different with Vulkan?

My understanding is that much more of the Vulkan stack lives in user space and the OS, at least on Linux. You could probably also emulate shader objects on top of pipelines, and I suspect many people would want to be able to write their application that way.

I'm pretty interested in seeing how widespread shader objects become too. I've heard a couple of people express that they don't have much hope for them making inroads on mobile, but I really don't know what to expect.

They're not different with Vulkan on Android 😭

and probably not much different on other platforms either, maintaining support for several radically different kinds of hardware through one API is a lot of work, I'd be shocked if there's a vendor out there that doesn't just fork the driver at some point and put it into maintenance mode. but there will be some GPUs that are recent enough to get the new extensions, for sure.

As an observer this was a particular point of frustration for me. WebGPU is supposed to be the friendlier version, yet it's still tied to the complicated pipeline objects, and then Vulkan goes and decides they weren't that useful in games, and now everything is upside down. I hope that does spread downwards.

Nice to see more enthusiasm for WebGPU, especially WebGPU on native (the name is unfortunate, just like with wasm). Do note the Firefox-backed version isn't limited to Rust; I've been using it from the C FFI all along: https://github.com/gfx-rs/wgpu-native. For a while now both implementations have targeted the same C header, so the differences are very minimal, like you explained in the post.

About targeting shaders from the language: since I use the Scopes programming language (which I surely mentioned to you on twitter sometime), this has been a feature from the beginning for me, and it's indeed nice over here! For a bit I was scared that it was going to be taken away, but indeed on desktop they still accept SPIR-V bytecode and GLSL as well. Currently I'm writing WGSL anyway because our bytecode doesn't pass the naga validation (even though it worked fine before), but I'll try to fix that along the way.

Wow this is incredible. I kinda hate browsers, I mean there’s reasons all the way back to IE6, and Electron certainly hasn’t helped. So I had kind of ignored this whole space as being slightly cursed? But hearing that it’s abstracted enough from the browsers that it makes sense to use it native, with a lot less overhead than you’d expect, has made this sound much more interesting. We really do just want some cross platform stuff and honestly, nothing brings all the major players to the table like the web.

I also had never heard before that Vulkan was never meant to be written by humans; I’d heard it was fiendishly complex and then was surprised that people were excited by D3D12 and Metal.

Also I’m amazed that you made Apple’s decision to deprecate OpenGL sound reasonable hahaha; kinda goes to show that Apple doesn’t get the marketing team to talk about engineering decisions: you get the consumer-facing half of the company being all friendly, while the technical half is seemingly code-of-silenced into being unable to explain even the simplest things satisfactorily.

I suspect you may even be underestimating the impact of WebGPU. I'll make two observations.

First, for AI and machine learning type workloads, the infrastructure situation is a big mess right now unless you buy into the Nvidia / CUDA ecosystem. If you're a researcher, you pretty much have to, but increasingly people will just want to run models that have already been trained. Fairly soon, WebGPU will be an alternative that more or less Just Works, although I do expect things to be rough. There's also a performance gap, but I can see it closing.

Second, for compute shaders in general (potentially accelerating a large variety of tasks), the barrier to entry falls dramatically. That's especially true on web deployments, where running your own compute shader costs somewhere around 100 lines of code. But it becomes practical on native too, especially in Rust, where you can pull in a wgpu dependency.

As for text being one of the missing pieces, I'm hoping Vello and supporting infrastructure will become one of the things people routinely reach for. That'll get you not just text but nice 2D vector graphics with fills, strokes, gradients, blend modes, and so on. It's not production-ready yet, but I'm excited about the roadmap.

It's been extremely frustrating for me to see the compute world (especially stochastic parrot development but other things like photo editing and offline rendering too) not move to Vulkan and insist that proprietary CUDA and painful OpenCL are the only two options. I've heard tons, tons of excuses about Vulkan compute not quite having the features that OpenCL has, not working in that or this way and whatever else… and now suddenly when a limited and simplified subset-ish sort-of-but-not (handwaves) of Vulkan shows up in browsers, suddenly there is interest? Suddenly turns out the excuses were mostly just excuses and it's all possible to do.

It's been extremely frustrating for me to see the compute world (especially stochastic parrot development but other things like photo editing and offline rendering too) not move to Vulkan

Just to give you hope, pretty much all of my GPGPU work is using Vulkan compute features. I even write Adobe After Effects plugins that exclusively utilize Vulkan/MoltenVK compute and do all my graphics research in it.

Thank you very much! This is a fascinating, informative and fun read that I will be bookmarking, rereading, and using as a reference. And definitely not too long! I love history, and I love footnotes, so a very pleasant way to spend an evening. Again, thank you.

A few years ago I had an "Intro to Computer Graphics" class at uni. We did a lot of WebGL (because not having to install anything means it is the easiest platform to develop for).

For the final project I ended up doing a FLAM3 renderer on WebGPU. It was really crappy (just atomic increments on a big buffer) but it worked incredibly well!

Initialization headaches aside, WebGPU and WGSL are really nice to use. Even when comparing them to WebGL. It's kind of incredible that the low~ish level API feels better than the high~ish level one.

Also GPUs amaze me with how fast they are. Even for crappy code.

If I may add one more piece of info, regarding SPIR-V and WGSL specifically: Google has a bi-directional SPIR-V<->WGSL compiler called Tint ( https://dawn.googlesource.com/tint ), which is used in Chrome's WebGPU vulkan backend and seems to be quite feature complete. Not sure of any work specifically on that front, but with enough "convincing" you could likely run it using WebAssembly, and thus ship SPIR-V with your WebGPU applications. (Or write your shaders in HLSL/GLSL, compile them to SPIR-V and then back to WGSL for web deployment)

Yeah, I believe "embed tint" (or in Rust world it's Naga) is in fact the recommendation if you want to dynamically construct SPIR-V. The main thing that worries me about this is that if I am targeting web I want my executables to be as small as possible. I haven't had the chance to do size tests (my first successful WebGPU-via-WASM build was like, this past Friday) but I'd be concerned about how much larger embedding a SPIR-V-to-WGSL compiler would make my .wasm. (And there might be certain awkward sizes, like say the neighborhood of 200k, which would be small enough I'd be happy to include it if I had a need to dynamically generate shaders, but large enough it might inspire me to just rearchitect my software so it never dynamically generates shaders.)

That was a really good read! I haven't been involved much in any shader shenanigans just yet (web or not), so I am somewhat intimidated by the Hello World sizes, but this does kind of tempt me to delve into shaders.

Disclosure: Some edits I have made to this post since posting: Clarified the timeline of Metal vs Vulkan; clarified the bit about AZDO to prevent implying the AZDO talk itself inspired all those graphics APIs, which would be pretty implausible considering the timeline of Metal and Mantle; fixed the diagram images for visual glitches and to make it clear render passes are created on views, not textures; wrote out names for the threejs alternatives in the footnote; typos.

Also, the "sample Rust repo" linked was initially missing a commit which was present in the "compiled form running in a browser" linked in the same paragraph, and the license was missing. That's all pushed now.

2023-05-12: Corrected an error in footnote 6.

I've been using WebGPU as a library in standalone / not-in-the-browser Rust and it's pretty great. Just a thin, flexible layer with Vulkan-like semantics that abstracts away all the nasty stuff you'd have to abstract away anyway, so you don't have to reinvent the wheel.

It's a lot to take in and I feel like WGPU / Vulkan-like things make the most sense to learn after you get done being frustrated with OpenGL in the same way that Rust makes the most sense to learn after you get done being frustrated with C++, but on the other hand it's also well-documented and you can definitely do it if you set your mind to it!

I could help out a bit. I'm not the best teacher - I used to be better at teaching but then customer service jobs scrongled my conversational style - but I'd be willing to try.

Learn from a yinglet? SIGN ME UP

Haha, yeah, like I'm super interested in getting a GPU to do different things, like it can definitely draw a triangle, but what else can you do with that? What other kinds of data can you process or represent within a GPU's domain, like... not exactly CUDA/OpenCL stuff, but just abstracting the whole idea of what data is, if that makes any sense? I don't know if I'm making sense, and I probably sound like a total noob loon.

I haven't seen this mentioned, so I'll just add that the lowest friction method I've found for playing with this stuff on Android is to load Chrome Canary and enable

chrome://flags/#enable-unsafe-webgpu

edit primus: it's probable you can also just do this on chrome, but i usually don't use chrome for anything and compile my own chromium apk to be able to run apps that require chrome.

however, the build is a bit of a bother, so i haven't done it yet.

edit secundus: firefox nightly for android now seems to have webgpu enabled by default