

cathoderaydude
@cathoderaydude

usb, usb2, usb3, usbc, displayport, hdmi, sata, nvme, pcie, and about a dozen other interconnects are all identical. they're just LVDS serial ports. yeah, yeah, i know, there are implementation differences, but nothing that matters.

as much as I hate USBC/thunderbolt/etc. it really drives this home. yes, yes! any port on your PC SHOULD secretly be able to turn into a monitor connector! except, harder! with better planning! and not by having a bunch of totally incompatible Modes with utterly dissimilar underlying philosophies that nobody wants to implement, especially because they don't have to. the USB forum completely fucking dropped the ball, then fell on their faces and their pants fell off and everyone saw their tiny nads, and yet they made a good point that nobody is going to take to heart.

PCIe is the lingua franca of computing. everything is one or more lanes, and has been for years. USB is a way to get some lanes. we just inexplicably won't let those lanes out of the computer unless they're first turned into a bunch of other protocols that are fundamentally identical, just incompatible for some reason. stop that!

computers should not have HDMI or USB or DP or anything on the back. graphics cards should have no plugs on them. sound cards, no plugs. video capture cards, no plugs. only motherboards should have plugs, and the only ones they should have are identical data ports and the only language they should speak is PCIe.

all capabilities in a computer should be published or subscribed to. devices and software should offer sources and sinks, and the motherboard should be little more than a PCIe router, a backplane that creates a Data Marketplace where devices can announce that they are a Source and if anyone would like to Sink them, they would be happy to set that up for a nominal fee (1-16 lanes, depending.)
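to make the "Data Marketplace" idea concrete, here's a toy sketch of it in Python. everything here is invented for illustration (class and source names, the lane accounting); it just shows the shape of publish/subscribe with lanes as the currency:

```python
# Toy sketch of the Data Marketplace: devices announce Sources, other
# devices Sink them, and the backplane hands out lanes as the fee.
# All names here are made up for illustration.

class Backplane:
    def __init__(self, total_lanes=16):
        self.free_lanes = total_lanes
        self.sources = {}          # source name -> publisher callback

    def publish(self, name, callback):
        """A device announces itself as a Source."""
        self.sources[name] = callback

    def subscribe(self, name, lanes):
        """A Sink asks for a Source, paying its nominal fee in lanes."""
        if name not in self.sources:
            raise LookupError(f"no such source: {name}")
        if lanes > self.free_lanes:
            raise RuntimeError("out of lanes")
        self.free_lanes -= lanes
        return self.sources[name]  # the Sink now pulls data directly
```

a dongle asking "who's the primary video source?" is then just a `subscribe()` call against whatever name the GPU published under.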

you have an HDMI monitor? you plug it into a Port, any Port, through a dongle. the dongle asks "who's the primary video source?" and the graphics card says Well That's Me and the dongle subscribes, and now you have a picture. if you need more monitors, you plug in more dongles, and continue until you run out of lanes. if your GPU doesn't have the smarts to span a ton of displays, that's no problem - buy an ASUS ROG Display Mux that plugs into a Port, sinks all available lanes from the GPU, then republishes a new source that monitors can subscribe to. want to clone some displays? the chipset can do that for you by allowing multiple sinks to subscribe to the same source, and then the packets get duplicated en route; this is the age of Multicast On The Backplane, baby.
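the Multicast On The Backplane part is the simplest piece of all of this; a sketch (again with invented names, and packets standing in for whatever the lanes actually carry) is just a router that duplicates en route:

```python
# Toy model of Multicast On The Backplane: several sinks subscribe to
# one source and the router duplicates each packet for every subscriber,
# which is all display cloning would be. Not any real PCIe facility.

class MulticastRouter:
    def __init__(self):
        self.sinks = []

    def subscribe(self, sink):
        """sink is any callable that accepts a packet."""
        self.sinks.append(sink)

    def send(self, packet):
        # duplicate the packet for every subscriber en route
        for sink in self.sinks:
            sink(packet)
```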

GPUs stop needing to know about sound. that's over. windows publishes itself as an audio source, to which monitors can subscribe if they want an additional stream. apps can also publish themselves as sources, and with the appropriate Advanced Control Panel you can manually route a single app to a single device. sample rates aren't compatible? Elgato Hyperstream Audio Conflater, $89.95. plugs into a Port, sinks all your audio sources, then resamples and refuckulates them however you like before republishing them - in whatever configuration you want, of course, mirrored to as many devices as you want.
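for the record, the core of the (fictional) Conflater's job is boring, well-understood math. a naive linear-interpolation resampler, as a sketch (real resamplers do proper filtering, this is the crayon version):

```python
# Toy sample-rate converter: stretch or squeeze a stream from src_rate
# to dst_rate by linear interpolation. Illustration only; a real
# resampler would band-limit first.

def resample(samples, src_rate, dst_rate):
    if not samples:
        return []
    n_out = max(1, round(len(samples) * dst_rate / src_rate))
    out = []
    for i in range(n_out):
        # position of output sample i in the input stream
        pos = i * (len(samples) - 1) / max(1, n_out - 1)
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```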

you should be able to dupe the video coming out of your video card and send it to a capture card inside the motherboard. you should be able to connect a video input to your PC, then route that to your monitor instead of buying a KVM switch. i'm right

"PCIe won't run that far" yeah yeah i know, obviously this won't literally be PCIe in all forms at all times. that doesn't matter - the Dongle Future I propose (which won't suck because it'll be the assumption, rather than a shitty hack to make up for not having planned right in the first place) will invisibly convert things to whatever longer-distance protocol they need to be. we're already putting chips in all the cables; commit to that bit. every cable should be a short haul modem.

besides, lots of things don't need a whole PCIe lane. that's why we devise the new fractional lane, so your mouse and keyboard and streamdeck etc can all share one PCIe channel. yes, i am proposing that we bring back the 1980s AT&T TDM circuit switching model, and I'm right.
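the TDM model really is this dumb, which is the point. a sketch of a fractional lane as a fixed round-robin slot table (names invented):

```python
# Sketch of a "fractional lane": slow devices each get a recurring time
# slot on one shared lane, the old circuit-switched TDM model. A fixed
# round-robin schedule; no arbitration, no packets, no surprises.

def tdm_schedule(devices, n_slots):
    """Assign each device a repeating slot on one lane, round-robin."""
    return [devices[i % len(devices)] for i in range(n_slots)]
```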

this has been feasible for over a decade. the damning thing is that we knew it, a decade ago, and had we started moving towards it then, the platform would be salvageable. instead we've done nothing, and the hell nightmare future where tablets actually do replace PCs will come to pass because, in our hubris / apathy, we didn't pivot the PC to focus on its strengths. the thing that makes the PC special is its incredible flexibility, but we let it solidify and stagnate, and now it's probably too late to undo it.

edit: ZERO PORT RAID CONTROLLERS. NUFF SAID

edit edit: SFP MODULES REFLECT THE TRUE FACE OF GOD. MAKE EVERYTHING LIKE THAT


lagomorphosis
@lagomorphosis

in reply to @cathoderaydude's post:

*puts on dunce cap* what about hardware exploits/system access on the PCIe bus?

I know that was something people were talking about with regards to thunderbolt for a period of time... but I also don't know exactly who hardware security serves anymore so maybe a moot point.

No point in worrying about the sidechannels when the front door is usually open.

Thinking about it, isn't this kinda how server motherboards have been going? With recent-ish storage developments there I think they've basically just been turning into "here's all your PCIe have fun". They obviously don't have video or audio output needs, but I think that'd be the blueprint.

Of course now they're doing CXL or whatever because they want to add system memory to that One Big Bus.

Well, there's this thing called a DPU which is basically an ARM SoC along with some support hardware (crypto acceleration?) that lives on your network card and does a bunch of network stuff for you. The damn thing can perform en/decapsulation, do some absolutely disgusting DMA to give VMs true wire speed (slightly dumber network cards can do this too, but not as adaptably), even run Kubernetes to turn your single physical server into edge computing lambda-microservice hell, all because it speaks - guess what - PCIe. TCP segmentation offload is so last decade.

I think they're putting it on graphics cards too.

to the best of my knowledge -- which admittedly isn't great, I haven't touched supervisor-mode code in decades -- the memory bus already has a device (the IOMMU) sitting on it that can do memory protection from I/O devices; there'd definitely be work to do to build it out and clean up OS IOMMU handling (with a bit of googling I found a paper from someone building a malicious PCIe device to exploit IOMMU handling in 2018), but I don't think it'd be insurmountable.
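roughly, the protection model looks like this (a simplified sketch; page sizes, names, and the fault behavior are all flattened for illustration, not how any particular IOMMU is programmed):

```python
# Rough sketch of what an IOMMU does for DMA: each device gets its own
# translation table, and a DMA access that's unmapped (or a write to a
# read-only mapping) faults instead of touching host memory.

PAGE = 4096

class IOMMU:
    def __init__(self):
        # device id -> {io page number -> (physical page, writable)}
        self.tables = {}

    def map_page(self, dev, io_page, phys_page, writable=False):
        self.tables.setdefault(dev, {})[io_page] = (phys_page, writable)

    def translate(self, dev, io_addr, write=False):
        page, offset = divmod(io_addr, PAGE)
        entry = self.tables.get(dev, {}).get(page)
        if entry is None:
            raise PermissionError("IOMMU fault: unmapped DMA address")
        phys_page, writable = entry
        if write and not writable:
            raise PermissionError("IOMMU fault: write to read-only page")
        return phys_page * PAGE + offset
```

the 2018-paper class of bug lives in how the OS programs those tables (stale mappings, overly-wide windows), not in the translation mechanism itself.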

one of my unpopular computer opinions is that the iPhone USB-C switchover really should have been a teachable moment for this. Lightning existed in the first place because Apple wanted to use a cut-down version of Thunderbolt for its phone dock connector, back when Intel was still messing around with making it a primarily-optical interconnect¹. imo Apple had no deep devotion to Lightning "as a technology", even to sell sole-source proprietary cables, and it was more than happy to get rid of it in favor of USB-C worldwide when the EU forced its hand -- but everyone was too focused on "Apple Gets Owned By The EU" to pay attention to the fact that Apple already sells bad white USB-C cables by the score.


  1. before thunderbolt eventually moved to copper, then ironically languished primarily on apple hardware for years (arguably up until the present) and made space for things like USB Power Delivery and USB-C, the rotationally symmetrical connector that inexplicably allows implementers to detect connector orientation

The fundamental issue with this is that you could just DMA over PCIe with any regular-looking device (e.g. a flash drive). Technically you could stop this, but no one writes a PCIe driver assuming that the device on the other side is malicious.
