cathoderaydude
@cathoderaydude

usb, usb2, usb3, usbc, displayport, hdmi, sata, nvme, pcie, and about a dozen other interconnects are all identical. they're just LVDS serial ports. yeah, yeah, i know, there are implementation differences, but nothing that matters.

as much as I hate USBC/thunderbolt/etc. it really drives this home. yes, yes! any port on your PC SHOULD secretly be able to turn into a monitor connector! except, harder! with better planning! and not by having a bunch of totally incompatible Modes with utterly dissimilar underlying philosophies that nobody wants to implement, especially because they don't have to. the USB forum completely fucking dropped the ball, then fell on their faces and their pants fell off and everyone saw their tiny nads, and yet they made a good point that nobody is going to take to heart.

PCIe is the lingua franca of computing. everything is one or more lanes, and has been for years. USB is a way to get some lanes. we just inexplicably won't let those lanes out of the computer unless they're first turned into a bunch of other protocols that are fundamentally identical, just incompatible for some reason. stop that!

computers should not have HDMI or USB or DP or anything on the back. graphics cards should have no plugs on them. sound cards, no plugs. video capture cards, no plugs. only motherboards should have plugs, and the only ones they should have are identical data ports and the only language they should speak is PCIe.

all capabilities in a computer should be published or subscribed to. devices and software should offer sources and sinks, and the motherboard should be little more than a PCIe router, a backplane that creates a Data Marketplace where devices can announce that they are a Source and if anyone would like to Sink them, they would be happy to set that up for a nominal fee (1-16 lanes, depending.)

you have an HDMI monitor? you plug it into a Port, any Port, through a dongle. the dongle asks "who's the primary video source?" and the graphics card says Well That's Me and the dongle subscribes, and now you have a picture. if you need more monitors, you plug in more dongles, and continue until you run out of lanes. if your GPU doesn't have the smarts to span a ton of displays, that's no problem - buy an ASUS ROG Display Mux that plugs into a Port, sinks all available lanes from the GPU, then republishes a new source that monitors can subscribe to. want to clone some displays? the chipset can do that for you by allowing multiple sinks to subscribe to the same source, and then the packets get duplicated en route; this is the age of Multicast On The Backplane, baby.
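to make the shape of that concrete, here's a toy sketch of the marketplace idea in python - the Marketplace class, the lane accounting, and all the names are invented for illustration, not any real API:

```python
# toy sketch of the "Data Marketplace": sources announce themselves, sinks
# subscribe, and the backplane duplicates packets when several sinks
# subscribe to one source (the "multicast on the backplane" case).
# everything here is hypothetical; none of it is a real interface.

class Marketplace:
    def __init__(self, total_lanes=16):
        self.free_lanes = total_lanes
        self.sources = {}              # source name -> list of sink callbacks

    def publish(self, name):
        self.sources.setdefault(name, [])

    def subscribe(self, name, sink, lanes=1):
        if name not in self.sources:
            raise LookupError(f"no source named {name!r}")
        if lanes > self.free_lanes:
            raise RuntimeError("out of lanes")
        self.free_lanes -= lanes       # the "nominal fee"
        self.sources[name].append(sink)

    def send(self, name, packet):
        for sink in self.sources[name]:  # duplicate en route to every sink
            sink(packet)

bus = Marketplace()
bus.publish("primary-video")                                       # the GPU announces itself
bus.subscribe("primary-video", lambda p: print("monitor A:", p))   # dongle 1 subscribes
bus.subscribe("primary-video", lambda p: print("monitor B:", p))   # cloned display
bus.send("primary-video", "frame 0")
```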

GPUs stop needing to know about sound. that's over. windows publishes itself as an audio source, to which monitors can subscribe if they want an additional stream. apps can also publish themselves as sources, and with the appropriate Advanced Control Panel you can manually route a single app to a single device. sample rates aren't compatible? Elgato Hyperstream Audio Conflater, $89.95. plugs into a Port, sinks all your audio sources, then resamples and refuckulates them however you like before republishing them - in whatever configuration you want, of course, mirrored to as many devices as you want.

you should be able to dupe the video coming out of your video card and send it to a capture card inside the motherboard. you should be able to connect a video input to your PC, then route that to your monitor instead of buying a KVM switch. i'm right

"PCIe won't run that far" yeah yeah i know, obviously this won't literally be PCIe in all forms at all times. that doesn't matter - the Dongle Future I propose (which won't suck because it'll be the assumption, rather than a shitty hack to make up for not having planned right in the first place) will invisibly convert things to whatever longer-distance protocol they need to be. we're already putting chips in all the cables; commit to that bit. every cable should be a short haul modem.

besides, lots of things don't need a whole PCIe lane. that's why we devise the new fractional lane, so your mouse and keyboard and streamdeck etc can all share one PCIe channel. yes, i am proposing that we bring back the 1980s AT&T TDM circuit switching model, and I'm right.
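a rough sketch of what a fractional lane could look like, TDM style - the slot counts, device names, and the nominal 16Gbps lane are all made up:

```python
# toy sketch of "fractional lanes": a fixed-length TDM frame on one shared
# lane, where each low-rate device owns some number of time slots.
# slot counts and the lane rate are invented for illustration.

SLOTS_PER_FRAME = 8
LANE_GBPS = 16                       # roughly a PCIe 4.0 x1 lane

allocation = {"keyboard": 1, "mouse": 1, "streamdeck": 2, "audio-in": 4}
assert sum(allocation.values()) <= SLOTS_PER_FRAME

# the repeating slot schedule, 1980s circuit-switching style
schedule = [dev for dev, slots in allocation.items() for _ in range(slots)]
schedule += ["idle"] * (SLOTS_PER_FRAME - len(schedule))
print("frame schedule:", schedule)

for dev, slots in allocation.items():
    share = LANE_GBPS * slots / SLOTS_PER_FRAME
    print(f"{dev}: {slots}/{SLOTS_PER_FRAME} of a lane, about {share:.1f} Gbps")
```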

this has been feasible for over a decade. the damning thing is that we knew it, a decade ago, and had we started moving towards it then, the platform would be salvageable. instead we've done nothing, and the hell nightmare future where tablets actually do replace PCs will come to pass because, in our hubris / apathy, we didn't pivot the PC to focus on its strengths. the thing that makes the PC special is its incredible flexibility, but we let it solidify and stagnate, and now it's probably too late to undo it.

edit: ZERO PORT RAID CONTROLLERS. NUFF SAID

edit edit: SFP MODULES REFLECT THE TRUE FACE OF GOD. MAKE EVERYTHING LIKE THAT


WebsterLeone
@WebsterLeone

Thinkin' about this
Rollin' it around in my brain alongside my computer engineering knowledge

looks up something


DisplayPort 1.2 goes up to 17.28Gbps across 4 data pairs
PCIe 4.0 x1 is ~16Gbps

DisplayPort 2.0 goes up to 77.37Gbps across 4 data pairs
PCIe 5.0 x1 is ~32Gbps, x2 is ~64Gbps; PCIe 4.0 x4 is ~64Gbps
USB4 is 20Gbps, optionally 40Gbps; USB4 v2 adds 80Gbps (plus an asymmetric 120Gbps mode).
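For reference, the rough math behind those numbers is just raw line rate times line-code efficiency. This sketch ignores FEC and protocol framing, so it only lands close to the official figures, not exactly on them:

```python
# back-of-the-envelope payload rates: line rate x lanes x line-code efficiency.
# FEC and framing overhead are ignored, so these are approximate.

def payload_gbps(line_rate_gbps, lanes, coded_bits, data_bits):
    return line_rate_gbps * lanes * data_bits / coded_bits

links = {
    "DisplayPort 1.2 (HBR2, 4 lanes)":   payload_gbps(5.4, 4, 10, 8),    # 8b/10b
    "DisplayPort 2.0 (UHBR20, 4 lanes)": payload_gbps(20, 4, 132, 128),  # 128b/132b
    "PCIe 4.0 x1":                       payload_gbps(16, 1, 130, 128),  # 128b/130b
    "PCIe 5.0 x2":                       payload_gbps(32, 2, 130, 128),
}
for name, gbps in links.items():
    print(f"{name}: ~{gbps:.2f} Gbps")
```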

Framework laptop expansion ports are basically already this. Each one is a USB-C port that breaks out lanes from the CPU which can carry USB or DP or HDMI (or maybe PCIe?), but those are fixed functions.

There already exists the ability for a dedicated GPU to feed rendered video back to the CPU, to display out over the connectors the CPU's built-in GPU uses.

GPUs' PCIe traffic is already so wildly unbalanced that there's a standard for repurposing the lanes that would go from the GPU back to the CPU so they run in the reverse direction instead. Why not make use of that bandwidth, yeah?

I hate how enterprisey this sounds because I think it's actually reasonable

Would actually make PCIe slots more useful for regular consumers. Capture cards, sound cards that double as audio-mixers...

I foresee a boatload of possible implementation issues, but like, the main downsides are probably just increased power usage and cost, from having to implement PCIe for low-bandwidth things and from needing what is basically a packet switch with a MASSIVE amount of backplane bandwidth. Like, if that's up to the southbridge, expect to see big ol' chipset coolers make a comeback.



in reply to @cathoderaydude's post:

puts on dunce cap what about hardware exploits/system access on the PCIe bus?

I know that was something people were talking about with regards to thunderbolt for a period of time... but I also don't know exactly who hardware security serves anymore so maybe a moot point.

No point in worrying about the sidechannels when the front door is usually open.

Thinking about it, isn't this kinda how server motherboards have been going? With recent-ish storage developments, I think they've basically just been turning into "here's all your PCIe, have fun". They obviously don't have video or audio output needs, but I think that'd be the blueprint.

Of course now they're doing CXL or whatever because they want to add system memory to that One Big Bus.

Well, there's this thing called a DPU which is basically an ARM SoC along with some support hardware (crypto acceleration?) that lives on your network card and does a bunch of network stuff for you. The damn thing can perform en/decapsulation, do some absolutely disgusting DMA to give VMs true wire speed (slightly dumber network cards can do this too, but not as adaptably), even run Kubernetes to turn your single physical server into edge computing lambda-microservice hell, all because it speaks - guess what - PCIe. TCP segmentation offload is so last decade.

I think they're putting it on graphics cards too.

to the best of my knowledge -- which admittedly isn't great, I haven't touched supervisor-mode code in decades -- the memory bus already has a device (the IOMMU) sitting on it that can do memory protection from I/O devices; there'd definitely be work to do to build it out and clean up OS IOMMU handling (with a bit of googling I found a paper from someone building a malicious PCIe device to exploit IOMMU handling in 2018), but I don't think it'd be insurmountable.
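for what it's worth, on a linux box with the IOMMU enabled you can already see the isolation groups the kernel builds, straight out of sysfs (this assumes the standard /sys/kernel/iommu_groups layout and nothing else):

```python
# list the kernel's IOMMU isolation groups and the PCI devices in each one.
# only assumes the standard Linux sysfs layout; prints a note if the IOMMU
# isn't enabled or exposed.

from pathlib import Path

groups = Path("/sys/kernel/iommu_groups")
if not groups.is_dir():
    print("no IOMMU groups exposed (IOMMU disabled or unsupported)")
else:
    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        devices = [d.name for d in (group / "devices").iterdir()]
        print(f"group {group.name}: {', '.join(devices)}")
```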

one of my unpopular computer opinions is that the iPhone USB-C switchover really should have been a teachable moment for this. Lightning existed in the first place because Apple wanted to use a cut-down version of Thunderbolt for its phone dock connector, back when Intel was still messing around with making Thunderbolt a primarily-optical interconnect¹. imo Apple had no deep devotion to Lightning "as a technology", even to sell sole-source proprietary cables, and it was more than happy to get rid of it in favor of USB-C worldwide when the EU forced its hand -- but everyone was too focused on "Apple Gets Owned By The EU" to pay attention to the fact that Apple already sells bad white USB-C cables by the score.


  1. before thunderbolt eventually moved to copper, then ironically languished primarily on apple hardware for years (arguably up until the present) and made space for things like USB Power Delivery and USB-C, the rotationally symmetrical connector that inexplicably allows implementers to detect connector orientation

The fundamental issue with this is that you could just DMA over PCIe with any regular-looking device (e.g. a flash drive). Technically you could stop this, but no one writes a PCIe driver assuming that the device on the other side is malicious.

in reply to @cathoderaydude's post:

in reply to @WebsterLeone's post:

Oh yeah, the problems would be magnificent. It's basically a reimagining of the entire PC platform; nothing like it has been attempted before. But I picture the current southbridge as basically already being The Same Thing But Worse: full of single-purpose I/O hardware that could be replaced with the equivalent of a cut-through ethernet switch, just reading packet headers and hurling them where they go before moving on. If your board doesn't have anything on it except a southbridge - because everything else, every other kind of "controller", is now an atomic function built into a cable or dongle, or completely obviated because devices now speak the language natively - then you have gobs of space to deal with that big heavy-duty switch. Or you can implement it as a switching fabric and spread the load: 1-3 high-perf switches (depending on the market tier of the board) for GPUs and NVMe, and then one or two low-perf switches handling all the other stuff.
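As a toy illustration of the cut-through part - peek at just enough header to pick an output port, never parse the payload - here's a sketch with a one-byte destination-id header that I invented purely for shape:

```python
# toy cut-through forwarding: the switch reads only the header byte to pick
# an output port and hurls the rest of the packet onward without parsing it.
# the header format and the route table are invented for illustration.

ROUTES = {0x01: "GPU slot", 0x02: "NVMe slot", 0x03: "dongle port 3"}

def forward(packet: bytes) -> str:
    dest_id = packet[0]                  # the only byte the switch looks at
    return ROUTES.get(dest_id, "drop")   # payload stays opaque

packet = bytes([0x02]) + b"\x00" * 64    # header + opaque payload
print(forward(packet))                   # -> NVMe slot
```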

Yeah, it's not far off some bullshit I've come up with in the past, and like I've said, there are already things out there that aren't too far off the mark. I think the only potential show-stopper would be if it can't be backwards compatible with existing PCIe devices. Everything else is workable, as far as I can fathom.

You wanna know how I know this is true?

Because even before Framework figured it out, Atari figured it out in fucking 1978 with the SIO bus.

The only reason we've ever had anything else since is just market fragmentation and capitalism being capitalism. USB promised to fix it, but didn't, for all the same reasons. Companies have to sell something "different" to have any reason to exist, and standards bodies don't have the power to make them behave, so everything turns into a clusterfuck always. Even the Atari 800 still had separate joystick ports even though it never needed them, because they had to sell all those leftover VCS sticks somehow ...

And if you're powerful enough as a company to try and at least force a single standard for your own hardware, nerds will scream at you on the internet for being "proprietary".

USB had the problem of being a very limited half-duplex bus up until 3.0 came out, so I'm not surprised it didn't do as well in many spaces as it could have. Hell, you could say FireWire was a good replacement for a lot of it, but it was expensive and power-hungry compared to USB for things like mice and such.

I think if you're going for this level of modularity you'd still need low-speed low-power interfaces, even if they're running the same protocol as the regular/high-speed version, and back when things were shaking out, we definitely hit the limits of reasonably priced tech.

Having considered, numerous times, how we could have had a port compatible with everything from RS-422/485 to 10Mb Ethernet up to GbE if history had gone a little differently, I do think we could have come up with something; but knowing how standards tend to develop, I think it would have been jank and we'd be looking for something new anyway.

TBH, given the limits of new silicon fab technology, I wouldn't be surprised if we saw things like this become a new differentiator as we hit the limits of cores and clock speeds.

I think part of the problem is that, by and large, "standards" tend to happen after the fact, so they're compromised from the start.

It's easy to see, from a bird's-eye view, how you could simplify it all, but down on the ground it always ends up being a negotiation between dozens of petty middle managers.

Like so many tech problems, the real core is a socioeconomic issue, and those are much harder to solve by writing code.