DRM, in the broad sense of "verifying the provenance of a user's device for purposes contrary to the user's desires", is a fundamentally losing battle. At the end of the day, the user controls their own device, and the people seeking to restrict their behavior need to allow that behavior under some circumstances, so the user can just spoof those circumstances and do what they want anyway. This is why movies still get ripped in high quality the moment they're up on Netflix, and why you will always be able to find a cracked copy of Photoshop if you look in the right places.

The only remotely reliable way around this is through hardware, because hardware is relatively difficult to reverse engineer. Microsoft is already working on this, adding a chip that runs outside the reach of the OS to cryptographically verify that you're not being naughty. But even this is an arms race they're bound to lose: that cryptographic verification is just an algorithm that can't know any information but what's in your computer already, so there's nothing but time and effort stopping my Linux computer from implementing it in software and sending the very same bytes to Microsoft's servers.
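To make the "same bytes" point concrete, here's a toy sketch. This is not Microsoft's actual protocol (real attestation uses asymmetric keys fused into the chip, and extracting those keys is the actual arms race); the key and the names here are made up for illustration. The point is that the verification is just math over data, so any software that knows the key and the algorithm can produce an identical answer:

```python
import hashlib
import hmac

# Hypothetical device key; in real hardware this is burned into the chip,
# and the whole scheme rests on it being hard to extract.
DEVICE_KEY = b"secret-baked-into-the-chip"

def hardware_attest(challenge: bytes, measurements: bytes) -> bytes:
    # What the dedicated chip computes: a keyed signature over the
    # server's challenge plus the machine's actual boot measurements.
    return hmac.new(DEVICE_KEY, challenge + measurements, hashlib.sha256).digest()

def software_spoof(challenge: bytes) -> bytes:
    # A pure-software reimplementation on a machine running whatever it
    # wants: it simply reports the measurements the server expects to see.
    expected = b"unmodified-windows-boot-chain"
    return hmac.new(DEVICE_KEY, challenge + expected, hashlib.sha256).digest()

challenge = b"server-nonce-1234"
assert software_spoof(challenge) == hardware_attest(
    challenge, b"unmodified-windows-boot-chain"
)
```

The server on the other end receives byte-identical responses in both cases, which is exactly why the scheme degenerates into "how hard is it to get the key out of the chip".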

Will anyone reverse engineer this just to fuck with Microsoft Azure? Probably not. But if some misguided souls were to try to build this type of DRM into a widely-used communication protocol with strong philosophical expectations of openness, I can't imagine that it would last very long uncracked.



in reply to @nex3's post:

fwiw hardware for this sort of cryptographic verification has already existed on desktop computers for about 20 years, dating back to the then-controversial (still-controversial?) introduction of the Trusted Platform Module and Secure Boot; mobile devices have started pushing more brains into the secure processor, mostly for payments and biometric authentication, which need to be able to execute arbitrary code on data that's generally inaccessible, and afaik this is the use case Pluton is attempting to address.
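for anyone unfamiliar with how that "measurement" works: the TPM holds Platform Configuration Registers that can only ever be *extended* (hashed together with a new value), never overwritten, so the final register value commits to the entire boot sequence in order. a simplified sketch (real TPMs have multiple PCRs, bank selection, and event logs; this just shows the extend operation):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # The only way to change a PCR: hash the old value together with
    # the digest of the new measurement. There is no "set" operation.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = bytes(32)  # PCRs start zeroed at power-on
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = extend(pcr, component)

# Booting a different bootloader (or the same components in a different
# order) produces a different final value, which is what lets software
# refuse to run on a modified boot chain.
```

since there's no way to roll a PCR back, malware that loads after the legitimate components can't make the register look like a clean boot.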

streaming apps on Windows can (and do) already refuse to stream on computers where a hypervisor, kernel debugger, or unsigned drivers are present, or where Secure Boot is turned off and the TPM and operating system can't measure the software that's executing, rendering all of these guarantees invalid -- under a lot of circumstances you can't even screenshot the windows the apps are rendering to because some element of the OS composition code pulls dirty tricks to ensure that the image is composited in after other applications get a chance to look at it.

as far as I'm aware, the usual strategy for getting clear video out of the system for piracy purposes is (and has historically been) the weak encryption on HDMI/DisplayPort output, which can be stripped out with a $5 gray-market dongle.

Yeah, webrips from what I've seen are just the straight files off the server, unless they're specifically transcoded to make them as small as possible (I don't think that 2-hour movie was 2GB on the Netflix servers...)

i kinda hate that "keep the user from being naughty" and "keep a hardware/software attack vector from being naughty" are extremely similar tasks. A chip with memory encrypted away from the hypervisor can be used both to keep VMs safe from host breaches and to lock down content with DRM. A boot certificate chain is both a way to keep the user from owning their hardware and a way to prevent bootkits. IDK how to square that circle, but Pluton's far from the first at this: the TPM has been usable as an enforcement mechanism for ages now, but i think only extremely expensive AV production software uses that in conjunction with USB license dongles.

yep! unfortunately I think this is one of the places where The Real Problem Is Capitalism, and more specifically the conflict of interest between software developers and users.