some youtubers have this thing where they cut to a split second of static and it's so funny because it simply does not work in streaming video. it turns out randomness doesn't compress well and all the viewer gets is a tortured artifact of the idea of visual snow. a modern echo of a technological byproduct that now takes great effort to simulate (poorly)
i vaguely wonder if you could create a convincing (changing) noise pattern given the constraint that you know you're most likely to be encoded as h264, or whatever other codec, or a whole set of different codecs if you're willing to go sicko mode on it
so (i have a limited understanding of DCT-based lossy compression, so i'm kind of hand-waving here) things like making sure your noise patterns, or per-frame changes to them, are generated using exactly one basis pattern per block, that those changes are all block-aligned, that the first frame of static lands on an I-frame, etc
or more broadly, knowing the constraints of a video codec, could you generate a convincing-looking noise field that encodes optimally with minimal loss of detail?
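a rough sketch of the one-basis-pattern-per-block idea (assuming an 8x8 DCT for simplicity, even though h264 actually uses 4x4/8x8 integer approximations of it; the frame size and amplitude range are made up, and scipy's `idctn` stands in for the codec's transform):

```python
import numpy as np
from scipy.fft import idctn  # inverse 2D DCT, the transform family these codecs approximate

BLOCK = 8          # assumed 8x8 transform block
W, H = 640, 360    # arbitrary frame size, chosen as a multiple of BLOCK so everything stays block-aligned

def codec_friendly_noise(rng: np.random.Generator) -> np.ndarray:
    """One frame of 'static' where each 8x8 block is exactly one DCT basis pattern,
    so an encoder could in principle represent it with a single nonzero coefficient per block."""
    frame = np.empty((H, W), dtype=np.float32)
    for by in range(0, H, BLOCK):
        for bx in range(0, W, BLOCK):
            coeffs = np.zeros((BLOCK, BLOCK), dtype=np.float32)
            # pick exactly one basis function (u, v) and give it a random amplitude
            u, v = rng.integers(0, BLOCK, size=2)
            coeffs[u, v] = rng.uniform(-64, 64)
            # inverse DCT turns that lone coefficient into the spatial basis pattern
            frame[by:by + BLOCK, bx:bx + BLOCK] = 128 + idctn(coeffs, norm="ortho")
    return np.clip(frame, 0, 255)

rng = np.random.default_rng(0)
frames = [codec_friendly_noise(rng) for _ in range(30)]  # regenerate every frame to get the flicker
```

the encoder's quantizer would still scale those amplitudes, but with only one coefficient per block there isn't much left for it to throw away, which is the whole point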
