cathoderaydude
@cathoderaydude
akrosi8
@akrosi8 asked:

despite your best warnings I've gotten into crusty old broadcast/ENG/EFP cameras, and they're cool as hell while being incredibly bland and intentionally interchangeable (except for the occasional 𝐒𝐍𝐎𝐘 that shoots weirdly-low-bitrate video and uses a seemingly bespoke lens mount). But I have a question I can't get a conclusive answer on and you're the only one I can think of who would know: how the hell would someone get widescreen out of one of these in 2006 when they were new? the sensors seem to all be 1440x1080. Were anamorphic lenses the only option, or were there studio cameras in this time that I haven't seen yet that could natively output 1920x1080 from the sensor?

Edit: I've been looking into this further and would like to stress that it appears to only be true in the pro sector, and possibly only in 2006. Consumers had 1080i cameras with genuine 1920x1080 sensors by the middle of the year.

In 2006, to the best of my knowledge, you would not get 1920x1080 out of anything, except perhaps the absolute top of the line gear used at, like, the super bowl, and maybe even then.


cathoderaydude
@cathoderaydude

I'm looking into this again for a script I'm working on and finding some interesting things.

When I wrote this post I was thinking of this article that I'd found a couple years ago. It was originally written in April 2006, at which time they were looking at a pile of variously-pro camcorders, none of which had full-resolution sensors. As you can imagine, I assumed this meant that nobody, anywhere, had a full-resolution sensor in 2006. I'm kinda unclear on that now, however.


holeshapedgod
@holeshapedgod

bad and naughty sensors get put in The Pixel Wiggler to atone for their crimes



in reply to @cathoderaydude's post:

This is utterly fascinating, thank you so much!
And... oh my god you're right about the resolution. I was right about the Sony unit at hand (PDW-F350); the specifications say it's a 1440x1080 imager. But a tape-based Panasonic camera (AJ-HDX900) from within a year or two? It shoots 1080p 30fps and 1080i 60fps off of three 1280x720 CCDs. I almost want to compare full 1440x1080 footage out of the cameras side-by-side just to see if there's a real difference—the Panasonic puts nearly three times the bitrate of the Sony onto tape and shoots 4:2:2 instead of 4:2:0, but it's producing at least some of its resolution via Magic™

(actually, now that I think about it, I bet it's doing something clever with the tri-CCD design to get extra detail...)
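(To illustrate the "something clever" guessed at above: in some 3-CCD designs the green chip is mounted half a photosite offset from the red/blue chips, so the luma signal—which is mostly green—can be sampled on a denser grid than any single CCD provides. This is only my illustrative sketch of that idea, reduced to one dimension; the function and sample values are made up for the example.)

```python
import numpy as np

# Hypothetical 1-D illustration of "spatial offset" in a 3-CCD block: the
# green CCD sits half a photosite to the side of the red/blue CCDs, so
# interleaving the two sample rows yields roughly double the horizontal
# luma sampling density of any one chip. Real cameras do far more
# filtering than this; the interleave is just the geometric core of it.

def interleave_offset_samples(rb_luma, g_luma):
    """Merge two equal-length sample rows taken half a pixel apart
    into one row with doubled sampling density."""
    out = np.empty(rb_luma.size + g_luma.size, dtype=rb_luma.dtype)
    out[0::2] = rb_luma   # samples at integer pixel positions
    out[1::2] = g_luma    # samples at half-pixel positions
    return out

rb = np.array([10.0, 30.0, 50.0])   # luma estimate from the R/B grid
g = np.array([20.0, 40.0, 60.0])    # green samples shifted +0.5 px
print(interleave_offset_samples(rb, g))  # [10. 20. 30. 40. 50. 60.]
```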

in reply to @cathoderaydude's post:

So I use a Panasonic Lumix GH6 as my main camera, and it has a pixel shift mode that gets 100MP stills out of a 25MP sensor. The thing is, it requires near-total stillness or else it fails, it caps the ISO at 1600 to control noise, and more importantly, this two-year-old camera needs to sit and churn for a solid fifteen seconds or more after every single shot just to process the exposures and stitch them together into the final image in-camera. And notably, that ISO 1600 maximum is on a cutting-edge, modern 4/3" sensor with crazy analog-stage tricks to improve its performance... although the resolution makes it hard to speculate about noise, because the photosites are less than a third of the size by my napkin math (3μm across instead of ~12μm).

It is incredibly interesting to me that Panasonic has been using this technology for so long—it also explains why they tend to be better at it than the competition, being the first to ship cameras that can do it handheld afaik—and I theorized it might be possible in video, but actually doing it, in 2005, with in-camera image processing is completely insane. And in a relatively compact camera, too.... I can't help but wonder what sort of crazy ASICs they've got going on inside that thing to make it work at all. Even though the resolution is far, far lower, by more than an order of magnitude, I imagine image processing of raw 0.5MP and 2MP frames at 30 or 60fps wasn't anything close to free in that era. Not to mention the actual mechanism at hand; pixel shift is relatively trivial to implement on the GH6 because it has in-body image stabilization—the camera's already capable of very precisely shaking the sensor around, so it's just a matter of writing firmware that sends extra commands to the stabilizer system in sync with a series of captures.
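(For anyone curious what the stitch step conceptually looks like: a minimal sketch of multi-shot pixel shift, assuming four exposures nudged by half a photosite each. This is my simplification, not Panasonic's actual pipeline—real implementations also redistribute color samples and validate the frames against each other; this shows only the geometric interleave that doubles resolution on each axis.)

```python
import numpy as np

# Hypothetical sketch: four exposures captured at sensor offsets
# (0,0), (0.5,0), (0,0.5), (0.5,0.5) in pixel units are interleaved
# into a grid with twice the resolution in each dimension.

def stitch_pixel_shift(frames):
    """frames: list of four HxW arrays, one per half-pixel offset."""
    h, w = frames[0].shape
    out = np.empty((2 * h, 2 * w), dtype=frames[0].dtype)
    out[0::2, 0::2] = frames[0]  # no shift
    out[0::2, 1::2] = frames[1]  # half-pixel right
    out[1::2, 0::2] = frames[2]  # half-pixel down
    out[1::2, 1::2] = frames[3]  # half-pixel diagonal
    return out

frames = [np.full((2, 2), v, dtype=float) for v in (0.0, 1.0, 2.0, 3.0)]
print(stitch_pixel_shift(frames).shape)  # (4, 4)
```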

It's weird, because it'll fairly reliably fail (the camera will just give you a regular exposure and display an error) for stuff like portraits with someone's hair blowing around, but there have been occasions where I've caught people walking around in front of buildings and it... just worked? Like they were blurry, but I think it was the shutter speed, not the pixel shift mode... so I have to wonder if on sufficiently 'coarse' moving targets the exposures all agree with one another that "yeah shit's blurry" and the camera takes no issue shoving them together to give you stupidly high resolution motion blur.
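(The "exposures all agree with one another" theory could work something like this—pure speculation on my part to match the observed behavior, with a made-up function and threshold. Uniform motion blur looks the same in every exposure and would sail through a consistency check like this, while hair whipping around between shots would trip it.)

```python
import numpy as np

# Speculative consistency gate: before merging, compare each shifted
# exposure against the first. If the scene moved between shots, the
# frames disagree and the merge is aborted (the camera falls back to a
# single exposure). Blur that is identical in every frame passes.

def frames_agree(frames, tol=8.0):
    """Return True if every exposure is within `tol` levels of the first."""
    base = frames[0].astype(float)
    return all(np.max(np.abs(f.astype(float) - base)) <= tol
               for f in frames[1:])

static = [np.full((4, 4), 100.0) for _ in range(4)]
moved = static[:3] + [np.full((4, 4), 180.0)]  # subject moved in last shot
print(frames_agree(static), frames_agree(moved))  # True False
```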

But yeah, I am definitely curious how this influenced Panasonic's modern lineup and how much the technology carried over, if at all.