You can watch the video above before or after reading this post, but I will warn you that it has lots of flashing lights.
"The CCD (charge-coupled device) is the gold standard image sensor of the modern era" is a very reasonable way to begin an essay. I mean, it's completely false, but it's not like it really matters to anyone who isn't a semiconductor designer.
The CCD is actually a come-and-gone technology for the most part. They hang around in a few applications, but it's been over 20 years since the overwhelming majority of consumer products with image sensors switched to what we call "CMOS" - and this, too, is a completely inaccurate term, but that ship has sailed, since every industry has adopted it as the standard terminology. If you want to be pedantic, the precise way to describe the image sensors in our phones, webcams, and DSLRs would be "APS", or "active pixel sensor." This is unrelated to the term "APS-C" in photography; it's just a hilarious coincidence.
Despite all this confusion however, I still hear the term "CCD" from time to time, as shorthand for "a little solid state image sensor." And that raises some questions if you think about it:
- Why did that term stick?
- Solid state? As opposed to what? (are we finally discovering how MTV made Liquid Television?)
- Why do we need to describe a "little" sensor? Why and how would they ever be big?
The answer to all of the above is that CCDs were so revolutionary when they came out in the early 80s that they became wedged into culture long past their actual use-by date, and the reason they were so revolutionary is that the thing they replaced was not solid state, nor was it small.

If you were watching TV in the 60s, it may have been shot through one of these: an Image Orthicon. Love those early-20th-century mad-science naming conventions. These were used for so long that they actually got a nickname, Immy, which was later 'feminized' (oof) into Emmy and became the name of the statue handed out at the Television Academy's awards.
This is, in fact, a vacuum tube. It, and all other "videotubes" (of which there were many), functions by converting photons into electrons, thus converting a human-viewable image into one that electronics can manipulate and reproduce. Hence, it is not a solid state device; like all vacuum tubes, it does its work with materials in a gas state. Or... plasma? Vacuum? I never fully understood what "state" tubes use. Anyway.
If you're curious: these have some genomic relationship to the "intensifier" or "photomultiplier" tubes used in scientific and military low-light applications, which also work by converting photons into electrons, but use very different principles beyond that, since they prioritize light amplification more than a high quality image.
As I mentioned, there were other videotubes, and the Orthicon wasn't even the first - I just used it because I happen to own one. The timeline, very roughly, goes:
- Image Dissector (early television; '20s/'30s)
- Iconoscope
- Image Orthicon
- Vidicon
- ...and then a whole shitload of patented and trademarked variations on the Vidicon, including the Plumbicon, Saticon, Pasecon, Newvicon and Trinicon.
All these names are incredibly fucked up. In addition to the Orthicon, I also have a vidicon tube of some stripe. It looks like this:

These two tubes actually use very different concepts internally, and are separated by about 40 years of development, but there are some fundamental elements in common. The process goes something like this (there's a little toy sketch of it after the list):
- A lens is positioned to project an image onto the face of the tube
- The focal plane is a rectangle inside the tube, sometimes deep inside, coated in a material that converts photons into electrons, called a "photocathode"
- The electrons gather on a "target" plate. In some designs, that's the photocathode; in others, they pass through the photocathode and then strike a separate target.
- There is now an "image" formed on the target, but instead of simply being photons reflecting from the target, the image is in the form of pools of electrons of greater or lesser density, depending on the brightness of the incoming light.
- An electron beam projected from the back of the tube scans the target plate, left to right, top to bottom. Wherever electrons have accumulated, they either get "knocked off" by the beam, or reflect the beam.
- By measuring the amount of electrons being returned, the tube can determine the brightness of every spot on the target, and thus produce an electrical signal that accurately reproduces the image.
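If it helps to see that loop as something concrete, here's a minimal toy sketch in Python. It's purely illustrative - the array, the exposure function, and the scan order are my own stand-ins, not a model of any real tube - but it captures the basic idea: light deposits charge on a target, and a raster scan turns that charge back into a one-dimensional signal.

```python
import numpy as np

def expose(scene, gain=1.0):
    """One 'exposure': light hitting the photocathode deposits charge
    on the target, proportional to the brightness of the scene."""
    return scene * gain

def raster_readout(target):
    """The electron beam sweeps the target left to right, top to bottom;
    the charge it finds at each spot becomes one sample of the video signal."""
    signal = []
    for row in target:        # top to bottom
        for charge in row:    # left to right
            signal.append(float(charge))
    return signal

# A made-up 4x4 "scene": 0.0 = black, 1.0 = a bright light in the middle
scene = np.array([
    [0.0, 0.1, 0.1, 0.0],
    [0.1, 0.9, 1.0, 0.1],
    [0.1, 0.8, 0.9, 0.1],
    [0.0, 0.1, 0.1, 0.0],
])

signal = raster_readout(expose(scene))
print(signal)  # the "video signal", one scanned spot at a time
```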
If this sounds like "a television running in reverse," that shouldn't be surprising; that's exactly what it is.
In a TV, you have:
Image as electrons 🢡 Phosphor screen 🢡 Image as photons 🢡 Retina
In a videotube, you have:
Image as photons 🢡 Photocathode 🢡 Electron beam 🢡 Image as electrons
It is a cathode ray tube, in no uncertain terms. A cathode produces a ray, and it scans from left to right, top to bottom - it is literally the same process. And if you think about it, the TV receiving the signal the tube produces will be performing the exact same actions in perfect lockstep, give or take a few microseconds for signal transmission.
It is incredible to me to think that for most of the 20th century, all the TVs and cameras in the world were essentially mechanically mirroring each other's actions. Incredibly cool, but I digress.
It's not easy to see the inside of the Orthicon tube I have, but I did take a bad picture of it:

You can just about see the target deep down inside there. That's where you focus the image from the lens.
I can't show you the inside of the vidicon tube since it has a sort of "tint" applied to the business end, but it has a target inside as well; it's just much smaller. That's important, by the way, because obviously you could not sell an Image Orthicon to a consumer; they're far, far too big, and the studio cameras they went into were absolute monsters as a result, to say nothing of the later color models that used three such tubes.
And consumer video cameras did exist in the 60s. I did a video about them - in 1967, you could buy a complete man-portable camera and recorder system for a few thousand bucks, and practical CCDs were almost 20 years away, so the tubes definitely got small enough to be consumer-practical.
They also got very cheap. I have a page on my website about my collection of vintage consumer video cameras, ranging from the 60s up into the late 80s, almost all featuring videotube sensors. A lot of these even still work, for values of "work" - I'm not sure how good consumer tube cameras ever were, but these ones certainly have severe issues with color accuracy, banding, image softness, and many other complaints.
Since they work, and since they'll probably stop working before too long, I of course do want to make sure I capture video on them before they die. That's tough: you don't want to record anything that matters on them, since the quality is incredibly dire, and they have no built-in recording capability, so they need a whole separate recorder, which means you have to plan special trips to go capture footage, and... yeah, sadly I have not put in as much time on this project as I should have.
I did, however, make one extremely productive outing in 2018, when I went to Biggest Little Fur Con, which nobody has ever called by that name. I took one of my cameras (I can't recall which), a lithium pack with a bodged adapter cord, and a laptop and a video capture card stuffed in a backpack to the BLFC dancefloor one night and captured about half an hour of footage.
The video linked at the top of this post is the "good" part of what I got - I had forgotten to turn on the mic (I used the one built into the camera) for half the time, so I only got about 12 usable minutes, but those minutes are, by far, the most motherfucking vaporwave thing I've ever had the pleasure of creating. There's not much to compare to there, but... it's still pretty god damn cool.
I can't do it justice beyond saying that it looks like a furry convention shot in the early 90s, because that's... very much what it is. Someone may well have brought a camera this old to a con in '94, and the basics (fursuits, EDM, glowsticks and drugs) haven't changed much. There isn't enough light for the camera to really see; the color is desaturated and imbalanced; the contrast is trash; the black levels are trash; the whole thing is blurry and muddy and the vibes are phenomenal.
Enhancing those vibes are the trails.
You cannot get an effect like that with digital effects. You can't get it with analog effects. It isn't postprocessing of any kind, it isn't echo or reverb, it is a fundamental flaw in the way videotubes work. All videotubes; they all had this problem, it was never solved.
You may be familiar with the "bleed" effect that plagued all CCD sensors, where an overly bright light would cause a white streak to shoot out vertically to the top and bottom of the image. They never solved that, and likewise, videotubes all had streaks. They were known as "comet trails," and they were caused (in the most general sense) by the incredibly sloppy analog nature of the technology.
Roughly (and there's another little sketch of this after the list):
- Photons strike the target and get converted into electrons
- The beam sweeps across and knocks some of those electrons off
- If a lot of electrons were deposited in a given spot, not all of them get knocked off.
- This leaves a faint afterimage which takes about a second to fully dissipate.
- In the meantime, the beam continues to pick up ever-fainter ghost images until the charge has dissipated.
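To make that mechanism a bit more tangible, here's another toy sketch in Python, building on the one above. The readout efficiency number is made up and the whole thing is a crude caricature of the real physics, but it shows the shape of the flaw: if the beam only removes a fraction of the stored charge on each pass, a bright object keeps showing up in the signal, fainter every frame, after it's already gone from the scene.

```python
import numpy as np

# Crude lag model: each frame, the scene deposits charge on the target,
# but the beam only knocks off a fraction of it, so leftover charge from
# bright spots bleeds into the following frames as a fading ghost.
READOUT_EFFICIENCY = 0.7  # made-up fraction of charge removed per scan

def scan_frame(target, scene):
    target = target + scene               # exposure adds charge to the target
    signal = target * READOUT_EFFICIENCY  # what the beam picks up -> video signal
    target = target - signal              # what's left behind -> the afterimage
    return signal, target

# One bright light that exists only in frame 0, then total darkness
bright = np.array([[0.0, 0.0, 1.0, 0.0, 0.0]])
dark = np.zeros_like(bright)

target = np.zeros_like(bright)
for frame, scene in enumerate([bright, dark, dark, dark]):
    signal, target = scan_frame(target, scene)
    print(f"frame {frame}: {np.round(signal, 3)}")
# The light keeps appearing in frames 1, 2, 3 - dimmer each time - even
# though it only actually existed in frame 0. That's the comet trail.
```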
If you watch any live TV show from the 60s through to the early 80s, or any home video recorded in that time, you will see this effect. Sequins on The Price Is Right leave hot red-green trails behind them; news anchors' glasses frames make tiny sparkles that hang for a moment; stage lights in fan-taped 80s metal band performances flare, then go out, leaving a purple haze hanging over the image.
It is the kind of flaw that electronics just don't have anymore, and never will. CMOS sensors just have noise, and we don't even get to see that - it gets buried under unsightly, inorganic algorithmic noise reduction, and every picture from every device has that same feeling of consistent mediocrity.