here's the short version
- when lenses get small enough, diffraction comes into play
- this means images get blurry, and there's a hard ceiling on resolving power (quick numbers after this list)
- unless you strap a telescope to a phone, you'll never get a clear picture of the moon
- you cannot "zoom, enhance"
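to put rough numbers on that ceiling: the Rayleigh criterion says the smallest angle a circular aperture of diameter D can resolve is about 1.22λ/D. a back-of-envelope sketch in python; the aperture sizes are ballpark assumptions, not specs for any particular phone or scope:

```python
import math

WAVELENGTH = 550e-9          # green light, in metres
MOON_DIAMETER_ARCSEC = 1800  # the moon spans roughly half a degree of sky

def rayleigh_limit_arcsec(aperture_m):
    """Smallest resolvable angle for a circular aperture (Rayleigh criterion)."""
    theta_rad = 1.22 * WAVELENGTH / aperture_m
    return math.degrees(theta_rad) * 3600

for name, aperture_m in [("phone lens, ~4 mm", 0.004),
                         ("8-inch telescope, ~200 mm", 0.2)]:
    limit = rayleigh_limit_arcsec(aperture_m)
    elements = MOON_DIAMETER_ARCSEC / limit
    print(f"{name}: {limit:.1f} arcsec, ~{elements:.0f} resolvable elements across the moon")
```

that works out to roughly 50 resolvable elements across the moon for the phone versus ~2600 for the telescope. more megapixels don't help; the lens physically can't deliver the detail.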
enter computational photography
- let's just throw some content-aware fill slash stable diffusion at it
- or, in simpler terms: run autocomplete and fill in the details
- if you draw a happy face on a blurry moon jpeg, you get a high-res moon pic with a happy face on it
- you actually can "zoom, enhance" (rough sketch below)
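concretely, the move looks something like this. a minimal sketch assuming hugging face's diffusers library and its stable diffusion x4 upscaler checkpoint; the file names are placeholders:

```python
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

# a diffusion model trained to invent plausible high-frequency detail,
# conditioned on a low-res image plus a text prompt (needs a GPU)
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

blurry_moon = Image.open("blurry_moon.jpg").convert("RGB")

# the "detail" in the output comes from the model's prior, steered by
# the prompt, not from photons that ever hit your sensor
enhanced = pipe(prompt="the moon, sharp craters, high detail",
                image=blurry_moon).images[0]
enhanced.save("moon_enhanced.png")
```

none of the new pixels came through a lens; they came out of the prior. which is exactly why the happy-face trick works.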
sure enough, some people are 100% ok with "make my photo better" technology; they just want more "bokeh cream" to smear across a jpeg.
on the other hand: the people who buy very big telescopes to photograph the moon are a bit miffed, as are the people who spend hours processing raw files for tone, and understandably so.
who wants to live through a tsunami of low-effort garbage drowning out any and all creative works?
it could be worse: at least we have this other algorithm to predict which of the garbage we'll engage with, gamifying the experience of consumption
anyway, aside from the "we are about to witness the endless september of machine-generated content" stuff, what strikes me about the computational photography bit is that eventually your phone will hallucinate your friends' faces.
what was once a blurry, underexposed group selfie is now a perfectly crisp, well-lit image, and maybe it'll randomly insert a family member into the background because that blurry head looks just like your uncle
People have been dropping stock skies into their photos since the days of glass plates, and the concept of photographic authenticity or realism was always sort of a red herring. There is something very funny, though, about the ELIZA of it all: the obfuscation, or filtering, of the operator’s will to deceive through the supposed agency of machine intelligence.