
> No photographer thinks the images they get on film are perfect reflections of reality. The lens itself introduces flaws/changes, as do film and developing.

Don't fall into this trap. A lens and computational photography are not alike. One is a static filter, doing a simple(ish) transformation of incoming light. The other is arbitrary computation operating in semantic space, halfway between photography and generative AI. Those are qualitatively different.

Or, put another way: you can undo the effects of a lens, or of the way a photo was developed classically, because each pixel is still correlated with reality, just modulo a simple, reversible transformation. It's something we intuitively understand, which is why we often don't even notice it. In contrast, computational photography decorrelates pixels from reality. It's not a mathematical transformation you can reverse - it's high-level interpretation, and most of the source data is discarded.
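
To make "reversible, pixel-wise" concrete, here's a minimal Python sketch (names and values are made up for illustration). A classic development step behaves like the first function, and can be walked back exactly:

    import numpy as np

    # A classic "development" step modeled as an invertible per-pixel
    # transform (a gamma curve): each output pixel depends only on the
    # corresponding input pixel.
    def develop(pixels, gamma=2.2):
        return pixels ** (1.0 / gamma)

    def undo_develop(pixels, gamma=2.2):
        # The curve is monotonic, so it inverts exactly.
        return pixels ** gamma

    raw = np.random.rand(4, 4)  # stand-in for sensor data in [0, 1]
    assert np.allclose(raw, undo_develop(develop(raw)))

A pipeline that instead picks each output pixel from one of several burst frames based on a scene classifier has no such inverse - the unchosen frames are simply gone.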

Is this a big deal? I'd say it is. Not just because it rubs some people the wrong way (it definitely makes something no longer a "photo" to me). But consider that all camera manufacturers, phone or otherwise, are jumping on this bandwagon, so in a few years it's going to be hard to find a camera without built-in image-correcting "AI" - and then consider just how much science and how many computer vision applications are done with COTS parts. A lot of papers will have to be retracted before academia realizes it can no longer trust regular cameras for anything. Someone will get hurt when a robot - or a car - hits them because "it didn't see them standing there", thanks to the camera hardware conveniently bullshitting them out of the picture.

(Pro tip for modern conflict: best not to use the newest iPhones for zeroing in artillery strikes.)

Ultimately you're right, though: this is an issue of control. Computational photography isn't bad per se. It being enabled by default, without an off-switch, and operating destructively by default (instead of storing originals plus composite), is a problem. It wasn't that big of a deal with previous stuff like automatic color corrections, because it was correlated with reality and undoable in a pinch, if needed. Computational photography isn't undoable. If you don't have the inputs, you can't recover them.



Yeah, computational photography is actually closer to Terry Pratchett’s Discworld version of a camera - a box with an imp who paints the picture.

Artistic interpretation of a scene is often very nice.

But we would really need to be able to distinguish cameras that give you the pixels from the CCD from the irreversible kind.

In the worst case scenario it’s back to photographic film if you want to be sure no-one is molesting the data :D


> Yeah, computational photography is actually closer to Terry Pratchett’s Discworld version of a camera - a box with an imp who paints the picture.

I mean… you’ve pretty much described our brain. Your blue isn’t my blue.

People need to stop clutching pearls. For the scenarios it's used in, computational photography is nothing short of magic.

> In the worst case scenario it’s back to photographic film if you want to be sure no-one is molesting the data :D

Apple already gives you an optional step back with ProLog. Perhaps in the future they’ll just send the “raw” sensor data, for those who really want it.


> Your blue isn’t my blue.

Of course it is, in any practical sense ("qualia" don't matter in any way). If it isn't, it means you're suffering from one of a finite number of visual system disorders, which we've already identified and figured out ways to deal with (which may include e.g. preventing you from operating some machinery, or taking some jobs, in the interest of everyone's safety).

Yes, our brains do a lot of post-processing and "computational photography". But it's well understood - if not formally, then culturally. The brain mostly runs heuristics, optimizing for speed at the cost of accuracy, but it still gives broadly accurate results, and we know where the corner cases happen and how to deal with them. We do - because otherwise, we wouldn't be able to communicate and cooperate with each other.

Most importantly, our brains are calibrated for inputs highly correlated with reality. Put the imp-in-a-box in between, and you're just screwing with our perception of reality.


I think you might mean Apple ProRaw [1]. ProLog might mean ProRes Log [2], which is Apple's implementation of the Log colour profile, which is a "flat looking" video profile that transforms colours to preserve shadow and highlight detail.

[1]: https://support.apple.com/en-gb/HT211965 [2]: https://support.apple.com/en-gb/guide/iphone/iphde02c478d/io...
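
For the curious, a log profile is roughly this kind of curve - a generic sketch in Python, not Apple's actual transfer function:

    import numpy as np

    # Generic log curve: lifts shadows and compresses highlights, so more
    # of the sensor's dynamic range survives the limited range of the file.
    def log_encode(linear, a=8.0):
        return np.log1p(a * linear) / np.log1p(a)

    def log_decode(encoded, a=8.0):
        # Monotonic, so grading software can invert it later.
        return np.expm1(encoded * np.log1p(a)) / a

    x = np.linspace(0.0, 1.0, 5)
    assert np.allclose(x, log_decode(log_encode(x)))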


> Your blue isn’t my blue.

Is there actually sufficient understanding of qualia to state this concretely? Brain neurology is unknown territory for me.


I kinda like that scenario!


I don't, because the combined population of people who will buy selfie cameras if marketed heavily, plus artistic photographers, is orders of magnitude larger than the population of people who proactively know a faithful image recorder could come in handy. And, as tech is one of the prime examples, once the sector can identify and separate out a specific niche, it can... tell it to go fuck itself, and optimize it away so the mass-market product is shinier and more profitable.


That's the scenario where evidence of police/military misconduct gets dismissed out-of-hand because the people with the ability to capture it only had/could afford/could carry their smartphone, and not a film camera. Thanks, I hate it.


While I agree with the gist of your comment, I cannot leave this detail uncommented:

> Or, put another way: you can undo the effects of a lens, or of the way a photo was developed classically, because each pixel is still correlated with reality,

You cannot undo each and every effect. Polarizing filters (filters, like lens coatings, are part of a classical lens in my opinion), graduated filters, etc. effectively disturb this correlation.

As does classic development, if you work creatively in the lab (as I did as a hobby a long time ago in analog times) where you decide which photographic paper to use, how to dodge or burn, etc.

But yes, I agree that computational photography offers a different kind of reality distortion.


> You cannot undo each and every effect.

Fair enough.

> Polarizing filters

Yeah, I see it. This one is as pure a case of signal removal as it gets in the analog world. Polarizers can, indeed, drop significant information - not just reflections, but also e.g. by blacking out computer screens - but they don't introduce fake information either, and the lost information could in principle be recovered -- because in reality, everything is correlated with everything else.

> But yes, I agree that computational photography offers a different kind of reality distortion.

A polarizing filter or a choice of photographic paper won't make e.g. shadows come out the wrong way. Conversely, if you get handed a photo with wrong shadows, you can not only be sure it was 'shopped, but could use those shadows and other details to infer what was removed from the original photo. If you tried the same trick with a computational photograph, your math would not converge. The information in the image is no longer self-consistent.

That's as close as I can come to describing the difference between the two kinds of reality distortion; there's probably some mathematical framework that classifies it better.


The most extreme case being Samsung outright painting the moon in when it thinks a piece of the picture is semantically likely to be the moon.

Whether the detail data is encoded as a PNG or as AI weights is immaterial; it is adding data that is not there, by a long shot.

https://www.theverge.com/2023/3/13/23637401/samsung-fake-moo...


That's weird. Whenever I tried to take a picture of the moon, it would look great in the camera view on the screen, but terrible once I actually took the picture.


Yes. Also, it seems inevitable that at some point, photos you can't publish on Facebook won't be possible to take. Is a nipple present in the scene? Then too bad, you can't press the shutter.


Or you can, but the nipple will magically become covered by a leaf falling down, or a lens flare, or the subject's hand, or some other kind of context-appropriate generative modification to the photo.

Auto-censoring camera, if you like.

(Also, don't try to point such a camera at your own kids, if you value your freedom and them having a parent at home.)


> Is a nipple present in the scene? Then too bad, you can't press the shutter.

Oh yes you can, and the black helicopters will be dispatched, your social score obliterated, and your credit rating turned to skull and crossbones. EU Chat Control will morph into EU Camera Control. Think of the children!


Nipple restriction is an American thing, not so much an EU thing.


Yes but American puritanism is being imposed onto the rest of the world. It's not like there's a nipple-friendly version of Facebook for all non-US countries.


> Or, put another way: you can undo the effects of a lens

No, you most definitely cannot. The roots of computational photography are in things like deblurring, which in general has no nice solution in any practical case (i.e. with non-zero noise). Same deal with removing film grain in low-light conditions.
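
(A toy Python illustration of why, with made-up values: invert a blur in the Fourier domain, and the noise explodes wherever the kernel's spectrum gets close to zero.)

    import numpy as np

    rng = np.random.default_rng(0)
    n = 256
    signal = (np.arange(n) % 64 < 32).astype(float)   # crisp test pattern

    kernel = np.zeros(n)
    kernel[:9] = 1.0 / 9.0                            # simple box blur
    K = np.fft.fft(kernel)

    blurred = np.fft.ifft(np.fft.fft(signal) * K).real
    noisy = blurred + rng.normal(scale=1e-3, size=n)  # tiny sensor noise

    # Naive inverse filter: divide by K. Near-zeros of |K| amplify the
    # noise by orders of magnitude, so the "undone" blur is mostly junk.
    restored = np.fft.ifft(np.fft.fft(noisy) / K).real
    print(np.abs(restored - signal).max())            # far above 1e-3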


> because each pixel is still correlated with reality, just modulo a simple, reversible transformation

It’s absolutely not reversible. Information gets lost all the time with physical systems as well.

Also, probably something like the creation of the image of the black hole is closer to computational photography than “AI”, and it seems a bit like yours is a populist argument against it.


> Also, probably something like the creation of the image of the black hole is closer to computational photography than “AI”, and it seems a bit like yours is a populist argument against it.

I do have reservations about that image, and don't consider it a photograph, because it took lots of crazy math to assemble it from a weak signal, and as anyone in software who ever wrote simulations should know, it's very hard to notice subtle mistakes when they give you results you expected.

However, this was a high-profile case with a lot of much smarter and more experienced people than me looking into it, so I expect they'd raise some flags if the math wasn't solid.

(What I consider a precedent for highly opaque computational photography is MRI - the art and craft of producing highly detailed brain images from magnetic field measurements and a fuck ton of obscure maths. This works, but no one calls MRI scans "photos".)


So where do you draw the line? Summation and multiplication of signals is fine, but conditionals are bad?

That’s just arbitrary bullshitting around a non-existent problem.

Hell, it is pretty easy to check the iPhone's performance with measurement metrics - take the same photo in identical environments with the iPhone and with a camera with a much bigger sensor, and compare the results. Hell, that's literally how Apple calibrated their algorithm.
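
(A hypothetical sketch of such a metric - PSNR - assuming the two images are aligned, equally sized, and scaled to [0, 1]:)

    import numpy as np

    def psnr(reference, test):
        # Peak signal-to-noise ratio in dB; higher means a closer match.
        mse = np.mean((reference - test) ** 2)
        return 10.0 * np.log10(1.0 / mse)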


> So where do you draw the line? Summation and multiplication of signals is fine, but conditionals are bad?

Effectively, yes. Hard conditionals create discontinuities in the mathematical sense. So while I'm not sure of the rest of the line's path, conditionals are on it (and using generative AI to hide discontinuities doubly so!).

In general, I'm fine with information loss. I'm not fine with adding information that wasn't in the photo originally.
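
A contrived Python sketch of that line: a smooth tone curve versus a hard conditional. Both are "just math", but only the first can be walked back from the output alone.

    import numpy as np

    def tone_curve(x):
        return np.sqrt(x)                    # continuous, monotonic

    def hard_conditional(x):
        return np.where(x > 0.5, 1.0, 0.0)   # threshold: many-to-one

    x = np.array([0.2, 0.4, 0.6, 0.8])
    assert np.allclose(x, tone_curve(x) ** 2)   # invertible
    # hard_conditional(x) -> [0, 0, 1, 1]: 0.6 and 0.8 collapse to the
    # same value; no inverse exists, that information is gone.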



