On Fri, 10 Apr 2009 22:48:03 +0200, erpy wrote:
(Note: For those who may not want to read everything that appears below, I will mention erpy provides no examples to substantiate the claim that photographs edit better in 16 bits than 8 bits. – Mike Russell) …..
That’s where your "problem" is… you take noise for granted. That’s the
Thanks for your concern. I do think your comments are substantial enough to deserve individual replies.
….
The challenge is for a photo that *edits* better? As in anything goes within Photoshop, no plugins except Camera Raw? That'd be very easy… don't put any money on the table for your challenge! ;))
….
I have, in the past, put money on the table, and paid up. The results were less than conclusive, due to poor design of the challenge on my part. It’s actually fairly difficult to set the challenge up fairly and clearly.
I paid out the reward, but did not end up with an example of an image that edited better in 8 bits than 16 bits. This is the main reason that I am confident that you cannot provide such an example.
….
Do you actually know how digital sensors in a camera work? Sensors actually *capture* and *store* *1/3rd* of the data needed for an RGB image. The rest is "interpolated".
Yes, I do. You’re talking about the Bayer pattern. It is inaccurate to say that the sensors capture 1/3 of the data. Luminance data, which is what the eye is most sensitive to, is not interpolated. Chroma data is, to a certain extent. Our eyes do the same thing.
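To make the Bayer point concrete: each sensor site records one color sample, and the two missing channels at that site are estimated from neighbors during demosaicing. A minimal sketch (a hypothetical 4x4 RGGB mosaic with made-up values and simple bilinear averaging, not any camera's actual pipeline):

```python
# Sketch of Bayer-pattern demosaicing (illustrative values, not a real
# camera pipeline): each site stores ONE color sample; the other two
# channels are interpolated from neighboring sites.

# 4x4 sensor readings, RGGB layout:
#   R G R G
#   G B G B
#   R G R G
#   G B G B
mosaic = [
    [200, 120, 198, 118],
    [122,  60, 121,  59],
    [199, 119, 201, 117],
    [120,  58, 119,  61],
]

def green_at(y, x):
    """Bilinear estimate of green at a red or blue site: average the
    up/down/left/right green neighbors, clamped at the sensor edge."""
    neighbors = []
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < 4 and 0 <= nx < 4:
            neighbors.append(mosaic[ny][nx])
    return sum(neighbors) / len(neighbors)

# Pixel (0, 0) is a red site: its red value is measured directly,
# while its green value is interpolated from the adjacent green sites.
print(mosaic[0][0])    # measured red sample
print(green_at(0, 0))  # interpolated green estimate
```

Note the green channel, which dominates the luminance our eyes are most sensitive to, has twice as many samples as red or blue, which is why "1/3 of the data" overstates what is lost.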
As an analogy, would you run "precision" tests on a "length measure" taken by hand (i.e. the sensor of a digital camera), or on a measure taken with a laser beam (i.e. a computer-generated image)?
This doesn’t fly from a practical standpoint. My concern is with photographs, and not with manipulating arrays of numbers. This is one reason I reject histograms as a meaningful measure of image quality.
Although, as I said, 16 bits *editing* superiority is easily shown on digital photos as well.
Yackity yackity – if it’s "easy", why not do so? I suggest it is because it is not easy to do so, and may well be impossible, even with extreme editing after the fact.
The principle stays the same…
You can demonstrate a principle using numbers and conclude that "more bits is more better". Whether that principle translates to effective practice is another question, and it's the one that I keep asking you to address with an actual photograph.
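The numeric demonstration, by itself, is easy, which is exactly why it should be kept separate from the photographic question. A minimal sketch of the principle (pure arithmetic on gray levels, not a photograph): apply a curve and its inverse repeatedly, rounding to 8-bit codes between steps, and count how many distinct levels survive compared with keeping full precision throughout.

```python
# Numeric sketch of the "more bits" principle (arithmetic only, not a
# photograph): repeated edits that round to 8-bit integers between
# steps collapse gray levels; full-precision intermediates do not.

def roundtrip(value, passes, quantize):
    """Brighten (gamma 0.5) then darken back (gamma 2.0) repeatedly,
    optionally rounding to an 8-bit code after each step."""
    v = value / 255.0
    for _ in range(passes):
        v = v ** 0.5                       # brighten
        if quantize:
            v = round(v * 255) / 255.0     # store as 8-bit
        v = v ** 2.0                       # darken back
        if quantize:
            v = round(v * 255) / 255.0
    return round(v * 255)

levels_8bit  = {roundtrip(v, 10, True)  for v in range(256)}
levels_float = {roundtrip(v, 10, False) for v in range(256)}
print(len(levels_8bit), len(levels_float))  # 8-bit keeps fewer levels
```

The full-precision path returns all 256 levels; the quantized path loses some. Whether that loss is ever *visible* in a real photograph, after real edits, is the question the numbers alone cannot answer.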
have many blending layers on your picture and see the difference.
Fine. This isn’t global warming or world hunger we’re talking about, it’s a psd file with some layers in it. So point us to an example? Until you do so, I suggest you cannot do so, and that you are blowing smoke and mirrors, with Toto about to pull the curtain away.
Obviously, the
more noise you have, the less the difference… but that’s only because the source data is crap from the very beginning (your… anyone’s crap, noisy digital sensor).
Reality has a way of messing up theoretical principles, doesn’t it? LOL.
I wouldn’t use such a picture for anything anyway.
Not all of us have that luxury.
Not to mention, the 16 bits in raw photos have a different meaning than "precision". The bit depth is used to expand the dynamic range of a picture.
It’s probably more accurate to say that the additional bits (typically 12) from the camera sensor can be manipulated in Camera Raw. I would agree that access to this data can be beneficial, but that’s not the same as saying that working in 16 bits (or 32) will give a result that is better or different than working in 8 bits. The only thing that will demonstrate that is an example of an actual photograph.
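The dynamic-range point I'm conceding can be sketched numerically: a 12-bit linear sensor value carries many distinct codes in the deep shadows, and a straight 8-bit encoding collapses them into far fewer output codes. A minimal sketch (illustrative gamma-2.2 mapping, not any camera's or Adobe's actual pipeline):

```python
# Sketch of the dynamic-range argument (illustrative, not a real raw
# converter): count how many distinct 12-bit linear shadow codes
# survive a straight 8-bit gamma encoding.

GAMMA = 1 / 2.2

def encode_8bit(raw12):
    """Map a linear 12-bit code (0..4095) to an 8-bit gamma-encoded value."""
    return round((raw12 / 4095) ** GAMMA * 255)

# The bottom two stops of the sensor's range: linear codes 0..1023.
shadow_codes_12bit = list(range(1024))
shadow_codes_8bit = {encode_8bit(c) for c in shadow_codes_12bit}

print(len(shadow_codes_12bit))  # 1024 distinct linear codes
print(len(shadow_codes_8bit))   # far fewer after 8-bit encoding
```

Having that headroom available inside Camera Raw is useful for recovering shadows and highlights; it still says nothing about whether subsequent *editing* in 16 bits beats 8 bits on a finished photograph.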
Hence if you take a raw photo and open it straight in Photoshop without touching anything within Camera Raw, you're losing so much "lighting" data you probably have no clue about.
Right – again no example though. See a pattern here?
While, strangely enough for you, I imagine, all the image processing taking place within Camera Raw is at 16 bits/pixel – despite the fact you can import at 8 bits afterwards.
I have no problem with any of this. What I have a problem with is the religious belief that working in 16 bit instead of 8 bit in Photoshop confers any demonstrable advantage.
Why, according to your belief, would Adobe waste so much memory and speed processing photos at 16 bits when 8 bits would be more than enough? Answer: for all the good reasons I said. :))
I’ve been in computer graphics for quite a while – 25 years or more, and am well aware of the trade-offs of performance and accuracy. Curvemeister uses floating point internally everywhere, and in some cases I believe I get a more accurate, smoother result than Photoshop does.
With today’s processors, 8, 16, and even 32 bit ints are processed at about the same speed. In the early days of computer graphics, the idea of doing a floating divide per pixel was a show stopper, and that was when a megapixel image drew oohs and ahhs at SIGGRAPH.
(oh well, and why are most of the raw-dedicated image processors around featuring 32-bit floating-point precision, spending money and effort on this? Just for the sake of a magic marketing word? Not this time around.)
This sums up your argument, I believe: if 8 bits were enough, why would the big companies even bother with 16 and 32 bits? One reason, I believe, is because they can. Another is dynamic range, as you said. But none of this addresses my request that you provide an example of a photographic image that edits better in 16 bits than 8 bits.
Getting back to my original point about examples. Great piles of words are meaningless if there are no examples to back them up. Yet the words grow and grow, with no photograph to back them up. Just a feeling that 16 bits must be better than 8 bits.
If it were as easy to find such an example as you claim it is, it should take you less time to find one than to read, and perhaps reply to, this rather long post.
All the best to you,
—
Mike Russell –
http://www.curvemeister.com