A Color Science White Paper from Adobe Inc

TA
Posted By
Timo Autiokari
Dec 31, 2004
Views
1550
Replies
35
Status
Closed
Adobe Inc. has published a white paper, http://www.adobe.com/products/photoshop/pdfs/linear_gamma.pdf, that tries to explain the colorimetric response of the human vision, film and digital image sensors and how they relate to digital image capture and coding.

In this paper the chapter 'Raw Capture, Linear Gamma, and Exposure' is totally incorrect; it is incomprehensible that this kind of misconception passed the editorial staff at Adobe.

The author (Mr. Bruce Fraser) explains that:

"Film responds to light the same way our eyes do, but silicon does not."

Now, anyone who has ever used "film", be it a chrome or a paper exposed from a negative, knows all too well that the "film" will severely compress both the dark end and the bright end of the tonal reproduction range. Shadows on "film" appear far too dark, and highlights are washed out to the eye.

He continues with the claim:

"Film mimics the eye’s response to light, which is highly nonlinear."

and explains "our sensory mechanisms" using analogies from golf balls, spoonfuls of sugar, etc. The author is clearly trying to explain the Light Adaptation of the human vision, as he tries to rationalize:

"This built-in compression allows your senses to function over an immense range of stimuli. You can go from subdued room lighting to full daylight without your eyeballs catching fire, even though you may have suddenly increased the stimulus reaching those eyeballs by a factor of 10,000 or so. But the sensors in digital cameras lack the compressive nonlinearity typical of human perception. They just count photons in a linear fashion."

But the Light Adaptation of the vision is pretty much fixed when we view a still scene or a picture of it. Light Adaptation does not explain how we perceive the surface reflectances from the scene under an unchanged illumination level. Yes, light (photons) does convey that information into our eyes, but the Light Adaptation level of the vision is unchanged in this situation, since the lightness level does not change.

When we go, e.g., from subdued room lighting to full daylight, then between these two viewing situations the Light Adaptation of the human vision is fully working. So our perception of these two lightness levels (the lightness level in the subdued room and the lightness level under full daylight) is approximately logarithmic, i.e. it behaves according to Weber's law. But we do not perceive the surface reflectances in either of the above viewing situations according to Weber's law; we perceive the surface reflectances approximately linearly.
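The distinction being drawn can be put in rough numbers. A minimal Python sketch, assuming the Weber-Fechner log approximation for adaptation between viewing conditions (the luminance figures are illustrative, not from the white paper):

```python
import math

def adaptation_steps(luminance_a, luminance_b):
    """Perceived step between two adaptation levels,
    modelled here as the log10 of the luminance ratio."""
    return math.log10(luminance_b / luminance_a)

# Subdued room (~10 cd/m^2) versus full daylight (~100,000 cd/m^2):
# a 10,000x physical jump compresses to a 4-step perceptual jump.
print(adaptation_steps(10, 100_000))  # -> 4.0
```

Within either adapted state, the argument above is that reflectances are perceived roughly in proportion, so no such compression applies.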

The author then throws in an example image with the caption:

"Linear processed raw captures look very dark. But all the data is there in the image."

Sure the image appearance is very dark, since that image is totally incorrectly color-managed! He is showing the linear image data in a steep-gamma RGB working space (or in the native gamma space of the monitor, which is also how it is shown in Acrobat Reader).

He follows that with another example image with the caption:

"The same linear processed capture with a tone curve appears normal."

Yes, the image appearance is now much better, since the steep gamma transfer function of the viewing space is taken into account by his curve. He explains:

"This is the curve required to apply a gamma correction tone to the linear capture."

It looks as though the author is trying to explain that linear image data would not appear properly to the human vision *because* the Light Adaptation of the human vision is non-linear. This is nonsense.

The situation he is confronted with is this:

Linear image data appears properly to our vision when the monitor is linearly calibrated (or the image data is shown in a linear working space of a color-managed application).

And gamma-compressed image data appears properly to our vision when the monitor is gamma-calibrated to that gamma space (or the image data is shown in such a gamma-compressed working space of a color-managed application).
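These two matched cases can be sketched numerically. A minimal Python illustration, assuming a display gamma of 2.2 and an 18% mid-grey patch (both figures are illustrative assumptions, not from the white paper):

```python
linear_value = 0.18  # scene-referred mid-grey, linear light

# Case 1: linear data on a linearly calibrated display: no transform needed.
displayed_linear = linear_value

# Case 2: gamma-encoded data on a gamma-2.2 display: the encoding and the
# display transfer function cancel out.
encoded = linear_value ** (1 / 2.2)
displayed_gamma = encoded ** 2.2

# Mismatch: linear data sent straight to a gamma-2.2 display comes out
# far too dark, which is the "very dark" appearance discussed above.
mismatched = linear_value ** 2.2

print(round(displayed_linear, 6), round(displayed_gamma, 6))  # both 0.18
print(round(mismatched, 6))  # roughly 0.023, much darker
```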

Timo Autiokari http://www.aim-dtp.net


TA
the analyst
Dec 31, 2004
Timo,
have you tried to contact Adobe and Bruce?
He is a highly regarded person, no doubt; he will answer your comments.

Regards and Happy New Year
EG
Eric Gill
Dec 31, 2004
the analyst wrote in
news::

Timo,
have you tried to contact Adobe and Bruce?

Oh, he has. They think he’s a crank. And every now and then, he pops up in Photoshop groups with another rant. Check Google groups if you’re interested.

He is a highly regarded person, no doubt; he will answer your comments.
Regards and Happy New Year
C
Corey
Dec 31, 2004
wrote in message
Adobe Inc., has published a white paper
http://www.adobe.com/products/photoshop/pdfs/linear_gamma.pdf that tries to explain the colorimetric response of the human vision, film and digital image sensors and how they relate to digital image capture and coding.

RE:

"Linear processed raw captures look very dark. But all the data is there in the image."

Sure the image appearance is very dark since that image is totally incorrectly color-managed! He is showing the linear image data in a steep gamma RGB working-space (or in a native gamma space of the monitor, like it is also shown in the Acrobat reader).

I think you are reading way too much into this…or perhaps too little. The statement is not really about color management…it’s about the information still being in the RAW file. The trick is to learn how to access the RAW data that may appear to be missing in your PSD file.

In an earlier version of Photoshop User, which I have loaned to a friend, it was explained how to retrieve this data from the RAW file to replace the washed out look of the sky. It dealt with the very real problems the author explains…shadows being too dark, lighter things being washed out.

Re:
But we do not perceive the surface reflectances in
either of the above viewing situations according to the Weber’s law, we perceive the surface reflectances about linearly.

Perhaps it is still nonlinear, but due to the tiny amount of differences in the reflectances, it just seems linear. The geometric curve may approach the straightness of an arithmetic one while not actually achieving it.

Peadge πŸ™‚
M
MOP
Dec 31, 2004
wrote in message
Adobe Inc., has published a white paper
http://www.adobe.com/products/photoshop/pdfs/linear_gamma.pdf that tries to explain the colorimetric response of the human vision, film and digital image sensors and how they relate to digital image capture and coding.

In this paper the chapter ‘Raw Capture, Linear Gamma,and Exposure’ is totally incorrect, it is incomprehensible that this kind of misconception has passed the editorial staff at Adobe.

I’m not quite sure what paper you have been reading, but your comments don’t relate to the ones made by Bruce!
In essence what he is saying is that 24-bit colour, i.e. 8 bits per colour, gives 256 levels and won't capture most scenes,
so he is suggesting you use RAW at 12 bits per colour, or 4,096 levels. He also talks about linear gamma, which is the bit that may have confused you: gamma is a logarithmic scale, so when he talks about linear gamma what he in fact means is a straight line y = mx + c when plotted on a log scale.
He is also saying that human perception is logarithmic, which is indeed true, as any electronics engineer who has used a lin pot in place of a log pot as a hi-fi volume control will know, and to a large extent it is why we use decibels in electronics;
3 dB is a bit like one stop in photography 😉
I think you should read the paper again, you may learn something from it! Your knowledge of physics and maths is obviously less than your opinion of your knowledge.
H
Hecate
Jan 1, 2005
On Fri, 31 Dec 2004 20:04:42 GMT, "MOP" wrote:

wrote in message
snip
Timo has, shall we say, to be kind, a "different" view about colour management and, indeed, the laws of physics. There are plenty of times, historically, when the view of one person ranged against the "experts" has been proven correct.

This isn’t one of them.



Hecate – The Real One

veni, vidi, reliqui
MR
Mike Russell
Jan 1, 2005
Hecate wrote:
On Fri, 31 Dec 2004 20:04:42 GMT, "MOP" wrote:
snip
Timo has, shall we say, to be kind, a "different" view about colour, management and, indeed the laws of physics. There are plenty of times when, historically, when the view of one person ranked against the "experts" has been proven to be correct.

This isn’t one of them.

I agree with Timo completely on this one.

The main thrust of the Fraser article, that film and digital are different, is valid. But this valid point is muddled with a number of oversimplifications and mis-statements.

The eye and film do work entirely differently. Two objects *do* appear to weigh twice as much as one. Light adaptation of the eye exists and is well documented. Gamma encoding does not increase the dynamic range of the image.

Timo is correct, and Fraser is wrong, on all four of these counts.

The article does not meet Adobe's usual standards. Timo, Hecate, myself, and several other members of this newsgroup could have written a more informative and accurate article than this one. It is an article that should be written. Perhaps one of us will be inspired to do so? —

Mike Russell
www.curvemeister.com
www.geigy.2y.net
TA
Timo Autiokari
Jan 1, 2005
"Peadge" wrote:
The statement is not really about color management…it’s about the information still being in the RAW file.

The statement is *due* to the fact that the author does not understand color-management, and he then tries to rationalize the situation he has on his hands (the too-dark image appearance) with totally incorrect assumptions about the behavior of the vision.

Please see http://www.aim-dtp.net/aim/techniques/linear_raw/index.htm where I similarly show two versions of an image, plus the correct way to work with linear images. You can even download the experiment and try it yourself.

The trick is to learn how to access the RAW data that may appear to be missing in your PSD file.

The trick that the author explains is incorrect: with the Curves dialog it is not possible to apply a strong enough curve that the gamma space of the monitor (or the RGB working-space gamma) could be properly compensated, so the dark end will remain far too dark with this trick. But there is absolutely no need for any tricks; one just needs to use the color-management correctly.

Timo Autiokari http://www.aim-dtp.net
TA
Timo Autiokari
Jan 1, 2005
MOP wrote:

I’m not quite sure what paper you have been reading,

The Adobe white paper in question is:
http://www.adobe.com/products/photoshop/pdfs/linear_gamma.pdf

but your comments don’t relate to the ones made by Bruce!

Now that you know what the subject matter is, please do read it and you will find out that my comments relate directly to that document.

in essence what he is saying is 24bit colour i.e. 8 bits per colour gives 255 levels and won’t capture most scenes,

Yes, you have clearly been reading some other document, there is nothing like that in the white paper in question.

gamma is a logarithmic scale,

No, gamma is a power function. You cannot have a zero on a log scale, but you do have level 0 in the levels scale, be it the non-linear or the linear levels scale.
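The point about zero can be shown directly. A minimal Python sketch (the 2.2 exponent is an illustrative assumption): a power function is defined at level 0, while a logarithm is not:

```python
import math

def gamma_curve(x, g=2.2):
    """Gamma encoding as a pure power function, y = x**(1/g)."""
    return x ** (1.0 / g)

print(gamma_curve(0.0))  # -> 0.0: level 0 maps to level 0
print(gamma_curve(1.0))  # -> 1.0: the end points are fixed

try:
    math.log10(0.0)  # a log scale has no zero
except ValueError:
    print("log10 is undefined at zero")
```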

He is also saying that human perception is
logarithmic, which is indeed true,

Perception of light(ness) is approximately logarithmic, as I have already explained. This is due to the Light Adaptation of the vision: we are able to adapt to vastly different illumination situations, from starlight up to the brightest sunny day.

But under an unchanged illumination level we do not perceive scene reflectances logarithmically, nor in a steep gamma space. In this situation the vision requires linear light, the very same light that hits our eyes when we view the natural scene, where the light is linear.

I think you should read the paper again, you may learn something from it! Your knowledge of physics and maths is obviously less than your opinion of your knowledge.

I have done my homework, and then some. Please have a look at http://www.aim-dtp.net/aim/techniques/linear_raw/index.htm where this issue is correctly explained.

Timo Autiokari http://www.aim-dtp.net
M
MOP
Jan 1, 2005
snip

Sorry, this paper has bugger-all to do with colour management; it's about the dynamic range of various capture systems and the difference between them! You and Timo are so totally obsessed with colour space that you seem to have completely missed the point!

I agree with Timo completely on this one.

The main thrust of the Fraser article, that film and digital are different,
is valid. But this valid point is muddled with a number of oversimplifications and mis-statements.

That’s because it’s been written for photographers and photo-shop users, if it were written in precise technical terms I doubt most people would understand it, this thread is testament to that, as you are arguing over a subject that is, to be quite honest common sense to most first year engineering and physics students (well at Cambridge UK anyway)
The eye and film do work entirely differently. Two objects *do* appear to weigh twice as much as one.

Sorry they don’t! ( I just tried it) It’s a bit confusing as we are able to process what we feel, so someone use to judging the weight of things may well be able to guess something is twice the weight, but that’s by experience.

The human ear can barely hear a 3 dB increase in volume, which is increasing it by a factor of two.

When we increase the intensity of the light through our lens in the darkroom by a stop, i.e. twice as much light, it does not seem that the image is twice as bright.

Light adaptation of the eye exists and is well
documented. Gamma encoding does not increase the dynamic range of the image.

Yes it does! All the research says the eye is logarithmic (to a first approximation), just like film with its S-curve, which is density (linear) plotted against the log of the exposure.

Gamma is not and never has been a form of encoding; it's simply a way of describing a transfer function on a logarithmic scale! And logarithmic functions do indeed increase the dynamic range: for example, 3 dB is a factor of 2, 30 dB is 1,000, and 100 dB is 10,000,000,000. If that's not compression, I don't know what is. Logarithmic amplifiers have always been used for compression. Maybe it's the total lack of understanding of various logarithmic functions that is the problem here. Have a look at a slide rule: the linear side normally goes to 10 while the log side goes to 1,000.
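The decibel figures in the paragraph above can be checked with a few lines of Python (power-ratio decibels, ratio = 10^(dB/10)):

```python
def db_to_power_ratio(db):
    """Convert decibels to a linear power ratio."""
    return 10 ** (db / 10)

print(db_to_power_ratio(3))    # about 2 (1.995...)
print(db_to_power_ratio(30))   # a factor of 1,000
print(db_to_power_ratio(100))  # a factor of 10,000,000,000
```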

Timo is correct, and Fraser is wrong, on all four of these counts.

I’m not convinced maybe you can produce a reasoned scientific argument to back up you comments rather than some rather abstract comments
M
MOP
Jan 1, 2005
wrote in message
snip
Tell you what, you believe in whatever you want 🙂
TA
Timo Autiokari
Jan 1, 2005
On Sat, 01 Jan 2005 13:34:10 GMT, "MOP" wrote:

When we increase the intensity of the light through our lens in the darkroom by a stop, or twice as much light it does not seem that the image is twice as bright.

That is so because the vision adapts to the change in lightness level. I think I have explained this already some three times.

Vision, however, does not adapt to the changes in surface reflectance that we perceive in a natural scene or an image of it. Indeed, that would be a very irritating property of vision: whatever we were looking at would always appear seriously flat and washed out.

The analogy with hearing is not very straightforward, since audio stimuli change in time, all the time. Our hearing also adapts to changes in the overall strength of the stimuli: even if we are able to hear a whisper when it is otherwise quiet, we do not hear a whisper when an orchestra is playing forte. When we look at a natural scene, or an image of it, the analogy is that there are millions of instruments continuously playing their characteristic tone at some fixed volume, so adaptation cannot happen.

Timo Autiokari http://www.aim-dtp.net
M
MOP
Jan 1, 2005
"Timo Autiokari" wrote in message
On Sat, 01 Jan 2005 13:34:10 GMT, "MOP" wrote:
When we increase the intensity of the light through our lens in the darkroom by a stop, or twice as much light it does not seem that the image is twice as bright.
That is so, because the vision will adapt to the changes in lightness level. I think I have explained this already some three times.

No it’s because the eye has a logarithmic characteristic,

You have *not* explained it! You have made that statement three times, but with no proof, either by reference to a technical paper or by showing a valid mathematical proof.

Vision however does not adapt to changes in surface reflectances that we perceive in a natural scene or an image of it. Indeed that would be a very irritating property of vision, what ever we would be looking at would always appear as seriously flat washed out.

Sorry that’s crap! light is light how can it change it’s physical characteristic just because it’s reflected. So is there a difference between a mirror reflecting the light or a white surface. anyway every thing we see is reflected light unless we look at he F***ing sun!

The analogy with hearing is not very straightforward since the audio stimuli change in time,

Er! Yes, that's how sound works; the function of amplitude against time is called frequency.

all the time. Our hearing also adapts to
changes in the overall strength of the stimuli: Even if we are able to hear a whisper when it is otherwise quiet we do not hear a whisper when an orchestra is doing it in forte.
That’s called signal to noise ratio and nothing to do with the perception of the ear!

When we look at a natural
scene or an image of it the analogy is that there are millions of instruments continuously playing their characteristic tone at some fixed level of volume. So adaptation can not happen.

What!!! Sorry, stop and let me get off, I can't take any more of this drivel.

Timo Autiokari http://www.aim-dtp.net
Sorry, you obviously live in a different universe where the laws of physics are different from those on Earth.
Anyway, I don't really want to carry on with this stupid thread; you are obviously some sort of crank who has little knowledge of what he is talking about.
B
bagal
Jan 1, 2005
Wow! Now that is what I call diverse opinions

I think the difference is that light is light (OK?) but human perception of light more closely matches the behaviour of light & film rather than light & digital.

Maybe human perception tends to follow curvilinear responses rather than linear (after allowing for a few transformations along the axes)?

Aerticeus
M
MOP
Jan 1, 2005
"Aerticulean Effort" wrote in message
Wow! Now that is what I call diverse opinions

I think the difference is that light is light (OK?) but human perception of light more closely matches the behaviour of light & film rather than light & digital.

Maybe human perception tends to follow curvilinear responses rather than linear (after allowing for a few transformations along the axes)?
Aerticeus

LOL don’t just don’t
B
bagal
Jan 1, 2005
MOP wrote:
"Aerticulean Effort" wrote in message

Wow! Now that is what I call diverse opinions

I think the difference is that light is light (OK?) but human perception of light more closely matches the behaviour of light & film rather than light & digital.

Maybe human perception tends to follow curvilinear responses rather than linear (after allowing for a few transformations along the axes)?
Aerticeus

LOL don’t just don’t
I may be wrong but aren’t F-stops non-linear?

I thought each stop represented a doubling of light input?

Of course, this can be linearised by a transform (y-axis in powers of 2)

Aerticeus
MR
Mike Russell
Jan 2, 2005
MOP wrote:
snip

Sorry this paper has bugger-all to do with colour management it’s about dynamic range of various capture systems and the difference between them! you and Timo seem so totally obsessed with colour space you seem to have completely missed the point!

Any obsessions aside, my point stands that Timo is correct in the criticisms he mentioned.

The main thrust of the Fraser article, that film and digital are different, is valid. But this valid point is muddled with a number of oversimplifications and mis-statements.

That’s because it’s been written for photographers and photo-shop users, if it were written in precise technical terms I doubt most people would understand it,

There’s no need to be wrong to be clear and understandable.

this thread is testament to that, as you
are arguing over a subject that is, to be quite honest common sense to most first year engineering and physics students (well at Cambridge UK anyway)

Not really, though I would point out that what is obvious to a freshman may be problematical to a graduate. πŸ™‚

The eye and film do work entirely differently. Two objects *do* appear to weigh twice as much as one.

Sorry they don’t! ( I just tried it) It’s a bit confusing as we are able to process what we feel, so someone use to judging the weight of things may well be able to guess something is twice the weight, but that’s by experience.

I give you credit for experimenting. People have been judging weight manually reasonably accurately for many thousands of years. Surely you don’t believe that the apparent weight of four versus eight golf balls is the same as eight versus sixteen.

The human ear can barley hear a 3db increase in volume, which is increasing it by a factor of two.

Granted, and Fraser would have been accurate had he used hearing instead of the sensation of weight. He didn’t.

When we increase the intensity of the light through our lens in the darkroom by a stop, or twice as much light it does not seem that the image is twice as bright.

Absolutely, with the caveat that this is ignoring the eye’s adaptation. Both sound and light share the characteristic of "perceptual uniformity" being logarithmic relative to power. I think it was simply incorrect to generalize this to muscular sensation.

Light adaptation of the eye exists and is well
documented. Gamma encoding does not increase the dynamic range of the image.

Yes it does! all the research says the eye is logarithmic! (To a first approximation) just like film the S curve which is density (linear) plotted against the log of the exposure.

No. The eye adapts to overall light conditions, and it is this adaptation, not logarithmic encoding of the data received by the eye, that is responsible for the eye's extremely large dynamic range. Second, simply applying gamma encoding to an image's pixel values, setting aside roundoff due to finite bit encoding, does not change its dynamic range.

You do open up an interesting point, and that is the question of whether film is inherently linear or not.

Consider this statement: film is linear in its response to light, just as a CCD is. The number of silver halide crystals that change state as a result of light interaction is a more or less linear function of the number of photons that interact with the film. Simplifying a bit: if 1,000 photons hit a film surface, a certain percentage of them (say 10% for this example) will result in converted halide crystals; 2,000 photons will convert double that number. Voila, a linear response to light, very much like the discharge of electrons that occurs in a camera's CCD.

You mentioned the S shape of the film curve, but neither the toe nor the shoulder of the S-curve is used much in a correctly exposed photograph. Instead, the middle, linear portion of the film’s response curve is used. Thus my contention that film is linear in exactly the same sense that a digital CCD is.
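The halide argument above reduces to a one-line model. A toy Python sketch, using the post's illustrative 10% conversion figure (real film efficiency varies):

```python
def converted_crystals(photons, efficiency=0.10):
    """Crystals converted when a fixed fraction of incoming photons
    interact: a linear response to exposure."""
    return photons * efficiency

# Doubling the photon count doubles the conversions, as claimed above.
print(converted_crystals(1000), converted_crystals(2000))  # -> 100.0 200.0
assert converted_crystals(2000) == 2 * converted_crystals(1000)
```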

Gamma is not and never has been a form of encoding; it’s simply a way of describing a transfer function on a logarithmic scale! And logarithmic functions do indeed increase the dynamic range for example 3db’ is factor 2 and 30db’s is 1,000, 100dB’s is 10,000,000,000 if that’s not compression I don’t know what is? Logarithmic amplifiers have always been use for compression. Maybe it’s the total lack of understanding of various logarithmic functions that is the problem here. have a look at a slide-rule the linear side normally goes to 10 when the log side goes to 1,000

No, the problem is probably not anyone’s lack of understanding of logarithms. BTW – you and I are among the very few who have ready access to a slide rule. πŸ™‚

The decision of whether to use logs or not is a matter of our own convenience, and says nothing about the underlying process producing the numbers. So the fact that Hurter and Driffield decided long ago to use the log of the exposure along the horizontal axis does not mean that film responds to light logarithmically. Rather, the log of exposure is used for plotting the characteristic curve because of the great ratios involved.

Similarly, density is a logarithmic function because it is more convenient to represent the darkness of a piece of film in logarithmic units of density, instead of the linear units of transmission or opacity (which density is the log of).

I’m not convinced maybe you can produce a reasoned scientific argument to back up you comments rather than some rather abstract comments

I can say in all fairness that I have done so, to the best of my ability.

The Fraser article is oversimplified to the point of inaccuracy, and this is not in keeping with Adobe's usually excellent editorial standards. Those who are still beginning to grasp these concepts deserve better. —

Mike Russell
www.curvemeister.com
www.geigy.2y.net
M
MOP
Jan 2, 2005
"Aerticulean Effort" wrote in message
MOP wrote:
"Aerticulean Effort" wrote in message

Wow! Now that is what I call diverse opinions

I think the difference is that light is light (OK?) but human perception of light more closely matches the behaviour of light & film rather than light & digital.

Maybe human perception tends to follow curvilinear responses rather than linear (after allowing for a few transformations along the axes)?
Aerticeus

LOL don’t just don’t
I may be wrong but aren’t F-stops non-linear?

I thought each stop represented a doubling of light input?

I thought that was exactly what I said!
An f-number is simply the ratio of the focal length to the diameter of the lens aperture, so for example a 50 mm focal-length lens with a 25 mm aperture would be f/2.
Now, as the amount of light passing through a lens is proportional to the area and not the diameter of the aperture, we see that every time we double the diameter we increase the area by a factor of 4. Therefore, to increase the area by a factor of 2, we need to increase the diameter by a factor of √2 = 1.414, which is why we have the strange f-numbers 1, 1.4, 2, 2.8, 4. So the f-stops are in fact powers of √2, a log scale to the base √2,
or a GP and not an AP 🙂

Of course, this can be linearised by a transform (y-axis in powers of 2)
Aerticeus
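The √2 derivation above generates the familiar marked stops directly. A short Python sketch:

```python
import math

def f_stops(count, start=1.0):
    """Successive full stops: diameters step by sqrt(2), so the
    aperture area (and hence the light admitted) halves each stop."""
    return [round(start * math.sqrt(2) ** i, 1) for i in range(count)]

print(f_stops(5))  # -> [1.0, 1.4, 2.0, 2.8, 4.0]
```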
M
MOP
Jan 2, 2005
There’s no need to be wrong to be clear and understandable.

Sorry, but any simplification will by definition be inaccurate.

this thread is testament to that, as you
are arguing over a subject that is, to be quite honest common sense to most first year engineering and physics students (well at Cambridge UK anyway)

Not really, though I would point out that what is obvious to a freshman may be problematical to a graduate. 🙂

The eye and film do work entirely differently. Two objects *do* appear to weigh twice as much as one.

Sorry, they don't! (I just tried it.) It's a bit confusing, as we are able to process what we feel, so someone used to judging the weight of things may well be able to guess something is twice the weight, but that's by experience.

I give you credit for experimenting. People have been judging weight manually reasonably accurately for many thousands of years. Surely you don’t
believe that the apparent weight of four versus eight golf balls is the same
as eight versus sixteen.

The human ear can barely hear a 3 dB increase in volume, which is increasing it by a factor of two.

Granted, and Fraser would have been accurate had he used hearing instead of
the sensation of weight. He didn’t.

When we increase the intensity of the light through our lens in the darkroom by a stop, or twice as much light, it does not seem that the image is twice as bright.

Absolutely, with the caveat that this is ignoring the eye’s adaptation. Both
sound and light share the characteristic of "perceptual uniformity" being logarithmic relative to power. I think it was simply incorrect to generalize this to muscular sensation.

OK lets drop that one,

Light adaptation of the eye exists and is well
documented. Gamma encoding does not increase the dynamic range of the image.

Yes it does! All the research says the eye is logarithmic (to a first approximation), just like film: the S-curve is density (linear) plotted against the log of the exposure.

No. The eye adapts to overall light conditions and this is responsible for the extremely large dynamic range of the eye,

Most of that effect is due to the iris of the eye and not any desensitisation of the retina

and indeed nothing to do with the dynamic range of the eye.

I'm sure you know this, but anyway I'll say it: the dynamic range is the ability to receive a stimulus with little or no distortion. The eye works in two ways. Firstly, with its log response it has a wider dynamic range; however, this is also assisted by the iris, which effectively works as an attenuator in front of the retina. The iris cannot *selectively* attenuate in either the luminosity domain or the spectral domain; in electronic terms it's a simple broadband attenuator, and in audio terms it's like putting on ear defenders.

and not logarithmic encoding
of the data received by the eye. Second, simply applying gamma encoding to
an image’s pixel values, without reference to roundoff due to finite bit encoding, does not change its dynamic range.

You do open up an interesting point, and that is the question of whether film is inherently linear or not.

It's neither! It's just very non-linear: some parts of the characteristic are close to a log, other parts not.

The low-light section is non-linear due to the chemical inertia of the reaction (reciprocity failure), and the high end is non-linear due to the overload characteristic, but the eye is much the same in that respect.
Consider this statement: Film is linear in its response to light, just as a
CCD is. The number of silver halide crystals that change state as a result
of light interaction is a more or less linear function of the number of photons that interact with the film. Simplifying a bit, if 1000 photons hit a film surface, a certain percentage of them (say 10% for this example)
will result in converted halide crystals. 2000 photons will convert double
that number. Voila – a linear response to light, very much like the discharge of electrons that occurs in a camera’s CCD.
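The constant-fraction photon model above can be written out as a sketch (the 10% conversion fraction is the post's illustrative figure, not a property of any real emulsion):

```python
# Simplified model from the post: exposure converts a roughly constant
# fraction of incoming photons, so the response is linear in photon count.
def converted_crystals(photons, fraction=0.10):
    """Halide crystals converted, under the constant-fraction assumption."""
    return photons * fraction

print(converted_crystals(1000))  # 100.0
print(converted_crystals(2000))  # 200.0 -- double the photons, double the response
```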

I'm not sure that is correct; do you have proof of that assumption?

most chemical reactions are not linear and follow a log response, hence we have a term called the natural log

You mentioned the S shape of the film curve, but neither the toe nor the shoulder of the S-curve is used much in a correctly exposed photograph. Instead, the middle, linear portion of the film’s response curve is used. Thus my contention that film is linear in exactly the same sense that a digital CCD is.

Yes, but it's only a linear part of the curve when plotted on log-lin axes!
Gamma is not and never has been a form of encoding; it's simply a way of describing a transfer function on a logarithmic scale! And logarithmic functions do indeed compress a huge dynamic range: for example, 3 dB is a factor of 2, 30 dB is 1,000, and 100 dB is 10,000,000,000. If that's not compression I don't know what is! Logarithmic amplifiers have always been used for compression. Maybe it's the total lack of understanding of various logarithmic functions that is the problem here. Have a look at a slide rule: the linear side normally goes to 10 while the log side goes to 1,000.
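The decibel figures quoted above check out directly (power decibels, where the ratio is 10^(dB/10)):

```python
# Decibels are a log scale for power ratios: ratio = 10 ** (dB / 10).
def db_to_power_ratio(db):
    return 10 ** (db / 10)

print(db_to_power_ratio(3))    # ~1.995, i.e. 3 dB is about a factor of 2
print(db_to_power_ratio(30))   # 1000.0
print(db_to_power_ratio(100))  # 10000000000.0, the 10,000,000,000 in the post
```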

No, the problem is probably not anyone's lack of understanding of logarithms. BTW, you and I are among the very few who have ready access to a slide rule. 🙂

That's an age thing; when I first studied engineering, calculators were not allowed, and I don't think mine did logs anyway!
The decision of whether to use logs or not is a matter of our own convenience, and says nothing about the underlying process producing the numbers. So the fact that Hurter and Driffield decided long ago to use the
log of the exposure along the horizontal axis does not mean that film responds to light logarithmically. Rather, the log of exposure is used for
plotting the characteristic curve because of the great ratios involved.
True, but the main part of the S-curve is straight on those axes, so on that straight part of the curve the response is logarithmic,
which could be why they picked a log scale.
Similarly, density is a logarithmic function because it is more convenient to represent the darkness of a piece of film in logarithmic units of density, instead of the linear units of transmission or opacity (which density is the log of).
Because it gives a straight line (well, as near as it can).
I'm not convinced; maybe you can produce a reasoned scientific argument to back up your comments rather than some rather abstract ones.

I can say in all fairness that I have done so, to the best of my ability.

Mike, it was not you I was really aiming this point at, but more Timo, who seemed to think if he said something three times it made it right!
The Fraser article is oversimplified to the point of inaccuracy, and this is not in keeping with Adobe's usually excellent editorial standards. Those who are still beginning to grasp these concepts deserve better.

OK, point taken, but I do think the paper was basically correct in what it said and not the rubbish Timo is trying to make out. I don't really want to carry on with this thread, as it's like trying to convert a Jehovah's Witness to Christianity 🙂

Anyway, this has all rather moved away from what Fraser was really saying, which was: if we use 8 bits we only get 256 levels of resolution and if we use 12 bits we get 4,096, allowing us more latitude, in fact 12 stops instead of 8.
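The bit-depth point can be made concrete: in a linear encoding, each stop below clipping has half the code values of the stop above it, which is why 12-bit raw leaves far more levels for the shadows than 8-bit. A generic sketch of the argument, not code from the paper:

```python
# In linear encoding, the brightest stop uses the top half of the code
# values, the next stop half of what remains, and so on down.
def levels_per_stop(bits, stops):
    top = 2 ** bits
    counts = []
    for _ in range(stops):
        counts.append(top - top // 2)  # code values spanning this stop
        top //= 2
    return counts

print(levels_per_stop(8, 5))   # [128, 64, 32, 16, 8]
print(levels_per_stop(12, 5))  # [2048, 1024, 512, 256, 128]
```

Five stops down, an 8-bit linear file has only 8 code values left for a whole stop, while a 12-bit file still has 128.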

MOP
H
Hecate
Jan 3, 2005
On Sat, 01 Jan 2005 13:44:17 GMT, "MOP" wrote:

I have done my homework, and then some. Please have a look at http://www.aim-dtp.net/aim/techniques/linear_raw/index.htm where this issue is correctly explained.

Timo Autiokari http://www.aim-dtp.net
Tell you what, you believe in whatever you want 🙂
He does, it's just not what the rest of the world of physics and biomechanics believes 🙂



Hecate – The Real One

veni, vidi, reliqui
TA
Timo Autiokari
Jan 3, 2005
MOP wrote:

No it’s because the eye has a logarithmic characteristic,

Your understanding about the various properties of the vision is very limited.

The property of the vision called Light Adaptation is about logarithmic; this is the same as Weber's law. Vision will adapt to various illumination levels. So, similarly as we can hear a whisper when it is otherwise quiet, we are able to see the stars when it is otherwise dark (so, during the night). We do not see any stars during the day. Vision is able to operate over a very, very large range of illumination and luminance levels, over something like a 1000000000:1 range in the photopic region alone.

When the Light Adaptation is fixed we only see a range of about 200:1. The Light Adaptation moves this 200:1 range according to the prevailing illumination level, so that we see well no matter what the absolute illumination level is. This 200:1 range is colorimetricly linear, so the imaging system must provide the image data for this range so that the luminances of the reproduction and the original have a linear relation.
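The fixed ~200:1 window moved by adaptation, as described above, can be sketched as follows (the 200:1 figure is the post's; centring the window geometrically on the adapting level is an illustrative assumption of mine):

```python
# A fixed-ratio window of discriminable luminances, slid up and down
# the absolute luminance axis by light adaptation.
def visible_window(adapting_luminance, window_ratio=200.0):
    half = window_ratio ** 0.5
    return adapting_luminance / half, adapting_luminance * half

lo, hi = visible_window(100.0)  # adapted to ~100 cd/m^2
print(round(hi / lo))           # 200 -- the same ratio at any adaptation level
```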

light is light, how can it change its physical characteristic just because it's reflected.

Most often light happens to change its physical characteristics quite a lot when it is reflected; if it did not, then we would not see e.g. different colors at all. But this is not the issue here.

The issue is that the light adaptation of the vision is pretty much fixed when we view a natural scene or an image of it. So Weber's law is not in effect; we see only the 200:1 range, very, very linearly.

Timo Autiokari http://www.aim-dtp.net
B
bagal
Jan 3, 2005
I knew this was far too complicated for me

Aerticeus
M
MOP
Jan 3, 2005
Okay this is my last posting on this subject!
Timo still has not come up with any references apart from his own bloody web site!!

Snip
No it’s because the eye has a logarithmic characteristic,

Your understanding about the various properties of the vision is very limited.
Do you mean "various properties of vision" or is "various properties of the vision" some hypothesis you are proposing?

Why do you make that assumption? Because I do not agree with you?

The property of the vision called Light Adaptation is about logarithmic,
Yes that’s exactly what I said!
this is the same as the Weber’s law.

With reference to the website of the University of South Dakota http://www.usd.edu/psyc301/WebersLaw.htm
they say "Weber's Law can be applied to a variety of sensory modalities (brightness, loudness, mass, line length, etc.). The size of the Weber fraction varies across modalities but in all cases tends to be a constant within a specific modality." Note they include mass 😉 Now, Weber's law says (delta I) / I = a constant K (I don't see any log in that equation?)
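Weber's law as quoted, delta I / I = K, indeed contains no logarithm, but summing constant-fraction just-noticeable steps does produce a logarithmic sensation scale, which is how both sides of this exchange can be partly right. A sketch (the Weber fraction value is illustrative, not a measured one):

```python
import math

K = 0.08  # illustrative Weber fraction, not a measured value

# Count the just-noticeable steps between two intensities: each step
# multiplies intensity by (1 + K), so the count grows with
# log(i_high / i_low) -- a log scale emerges from a log-free law.
def jnd_steps(i_low, i_high, k=K):
    return math.log(i_high / i_low) / math.log(1 + k)

print(round(jnd_steps(1, 10)))    # steps across one decade
print(round(jnd_steps(10, 100)))  # the same count across the next decade
```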

Vision will adapt to various illumination levels. So, similarly as we can hear a whisper when
it is otherwise quiet, we are able to see the stars when it is otherwise dark (so, during the night). We do not see any stars during the day.

That’s not Weber’s law!

Vision is able to operate over a very, very large range of illumination and luminance levels, over something like a 1000000000:1 range in the photopic region alone.

That's 90 dB in power terms. Sounds a bit high; I would have thought 60 to 80 dB was nearer the correct value, but for the sake of argument I'll go along with your value.
When the Light Adaptation is fixed we only see a range of about 200:1.

I assume you mean when the iris in the eye is not working. 200:1 is slightly less than 8 stops, or 23 dB. Yes, agreed.

The Light Adaptation moves this 200:1 range according to the prevailing illumination level so that we see well no matter what the absolute illumination level is.
Agreed, and exactly how a film camera works. Film has around 8 stops latitude,
and with the iris in the lens adding a further 7 to 8 stops, giving a range of 16 stops,
or 2 to the power of 16 = 65,536, or nearly 50 dB.

This 200:1 range is colorimetricly linear so
imaging system must provide the image data for this range so that the luminances between the reproduction and the original have linear relation.
Sorry, that sentence does not make any sense whatsoever. I assume you mean colorimetrically linear?
But I really don't see what it has to do with the subject we are talking about.

light is light, how can it change its physical characteristic just because it's reflected.

Most often light happens to change its physical characteristics quite a lot when it is reflected; if it did not, then we would not see e.g. different colors at all. But this is not the issue here.

NO, the laws of optics and light do not change on reflection! Light is the most fundamental quantity in the universe; if what you are saying is correct, then Stephen Hawking, Martin Rees, Albert Einstein and Isaac Newton, to mention just a few, are wrong and you are right!
Yeah, right!

However, what is true is this: when white light, or indeed any light, is reflected by a surface, some of the wavelengths will be absorbed more than others, causing the apparent colour to change, but the fundamental characteristics of the light do not change. I thought Sir Isaac Newton had pretty much covered that in the 17th century:
http://www.newton.cam.ac.uk/newtlife.html

I think what you are getting confused with is the fact that an image viewed with transmitted light (a slide) will have a higher variation of intensity than an image viewed with reflected light (a print). This is nothing to do with the characteristics of light but with the reflectance of the image; if we printed our image on a mirror it would be exactly the same as a transmitted image.
The issue is that the light adaptation of the vision is pretty much fixed when we view a natural scene or an image of it. So Weber's law is not in effect,

Weber's law has never had anything to do with this thread; it is some spurious law *you* keep quoting, and incorrectly at that! Weber says: if you have a light of a certain intensity (I) and change that intensity by a small amount (delta I) so that the eye can just detect the change, then that ratio will be a constant (K), irrespective of the intensity of the light.

we see only the 200:1 range, very very linearly.

I dispute the linearity bit! But just to keep you happy I'll agree with you. So the eye, without the iris or light adaptation, has a range of a little less than 8 stops, and with light adaptation goes up to something like 20 stops? So the eye does indeed act pretty much exactly like a camera and film, and what you have just done is agree with Fraser, and indeed shot yourself in the foot!

If you really want to argue this again, please quote some references and explain why you have referred to such data; without this, your arguments hold no validity whatsoever. You can't just keep repeating yourself; you must back up your argument either with valid maths or with published material from respected authorities on the subject! Oh, sorry, you think they are all wrong! I'll tell you all about my perpetual motion machine if you tell me about your theories on physics!

MOP
TA
Timo Autiokari
Jan 4, 2005
MOP wrote:

I dispute the linearity bit!

You are confused, quite like Mr. Fraser is, between visual perception and the requirements of image reproduction.

What comes to the visual perception of surface reflectances please go here:
http://www.cis.rit.edu/people/faculty/montag/vandplite/pages/chap_6/ch6p10.html and note the entry: Lightness (gamma-space) 1.2, Reflectance of gray papers. So, quite linear, not logarithmic, and not in an extremely steep gamma space like e.g. the sadRGB and AdobeRGB working-spaces are.

What comes to image reproduction, our imaging path
(scene-camera-monitor) is confusing your thinking, so I will make things a little easier for you to understand: we'll inspect a much simpler image reproduction path, a normal household mirror. The reflectances of such mirrors are in the range of 35% to 70%; let us consider a mirror that has 50% reflectance.

So, the 50% reflectance of this mirror drops the surface luminances by 1 f/stop; whatever the surface luminance value is, it is dropped down to 50%.

Now, Mr. Fraser and you seem to believe that mirrors would somehow apply this "eye response curve" that Mr. Fraser writes about in the Adobe white paper in question. Do you see the image in mirrors as "very dark", with an appearance similar to Mr. Fraser's linear example image on your monitor?

In reality, mirrors behave in this regard exactly like linear image sensors behave (and exactly like a lens at f/1.4 behaves). And the image that the mirror reflects is _perfect_ for the human vision and has a _perfectly linear_ relation to the surface reflectances of the scene it is reflecting; just the surface luminances are scaled down by 50%. _No_ curve is applied by the mirror (and the same is true for lenses); they just scale the surface luminances by a constant factor, whatever the surface luminance values in the scene are.
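The mirror argument is simply the statement that a linear system multiplies every luminance by one constant factor; a minimal sketch:

```python
# A 50% mirror scales every luminance by the same constant factor;
# no tone curve is applied, so luminance ratios are preserved exactly.
def mirror(luminances, reflectance=0.50):
    return [reflectance * y for y in luminances]

scene = [1.0, 10.0, 100.0]   # arbitrary surface luminances
image = mirror(scene)
print(image)                               # [0.5, 5.0, 50.0]
print(image[2] / image[0], scene[2] / scene[0])  # same 100:1 ratio in both
```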

Mr. Fraser is working in a highly non-linear RGB working-space, so he needs to apply his curve over the linear RAW data in order to make the surface luminances on his monitor linear with respect to the original scene luminances; in other words, he is compensating for the non-linearity of his RGB working-space by applying the inverse curve over the image data. But to use the Curves dialog for this purpose is a _very_ low quality, low accuracy method, since e.g. a steep enough curve cannot be applied with the Curves tool.

Timo Autiokari http://www.aim-dtp.net
GH
Gernot Hoffmann
Jan 4, 2005
Timo is right.

IMO, Mr. B.F.'s image is underexposed. If it were correctly exposed then he would simply have had to apply a power function y = x^(1/2.2) in order to compensate the monitor tone reproduction curve. Now he needs a very steep correction, with gamma higher than 2.2.

There are two highlights in the image, level about 220. Perhaps flash reflections. For a better average exposure these highlights would have been clipped, without much harm, because the highlights are small isolated areas. Much better: no direct flash.
http://www.fho-emden.de/~hoffmann/bf03012005.pdf

All this has nothing to do with nonlinearities in human
perception.
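The compensation Gernot describes, a single power function y = x^(1/2.2) cancelling the monitor's tone curve, can be sketched on normalised 0..1 values (a generic illustration, not Mr. Fraser's workflow):

```python
# Encoding with the inverse of the monitor gamma makes the displayed
# light linear with respect to the original linear pixel value.
def encode(x, gamma=2.2):
    return x ** (1.0 / gamma)     # the compensation y = x**(1/2.2)

def monitor(v, gamma=2.2):
    return v ** gamma             # the monitor's tone reproduction curve

x = 0.18                          # a mid-grey linear value
print(round(monitor(encode(x)), 6))  # 0.18 -- the round trip is the identity
```

An underexposed linear image needs a much steeper correction than 1/2.2, which is exactly the problem identified with the example image.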

Best regards –Gernot Hoffmann
M
MOP
Jan 4, 2005
Timo, we had a term for people like you when I was at Cambridge University, "bullshitters", and the term for your argument technique was called B cubing or "bullshit baffles brains"
The URL you quote has nothing to do with this argument other than reinforcing the fact that the eye is logarithmic, and indeed our *perception of weight*, which I believe you rubbished Fraser for in an earlier posting. If you feel it *has* some relevance, please quote the formula and explain! Just quoting a URL does not constitute a valid explanation; if you were one of my students I certainly would have failed you!

I suspect what you have done is trawl through the web till you found a big and complicated formula that had some passing reference to light and human perception, quoting it to make it sound like you know what you are talking about.

I'm not sure (I may be wrong on this one), but I thought mirrors reflected nearly 94% of the light. When I worked on laser systems we used several mirrors, and lasers of quite high power (kW); now, if what you are saying is true, then every time the light was reflected from a mirror, that mirror would have to dissipate 50% of the power!
I did not notice any of our mirrors getting hot. I just checked on the web, and they say the mirrors we were using are better than 99.98%, a long way from your 50% or -3 dB.
The rest of what you say is just you regurgitating what you have already said. You keep saying gamma-space and linear in the same breath! A gamma curve is plotted on a log scale, so a straight line on such a graph is LOGARITHMIC and not LINEAR!
I got totally lost with your ramblings on mirrors; of course they don't change the light, as I said in my previous posting.
Light does not change.
Timo, I really would stop this thread; you are digging a bigger hole for yourself and rapidly losing any semblance of authority you ever had in this subject. Indeed, already you have quoted things that later *you* have proved yourself to be wrong about! If you don't understand the subject, shut up.

TA
Timo Autiokari
Jan 4, 2005
MOP wrote:

I got totally lost with your ramblings on mirrors

Yes, I can see you are confused.

Now, we seem to agree that the imaging path that mirrors provide does not require the "eye response curve" of Mr. Fraser, even though the mirror works fully and exactly in the linear domain.

Very similarly linear RAW capture does not require such "eye response curve", the imaging sensors behave similarly linearly like the mirror behaves.

So the problem Mr. Fraser does not understand is that he is looking at the linear RAW processed data in a very steep gamma RGB working-space (probably in the sadRGB or the AdobeRGB). The linear RAW data does not produce linear light on the monitor in this case, so the data has to be gamma compensated. This compensation, however, is not an "eye response curve" like Mr. Fraser believes, but a function that compensates the gamma 2.2 tonal reproduction of his RGB working-space.

About your problem understanding visual perception: it is just that, perception, and according to Stevens' power law it behaves in about a gamma 1.2 space (a power function). Perception is built into our visual system; it is always there (in eye+brain), so it must not be compensated or "taken into account" in the imaging path. The imaging path simply has to provide, in a linear relation, colorimetrically, the scene luminances for the eye, similarly to how the mirror does it. Then, when you look at the reproduction, your perception does what it does, and does so similarly no matter whether you are looking at the original scene or at the reproduction on the monitor.

You also had trouble understanding Weber's law. It is Weber's law that says that the vision adapts logarithmically to the changing illumination level; in other words, we perceive changes in the illumination level logarithmically. Weber's law also must not be compensated or "taken into account" in the imaging path. The imaging path simply has to provide, in a linear relation, colorimetrically, the scene luminances for the eye, similarly to how the mirror does it. Then, when you look at the reproduction, your perception does what it does, and does so similarly no matter whether you are looking at the original scene or at the reproduction on the monitor.

Timo Autiokari http://www.aim-dtp.net
MR
Mike Russell
Jan 4, 2005
MOP wrote:
Timo, we had a term for people like you when I was at Cambridge University, "bullshitters", and the term for your argument technique was called B cubing
or "bullshit baffles brains"

Ad hominem.

I suspect what you have done is trawl through the web till you found a big and complicated formula that had some passing reference to light and human perception, quoting it to make it sound like you know what you are talking about.

I think your suspicions are unfounded. I thought Timo's analogy was interesting and clarifying: a mirror as an example of reduction of light by a constant factor is a good everyday example of a linear function applied to an image. It's the sort of example that makes all this comprehensible to people like Artie 🙂

I'm not sure (I may be wrong on this one), but I thought mirrors reflected nearly 94% of the light. When I worked on laser systems we used several mirrors, and lasers of quite high power (kW)

Front surface mirrors are normally used for laser systems, and they are very efficient indeed.

now if what you are saying is
true then every time the light was reflected from a mirror that mirror would have to dissipate 50% of the power!

No. Ordinary mirrors, such as we use in our homes, are back coated for the sake of durability, and the light goes through the glass twice, hence Timo's 35 to 70% figure, which I have no reason to question.

Timo continues to address the technical issue, in spite of a rash of flames and ad hominem flailing (to be fair, this is not just from you, MOP). He does this in a way that people lurking on this thread can understand. Increasingly, this sort of fireproof attitude is necessary for any discussion on Usenet.

For this, Timo, kudos. I suggest others emulate him. If your technical point is valid, it should stand on its own, without the need to insult others.


Mike Russell
www.curvemeister.com
www.geigy.2y.net
GH
Gernot Hoffmann
Jan 4, 2005
Mike,

thanks for your posts.
Timo and I are not members of a 'party'. We disagree often. But here I'm really embarrassed about people who just attack Timo, and who don't publish any programs or tutorials which might be helpful for a better understanding of image processing.

Best regards –Gernot Hoffmann
M
MOP
Jan 4, 2005
I'm not really interested in this thread anymore; I think you lost your credibility when you said
"Two objects *do* appear to weigh twice as much as one". Timo's reference
http://www.cis.rit.edu/people/faculty/montag/vandplite/pages/chap_6/ch6p10.html seems to indicate you are wrong, even quoting an empirical exponent value of 1.45.
I believe my technical points are indeed valid and do stand on their own. As for the insults, I'm sorry, but they are made in frustration that Timo keeps saying the same thing over and over again without being able to prove it either by reference to technical papers or by mathematical analysis. When he did indeed produce a reference to a technical paper, it seemed to be totally unrelated, and he did not actually explain why he was quoting it.

He should be quite prepared to be shot down, as he has rubbished a paper written by someone who is regarded as an authority on the subject, so he should be able to fully justify his accusations.
For example, in a previous posting I made reference to Weber's law, saying it did not contain any non-linear functions; rather than answer that point, Timo decided to delete that part of my argument and start all over again, saying that it was a non-linear function. Quite a common technique in an argument, but also not very scientific!

Anyway, as I said, I have lost interest; the holidays are over and I have to concentrate on my business again, so you won't be hearing from me again on this subject. Thanks to all the contributors for a lively debate, and I hope no one has taken any of the comments to heart. Happy New Year to everyone! MOP

—————-Snip—————-
MR
Mike Russell
Jan 4, 2005
MOP wrote:
I’m not really interested in this thread anymore, I think you lost your credibility when you said
" Two objects *do* appear to weigh twice as much as one"

I’ll stand by that.

Timo's reference
http://www.cis.rit.edu/people/faculty/montag/vandplite/pages/chap_6/ch6p10.html
seems to indicate you are wrong, even quoting an empirical exponent value of 1.45

LOL. This paper is an historical curiosity, right up there with phlogiston, quoting exponent values for things like smell, tactual roughness of emery paper, warm and cold discomfort levels, etc. The values were measured by having the subject adjust the brightness of a light or sound source to match another sensation, such as the amount of discomfort he felt from cold.

Stevens was a grand figure in the field of "psychophysics", an early, and abortive, attempt to characterize physical sensations of all kinds in quantitative terms. At its basis is the thesis that all nerve sensation is essentially converted environmental energy.

I believe my technical points are indeed valid and do stand on their own, as for the insults, I’m sorry but they are made in frustration that Timo keeps saying the same thing over and over again without being able to prove them either by reference to technical papers or a mathematical analysis. when he did indeed produce a reference to a technical paper it seems to be totally unrelated, and he did not actually explain why he is quoting it.

I don't find that to be the case; opinions differ. What I object to in your and others' responses to Timo is the ad hominem attacks. They add nothing, and the excuse that they are made in frustration is no compensation for the general degradation of the tone of the discussion.

He should be quite prepared to be shot down as he has rubbished a paper written by some one who is regarded as a authority on the subject, so should be able to fully justify his accusations.

Bruce is well able to defend himself. He testified against Apple in the ColorSync suit and was regarded as a pariah for some length of time by anyone whose income derived from Macintosh-related products.

For example in a previous posting I made reference to Weber’s law’s saying it did not contain any non linear functions,
rather than answer that point Timo decided to delete that part of my argument and start all over again saying that it was a non-linear function. quite a common technique in an argument but also not very scientific!

This is typical for Usenet: delete the parts that you don't have juicy responses for. 🙂

Anyway as I said I have lost interest, the holidays are over and I have to again concentrate on my business. so you won’t be hearing from me again on this subject.

Uh huh.

thanks to all the contributors for a
lively debate and I hope no one has taken any of the comment to heart.

Fair enough.

Happy New Year to everyone!

We end in agreement.

MOP


Mike Russell
www.curvemeister.com
www.geigy.2y.net
CC
Chris Cox
Jan 10, 2005
In article , Timo Autiokari
wrote:

On Sat, 01 Jan 2005 13:34:10 GMT, "MOP" wrote:
When we increase the intensity of the light through our lens in the darkroom by a stop, or twice as much light, it does not seem that the image is twice as bright.

You know that’s wrong – it’s been explained to you only a few dozen times….

That is so, because the vision will adapt to the changes in lightness level. I think I have explained this already some three times.

And your explanations have been debunked every single time.

Vision, however, does not adapt to changes in the surface reflectances that we perceive in a natural scene or an image of it. Indeed, that would be a very irritating property of vision: whatever we were looking at would always appear seriously flat and washed out.

The analogy with hearing is not very straightforward since the audio stimuli change in time, all the time.

So does human vision – look into it sometime πŸ˜‰

Chris
PR
Paco Rosso
Jan 14, 2005
It is a commonly misunderstood concept that gamma is related to the nonlinear performance of screens; it actually seems to be related to the need for a more "perceptual" coding of the signal to match the eye’s performance.

It looks like the author is trying to explain that linear image data would not appear properly to the human vision *because* the Light Adaptation of the human vision is non-linear. This is nonsense.

I’m afraid it is not nonsense, but a misunderstanding.

And gamma-compressed image data appears properly to our vision when the monitor is gamma-calibrated to that gamma space (or the image data is shown in such a gamma-compressed working space of a color-managed application).

The gamma of the screen is the relevant one. The gamma of the image must conform to the output device, not the output device to a theoretical number. Things are what they are, not what theory says.
PR
Paco Rosso
Jan 14, 2005
Timo is right.

IMO, Mr. B.F.’s image is underexposed. If it were correctly exposed then he had simply to apply a power function y = x^(1/2.2) in order to compensate for the monitor tone reproduction curve.

When you get an image with linear gamma, the image seems to be underexposed. You have the whole dynamic range of the sensor, from 0 to 4096 levels, and use only what the sensor "saw". The post-processing of the raw data produces the scaling of the measured light to the total numeric range of the digital output.
Actually, what you are talking about is changing the linear data to nonlinear data with a gamma of 2.2, not recovering an underexposed image.
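The distinction drawn above can be made concrete with a short sketch (my own illustration, not code from the thread). A gamma encode lifts the midtones of normalized linear data, while an exposure increase is a linear multiplication; the 12-bit scale and the half-scale sample value are assumptions for the example.

```python
# Sketch: gamma encoding vs. exposure, on a hypothetical 12-bit raw value.

raw = 2048                          # assumed sample: roughly half of full scale
x = raw / 4095.0                    # normalize linear data to 0..1

# Gamma encoding: a tone-curve change. Half-scale linear maps to about
# 0.73, so midtones look much lighter, but full scale stays full scale.
gamma_encoded = x ** (1 / 2.2)

# One stop more exposure: plain linear scaling (clipped at full scale).
exposure_doubled = min(2 * x, 1.0)

print(gamma_encoded, exposure_doubled)
```

The two operations are not interchangeable: gamma redistributes tones for display, whereas exposure scales the captured light itself, which is why applying a 2.2 gamma is not a way to "recover" an underexposed image.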
S
Sparticle
Jan 30, 2005
On Tue, 04 Jan 2005 18:22:27 GMT, "Mike Russell" wrote:

MOP wrote:
Timo, we had a term for people like you when I was at Cambridge University, "bullshitters", and the term for your argument technique was called B-cubing, or "bullshit baffles brains".

Ad hominum.

correction – Ad hominem

Ad hominum.= a black metal band
http://www.mourningtheancient.com/adhom.htm

sparticle

<snip>
