"16-bit" mode.

Posted By: JPS
Nov 16, 2004
Views: 4157
Replies: 123
Status: Closed
First I thought it was strange when I found out that Photoshop's 16-bit mode was actually 15-bit. Then, I thought it was stranger still when I found out there was one additional level (32768; it was explained that this made certain types of math faster). Now, this evening, I was writing an applet to extract RAW data from the uncompressed output of the DNG converter, and decided to create a raw greyscale bitmap, 256*256, containing every possible 16-bit value, to see what PS would do with them. I assumed that PS would map 0 and 1 to 0, 2 and 3 to 1, 4 and 5 to 2, etc., but it does not. This is a sample of what it does:

real data    16-bit PS value

0 0
1 0
2 0
3 3
4 3
5 3
6 3
7 3
8 3
9 5

2044 19128
2045 19131
2046 19131
2047 19131
2048 19131
2049 19131
2050 19131
2051 19134
2052 19134

4085 32762
4086 32762
4087 32766
4088 32766
4089 32766
4090 32766
4091 32766
4092 32766
4093 32768
4094 32768
4095 32768

Does anyone have any idea why they posterize the data even more than the 15-bit limitation?

This is basically about 13.5 bits of level-resolution.


<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><

Ken Weitzel
Nov 16, 2004
wrote:
First I thought it was strange when I found out that Photoshop's 16-bit mode was actually 15-bit. Then, I thought it was stranger still when I found out there was one additional level (32768; it was explained that this made certain types of math faster). Now, this evening, I was writing an applet to extract RAW data from the uncompressed output of the DNG converter, and decided to create a raw greyscale bitmap, 256*256, containing every possible 16-bit value, to see what PS would do with them. I assumed that PS would map 0 and 1 to 0, 2 and 3 to 1, 4 and 5 to 2, etc., but it does not. This is a sample of what it does:
real data    16-bit PS value

0 0
1 0
2 0
3 3
4 3
5 3
6 3
7 3
8 3
9 5

2044 19128
2045 19131
2046 19131
2047 19131
2048 19131
2049 19131
2050 19131
2051 19134
2052 19134

4085 32762
4086 32762
4087 32766
4088 32766
4089 32766
4090 32766
4091 32766
4092 32766
4093 32768
4094 32768
4095 32768

Does anyone have any idea why they posterize the data even more than the 15-bit limitation?

This is basically about 13.5 bits of level-resolution.

Hi John…

Just quickly off the top of my head… is it possible that they've used a 1-based array?

Take care.

Ken
usenet
Nov 16, 2004
Kibo informs me that stated that:

16-bit PS values
[…]
0 0
1 0
[…]
2052 19134

4085 32762
[…]
4095 32768

Does anyone have any idea why they posterize the data even more than the 15-bit limitation?

Off the top of my head, my #1 guess would be that it’s a colour-space conversion. (There are plenty of other possibilities, but that seems by far the most likely.)

Questions:
What format are you using to store your artificial data? What colour-space have you got PS configured to work in? What rules have you told PS to use when loading a file with a different model to the default, or no model at all?


W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est —^—-^————————————————— ————
JPS
Nov 16, 2004
In message ,
wrote:

Kibo informs me that stated that:

16-bit PS values
[…]
0 0
1 0
[…]
2052 19134

4085 32762
[…]
4095 32768

Sorry; I messed up on that chart. I had 12-bit values in my head when I wrote it. The numbers in the "real data" column should have started at zero, around 32768, and around 65535, like so:

real data    16-bit PS value

0 0
1 0
2 0
3 3
4 3
5 3
6 3
7 3
8 3
9 5

32764 19128
32765 19131
32766 19131
32767 19131
32768 19131
32769 19131
32770 19131
32771 19134
32772 19134

65525 32762
65526 32762
65527 32766
65528 32766
65529 32766
65530 32766
65531 32766
65532 32766
65533 32768
65534 32768
65535 32768

Does anyone have any idea why they posterize the data even more than the 15-bit limitation?

Off the top of my head, my #1 guess would be that it’s a colour-space conversion.

Can't be in this case. This is "open as .raw".

Even if there were a color space, I can’t think of any reason to posterize in it. Color space is about things like gamma, saturation, combinations of primary colors, not about quantization.

(There are plenty of other possibilities, but that seems by far the most likely.)

Questions:
What format are you using to store your artificial data? What colour-space have you got PS configured to work in? What rules have you told PS to use when loading a file with a different model to the default, or no model at all?

.raw

The things you are asking about would affect curves, not quantization. —

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
JPS
Nov 16, 2004
In message ,
wrote:

What format are you using to store your artificial data?

A binary file of 2-byte unsigned integers from 0 to 65535, loaded as .raw.


<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
MitchAlsup
Nov 16, 2004
wrote in message news:…
First I thought it was strange when I found out that Photoshop's 16-bit mode was actually 15-bit. Then, I thought it was stranger still when I found out there was one additional level (32768; it was explained that this made certain types of math faster). Now, this evening, I was writing an applet to extract RAW data from the uncompressed output of the DNG converter, and decided to create a raw greyscale bitmap, 256*256, containing every possible 16-bit value, to see what PS would do with them. I assumed that PS would map 0 and 1 to 0, 2 and 3 to 1, 4 and 5 to 2, etc., but it does not. This is a sample of what it does:
real data    16-bit PS value

0 0
<snip>
9 5

2044 19128
<snip>
2052 19134

4085 32762
<snip>
4095 32768

Does anyone have any idea why they posterize the data even more than the 15-bit limitation?

This looks like somebody is applying gamma correction to the RAW data: output = input**2.2 / some-fixed-scaling

Mitch
Mike Engles
Nov 16, 2004
wrote:
First I thought it was strange when I found out that Photoshop's 16-bit mode was actually 15-bit. Then, I thought it was stranger still when I found out there was one additional level (32768; it was explained that this made certain types of math faster). Now, this evening, I was writing an applet to extract RAW data from the uncompressed output of the DNG converter, and decided to create a raw greyscale bitmap, 256*256, containing every possible 16-bit value, to see what PS would do with them. I assumed that PS would map 0 and 1 to 0, 2 and 3 to 1, 4 and 5 to 2, etc., but it does not. This is a sample of what it does:
real data    16-bit PS value

0 0
1 0
2 0
3 3
4 3
5 3
6 3
7 3
8 3
9 5

2044 19128
2045 19131
2046 19131
2047 19131
2048 19131
2049 19131
2050 19131
2051 19134
2052 19134

4085 32762
4086 32762
4087 32766
4088 32766
4089 32766
4090 32766
4091 32766
4092 32766
4093 32768
4094 32768
4095 32768

Does anyone have any idea why they posterize the data even more than the 15-bit limitation?

This is basically about 13.5 bits of level-resolution.


<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><

Hello

There was a discussion about this kind of thing on the scanner group, comp.periphs.scanner. Post your observation there. You might get an explanation.

Mike Engles
usenet
Nov 17, 2004
Kibo informs me that stated that:

In message ,
wrote:
[…]

Sorry; I messed up on that chart. I had 12-bit values in my head when I wrote it. The numbers in the "real data" column should have started at zero, around 32768, and around 65535, like so:
real data    16-bit PS value
0 0
9 5
32772 19134
65535 32768
(etc)

Okay, that seems a much better choice of data set. 😉

Does anyone have any idea why they posterize the data even more than the 15-bit limitation?

Off the top of my head, my #1 guess would be that it’s a colour-space conversion.

Can’t be in this case. This is "open as .raw"

But what kind of .raw? – You obviously don’t mean a Canon (etc) camera RAW file. ISTR that PS has some sort of generic, simple binary image format. Is that what you mean?

Even if there were a color space,

Unless you’ve disabled it, PS colour-manages by default, & will convert to your working space, or bitch about your input file, etc. If it were me, I’d check the colour-management settings & ensure that they’re set to ask what to do when opening a file that doesn’t have an embedded colour-space, or one that doesn’t match your working space.

I can’t think of any reason to
posterize in it. Color space is about things like gamma, saturation, combinations of primary colors, not about quantization.

I don't know if this is what's happening to you, but if your working space is (for example) sRGB, & your image file is maxing out the absolute limits of what PS can process (which it does, because that's the whole point of your test!), then I wouldn't be greatly surprised if PS rescales/transforms that data to fit within the working colour-space. Think about it from the point of view of the PS programmers – your test image is the very definition of a pathological program input, & it has to do *something* to shrink the input gamut into something that won't choke the image-processing functions.

(There are plenty of other possibilities, but that seems by far the most likely.)

Questions:
What format are you using to store your artificial data? What colour-space have you got PS configured to work in? What rules have you told PS to use when loading a file with a different model to the default, or no model at all?

.raw

The things you are asking about would affect curves, not quantization.

Same difference in this context. Your problem, as you've stated it, is that you are inputting a data set containing the maximum & minimum possible data values allowable (we assume, at least) by the file format. If that data is transformed in any way whatsoever (offsets, scaling, curves, anything at all), you'll lose the 1:1 correspondence that you're expecting to see.
What I suspect is that it’ll turn out that PS doesn’t /really/ permit pixels to stretch from the -2^15 to +2^15 range implied by the signed 16 bit/channel/pixel data format, & that at minimum, it transforms any data that pathological into some internal pseudo-colour-space that’s bigger than any of the standard spaces.


W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est —^—-^————————————————— ————
usenet
Nov 17, 2004
Kibo informs me that stated that:

In message ,
wrote:

What format are you using to store your artificial data?

A binary file of 2-byte unsigned integers from 0 to 65535, loaded as .raw.

Got any data on that format? You’ve gotten me interested enough in this question to want to try a few experiments myself. 😉


W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est —^—-^————————————————— ————
JPS
Nov 17, 2004
In message ,
wrote:

Kibo informs me that stated that:

In message ,
wrote:

What format are you using to store your artificial data?

A binary file of 2-byte unsigned integers from 0 to 65535, loaded as .raw.

Got any data on that format? You’ve gotten me interested enough in this question to want to try a few experiments myself. 😉

I just wrote an applet in C that stripped the RAW data out of uncompressed .dng files and multiplied them by 16 so that there wouldn't be any quantization (the .dng files use the numbers 0 to 4095 in 16-bit space). I modified it slightly to create a string of 65536 2-byte integers, 0 to 65535, and loaded it into PS as a 256*256 16-bit greyscale bitmap. I placed the "Info" cursor over it, and got those values.
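
For reference, a minimal C sketch of a generator along these lines (the file name is an assumption; the little-endian byte order matches the "IBM/PC" load option described later in the thread):

/* Sketch: write the 65536 values 0..65535 as little-endian unsigned
   16-bit integers, for loading in Photoshop via "open as .raw" as a
   256*256, 1-channel, 16-bit image with no header. The file name
   "ramp.raw" is an assumption. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    FILE *f = fopen("ramp.raw", "wb");
    if (f == NULL)
        return 1;
    for (uint32_t v = 0; v < 65536; v++) {
        uint8_t bytes[2] = { (uint8_t)(v & 0xFF), (uint8_t)(v >> 8) };
        fwrite(bytes, 1, 2, f);      /* low byte first: IBM/PC order */
    }
    fclose(f);
    return 0;
}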


<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
JPS
Nov 17, 2004
In message ,
(Mitch Alsup) wrote:

This looks like somebody is applying gamma correction to the RAW data: output = input**2.2 / some-fixed-scaling

No; it's totally linear. 6 sequential 16-bit values become one "PS 16-bit" value (except for the highest and lowest values, which are 3 to 1).

I think it's time for Adobe to realize that people have fast machines, and that they need to stop cutting corners on quality for speed. —

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
Martin Chiselwitt
Nov 17, 2004
what the hell are you talking about?!!! 🙁
username
Nov 17, 2004
wrote:

In message ,
real data    16-bit PS value

0 0
1 0
2 0
3 3
4 3
5 3
6 3
7 3
8 3
9 5

32764 19128
32765 19131
32766 19131
32767 19131
32768 19131
32769 19131
32770 19131
32771 19134
32772 19134

65525 32762
65526 32762
65527 32766
65528 32766
65529 32766
65530 32766
65531 32766
65532 32766
65533 32768
65534 32768
65535 32768
In my testing of photoshop on real images, I find the following equation:

PS = int(IP/2),

where PS = the photoshop "16-bit" value and
IP = the 16-bit output from ImagesPlus.

Roger
Bart van der Wolf
Nov 17, 2004
"Mike Engles" wrote in message
SNIP
There was a discussion about this kind of thing on the scanner group, comp.periphs.scanner. Post your observation there. You might get an explanation.

You probably are referring to these:
1. Chris Cox's 'explanation' of the internal format of 15bits+1:
<http://groups.google.nl/groups?selm=190520021813529214%25ccox%40mindspring.com&output=gplain>
2. The way Photoshop converts (16-bits to 8-bits):
<http://groups.google.com/groups?selm=060620042031159126%25ccox%40mindspring.com&output=gplain>

Bart
eawckyegcy
Nov 17, 2004
wrote:

(Mitch Alsup) wrote:

This looks like somebody is applying gamma correction to the RAW data: output = input**2.2 / some-fixed-scaling

No; it’s totally linear.

From your own table:

32764/19128 = 1.713

65525/32762 = 2.000

So it isn't exactly linear. From these two points (essentially all you posted) it looks like there is a gamma-esque hump in the transfer function. You should fetch gnuplot or a similar plotting tool and plot the entire "real" vs "photoslop" correspondence you obtained.
JPS
Nov 17, 2004
In message ,
wrote:

wrote:

(Mitch Alsup) wrote:

This looks like somebody is applying gamma correction to the RAW data: output = input**2.2 / some-fixed-scaling

No; it’s totally linear.

From your own table:

32764/19128 = 1.713

65525/32762 = 2.000

So it isn't exactly linear. From these two points (essentially all you posted) it looks like there is a gamma-esque hump in the transfer function. You should fetch gnuplot or a similar plotting tool and plot the entire "real" vs "photoslop" correspondence you obtained.

Yes, you're right; it isn't perfectly linear, due to color management. It is almost linear (gamma close to 1), but that isn't the most disturbing problem. The problem is that the data is severely posterized. Even with color management disabled, this only happens with greyscale; 16-bit RGB bitmaps transfer very simply: psvalue = int((inputvalue+1)/2). Greyscale only uses 1/3 as many levels as RGB; 6 input values become 1 PS value, as opposed to 2 becoming 1. —

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
JPS
Nov 17, 2004
In message <419b5fc7$0$14941$>,
"Bart van der Wolf" wrote:

"Mike Engles" wrote in message
SNIP
There was a discussion about this kind of thing on the scanner group, comp.periphs.scanner. Post your observation there. You might get an explanation.

You probably are referring to these:
1. Chris Cox's 'explanation' of the internal format of 15bits+1:
<http://groups.google.nl/groups?selm=190520021813529214%25ccox%40mindspring.com&output=gplain>
2. The way Photoshop converts (16-bits to 8-bits):
<http://groups.google.com/groups?selm=060620042031159126%25ccox%40mindspring.com&output=gplain>

The slight gamma turned out to be from color management, but what I am talking about appears to be slightly different from what is discussed in the 15-bit + 1, or 16->8-bit discussions. 16-bit greyscale is highly posterized. With color management turned off, the count starts:

0 0 0 1 1 1 1 1 1 3 3 3 3 3 3 …

Where is 2? You can’t be skipping numbers while others are being repeated multiple times; that makes no sense whatsoever.

The sad fact of the matter is that not only is PS' "16-bit" RGB only 15-bit, but its "16-bit" greyscale has even fewer effective bits. Even if you perform a bicubic upsampling, you can't get the missing values. —

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
JPS
Nov 17, 2004
In message ,
"Roger N. Clark (change username to rnclark)" wrote:

In my testing of photoshop on real images, I find the following equation:

PS = int(IP/2),

Are you sure it isn’t "PS = int((IP+1)/2)"?

where PS = the photoshop "16-bit" value and
IP = the 16-bit output from ImagesPlus.



<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
username
Nov 18, 2004
wrote:

In message <419b5fc7$0$14941$>,
"Bart van der Wolf" wrote:

"Mike Engles" wrote in message
SNIP

There was a discussion about this kind of thing on the scanner group, comp.periphs.scanner. Post your observation there. You might get an explanation.

You probably are referring to these:
1. Chris Cox's 'explanation' of the internal format of 15bits+1:
<http://groups.google.nl/groups?selm=190520021813529214%25ccox%40mindspring.com&output=gplain>
2. The way Photoshop converts (16-bits to 8-bits):
<http://groups.google.com/groups?selm=060620042031159126%25ccox%40mindspring.com&output=gplain>

The slight gamma turned out to be from color management, but what I am talking about appears to be slightly different from what is discussed in the 15-bit + 1, or 16->8-bit discussions. 16-bit greyscale is highly posterized. With color management turned off, the count starts:
0 0 0 1 1 1 1 1 1 3 3 3 3 3 3 …

Where is 2? You can’t be skipping numbers while others are being repeated multiple times; that makes no sense whatsoever.
The sad fact of the matter is that not only is PS' "16-bit" RGB only 15-bit, but its "16-bit" greyscale has even fewer effective bits. Even if you perform a bicubic upsampling, you can't get the missing values.

John:
Can you send me your 256*256 raw file? I'll take a look at it with my tools, including some custom image-processing Unix programs, and compare to Photoshop and ImagesPlus.

Roger
toby
Nov 18, 2004
wrote in message news:…

The sad fact of the matter is that not only is PS' "16-bit" RGB only 15-bit, but its "16-bit" greyscale has even fewer effective bits. Even if you perform a bicubic upsampling, you can't get the missing values.

That’s kind of obvious once you accept that the internal representation is only 15 bits.
JPS
Nov 18, 2004
In message ,
(Toby Thain) wrote:

wrote in message news:…

The sad fact of the matter is that not only is PS' "16-bit" RGB only 15-bit, but its "16-bit" greyscale has even fewer effective bits. Even if you perform a bicubic upsampling, you can't get the missing values.

That’s kind of obvious once you accept that the internal representation is only 15 bits.

It's not obvious, though, when the missing values are 15-bit values. Rather than 32,769 values, there are only about 11,000 values (six 16-bit inputs collapse to each output, 65536/6 ≈ 10,900, which is about 13.4 bits). —

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
davem
Nov 19, 2004
writes:

It’s not obvious, though, when the missing values are 15-bit values. Rather than 32,769 values, there are only about 11,000 values.

That is indeed very strange. I can’t think of any reason the greyscale representation should be any different from one channel of an RGB image.

Dave
JPS
Nov 19, 2004
In message <cnjtnt$ll3$>,
(Dave Martindale) wrote:

writes:

It’s not obvious, though, when the missing values are 15-bit values. Rather than 32,769 values, there are only about 11,000 values.

That is indeed very strange. I can’t think of any reason the greyscale representation should be any different from one channel of an RGB image.

Me neither. Just one of a series of many disappointments with PS. This happens, I’ve found, with any conversion to greyscale in "16-bit" mode. The color management is different for color and greyscale, too, so if you switch back and forth there may be a continual loss of accuracy.

Someone should tell adobe that we have fast machines now and can work with accurate data.


<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
eawckyegcy
Nov 19, 2004
wrote:

"Roger N. Clark (change username to rnclark)" wrote:

In my testing of photoshop on real images, I find the following equation:

PS = int(IP/2),

Are you sure it isn’t "PS = int((IP+1)/2)"?

A minor danger is that if IP is 0xffff, the above will map to 0 if done in 16b arithmetic. "IP >> 1" (R. Clark's equation) is a single instruction that doesn't suffer from overflow problems. But if wider arithmetic is allowed, then:

PS = rint(IP*(32768.0/65535.0))

or its integer equivalent may be used.

Note that PhotoSlop is "just an image editor" (a large, complicated, extensible, useful one to be sure), so precise stuff like you are demanding will probably never make it high on the priority list at Adobe, where most of their users are graphic artists, not by-the-bit technician types. Can MaximDL and similar handle non-astronomical imagery? Its internals are probably a lot more formal (linear images, etc).
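
For comparison, a small C sketch of the three candidate mappings discussed here (Clark's IP >> 1, the rounded int((IP+1)/2), and the exact rint() rescale above), done in 32-bit arithmetic so the 0xffff case is safe; the sample values are arbitrary:

#include <math.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint16_t samples[] = { 0, 1, 2, 32767, 32768, 65534, 65535 };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        uint32_t ip = samples[i];            /* widen before adding 1 */
        uint32_t half  = ip >> 1;            /* int(IP/2)             */
        uint32_t round = (ip + 1) >> 1;      /* int((IP+1)/2), safe here */
        uint32_t exact = (uint32_t)rint(ip * (32768.0 / 65535.0));
        printf("%5u -> %5u %5u %5u\n", ip, half, round, exact);
    }
    return 0;   /* e.g. 65535 -> 32767 32768 32768 */
}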
toby
Nov 19, 2004
wrote in message news:…
In message <cnjtnt$ll3$>,
(Dave Martindale) wrote:

writes:

It’s not obvious, though, when the missing values are 15-bit values. Rather than 32,769 values, there are only about 11,000 values.

That is indeed very strange. I can’t think of any reason the greyscale representation should be any different from one channel of an RGB image.

Me neither. Just one of a series of many disappointments with PS. This happens, I’ve found, with any conversion to greyscale in "16-bit" mode. The color management is different for color and greyscale, too, so if you switch back and forth there may be a continual loss of accuracy.
Someone should tell adobe that we have fast machines now and can work with accurate data.

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

–Toby
Harvey
Nov 19, 2004
"Toby Thain" wrote in message
wrote in message
news:…
In message <cnjtnt$ll3$>,
(Dave Martindale) wrote:

writes:

It’s not obvious, though, when the missing values are 15-bit values. Rather than 32,769 values, there are only about 11,000 values.

That is indeed very strange. I can’t think of any reason the greyscale representation should be any different from one channel of an RGB image.

Me neither. Just one of a series of many disappointments with PS. This happens, I’ve found, with any conversion to greyscale in "16-bit" mode. The color management is different for color and greyscale, too, so if you switch back and forth there may be a continual loss of accuracy.
Someone should tell adobe that we have fast machines now and can work with accurate data.

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

–Toby

15 bits gives you 5 bits each for R, G & B. There's no quality advantage between 15 and 16 bits for RGB handling.
JPS
Nov 20, 2004
In message <xovnd.646$>,
"Harvey" wrote:

15 bits gives you 5 bits each for R, G & B. There's no quality advantage between 15 and 16 bits for RGB handling.

We’re talking "bits per color channel"; not "bits per pixel".

In 16 bits per pixel, if the extra bit is used for green, it has a big image quality advantage. Green is the most significant channel for luminance.


<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
davem
Nov 20, 2004
(Toby Thain) writes:

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

When your source data was probably from a 12-bit ADC, or maybe 14-bit, working with 15 significant bits may indeed be completely adequate. And there *are* advantages to using a representation that has some headroom for "whiter than white" without overflow, and where the representation for "1.0" is a power of 2.

But the couple of most recent comments in this thread are about the fact that Photoshop’s greyscale doesn’t even seem to have 15 significant bits, unlike the RGB representation.

Dave
Mike Engles
Nov 20, 2004
wrote:
wrote:

"Roger N. Clark (change username to rnclark)" wrote:

In my testing of photoshop on real images, I find the following equation:

PS = int(IP/2),

Are you sure it isn’t "PS = int((IP+1)/2)"?

A minor danger is that if IP is 0xffff, the above will map to 0 if done in 16b arithmetic. "IP >> 1" (R. Clark's equation) is a single instruction that doesn't suffer from overflow problems. But if wider arithmetic is allowed, then:

PS = rint(IP*(32768.0/65535.0))

or its integer equivalent may be used.

Note that PhotoSlop is "just an image editor" (a large, complicated, extensible, useful one to be sure), so precise stuff like you are demanding will probably never make it high on the priority list at Adobe, where most of their users are graphic artists, not by-the-bit technician types. Can MaximDL and similar handle non-astronomical imagery? Its internals are probably a lot more formal (linear images, etc).

Hello

Is astronomical image processing done in a linear space or a gamma space?

Mike Engles
Kennedy McEwen
Nov 21, 2004
In article , Mike Engles
writes
Hello

Is astronomical image processing done in a linear space or a gamma space?
Normally in linear space, Mike – because you are primarily interested in physical quantities and their quantitative results.

After that has been achieved, the representation for human consumption is created – either from the source data or the processed data depending on the objective required.

Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
Chris Cox
Nov 21, 2004
In article
wrote:

In message ,
(Mitch Alsup) wrote:

This looks like somebody is applying gamma correction to the RAW data: output = input**2.2 / some-fixed-scaling

No; it's totally linear. 6 sequential 16-bit values become one "PS 16-bit" value (except for the highest and lowest values, which are 3 to 1).

I think it's time for Adobe to realize that people have fast machines, and that they need to stop cutting corners on quality for speed.

We don’t.

Without your original data to test, I can’t even guess what went wrong.

But the 0..32768 representation will have to remain for the foreseeable future (unless you have a modern processor that does integer divides just as fast as it does integer shifts).

Chris
Chris Cox
Nov 21, 2004
In article , Toby
Thain wrote:

wrote in message
news:…
In message <cnjtnt$ll3$>,
(Dave Martindale) wrote:

writes:

It’s not obvious, though, when the missing values are 15-bit values. Rather than 32,769 values, there are only about 11,000 values.

That is indeed very strange. I can’t think of any reason the greyscale representation should be any different from one channel of an RGB image.

Me neither. Just one of a series of many disappointments with PS. This happens, I’ve found, with any conversion to greyscale in "16-bit" mode. The color management is different for color and greyscale, too, so if you switch back and forth there may be a continual loss of accuracy.
Someone should tell adobe that we have fast machines now and can work with accurate data.

Photoshop is accurate, as far as we know.
Without your original data to test, I don’t know what might have gone wrong. But other people who are picky about their bits don’t have any problems with Photoshop.

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

Which is correct (for 0..32768 representation versus 0..65535 representation).

Chris
Chris Cox
Nov 21, 2004
In article <cnmhjl$abm$>, Dave Martindale
wrote:

(Toby Thain) writes:

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

When your source data was probably from a 12-bit ADC, or maybe 14-bit, working with 15 significant bits may indeed be completely adequate. And there *are* advantages to using a representation that has some headroom for "whiter than white" without overflow, and where the representation for "1.0" is a power of 2.

But the couple of most recent comments in this thread are about the fact that Photoshop’s greyscale doesn’t even seem to have 15 significant bits, unlike the RGB representation.

The color mode doesn’t matter – it’s still 16 bit data (0..32768).

Chris
JPS
Nov 21, 2004
In message <201120041833424481%>,
Chris Cox wrote:

Without your original data to test, I can’t even guess what went wrong.

You don’t need my original data.

Any image in "16 bit greyscale" mode has all kinds of numbers between 0 and 32768 missing, values that are not possible no matter how much you blur or interpolate. "16 bit greyscale" is about 13.5-bit greyscale. —

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
Ken Weitzel
Nov 21, 2004
Chris Cox wrote:

In article <cnmhjl$abm$>, Dave Martindale
wrote:

(Toby Thain) writes:

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

When your source data was probably from a 12-bit ADC, or maybe 14-bit, working with 15 significant bits may indeed be completely adequate. And there *are* advantages to using a representation that has some headroom for "whiter than white" without overflow, and where the representation for "1.0" is a power of 2.

But the couple of most recent comments in this thread are about the fact that Photoshop’s greyscale doesn’t even seem to have 15 significant bits, unlike the RGB representation.

The color mode doesn’t matter – it’s still 16 bit data (0..32768).

Hi Chris…

0..32767 or 1..32768

You just can’t have it both ways 🙂

Ken

Chris
Matt Austern
Nov 21, 2004
Chris Cox writes:

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

Which is correct (for 0..32768 representation versus 0..65535 representation).

Perhaps this is offtopic, and perhaps you can’t answer it without revealing proprietary information, but can you explain why 15-bit computation should be so much faster than 16-bit? (If there’s a publication somewhere you could point me to, that would be great.) I’ve thought about this for a few minutes, I haven’t been able to think of an obvious reason, and now I’m curious.

Feel free to email me if you think this wouldn’t be interesting to anyone else.
Ken Weitzel
Nov 21, 2004
Matt Austern wrote:

Chris Cox writes:

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

Which is correct (for 0..32768 representation versus 0..65535 representation).

Perhaps this is offtopic, and perhaps you can’t answer it without revealing proprietary information, but can you explain why 15-bit computation should be so much faster than 16-bit? (If there’s a publication somewhere you could point me to, that would be great.) I’ve thought about this for a few minutes, I haven’t been able to think of an obvious reason, and now I’m curious.

Feel free to email me if you think this wouldn’t be interesting to anyone else.

Hi Matt…

Nor can I see even the slightest difference. None at all.

So – I suspect that we're looking at it from the wrong end. Suspect it's the a/d converter that could be the bottleneck?

8 bits are common; 15 bits are common. 18 bits are available but seldom used. Never heard of 16. Maybe that's it?

Ken
Matt Austern
Nov 21, 2004
Ken Weitzel writes:

Matt Austern wrote:

Chris Cox writes:

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

Which is correct (for 0..32768 representation versus 0..65535 representation).
Perhaps this is offtopic, and perhaps you can’t answer it without revealing proprietary information, but can you explain why 15-bit computation should be so much faster than 16-bit? (If there’s a publication somewhere you could point me to, that would be great.) I’ve thought about this for a few minutes, I haven’t been able to think of an obvious reason, and now I’m curious.
Feel free to email me if you think this wouldn’t be interesting to anyone else.

Hi Matt…

Nor can I see even the slightest difference. None at all.
So – I suspect that we're looking at it from the wrong end. Suspect it's the a/d converter that could be the bottleneck?

Nope. If Chris says 16-bit image processing in Photoshop would be much slower than 15, I have no doubt that he’s right. I just don’t know why. I can easily believe there’s some subtle algorithmic issue that I haven’t thought of. For that matter, I can easily believe there’s some glaringly obvious algorithmic issue I haven’t thought of. I’m just curious what it might be.
davidjl
Nov 21, 2004
"Matt Austern" wrote:
Chris Cox writes:

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

Which is correct (for 0..32768 representation versus 0..65535 representation).

Perhaps this is offtopic, and perhaps you can’t answer it without revealing proprietary information, but can you explain why 15-bit computation should be so much faster than 16-bit?

Here's my guess: correctly testing boundary conditions for unsigned arithmetic is tricky, since (0 - 1) is the largest value there is. Ouch. So if you use signed arithmetic you can catch problems much more easily. It would be the additional testing code that makes full 16-bit calculations slower.

Just a guess.

David J. Littleboy
Tokyo, Japan
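
A tiny C illustration of the unsigned boundary hazard David describes (a standalone demo, not Photoshop code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t u = 0;
    u = u - 1;                 /* unsigned: wraps to 65535, the largest value */
    int16_t s = 0;
    s = s - 1;                 /* signed: -1, trivially caught by a range check */
    printf("%u %d\n", u, s);   /* prints: 65535 -1 */
    return 0;
}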
Ken Weitzel
Nov 21, 2004
Matt Austern wrote:

Ken Weitzel writes:

Matt Austern wrote:

Chris Cox writes:

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

Which is correct (for 0..32768 representation versus 0..65535 representation).

Perhaps this is offtopic, and perhaps you can’t answer it without revealing proprietary information, but can you explain why 15-bit computation should be so much faster than 16-bit? (If there’s a publication somewhere you could point me to, that would be great.) I’ve thought about this for a few minutes, I haven’t been able to think of an obvious reason, and now I’m curious.
Feel free to email me if you think this wouldn’t be interesting to anyone else.

Hi Matt…

Nor can I see even the slightest difference. None at all.
So – I suspect that we're looking at it from the wrong end. Suspect it's the a/d converter that could be the bottleneck?

Nope. If Chris says 16-bit image processing in Photoshop would be much slower than 15, I have no doubt that he’s right. I just don’t know why. I can easily believe there’s some subtle algorithmic issue that I haven’t thought of. For that matter, I can easily believe there’s some glaringly obvious algorithmic issue I haven’t thought of. I’m just curious what it might be.

Hi…

Perhaps you're right… throw a little more fuel on the fire for whatever enlightenment it may be worth.

When I first got my ati aiw card, I read the manual(s).

They claimed that they were going to do 32 bit color… sort of. They explained that they were going to do 3 x 10, but not "waste" the remaining 2 bits; they were using them for additional green levels.

I'll see if I can't somehow find those manuals, and share the detail if/when I do.

Ken
davem
Nov 21, 2004
Matt Austern writes:

Nope. If Chris says 16-bit image processing in Photoshop would be much slower than 15, I have no doubt that he’s right. I just don’t know why. I can easily believe there’s some subtle algorithmic issue that I haven’t thought of. For that matter, I can easily believe there’s some glaringly obvious algorithmic issue I haven’t thought of. I’m just curious what it might be.

One guess: checking for overflow in filtering. Many filter operations can generate output values that are outside the range occupied by the input image under some circumstances. If you use a range of 0-65535 for storage and computation, you have to check for overflow after each operation that could possibly overflow in 16 bits, or you risk having white overflow and turn into black. Conditional branches are expensive on a Pentium 4.

If the inputs are in the range 0-32768, many of the same filtering operations can guarantee that their output, though it may be larger than 32768, will never reach 65535. So the code can sit there doing multiplies and adds at maximum rate, and only has to check for possible overflow at the end of the whole process. (Underflow is a similar issue, except that underflow generates a large value in unsigned arithmetic. As long as the maximum overflow value can’t get as large as the minimum underflow value, you can straighten it out later).

Another possibility: suppose you have both colour components and alpha encoded so that 1.0 is 65535 and 0.0 is 0. When you multiply colour by alpha, you get a 32-bit result. To convert that back into 16-bit form, you need to divide by 65535 if you want the best accuracy. Just doing a right shift by 16 bits is not good enough if you want to be able to test for exactly 1.0. (If you do a 16-bit right shift, you’re essentially scaling the result by 65535/65536, enough to change 65535 to 65534 when multiplying by the representation for 1.0).

But if 1.0 is represented by 32768, that’s an exact power of 2, and you can get the exact result of the multiply by doing a right-shift of 15 bits.
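
A short C sketch of the renormalization Dave describes (a standalone demo, not Photoshop code): with 1.0 = 32768 the exact divide is a shift, while with 1.0 = 65535 a bare 16-bit shift is off by one at full scale:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 0..32768 representation: colour * alpha, both at 1.0 */
    uint32_t p15 = (32768u * 32768u + 16384u) >> 15;      /* 32768: still exactly 1.0 */

    /* 0..65535 representation: a shift by 16 is cheap but inexact... */
    uint32_t p16s = (65535u * 65535u) >> 16;              /* 65534: no longer 1.0 */
    /* ...while the exact result needs a real divide by 65535 */
    uint32_t p16d = (65535u * 65535u + 32767u) / 65535u;  /* 65535 */

    printf("%u %u %u\n", p15, p16s, p16d);
    return 0;
}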

The Pixar Image Computer used tricks like this many years ago. The memory was 12 bits per component, with 2048 representing a value of 1.0. Values up to 3071 were brighter than white, up to 1.5. The range 3072-4095 represented negative values in [-0.5, 0]. When a 12-bit value was loaded into the processor, it was automatically extended to 16 bits in a way that preserved the positive or negative meaning, then the arithmetic was all 16/32 bit.
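
A hedged C reconstruction of that 12-bit load rule (my reading of the description above, not actual Pixar code): codes 0..3071 mean 0.0..1.5 with 2048 = 1.0, and codes 3072..4095 wrap around to small negative values:

#include <stdint.h>
#include <stdio.h>

/* Sign-extend a 12-bit component so the 3072..4095 range becomes negative. */
static int16_t load12(uint16_t code)
{
    return (code >= 3072) ? (int16_t)(code - 4096) : (int16_t)code;
}

int main(void)
{
    /* 2048 = 1.0; 3071 = just under 1.5; 3072 = -0.5; 4095 = just below 0.0 */
    printf("%d %d %d %d\n", load12(2048), load12(3071), load12(3072), load12(4095));
    return 0;   /* prints: 2048 3071 -1024 -1 */
}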

Dave
Mike Engles
Nov 21, 2004
wrote:
In message <201120041833424481%>,
Chris Cox wrote:

Without your original data to test, I can’t even guess what went wrong.

You don’t need my original data.

Any image in "16 bit greyscale" mode has all kinds of numbers between 0 and 32768 missing, values that are not possible no matter how much you blur or interpolate. "16 bit greyscale" is about 13.5-bit greyscale. —

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><

Hello

I don’t know if this thread in the Photoshop forum has any bearing on this.

http://www.adobeforums.com/cgi-bin/webx?13@28.7hpgdjJRoNJ.2365457@.3bb65416/7

Mike Engles
Mike Engles
Nov 21, 2004
Kennedy McEwen wrote:
In article , Mike Engles
writes
Hello

Is astronomical image processing done in a linear space or a gamma space?
Normally in linear space, Mike – because you are primarily interested in physical quantities and their quantitative results.

After that has been achieved, the representation for human consumption is created – either from the source data or the processed data depending on the objective required.

Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)

Hello

Is the same true for imaging from spacecraft, interplanetary or otherwise, or is gamma encoding done before transmission?

Mike Engles
John Doe
Nov 21, 2004
I thought it was outer-space. 😉

John

"Mike Engles" wrote in message
wrote:
wrote:

"Roger N. Clark (change username to rnclark)" wrote:

In my testing of photoshop on real images, I find the following equation:

PS = int(IP/2),

Are you sure it isn’t "PS = int((IP+1)/2)"?

A minor danger is that if IP is 0xffff, the above will map to 0 if done in 16b arithmetic. "IP >> 1" (R. Clark's equation) is a single instruction that doesn't suffer from overflow problems. But if wider arithmetic is allowed, then:

PS = rint(IP*(32768.0/65535.0))

or its integer equivalent may be used.

Note that PhotoSlop is "just an image editor" (a large, complicated, extensible, useful one to be sure), so precise stuff like you are demanding will probably never make it high on the priority list at Adobe, where most of their users are graphic artists, not by-the-bit technician types. Can MaximDL and similar handle non-astronomical imagery? Its internals are probably a lot more formal (linear images, etc).

Hello

Is astronomical image processing done in a linear space or a gamma space?

Mike Engles
Mike Engles
Nov 21, 2004
John Doe wrote:
I thought it was outer-space. 😉

John

"Mike Engles" wrote in message
wrote:
wrote:

"Roger N. Clark (change username to rnclark)" wrote:

In my testing of photoshop on real images, I find the following equation:

PS = int(IP/2),

Are you sure it isn’t "PS = int((IP+1)/2)"?

A minor danger is that if IP is 0xffff, the above will map to 0 if done in 16b arithmetic. "IP >> 1" (R. Clark's equation) is a single instruction that doesn't suffer from overflow problems. But if wider arithmetic is allowed, then:

PS = rint(IP*(32768.0/65535.0))

or its integer equivalent may be used.

Note that PhotoSlop is "just an image editor" (a large, complicated, extensible, useful one to be sure), so precise stuff like you are demanding will probably never make it high on the priority list at Adobe, where most of their users are graphic artists, not by-the-bit technician types. Can MaximDL and similar handle non-astronomical imagery? Its internals are probably a lot more formal (linear images, etc).

Hello

Is astronomical image processing done in a linear space or a gamma space?

Mike Engles

Hello

Like one way or another, we’re all, like, spaced out man.

Mike Engles
Mike Russell
Nov 21, 2004
Dave Martindale wrote:
[re Photoshop’s 16 bit representation]

The Pixar Image Computer used tricks like this many years ago. The memory was 12 bits per component, with 2048 representing a value of 1.0. Values up to 3071 were brighter than white, up to 1.5. The range 3072-4095 represented negative values in [-0.5, 0]. When a 12-bit value was loaded into the processor, it was automatically extended to 16 bits in a way that preserved the positive or negative meaning, then the arithmetic was all 16/32 bit.

Dave

It did indeed, and I’m astonished that you remember these details so long after I, who worked on this stuff for years, have forgotten them!

You touch on another important aspect of having displayable values occupy only a subset of the total 16 bits: the ability to represent negative intermediate values.

Negative numbers are an important component of most graphic calculations, and the ability to represent negative values in place saves storage, and the time required to convert between storage formats, while retaining the ability to make calculations that require per-pixel negative values. An example would be the subtraction of two channels, or the Pirl function, which uses negative terms in calculating the theoretically perfect resampling filter.

OTOH, Adobe’s weird 16 bit format makes it more difficult to interface to other graphics libraries, requiring additional passes to convert to and from 16 bit mode.


Mike Russell
www.curvemeister.com
www.geigy.2y.net
Mike Russell
Nov 21, 2004
Mike Engles wrote:
….
[re linear encoding of specialized pixel data values]
Is the same true for imaging from spacecraft, interplanetary or otherwise, or is gamma encoding done before transmission?

Yes. Gamma encoding compresses some data values, and there is no reason to do this to raw data from a spacecraft.

Here’s an article that may interest you, by Alvy Ray Smith, on the distinction of work and display color spaces.
http://alvyray.com/Memos/MemosMicrosoft.htm#NonlinearAlphaQuestion

Mike Russell
www.curvemeister.com
www.geigy.2y.net
Chris Cox
Nov 21, 2004
In article
wrote:

In message <201120041833424481%>,
Chris Cox wrote:

Without your original data to test, I can’t even guess what went wrong.

You don’t need my original data.

Any image in "16 bit greyscale" mode has all kinds of numbers between 0 and 32768 missing,

Only if you started with an image that had numbers missing. The representation is 0..32768 — all numbers are possible.

values that are not possible no matter how much you blur or interpolate. "16 bit greyscale" is about 13.5-bit greyscale.

No, that is not even remotely correct.

Chris
Chris Cox
Nov 22, 2004
In article , Matt Austern
wrote:

Chris Cox writes:

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

Which is correct (for 0..32768 representation versus 0..65535 representation).

Perhaps this is offtopic, and perhaps you can’t answer it without revealing proprietary information, but can you explain why 15-bit computation should be so much faster than 16-bit? (If there’s a publication somewhere you could point me to, that would be great.) I’ve thought about this for a few minutes, I haven’t been able to think of an obvious reason, and now I’m curious.

1) Because a shift by 15 (divide by 32768) is much faster than a divide by 65535.

One of the most common operations is (value1*value2 + (maxValue/2)) / maxValue

With 0..255 we can pull some tricks to make the divide reasonably fast. For 0..65535 the tricks take quite a bit more time (and serialize the operation), or we have to use a multiply by reciprocal. For 0..32768, we can just use a shift (see the sketch after this post).

2) A lot fewer overflows of 32 bit accumulators

This is still a problem.
When 64 bit processors become the norm (and the @#!^&$ OS allows a fully 64 bit application), then that becomes less of a problem.

3) The 2^N maximum value also has some benefits when dealing with subsampled lookup tables that require interpolation.

4) the 2^N maximum value also has benefits to blending operations that need a middle value (for 0..255 it was pretty random whether 127 or 128 was used for the middle).

Chris
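
A small standalone C sketch of points 1) and 4) under the 0..32768 representation (the operand values are arbitrary, and this is an illustration, not Photoshop source):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t value1 = 20000, value2 = 30000;

    /* 1) (value1*value2 + (maxValue/2)) / maxValue with maxValue = 32768:
          the divide collapses to a 15-bit shift */
    uint32_t norm = (value1 * value2 + 16384u) >> 15;

    /* 4) an exact middle value for blends: 32768/2 = 16384, whereas with
          0..255 the middle is ambiguously 127 or 128 */
    uint32_t mid = 32768u / 2;

    printf("%u %u\n", norm, mid);    /* 18311 16384 */
    return 0;
}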
Chris Cox
Nov 22, 2004
In article <9D6od.24571$>, Mike
Russell wrote:

Mike Engles wrote:

[re linear encoding of specialized pixel data values]
Is the same true for imaging from spacecraft, interplanetary or otherwise or is gamma encoding done before transmission?

Yes. Gamma encoding compresses some data values, and there is no reason to do this to raw data from a spacecraft.

Here’s an article that may interest you, by Alvy Ray Smith, on the distinction of work and display color spaces.
http://alvyray.com/Memos/MemosMicrosoft.htm#NonlinearAlphaQuestion

Actually, Alvy has a number of mistakes in that paper.
I’m still not sure if he understands gamma encoding…

Chris
Chris Cox
Nov 22, 2004
In article <Fx6od.24570$>, Mike
Russell wrote:

OTOH, Adobe’s weird 16 bit format makes it more difficult to interface to other graphics libraries, requiring additional passes to convert to and from 16 bit mode.

That’s why the external representation is 0..65535.
Only the filter plugin APIs have to deal with the 0..32768 representation. There are flags for the file format, import, and export plugin APIs to use different maximum values (which Photoshop will then rescale to its internal representation).

Chris
JPS
Nov 22, 2004
In message <211120041556051196%>,
Chris Cox wrote:

In article
wrote:

In message <201120041833424481%>,
Chris Cox wrote:

Without your original data to test, I can’t even guess what went wrong.

You don’t need my original data.

Any image in "16 bit greyscale" mode has all kinds of numbers between 0 and 32768 missing,

Only if you started with an image that had numbers missing. The representation is 0..32768 — all numbers are possible.

values that are not possible no matter how much you blur or interpolate. "16 bit greyscale" is about 13.5-bit greyscale.

No, that is not even remotely correct.

It is exactly what is happening here. I get 0, 1, 3, 4, 5, 8, etc. No 2, 6, 7, 11, etc., at all, no matter what is done to the data. That's with color management disabled. With it enabled, I had even fewer values. Clusters of 6 16-bit numbers all became the same "15bit+1" value when color management was enabled (except 3 values became 0, and 3 values became 32768).

I could write this off to a corrupted executable, but it happens on two different installations of CS.


<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
Hecate
Nov 22, 2004
On Mon, 22 Nov 2004 00:07:44 GMT, Chris Cox
wrote:

This is still a problem.
When 64 bit processors become the norm (and the @#!^&$ OS allows a fully 64 bit application), then that becomes less of a problem.
Is a 64 bit optimised Photoshop likely to be faster, or just more able to do complex operations? Or do the programmers generally aim for a bit of both if you’ll pardon the pun 🙂



Hecate – The Real One

veni, vidi, reliqui
Chris Cox
Nov 22, 2004
In article
wrote:

In message <211120041556051196%>,
Chris Cox wrote:

In article
wrote:

In message <201120041833424481%>,
Chris Cox wrote:

Without your original data to test, I can’t even guess what went wrong.

You don’t need my original data.

Any image in "16 bit greyscale" mode has all kinds of numbers between 0 and 32768 missing,

Only if you started with an image that had numbers missing. The representation is 0..32768 — all numbers are possible.

values that are not possible no matter how much you blur or interpolate. "16 bit greyscale" is about 13.5-bit greyscale.

No, that is not even remotely correct.

It is exactly what is happening here. I get 0, 1, 3, 4, 5, 8, etc. No 2, 6, 7, 11, etc, at all, no matter what is done to the data.

And, again, without your original data – I can’t guess what could have gone wrong.

I do know that for anyone else doing a similar experiment (inside and outside Adobe), they get the full 32769 values.

Chris
Chris Cox
Nov 22, 2004
In article , Hecate
wrote:

On Mon, 22 Nov 2004 00:07:44 GMT, Chris Cox
wrote:

This is still a problem.
When 64 bit processors become the norm (and the @#!^&$ OS allows a fully 64 bit application), then that becomes less of a problem.
Is a 64 bit optimised Photoshop likely to be faster, or just more able to do complex operations? Or do the programmers generally aim for a bit of both if you’ll pardon the pun 🙂

That depends a lot on the CPU in question, and the operation in question.
Most likely there will be little performance difference, but a big difference in available RAM (addressability).

Chris
JPS
Nov 22, 2004
In message <211120041843480241%>,
Chris Cox wrote:

In article
wrote:

It is exactly what is happening here. I get 0, 1, 3, 4, 5, 8, etc. No 2, 6, 7, 11, etc, at all, no matter what is done to the data.

And, again, without your original data – I can’t guess what could have gone wrong.

I do know that for anyone else doing a similar experiment (inside and outside Adobe), they get the full 32769 values.

I already told you what the data was – a binary file with the 16-bit unsigned values 0 through 65535. That's it:

00 00 01 00 02 00 03 00 …. fb ff fc ff fd ff fe ff ff ff

load as .raw, 256*256, 1 channel, 16-bit, IBM/PC, 0 header.

Lots of values posterized, beyond the 2->1 you’d expect from 16->15 bit.

Subsequently, I have tried greyscale with a new file as well, and the same thing happens. Lots of values don't exist, no matter how much you crush the levels, blur, etc. They are simply impossible.

This happens on two completely independent PCs with CS installed, so it can’t be a binary corruption, unless something was corrupt off the CD.

Have you actually seen the values 2, 6, 7, or 11 in 16-bit greyscale mode (w/16-bit checked in "Info") with color management disabled? Recently?


<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
usenet
Nov 22, 2004
Kibo informs me that Chris Cox stated that:

In article
wrote:
It is exactly what is happening here. I get 0, 1, 3, 4, 5, 8, etc. No 2, 6, 7, 11, etc, at all, no matter what is done to the data.

And, again, without your original data – I can’t guess what could have gone wrong.

I do know that for anyone else doing a similar experiment (inside and outside Adobe), they get the full 32769 values.

Yeah, that’s what I would’ve expected. I find it impossible to believe that PS could be getting it that badly wrong without it showing up in a dozen different, really obvious ways.
And speaking of supplying original data, where the hell does Adobe keep the PS raw file format docs that used to be on the website? – I wanted to try John's experiment for myself, but all I could find was the PS SDK, for which Adobe wants money.


W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est —^—-^————————————————— ————
usenet
Nov 22, 2004
Kibo informs me that "Harvey" stated that:

"Toby Thain" wrote in message
wrote in message
Someone should tell adobe that we have fast machines now and can work with accurate data.

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

They’re correct – it is.

15 bits gives you 5 bits each for R, G & B. There's no quality advantage between 15 and 16 bits for RGB handling.

You're thinking of the HiColour video modes. The 15 bits we've been talking about in PS is 15 bits *per colour*, per pixel, i.e. a total of 45 bits per pixel.


W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est —^—-^————————————————— ————
usenet
Nov 22, 2004
Kibo informs me that stated that:

In message <xovnd.646$>,
"Harvey" wrote:

15 bits gives you 5 bits each for R, G & B. There's no quality advantage between 15 and 16 bits for RGB handling.

We’re talking "bits per color channel"; not "bits per pixel".
In 16 bits per pixel, if the extra bit is used for green, it has a big image quality advantage. Green is the most significant channel for luminance.

When you're talking about 15 vs 16 bits per channel, there is no practical difference in quality. A single step at that resolution is only 0.003% of full scale, which can't even be rendered, much less seen by a human eye.


W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est —^—-^————————————————— ————
usenet
Nov 22, 2004
Kibo informs me that Ken Weitzel stated that:

Matt Austern wrote:

Chris Cox writes:

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

Which is correct (for 0..32768 representation versus 0..65535 representation).

Perhaps this is offtopic, and perhaps you can’t answer it without revealing proprietary information, but can you explain why 15-bit computation should be so much faster than 16-bit? (If there’s a publication somewhere you could point me to, that would be great.) I’ve thought about this for a few minutes, I haven’t been able to think of an obvious reason, and now I’m curious.

Feel free to email me if you think this wouldn’t be interesting to anyone else.

Hi Matt…

Nor can I see even the slightest difference. None at all.
So – I suspect that we're looking at it from the wrong end. Suspect it's the a/d converter that could be the bottleneck?

Unless I've totally misunderstood John's description, none of this data has been anywhere near an A2D converter.

8 bits are common; 15 bits are common. 18 bits are available but seldom used. Never heard of 16. Maybe that's it?

Nope. (BTW, 16 bits is standard for audio work, including CDs, & 12 bits is standard for DSLRs.)


W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est —^—-^————————————————— ————
JPS
Nov 22, 2004
In message ,
wrote:

And speaking of supplying original data, where the hell does Adobe keep the PS raw file format docs that used to be on the website? – I wanted to try John's experiment for myself, but all I could find was the PS SDK, for which Adobe wants money.

You don't need to enter through .raw to see the problem; just open a new 2×1 pixel document in greyscale mode. Convert to 16-bit. Bicubic resize the 2×1 pixel image to 500×1. Set "Info" to give 16-bit values. Make sure the color sampling tool is set to single pixel. Put the pointer over the string of pixels. See if there are any gaps in the numbers.


<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
U
usenet
Nov 22, 2004
Kibo informs me that (Dave Martindale) stated that:

Matt Austern writes:

Nope. If Chris says 16-bit image processing in Photoshop would be much slower than 15, I have no doubt that he’s right.

He is.

I just don’t know
why. I can easily believe there’s some subtle algorithmic issue that I haven’t thought of. For that matter, I can easily believe there’s some glaringly obvious algorithmic issue I haven’t thought of. I’m just curious what it might be.

One guess: checking for overflow in filtering.

Close. The short answer is that it provides room for intermediate results that are guaranteed not to overflow a machine register. This is a huge performance win for operations that are going to be repeated for every pixel in a big dataset, because it means that you don’t need to include a conditional branch to check for overflow at every step in the computation.
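
A minimal sketch of that point (my illustration, not Adobe's actual code): with samples in [0..32768] and non-negative fixed-point weights that sum to 32768, the worst-case intermediate value is 32768 * 32768 = 2^30, so a 32-bit accumulator can never overflow and the per-pixel loop needs no overflow-check branch.

#include <stdint.h>

/* Hypothetical 3-tap filter over [0..32768] samples; the weights are
 * non-negative and sum to 32768 (i.e. 1.0 in fixed point). */
uint16_t filter3(const uint16_t p[3], const uint32_t w[3])
{
    uint32_t acc = 0;
    for (int i = 0; i < 3; i++)
        acc += w[i] * p[i];        /* at most 32768*32768 = 2^30: safe */
    return (uint16_t)((acc + 16384) >> 15);   /* round, renormalize */
}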


W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est —^—-^————————————————— ————
J
JPS
Nov 22, 2004
In message ,
wrote:

Kibo informs me that stated that:

In message <xovnd.646$>,
"Harvey" wrote:

15 bits gives you 5 bits each for R, G & B. There's no quality advantage between 15 and 16 bits for RGB handling.

We’re talking "bits per color channel"; not "bits per pixel".
In 16 bits per pixel, if the extra bit is used for green, it has a big image quality advantage. Green is the most significant channel for luminance.

When you’re talking about 15 vs 16 bits, per channel, there is no practical difference in quality. A single step at that resolution is only 0.003% of full scale, which can’t even be rendered, much less seen by a human eye.

Am I on the other side of the looking glass?

Someone confused our "per channel" discussion for "per pixel". I made the distinction, and pointed out that in PER PIXEL the 16th bit (6th green bit) is very useful, and visible.


<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
J
JPS
Nov 22, 2004
Sorry; I forgot one step.

In message , I,
wrote:

In message ,
wrote:

And speaking of supplying original data, where the hell does Adobe keep the PS raw file format doc’s that used to be on the website? – I wanted to try John’s experiment for myself, but all I could find was the PS SDK, for which Adobe wants money.

You don’t need to enter through .raw to see the problem; just open a new 2×1 pixel document in greyscale mode.

Make one pixel 0 and the other 1 (out of 255).

Convert to 16-bit. Bicubic
resize the 2×1 pixel image to 500×1. Set "Info" to give 16-bit values. Make sure the color sampling tool is set to single pixel. Put the pointer over the string of pixels. See if there are any gaps in the numbers.



<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
J
JPS
Nov 22, 2004
In message ,
wrote:

Close. The short answer is that it provides room for intermediate results that are guaranteed not to overflow a machine register. This is a huge performance win for operations that are going to be repeated for every pixel in a big dataset, because it means that you don’t need to include a conditional branch to check for overflow at every step in the computation.

It also allows you to perform some operations on multiple data values at the same time, as their carry-overs won’t spill into another data value.
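
A sketch of that idea (my example, not Photoshop's code): because each 16-bit field only ever holds a value up to 32768, two samples packed into one 32-bit word can be halved and summed together without a carry crossing from one field into the other.

#include <stdint.h>

/* Average two pairs of [0..32768] samples at once: one pair per
 * 32-bit word, one sample per 16-bit field (result is within one
 * LSB of the true average). */
static inline uint32_t avg_pairs(uint32_t a, uint32_t b)
{
    const uint32_t m = 0x7FFF7FFFu;  /* drop bits shifted across fields */
    /* halved fields are <= 16384, so each per-field sum is <= 32768
     * and can never carry into the neighbouring field */
    return ((a >> 1) & m) + ((b >> 1) & m);
}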

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
J
JPS
Nov 22, 2004
In message <211120041843480241%>,
Chris Cox wrote:

I do know that for anyone else doing a similar experiment (inside and outside Adobe), they get the full 32769 values.

I just came up with an idea to check if it was the internal representation or the "info" tool itself, and sure enough it was the info tool that was at fault.

What I did was open the "levels" dialog, and set the input max to 2. Then, I moved the info tool over the pixels, and sure enough, there was not a direct correspondence between the "old" and "new" values. The first 0 became a 0; the second 0 became a 52; all numbers that are multiples of 52 were present in the new values (no gaps).

The info tool is toast in 16-bit greyscale mode.


<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
TA
Timo Autiokari
Nov 22, 2004
On Sun, 21 Nov 2004 20:10:45 GMT, "Mike Russell"

Gamma encoding compresses some data values, and there is no reason to do this to raw data from a spacecraft.

And, there is no reason to do that to images from digital cameras either; just as Adobe shows us, ACR (like most of the other conversion software) performs all the processing in the linear domain. Why? For the same reason linear processing is done in scientific imaging: to avoid the Gamma Induced Errors.

Timo Autiokari http://www.aim-dtp.net
H
Harvey
Nov 22, 2004
wrote in message
Kibo informs me that "Harvey" stated that:

"Toby Thain" wrote in message
wrote in message
Someone should tell adobe that we have fast machines now and can work with accurate data.

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

They’re correct – it is.

15 bits gives you 5 bits each for R, G & B. There's no quality advantage between 15 and 16 bits for RGB handling.

You’re thinking of the HiColour video modes. The 15 bits we’ve been talking about in PS is 15 bits *per colour*, per pixel, ie; a total of 45 bits per pixel.

Good to see somebody’s awake…………………
T
toby
Nov 22, 2004
Chris Cox …
In article <cnmhjl$abm$>, Dave Martindale
wrote:

(Toby Thain) writes:

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

When your source data was probably from a 12-bit ADC, or maybe 14-bit, working with 15 significant bits may indeed be completely adequate. And there *are* advantages to using a representation that has some headroom for "whiter than white" without overflow, and where the representation for "1.0" is a power of 2.

But the couple of most recent comments in this thread are about the fact that Photoshop’s greyscale doesn’t even seem to have 15 significant bits, unlike the RGB representation.

The color mode doesn’t matter – it’s still 16 bit data (0..32768).

It’s deceptive to characterise that range of values as "16 bit" – it has only 15 bits of dynamic range.

Chris
CB
Chris Brown
Nov 22, 2004
In article ,
Toby Thain wrote:
Chris Cox wrote in message
news:<201120041838050385%>…
The color mode doesn’t matter – it’s still 16 bit data (0..32768).

It’s deceptive to characterise that range of values as "16 bit" – it has only 15 bits of dynamic range.

You mean precision. Dynamic range is independent of the number of bits used to represent an image.
D
davem
Nov 22, 2004
Timo Autiokari writes:

Gamma encoding compresses some data values, and there is no reason to do this to raw data from a spacecraft.

And, there is no reason to do that to images from digital cameras either; just as Adobe shows us, ACR (like most of the other conversion software) performs all the processing in the linear domain. Why? For the same reason linear processing is done in scientific imaging: to avoid the Gamma Induced Errors.

This is all fine if you can preserve the data at the width it was originally captured: at least 12 bits per sample in digital cameras, up to 16 bits for scientific cameras.

But if you convert it to 8 bits per sample to save space, you *need* to apply non-linear tone compression to keep sufficient tonal resolution in the shadows. The encoding called "gamma correction" does this very well, even better than taking the logarithm of intensity. But Timo tells people to use 8-bit linear data, which badly quantizes the shadow detail, because he thinks linear is always better no matter how many bits are used. He’s wrong.

And it happens that 8 bit gamma-corrected samples are enough to capture the whole intensity range that we can view *in a single image* with few or no quantization artifacts. It is good enough for pictorial images you are going to print, or display on screen. That’s why it’s so popular – it works just fine for final output in pictorial photography.
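
A quick numeric sketch of that point (mine, assuming a gamma of 2.2): count how many 8-bit codes land in the darkest 1% of linear intensity under each encoding.

#include <math.h>
#include <stdio.h>

int main(void)
{
    int linear = 0, gamma = 0;
    for (int v = 0; v < 256; v++) {
        if (v / 255.0 <= 0.01)           linear++;  /* linear coding  */
        if (pow(v / 255.0, 2.2) <= 0.01) gamma++;   /* gamma encoding */
    }
    /* prints roughly: 3 linear vs. 32 gamma codes below 1% intensity */
    printf("%d linear vs. %d gamma codes below 1%%\n", linear, gamma);
    return 0;
}

Gamma encoding spends roughly ten times as many codes on the deepest shadows, which is exactly where 8-bit linear posterizes.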

On the other hand, it’s useful to preserve 16 bits of data forever for some applications with high dynamic range (e.g. X-ray images), and some even require floating point to represent them. It’s often useful to do intermediate computations with 16- or even 32-bit integer or floating point.

Dave
ME
Mike Engles
Nov 22, 2004
Mike Russell wrote:
Mike Engles wrote:

[re linear encoding of specialized pixel data values]
Is the same true for imaging from spacecraft, interplanetary or otherwise or is gamma encoding done before transmission?

Yes. Gamma encoding compresses some data values, and there is no reason to do this to raw data from a spacecraft.

Here’s an article that may interest you, by Alvy Ray Smith, on the distinction of work and display color spaces.
http://alvyray.com/Memos/MemosMicrosoft.htm#NonlinearAlphaQuestion

Mike Russell
www.curvemeister.com
www.geigy.2y.net

Hello

What I have just read chimes with everything I think should happen in digital imaging. It completely contradicts everything that has been written about gamma encoding in these and other forums about the necessity of gamma to maximise the use of available bits.

ftp://ftp.alvyray.com/Acrobat/9_Gamma.pdf

Yet this guy seems to be a pioneer of digital imaging.

Also, why is image data from spacecraft and astronomy not gamma encoded? It is, after all, digital photography. They must be transmitting/recording in at least 18 bits. That is the bit level that Chris Cox et al say is the minimum necessary for linear images, without gamma encoding.

It does seem that what we have today is two types of digital imaging. One is the truly scientific one that uses ALL linear data. The other is a convenient engineering one that delivers the goods simply, by pre compensating the linear data to display on non linear displays.

Engineers were always happy with approximations.

Mike Engles
J
JPS
Nov 22, 2004
In message ,
Timo Autiokari wrote:

And, there is no reason to do that to images from digital cameras either, just like Adobe shows to us, the ARC (like most of the other conversion sw too) perform all the processing in the linear domain.

I’m not so sure that ACR works in a totally linear domain. Images exposed with bracketing, and compensated to be the same with the exposure slider, may have equal mid-tones, but the shadows and highlights will show that a different gamma is used. If you drag the ACR exposure slider to the left, after it runs out of "hidden highlights", it stretches the highlights so that 4095 in the RAW data stays anchored at 255 in the output, and never gets darker. That is not linear exposure compensation.

Optimally, I think that a RAW converter should have two basic exposure controls: one to scale the linear data for exposure adjustments, and another to fit that to an output curve.

Why? For the same reason linear processing is done in scientific imaging: to avoid the Gamma Induced Errors.



<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
BV
Bart van der Wolf
Nov 22, 2004
"Mike Engles" wrote in message
SNIP
Also, why is image data from spacecraft and astronomy not gamma encoded? It is, after all, digital photography.

Not necessarily. There is a difference between photometric data (e.g. spectral reflection/absorption/emission in certain bands), and pictorial imaging (e.g. stereo pairs in either visible light bands or mixed with other spectral data).
One common issue between them is the desire to reduce quantization errors (at least half of the LSB) to a minimum. Gamma encoding provides a visually efficient encoding, but it can underutilize the capacity at the lower counts and overutilize it (adding quantization errors) at the higher counts. Then there is the trade-off caused by limited transmission bandwidth, and there is only so much one can do with compression…

Bart
BV
Bart van der Wolf
Nov 22, 2004
wrote in message
In message <211120041843480241%>,
SNIP
I already told you what the data was – a binary file with the 16-bit unsigned values 0 through 65535. That’s it:

00 00 01 00 02 00 03 00 …. fb ff fc ff fd ff fe ff ff ff
load as .raw, 256*256, 1 channel, 16-bit, IBM/PC, 0 header.
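
For anyone who wants to reproduce that file, a minimal sketch (the output file name is my own choice):

#include <stdio.h>

/* Writes every 16-bit value 0..65535, little-endian ("IBM/PC"),
 * no header; load in PS as .raw, 256x256, 1 channel, 16-bit. */
int main(void)
{
    FILE *f = fopen("ramp16.raw", "wb");
    if (!f) return 1;
    for (unsigned v = 0; v <= 0xFFFF; v++) {
        unsigned char b[2] = { (unsigned char)(v & 0xFF),
                               (unsigned char)(v >> 8) };
        fwrite(b, 1, 2, f);
    }
    fclose(f);
    return 0;
}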

As a precaution, do make sure that you have set the same gamma for your RGB *and* Gray working spaces and switch the "Use Dither" for 8-bit channel conversions to off. It should not matter when you load the data as "raw" data, but just to make sure they don’t interfere.

Bart
J
JPS
Nov 22, 2004
In message <41a279a0$0$566$>,
"Bart van der Wolf" wrote:

wrote in message
In message <211120041843480241%>,
SNIP
I already told you what the data was – a binary file with the 16-bit unsigned values 0 through 65535. That’s it:

00 00 01 00 02 00 03 00 …. fb ff fc ff fd ff fe ff ff ff
load as .raw, 256*256, 1 channel, 16-bit, IBM/PC, 0 header.

As a precaution, do make sure that you have set the same gamma for your RGB *and* Gray working spaces and switch the "Use Dither" for 8-bit channel conversions to off. It should not matter when you load the data as "raw" data, but just to make sure they don’t interfere.

If you look through the thread, you’ll see that I’ve already pinpointed the real problem; it’s in the "info" tool. There’s nothing wrong with the internal bitmap (as long as you’re only expecting 15-bit).

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy
<<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
U
usenet
Nov 22, 2004
Kibo informs me that Mike Engles stated
that:

It does seem that what we have today is two types of digital imaging. One is the truly scientific one that uses ALL linear data. The other is a convenient engineering one that delivers the goods simply, by pre compensating the linear data to display on non linear displays.

The difference is actually quite simple. With photography, the intention is to produce a final image that is as similar as possible to what a human eye would’ve seen through the viewfinder, which requires a non-linear response. With scientific imaging, OTOH, the interest is generally in absolute data (eg; number of photons, detecting minuscule light sources, etc), so there’s no particular reason to try to approximate the response of the human eye.


W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est —^—-^————————————————— ————
ME
Mike Engles
Nov 23, 2004
Bart van der Wolf wrote:
"Mike Engles" wrote in message
SNIP
Also, why is image data from spacecraft and astronomy not gamma encoded? It is, after all, digital photography.

Not necessarily. There is a difference between photometric data (e.g. spectral reflection/absorption/emission in certain bands), and pictorial imaging (e.g. stereo pairs in either visible light bands or mixed with other spectral data).
One common issue between them is the desire to reduce quantization errors (at least half of the LSB) to a minimum. Gamma encoding provides a visually efficient encoding, but it can underutilize the capacity at the lower, and overutilize (=additional quantization errors) at the higher counts. Then there is the trade-off caused by limited transmission bandwidth, and there is only so much one can do with compression…

Bart

Hello

I would have thought that photographs taken by spacecraft are to be viewed. They would be stored on the spacecraft in a file, prior to relay. It strikes me that if gamma encoding is necessary for terrestrial imaging to maximise the use of a limited number of bits, then that would also apply to space photography. There was a thread in the scanner group, where the expert consensus was that any imaging, storage and processing in a linear domain invited image degradation and posterisation. Yet we find that such linear imaging, storage and processing is common in scientific digital imaging, where one would imagine that extreme accuracy was paramount.

Do they use a large number of bits to avoid problems associated with linear storage and processing? The expert consensus was that one would need 18- to 20-bit linear images to match the efficiency of an 8-bit gamma encoded image.

What is sauce for the goose is sauce for the gander.

Timo Autiokari has been saying for ages that scientific imaging was done linearly. He has been abused soundly for his claims. We have been told that no one who does serious image processing does it linearly. So all the scientists of the world who regularly do their processing in a linear domain are not really serious; they are merely FADDISTS like Timo Autiokari.

Mike Engles
H
Hecate
Nov 23, 2004
On Mon, 22 Nov 2004 02:44:47 GMT, Chris Cox
wrote:

This is still a problem.
When 64 bit processors become the norm (and the @#!^&$ OS allows a fully 64 bit application), then that becomes less of a problem.
Is a 64 bit optimised Photoshop likely to be faster, or just more able to do complex operations? Or do the programmers generally aim for a bit of both if you’ll pardon the pun 🙂

That depends a lot on the CPU in question, and the operation in question.
Most likely there will be little performance difference, but a big difference in available RAM (addressability).
Thanks. That, at least, will be very useful given the image sizes I usually have.



Hecate – The Real One

veni, vidi, reliqui
MA
Matt Austern
Nov 23, 2004
Chris Cox writes:

In article , Matt Austern
wrote:

Chris Cox writes:

I’ve tried. Their engineer insists that it’s 30x faster to work with 15 bit quantities than 16 bit ones.

Which is correct (for 0..32768 representation versus 0..65535 representation).

Perhaps this is offtopic, and perhaps you can’t answer it without revealing proprietary information, but can you explain why 15-bit computation should be so much faster than 16-bit? (If there’s a publication somewhere you could point me to, that would be great.) I’ve thought about this for a few minutes, I haven’t been able to think of an obvious reason, and now I’m curious.

1) Because a shift by 15 (divide by 32768) is much faster than a divide by 65535.

One of the most common operations is (value1*value2 + (maxValue/2)) / maxValue

With 0..255 we can pull some tricks to make the divide reasonably fast. For 0..65535 the tricks take quite a bit more time (and serialize the operation), or we have to use a multiply by reciprocal.
For 0..32768, we can just use a shift (see the sketch after this list).

2) A lot fewer overflows of 32 bit accumulators

This is still a problem.
When 64 bit processors become the norm (and the @#!^&$ OS allows a fully 64 bit application), then that becomes less of a problem.

3) The 2^N maximum value also has some benefits when dealing with subsampled lookup tables that require interpolation.

4) The 2^N maximum value also has benefits to blending operations that need a middle value (for 0..255 it was pretty random whether 127 or 128 was used for the middle).
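
Here is the sketch referenced in item 1 (my rendering of the formula above, not Adobe's code), contrasting the two ranges:

#include <stdint.h>

/* (value1*value2 + maxValue/2) / maxValue, for both ranges */

static inline uint32_t mul_norm15(uint32_t v1, uint32_t v2)
{
    return (v1 * v2 + 16384u) >> 15;   /* 0..32768: just a shift */
}

static inline uint32_t mul_norm16(uint32_t v1, uint32_t v2)
{
    /* 0..65535: a true divide (or a multiply-by-reciprocal trick) */
    return (uint32_t)(((uint64_t)v1 * v2 + 32767u) / 65535u);
}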

Thanks! That all makes perfect sense.

One of the crucial things I missed, apparently, was that we really aren’t talking about a 15-bit representation. I missed the fact that the range really is [0, 32768], not [0, 32768).

Actually, I think you’ve also given me some fun new questions to ask at interviews.
TA
Timo Autiokari
Nov 23, 2004
On Mon, 22 Nov 2004 21:32:21 GMT, wrote:

I’m not so sure that ACR works in a totally linear domain. Images exposed with bracketing, and compensated to be the same with the exposure slider, may have equal mid-tones, but the shadows and highlights will display that a different gamma is used. If you drag the ACR exposure slider to the left, after it runs out of "hidden highlights", it stretches the highlights so that 4095 in the RAW data stays anchored at 255 in the output, and never gets darker. That is not linear exposure compensation.

That editing operation is still applied over linear image data, even if the operation itself is not linear. Exposure adjustment in fact is a linear operation, multiplication by a factor, so the control should behave linearly (this is the same as scaling the right input slider in the Levels dialog).

About the main issues in this thread: you found that the DNG converter seems not to be linear. I cannot provide anything on this issue since I cannot run the DNG software. But I sure would like to know more, e.g. if someone could possibly do a correlation test. Could you possibly make your original data available for download? Yes, sure, it is simple data, but not everybody has the means to create it.

About the other main issue, the Photoshop "16" bit/c codespace, there sure is enough weirdness with that. E.g. if you create PhotoshopRaw data from level 0 to level 65535 (that is, from 0000h to FFFFh), then open it in Photoshop and save it as PhotoshopRaw under another name, you’ll get:

0000h 0002h 0002h 0004h 0004h 0006h 0006h … up to the level 32768 (8000h), only even values there. Above that it converts to 8001h 8003h 8005h … etc up to the 65535 (FFFFh), only odd values there.

So at the middle it will calmly snip one level away; the coding there is: …7FF8h 7FFAh 7FFCh 7FFEh 8000h 8001h 8003h 8005h … Due to this discontinuity I’d say that the 15th bit of Photoshop is quite unusable for applications that require accuracy.
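
One plausible reconstruction of that round trip (an assumption on my part; Adobe has not published the exact code), using round-to-nearest scaling in both directions, reproduces the reported sequence exactly:

#include <stdint.h>

static uint16_t to15(uint16_t v16)  /* [0..65535] -> [0..32768] */
{
    return (uint16_t)(((uint32_t)v16 * 32768u + 32767u) / 65535u);
}

static uint16_t to16(uint16_t v15)  /* [0..32768] -> [0..65535] */
{
    return (uint16_t)(((uint32_t)v15 * 65535u + 16384u) / 32768u);
}

/* to16(to15(v)) gives 0, 2, 2, 4, 4, ... up to 8000h, and then
 * 8001h, 8003h, ... above it, matching the values quoted above. */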

Timo Autiokari
TA
Timo Autiokari
Nov 23, 2004
On Tue, 23 Nov 2004 10:50:44 +1100, wrote:

With photography, the intention is to produce a final image that is as similar as possible to what a human eye would’ve seen through the viewfinder,

For your information, whatever real-life scene the human eye is viewing, it happens that *linear* light (photons) will hit the sensors on the retina. The effect of the viewfinder above is just a linear scaling.

Now, in order to reproduce such a viewfinder view of a real-life scene on the CRT "as similar as possible to what a human eye would’ve seen through the viewfinder", the CRT has to emit the very same amount of photons for each "pixel" as the eye receives from the viewfinder. The very same amount, so this is 1:1 linear.

Displays, however, are not capable of outputting very high luminance levels, but it so happens that the eye has the iris so it can adapt to different brightness levels; therefore 1:1 linearity is not needed, just an overall linearity of the transfer function is enough.

Nonlinearity in this path makes the image appear too dark or too bright in some portion of the tonal reproduction range.

which requires a non-linear response.

False.

Timo Autiokari
D
davem
Nov 23, 2004
Timo Autiokari writes:

0000h 0002h 0002h 0004h 0004h 0006h 0006h … up to the level 32768 (8000h), only even values there. Above that it converts to 8001h 8003h 8005h … etc up to the 65535 (FFFFh), only odd values there.

So at the middle it will calmly snip one level away; the coding there is: …7FF8h 7FFAh 7FFCh 7FFEh 8000h 8001h 8003h 8005h … Due to this discontinuity I’d say that the 15th bit of Photoshop is quite unusable for applications that require accuracy.

What do you mean by "accuracy"? The behaviour you are describing is the most accurate conversion from [0..65535] to [0..32768] and back. There is a one-code-value discontinuity where it goes from rounding down to rounding up. This represents a maximum error of half a code value out of 32768, or approximately one part in 65536.

Meanwhile, photographic data comes from a camera with 10 or 12 or maybe 14-bit A/D converter, so the inherent accuracy of the camera data given a totally noise-free image is 2, 8, or 32 times worse. Then real images have noise on top of that.

So this rounding error in Photoshop is totally insignificant compared to other errors in the data incurred earlier.

Besides, I can recall earlier arguments with you where you claimed that 8 bit linear coding was good enough for photographic work because the noise in photographs was sufficient to mask the (rather bad) quantization errors due to 8-bit linear coding. Why is it that you don’t worry about errors of 1 part in 512 when your own recommendations are at stake, but do worry about an error of 1 part in 65536 when criticizing Photoshop?

Dave
D
davem
Nov 23, 2004
Mike Engles writes:

What I have just read chimes with everything I think should happen in digital imaging. It completely contradicts everything that has been written about gamma encoding in these and other forums about the necessity of gamma to maximise the use of available bits.

ftp://ftp.alvyray.com/Acrobat/9_Gamma.pdf

Yet this guy seems to be a pioneer of digital imaging.

First, note that the article was written nearly 10 years ago. Since then, we have the PNG file format that explicitly tells you what non-linear transformation was used in encoding the image. We have colour management systems, with data chunks encoded in a file header telling you even more about the meaning of the data. And I think that even in 1995 TIFF would let you describe the data nonlinearity.

He’s right that a lot of guessing happened in 1995. But things are better now. He also talks a lot about one particular application, Altamira Composer, which apparently assumes PC monitors have a gamma of 1.8 (with the participation of the lookup table in the hardware). To the best of my knowledge, this value has never been common on PCs, only on Macs, so one could describe this as simply a bad assumption for PC software.

Anyway, it’s now perfectly possible to *store* images using a nonlinear encoding, but unpack them to a wider linear representation before doing arithmetic on them, then convert back to the nonlinear representation for storage again. He recommended linear storage because that avoids conversion operations, and avoids having to store the data to describe the nonlinearity, but that’s not necessary to do linear arithmetic.

Unfortunately, the memo does *not* discuss the cost of linear storage. It’s a simple fact that if you store 8 bits per component (i.e. 24 bit colour), 8-bit linear coding does not provide sufficient intensity resolution to code shadow areas without quantization artifacts. 8-bit "gamma corrected" encoding is used because it provides more resolution in the shadows, where it’s needed, and less in the highlights, where the steps are still small enough not to see. To use linear coding without quantization problems, you’d need 12 or better yet 16 bits per component, and most applications do not want to pay the extra price in file size for no visible benefit.

Also, why is image data from spacecraft and astronomy not gamma encoded? It is, after all, digital photography. They must be transmitting/recording in at least 18 bits. That is the bit level that Chris Cox et al say is the minimum necessary for linear images, without gamma encoding.

First, the data from those sources is quantitative data used to make actual measurements of intensity. Producing pretty pictures is somewhat incidental. So it’s worth providing a wide linear data path, and calibrating the whole thing periodically, in order to get numbers that mean something. But consumer cameras are not used as photometers, so the same level of accuracy is not needed.

As for how many linear bits are needed to equal 8 bits gamma encoded: it all depends on the brightness range you want to represent. 16 bits is pretty damned good.

It does seem that what we have today is two types of digital imaging. One is the truly scientific one that uses ALL linear data. The other is a convenient engineering one that delivers the goods simply, by pre compensating the linear data to display on non linear displays.

Or, more accurately, by non-linearly encoding the data in a way that fits human perceptual abilities without wasting bits.

Engineers were always happy with approximations.

Engineers are happy with what does the job at the lowest cost necessary. For photometry, you need more bits and a calibrated chain. For photography you don’t.

Dave
ME
Mike Engles
Nov 23, 2004
Dave Martindale wrote:
Mike Engles writes:

What I have just read chimes with everything I think should happen in digital imaging. It completely contradicts everything that has been written about gamma encoding in these and other forums about the necessity of gamma to maximise the use of available bits.

ftp://ftp.alvyray.com/Acrobat/9_Gamma.pdf

Yet this guy seems to be a pioneer of digital imaging.

First, note that the article was written nearly 10 years ago. Since then, we have the PNG file format that explicitly tells you what non-linear transformation was used in encoding the image. We have colour management systems, with data chunks encoded in a file header telling you even more about the meaning of the data. And I think that even in 1995 TIFF would let you describe the data nonlinearity.
He’s right that a lot of guessing happened in 1995. But things are better now. He also talks a lot about one particular application, Altamira Composer, which apparently assumes PC monitors have a gamma of 1.8 (with the participation of the lookup table in the hardware). To the best of my knowledge, this value has never been common on PCs, only on Macs, so one could describe this as simply a bad assumption for PC software.

Anyway, it’s now perfectly possible to *store* images using a nonlinear encoding, but unpack them to a wider linear representation before doing arithmetic on them, then convert back to the nonlinear representation for storage again. He recommended linear storage because that avoids conversion operations, and avoids having to store the data to describe the nonlinearity, but that’s not necessary to do linear arithmetic.
Unfortunately, the memo does *not* discuss the cost of linear storage. It’s a simple fact that if you store 8 bits per component (i.e. 24 bit colour), 8-bit linear coding does not provide sufficient intensity resolution to code shadow areas without quantization artifacts. 8-bit "gamma corrected" encoding is used because it provides more resolution in the shadows, where it’s needed, and less in the highlights, where the steps are still small enough not to see. To use linear coding without quantization problems, you’d need 12 or better yet 16 bits per component, and most applications do not want to pay the extra price in file size for no visible benefit.

Also, why is image data from spacecraft and astronomy not gamma encoded? It is, after all, digital photography. They must be transmitting/recording in at least 18 bits. That is the bit level that Chris Cox et al say is the minimum necessary for linear images, without gamma encoding.

First, the data from those sources is quantitative data used to make actual measurements of intensity. Producing pretty pictures is somewhat incidental. So it’s worth providing a wide linear data path, and calibrating the whole thing periodically, in order to get numbers that mean something. But consumer cameras are not used as photometers, so the same level of accuracy is not needed.

As for how many linear bits are needed to equal 8 bits gamma encoded: it all depends on the brightness range you want to represent. 16 bits is pretty damned good.

It does seem that what we have today is two types of digital imaging. One is the truly scientific one that uses ALL linear data. The other is a convenient engineering one that delivers the goods simply, by pre compensating the linear data to display on non linear displays.

Or, more accurately, by non-linearly encoding the data in a way that fits human perceptual abilities without wasting bits.

Engineers were always happy with approximations.

Engineers are happy with what does the job at the lowest cost necessary. For photometry, you need more bits and a calibrated chain. For photography you don’t.

Dave

Hello

Do they use a high number of bits in space imaging? I cannot imagine they do, as storage must be limited for high amounts of data. After all, the systems in use on, say, the Cassini mission are over 10 years old in technology terms. I can see why accuracy is essential for photometry, but there are also imaging cameras, which should use gamma. I doubt that these are more than 8 bits per colour.

Mike Engles
E
eawckyegcy
Nov 23, 2004
Mike Engles wrote:

I would have thought that photographs taken by spacecraft are to be viewed.

Here’s the deal:

sensor -> processor -> … -> processor -> display

The ‘sensor’ is (these days) linear, as is most processing. The ‘display’, however, is almost never linear. Thus at some point in the "processor" chain, one _must_ compensate for this nonlinear response or one will have Problems upon viewing. Early television, in the interests of making receivers as simple (cheap) as possible, put the compressors at the broadcaster since the receiver’s CRT would do the expansion. The same model is (or should be) used for photographic image processing as well: you collect linear data, you mangle it through whatever linear processing steps, and then, right at the end, you compress it ("apply gamma") prior to JPEG or whatever.
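
As a sketch of that pipeline shape (my example, assuming a 2.2 display gamma): every processing step operates on linear values, and the compression happens once, as the very last step before 8-bit output.

#include <math.h>
#include <stdint.h>

/* Final stage only: compress linear light [0.0, 1.0] for display. */
uint8_t encode_for_display(double linear)
{
    double compressed = pow(linear, 1.0 / 2.2);  /* "apply gamma"  */
    return (uint8_t)(compressed * 255.0 + 0.5);  /* quantize last  */
}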

If there is no need for a human to view the image on a non-linear display, then this compression step can be removed. There are indeed applications where the image is not intended for human consumption. There are other cases, though, where processing non-linear data as if it were linear ("homomorphic filtering") has its uses. (Indeed, when one feeds gamma-encoded samples to a JPEG encoder, one is engaged in homomorphic processing…)
CC
Chris Cox
Nov 23, 2004
In article ,
wrote:

Kibo informs me that Chris Cox stated that:

In article
wrote:
It is exactly what is happening here. I get 0, 1, 3, 4, 5, 8, etc. No 2, 6, 7, 11, etc, at all, no matter what is done to the data.

And, again, without your original data – I can’t guess what could have gone wrong.

I do know that for anyone else doing a similar experiment (inside and outside Adobe), they get the full 32769 values.

Yeah, that’s what I would’ve expected. I find it impossible to believe that PS could be getting it that badly wrong without it showing up in a dozen different, really obvious ways.
And speaking of supplying original data, where the hell does Adobe keep the PS raw file format doc’s that used to be on the website? – I wanted to try John’s experiment for myself, but all I could find was the PS SDK, for which Adobe wants money.

They’re part of the SDK, and always have been.
(and don’t get me started on why the SDK isn’t free)

Photoshop RAW is just bytes – no header, no documentation.

Chris
CC
Chris Cox
Nov 23, 2004

[[ This message was both posted and mailed: see
the "To," "Cc," and "Newsgroups" headers for details. ]]

In article
wrote:

In message <211120041843480241%>,
Chris Cox wrote:

I do know that for anyone else doing a similar experiment (inside and outside Adobe), they get the full 32769 values.

I just came up with an idea to check if it was the internal representation or the "info" tool itself, and sure enough it was the info tool that was at fault.

What I did was open the "levels" dialog, and set the input max to 2. Then, I moved the info tool over the pixels, and sure enough, there was not a direct correspondence between the "old" and "new" values. The first 0 became a 0; the second 0 became a 52; all numbers that are multiples of 52 were present in the new values (no gaps).
The info tool is toast in 16-bit greyscale mode.

OK – that must have slipped through QE somehow.
(and I think I did most of my tests in RGB mode)

I’ll have someone double check it in the current build and fix it if it’s still broken (well, as soon as I get rid of this @#!^&*^%$ cold).

Chris
D
davem
Nov 23, 2004
Mike Engles writes:

I would have thought that photographs taken by spacecraft are to be viewed.

Not just viewed, measured. For that, you need to calibrate the camera regularly, and preserve the data from it. For that, it’s worth keeping the data in linear form, and using more memory and transmission time.

It strikes me that if gamma encoding is necessary for terrestrial imaging to maximise the use of a limited number of bits, then that would also apply to space photography.

Generally no, because the tradeoffs are different. Some cameras *do* allow you to save data in a linear losslessly compressed form called "raw", precisely when you want more control over what’s done with it. If you have raw camera data, you can process it in 16-bit linear form if you want.

There was a thread in the scanner
group, where the expert consensus was that any imaging,storage and processing in a linear domain invited image degradation and posterisation.

Any processing in *8 bit* linear invites posterization and other degradation. Using *16 bit* per sample linear avoids most of this for ordinary pictorial images. Using *floating point* linear is enough for high dynamic range images. You must distinguish between these different linear forms.

Yet we find that such linear imaging,storage and
processing is common in scientific digital imaging, where one would imagine that extreme accuracy was paramount.

I’ll bet it isn’t 8 bit linear.

Do they use a large number of bits to avoid problems associated with linear storage and processing? The expert consensus was that one would need 18 to 20 bit linear images to match the efficiency of a 8 bit gamma encoded image.

Yes, though the 18 or 20 bit number depends on what you mean by "efficiency", and what intensity range you’re trying to cover.

What is sauce for the goose is sauce for the gander.

Can’t you see that 8-bit linear and 16-bit linear are entirely different sauces?

Timo Autiokari has been saying for ages that scientific imaging was done linearly. He has been abused soundly for his claims.

He’s been abused for recommending 8-bit linear over 8-bit nonlinear.

We have been told
that no one who does serious image processing does it linearly.

Oh, who said that? I do the actual signal processing in linear space (in 32-bit floating point), but often store images in 8-bit nonlinear form. There’s no contradiction here; it just requires a conversion.

So all
the scientists of the world who regularly do their processing in a linear domain are not really serious and that they are merely FADISTS like Timo Autiokari.

Again, they’re not using 8-bit linear for any serious measurement data.

Dave
U
usenet
Nov 24, 2004
Kibo informs me that Timo Autiokari stated
that:

On Tue, 23 Nov 2004 10:50:44 +1100, wrote:

With photography, the intention is to produce a final image that is as similar as possible to what a human eye would’ve seen through the viewfinder,

For your information,

As it happens, I’ve designed imaging systems, so I’m quite familiar with the differences between requirements of a scientific imaging system, vs the requirements of a device to create images intended to approximate what a human eye would see in the same situation. And for /your/ information, a photograph evokes only a very vague approximation of what an eye would’ve seen if it’d been in place of the camera. Even a gamma-corrected (ie; non-linear) image is just another in a long string of compromises that makes it a little easier to trick the human eye into perceiving the printed/displayed image as ‘real’.

whatever real-life scene the human eye is viewing, it happens that *linear* light (photons) will hit the sensors on the retina.

You’re ignoring the fact that most scientific imaging uses false-colouring *precisely because* the ‘true’ image would either be invisible, too dark, or too bright to be processed by a naked human eye. If the human eye was capable of perceiving, (for example), Doppler-shifted light from a star on the other side of the galaxy, we wouldn’t need space-telescopes in the first place, would we? – We could just look out the window instead. And the human eye can’t correctly image even fairly close stars – we perceive most stars as being white, (even though they are strongly coloured), because their light is too dim for our colour vision to pick it up. Fortunately, scientific imaging systems can show us their *real* colour. Closer to home, scientific instruments create images via things like soft X-rays, or infrared light – situations where the capabilities & limitations of the human eye are completely irrelevant. The particular scaling system, (whether it’s linear, log, exponential, bell-shaped or whatever) that’s optimal for scientific imaging has nothing whatever to do with how the eye perceives light, & everything to do with the physics of whatever it is that the device is intended to measure.

Displays, however, are not capable of outputting very high luminance levels, but it so happens that the eye has the iris so it can adapt to different brightness levels,

The eye does a hell of a lot more to deal with large contrast ranges than just adjust the iris. For example; the retina automatically performs an astonishingly-similar analog of darkroom or PS contrast masking to ‘correct’ for localised highlights in the visual field that would otherwise ‘blowout’, just as photographers do to ‘correct’ photos of sunsets or other scenes with contrast ranges that are too big to print or display.

therefore 1:1 linearity is not needed, just an overall linearity of the transfer function is enough. Nonlinearity in this path makes the image appear too dark or too bright in some portion of the tonal reproduction range.

For starters, the light output of a display isn’t even close to being linear, nor should it be. If you actually look at the transfer graph for a calibrated monitor, you’ll find that the transfer curve is exponential. It’s no harder to calibrate a monitor to give a completely linear input-voltage to light-output relationship, rather than a 1.8 or 2.2 gamma curve, then run an extremely accurate linear greyscale gradient across it, but it would result in precisely the *perceived* non-linearity you’ve just mentioned. We gamma-correct monitors for the *exact purpose* of eliminating that non-linear perception.

which requires a non-linear response.

False.

No, I’m afraid not. You would do well to read up on how the human eye works, as well as about scientific imaging techniques, because the stuff you’re saying is just plain wrong.


W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est —^—-^————————————————— ————
U
usenet
Nov 24, 2004
Kibo informs me that Timo Autiokari stated
that:

On Mon, 22 Nov 2004 21:32:21 GMT, wrote:

I’m not so sure that ACR works in a totally linear domain.

It definitely doesn’t.

Images
exposed with bracketing, and compensated to be the same with the exposure slider, may have equal mid-tones, but the shadows and highlights will display that a different gamma is used. If you drag the ACR exposure slider to the left, after it runs out of "hidden highlights", it stretches the highlights so that 4095 in the RAW data stays anchored at 255 in the output, and never gets darker. That is not linear exposure compensation.

Nor is it an exact analog of adjusting by F-stops (ie; non-linear), which is what I’d like it to be. You should try C1 some time; its exposure compensation control is *way* more like adjusting the exposure compensation dial on a camera. Going from that to ACR weirds me out every time.

That editing operation is still applied over linear image data, even if the operation itself is not linear. Exposure adjustment in fact is a linear operation, multiplication by a factor,

Incorrect. It’s scaled in F-stops, which are exponential, not linear. You’ll find the mathematical details in any good textbook on photography.

So at the middle it will calmly snip one level away, the coding there is: …7FF8h 7FFAh 7FFCh 7FFEh 8000h 8001h 8003h 8005h … Due to this discontinuity ‘d say that the 15th bit of Photoshop is quite un-usable for applications that require accuracy.

*sigh*

You’re talking about the LSB of a 15 bit value sometimes skipping a value, which, (assuming that you’re correct about it), is an inaccuracy of around 0.003%. To put this hypothetical error into perspective, it’d have to be at least *four times greater* to alter a 12 bit RAW image by even a single step in value – a change that would not only be completely invisible to the human eye, but would be completely swamped by the much, much greater errors contributed by the sensor noise in the camera, *plus* the ADC error in the camera, *plus* the colour-space rounding errors in the computer, *plus* the DAC inaccuracy in your video card, *plus* the video amp inaccuracy in your monitor. The ‘error’ you’re talking about is about as significant as an ICBM missing the targeted position by a couple of feet.


W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est —^—-^————————————————— ————
TA
Timo Autiokari
Nov 24, 2004
wrote:

Timo Autiokari wrote:
Exposure adjustment in fact is a linear
operation, multiplication by a factor,

Incorrect. It’s scaled in F-stops, which are exponential, not linear.

Exposure adjustment by +1 f-stop is the same as multiplication of the data values by 2. Exposure adjustment by +2 f-stops is the same as multiplication of the data values by 4. So it is a linear operation. Just the gradation of the aperture and the time dial is logarithmic; they both affect the data linearly.
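
Timo's arithmetic as a one-line sketch (my illustration): the dial is marked logarithmically, but the operation applied to the data is plain multiplication.

#include <math.h>

/* +1 stop -> x2, +2 stops -> x4, -1 stop -> x0.5 */
double expose(double linear_value, double stops)
{
    return linear_value * pow(2.0, stops);
}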

You’ll find the mathematical details in any good textbook on photography.

Then study them.

Timo Autiokari
TA
Timo Autiokari
Nov 24, 2004
Mike Engles wrote:

ftp://ftp.alvyray.com/Acrobat/9_Gamma.pdf

Yes, very good information, from a rather heavyweight professional; his bio, btw, is at http://alvyray.com/Bio/default.htm. Even if it was written in 1995 it is perfectly valid today (only that in case the image has an embedded ICC profile we need not *guess* the transfer function; ICC color-management was not very popular at that time).

They must be transmitting/recording in at least 18 bit. That is the bit level that Chris Cox et al say is the minimum necessary for linear images, without gamma encoding.

No more bits are necessary for digital imaging than what the sensor of the acquiring device is able to provide (according to its S/N ratio); in other words, there is no need to store pure noise. Say you buy 12 eggs; you do not need a trailer truck to bring them home. And these days the so-called pro scanners and pro digital cameras cannot reach even 10-bit. The real pro devices (like the EverSmart Supreme II scanner, which has a cooled CCD) can do nearly 12-bit.

And the so-called banding issue in 8-bit/c linear is enormously exaggerated; it is in fact quite an academic case. Here is an example from a thread on my forums:

16-bit/c edit:
http://www.aim-dtp.net/aim/temp/sad_CRW_1708-edit-16bit.jpg
8-bit/c edit:
http://www.aim-dtp.net/aim/temp/sad_CRW_1708-edit-8bit.jpg

The original was a linear-converted RAW from a D60. You can read the details in the thread 'Linear workflow and 8bit/channel' if you like. A rather demanding picture in regard to the horrible banding issue; can you see *any* problems in the 8-bit/c edit?

It does seem that what we have today is two types of digital imaging. One is the truly scientific one that uses ALL linear data.

Yes, the scientific imaging is done in linear.

The other is a convenient engineering one that delivers the goods simply, by pre compensating the linear data to display on non linear displays.

This is the _easy solution_ for the ordinary consumers. It is *not* an engineering issue but a marketing issue. The industry simply needs a way to sell digital imaging gadgets to mass-market consumers without stressing them with workflow issues.

And the third type of digital imaging is the high-end professional imaging that you see in the better magazines etc. It is still done in the linear domain, as it has been for the past 30 years, for the very reasons Dr. Alvy Ray Smith lists in his above-mentioned memo, like "all computer graphics computations assume linear images" (this includes Photoshop CS also). When the computations are applied over gamma-compensated image data there will be the Gamma Induced Errors.

Timo Autiokari
CC
Chris Cox
Nov 25, 2004
Please do not feed the troll.

In article , Timo Autiokari
wrote nothing useful:
MR
Mike Russell
Nov 25, 2004
Chris Cox wrote:
Please do not feed the troll.

In article , Timo
Autiokari wrote nothing useful:

I respectfully disagree. Timo’s contributions to the group are certainly of value.

I think he deserves further credit for not responding in kind to some of those who criticize him personally.

Mike Russell
www.curvemeister.com
www.geigy.2y.net
U
usenet
Nov 25, 2004
Kibo informs me that Chris Cox stated that:

They’re part of the SDK, and always have been.
(and don’t get me started on why the SDK isn’t free)

*mutter* I can guess.
It’d obviously be inappropriate for you to discuss that issue here, but if you feel the urge to vent about it, I’d be most interested in discussing it via email. (This email address will reach me, BTW)

Photoshop RAW is just bytes – no header, no documentation.

Ah. In that case, I should be able to figure the details out for myself. Much obliged for the hint, Chris.

And if you get a chance to pass on on a message to the genius who decided to put a price-tag on the SDK (despite the fact that Adobe has the sense to provide the PostScript, etc doco’s for free download), please inform him/her that that policy has cost them a potential plugin developer, because I refuse to pay a company money for documentation that I would need to enhance *their* product. Even *Microsoft* have finally realised that encouraging 3rd party developers is good business practice.


W
. | ,. w , "Some people are alive only because
\|/ \|/ it is illegal to kill them." Perna condita delenda est —^—-^————————————————— ————
C
coxtale
Nov 25, 2004
The pot is calling the kettle black again. Chris Cox’ behavior was well documented in this link:
http://www.ledet.com/margulis/How_CM_Failed.pdf

=============================
Is Color Management Rocket Science?

In January, I was e-mailed some tough technical questions from a gentleman who was having trouble making the upgrade to PS 5. I wondered why he had not asked them of Adobe. It turned out that he had. Chris Cox, an Adobe programmer, had responded on-line as follows: “Start by going to the Adobe web site and reading the PS 5 technical guides (oh, and get the 5.0.2 update).” The correspondent replied, “I’ve read them several times and still have the many questions described (and have 5.0.2).” To that, Mr. Cox’s answer was, “Well, since they clearly answer the questions you posed, I have to wonder what’s wrong.”

The answers are not there. And the man Mr. Cox had blown off not only was a Photoshop instructor with a graduate degree in mathematics but, get this, a rocket scientist, claiming to have played an important design role in the launch interface for the Apollo program. The rocket scientist, having stated these credentials, shot back, “I don’t know what your problem is. Your first response to my questions showed you didn’t read my message, which clearly indicates that I had read the documents you cited. Your second reply, in addition to being totally insulting, reinforces that you didn’t read the original message as there are many, many questions not addressed by Adobe. Moreover anyone who claims that the Adobe documents answer my questions clearly is either an expert who doesn’t understand the difficulties others have or someone who doesn’t understand the difficulties of the subject…These are not straightforward issues…So, Chris, be careful with your snivelling, ignorant remarks.”

At that point, the two wisely took their conversation off-line, but not before other readers chimed in. One wrote. “I agree with [the rocket scientist]. As a Photoshop heavy user since version 1.0, I have been struggling and struggling with ICC color profiles for a while now and I just don’t get it! I’VE BEEN TO ADOBE ONLINE….I’m really pissed at Adobe. I have ruined—by embedding profiles—a bunch of scans that I don’t know how I’m going to fix…I’m sure the Adobe engineers had their heads in the right place, but I’ll be dipped in [doo-doo] if I can get my scanners, computers and printers to all work together. Sure I can go back to 4.0, but c’mon—I don’t think that’s the intent of Adobe—to send people reeling backwards. Most people will not spend the 8-16 hours trying to write color profiles and run tons of expensive coated papers to master this crap. Let’s try to work together to share experiences and not just patly respond with trite comments.”

You don’t need to be a rocket scientist to agree with that. —DM

Mike Russell wrote:
Chris Cox wrote:
Please do not feed the troll.

In article , Timo
Autiokari wrote nothing useful:

I respectfully disagree. Timo’s contributions to the group are certainly of value.

I think he deserves further credit for not responding in kind to some of those who criticize him personally.

Mike Russell
www.curvemeister.com
www.geigy.2y.net
D
drjohnruss
Nov 25, 2004
The pot is calling the kettle black again.

Let’s not go through this long, nasty and utterly pointless dialog again. There are a couple of basic truths here:

1. Chris Cox, at Adobe, is sometimes less than politic in his postings. He has a short temper and gets annoyed easily at not only the stupidity of some questions he is asked, but also sometimes the wordings of the questions which he interprets (rightly or wrongly) as the pigheadedness of those who asked them. That having been said, he knows what he is doing and his answers, if you can see through the attitude they are sometimes presented with, are generally correct. The code he has written for Adobe adheres very closely to the exact and correct models of color science, although Adobe is not generally very open about presenting the underlying models and especially not the highly optimized code.

2. Timo, on the other hand, has mastered the authoritative wording that makes his answers, postings and web sites seem to be rational, useful and even correct. Unfortunately, this masks the fact that they are almost invariably utterly and completely wrong. He begins from false assumptions, cheerfully ignores or misquotes established facts and a very large body of science, and has his own dreadfully wrong view of the universe (which seems to include a "conspiracy theory" involving Adobe).

The conflict of personalities that results when Timo starts posting stuff that appears reasonable but is actually nonsense almost always causes Chris to flame and then a whole lot of other people to chime in. This has happened dozens of times in the past, looks amusing to the casual observer, clutters the postings, confuses those ignorant of the past or the truth, and never does anything at all positive.

Let’s not go there.
T
toby
Nov 25, 2004
Chris Cox …

(and don’t get me started on why the SDK isn’t free)

Here are my theories:
* so that Adobe can weed out people who want to use the SDK to write plugin hosts (not to mention other interoperative stuff, like parsing the PSD format)
* so there is a gate already in place for when the new plugin API is launched

The timing of the retraction was curious, in that it neatly divided developers from the information they needed to take plugins to OS X (unless they wanted to pay the money, somersault through the flaming hoops, wait for their application to be vetted, yadda yadda…)

Shortly before the diktat came down to plug the leak, you can bet there was muttering in the halls, "Why did we -ever- give it away".

–T

Photoshop RAW is just bytes – no header, no documentation.
Chris
D
digiboy
Nov 25, 2004
The surprise to me about color management is that it's a surprise to everyone else.

I’ve always thought that the task of converting different gamuts, white points, phosphors etc. is too much for the average/typical user.

How do you color manage when you have perceptive colors like RGB, color mixed output colors like CMYK and fixed-by-dye colors like Pantones, all on the same page?

Can’t be done! How do you manage out-of-gamut colors? Shrink the gamut? And if the image moves to a device with a larger gamut, what happens then?

Do you shrink a gamut by chromaticity ie shrink towards the white point, or do you do it so the perceptual colors are the same?

Just my 2p worth. I have color management turned off

DB
ME
Mike Engles
Nov 25, 2004
Timo Autiokari wrote:
Mike Engles wrote:

ftp://ftp.alvyray.com/Acrobat/9_Gamma.pdf

Yes, very good information, from a rather heavyweight professional; his bio, btw, is at http://alvyray.com/Bio/default.htm. Even if it was written in 1995 it is perfectly valid today (only that in case the image has an embedded ICC profile we need not *guess* the transfer function; ICC color-management was not very popular at that time).
They must be transmitting/recording in at least 18 bits. That is the bit level that Chris Cox et al say is the minimum necessary for linear images, without gamma encoding.

No more bits are necessary for digital imaging than what the sensor of the acquiring device is able to provide (according to its S/N ratio); in other words, there is no need to store pure noise. Say you buy 12 eggs; you do not need a trailer truck to bring them home. And these days the so-called pro scanners and pro digital cameras cannot reach even 10 bits. The real pro devices (like the EverSmart Supreme II scanner, which has a cooled CCD) can do nearly 12 bits.

And the so-called banding issue in 8-bit/c linear is enormously exaggerated; it is in fact quite an academic case. Here is an example from a thread on my forums:

16-bit/c edit:
http://www.aim-dtp.net/aim/temp/sad_CRW_1708-edit-16bit.jpg
8-bit/c edit:
http://www.aim-dtp.net/aim/temp/sad_CRW_1708-edit-8bit.jpg
The original was a linear-converted RAW from a D60. You can read the details in the thread 'Linear workflow and 8bit/channel' if you'd like to go there. A rather demanding picture with regard to the horrible banding issue; can you see *any* problems in the 8-bit/c edit?
It does seem that what we have today is two types of digital imaging. One is the truly scientific one that uses ALL linear data.

Yes, the scientific imaging is done in linear.

The other is a convenient engineering one that delivers the goods simply, by pre-compensating the linear data for display on non-linear displays.

This is the _easy solution_ for ordinary consumers. It is *not* an engineering issue but a marketing issue. The industry simply needs a way to sell digital imaging gadgets to mass-market consumers without stressing the consumers with workflow issues.
And the third type of digital imaging is the high-end professional imaging that you see in the better magazines etc. It is still done in the linear domain, as it has been for the past 30 years, for the very reasons Dr. Alvy Ray Smith lists in his above-mentioned memo, such as "all computer graphics computations assume linear images" (this includes Photoshop CS also). When the computations are applied to gamma-compensated image data there will be Gamma Induced Errors.
Timo Autiokari

Hello

It would be really interesting to know if he still supports his 1995 writings. He seems to be a heavyweight in computer graphics.

Mike Engles
ME
Mike Engles
Nov 25, 2004
Mike Engles wrote:
[snip the exchange with Timo Autiokari, quoted in full above]
It would be really interesting to know if he still supports his 1995 writings. He seems to be a heavyweight in computer graphics.
Mike Engles

Hello

He did in 1998 and all the articles are on his site.
He certainly is a proponent of linear processing.

ftp://ftp.alvyray.com/Acrobat/17_Nonln.pdf

Mike Engles
CC
Chris Cox
Nov 26, 2004
In article <Kfgpd.22438$>, Mike Russell wrote:

Chris Cox wrote:
Please do not feed the troll.

In article , Timo Autiokari wrote nothing useful:

I respectfully disagree. Timo’s contributions to the group are certainly of value.

You mean his misinformation campaign?
Get real.

Chris
CC
Chris Cox
Nov 26, 2004
In article wrote:

The pot is calling the kettle black again. Chris Cox’ behavior was well documented in this link:
http://www.ledet.com/margulis/How_CM_Failed.pdf

Which is also documented as being pure BS – the part about me was taken from an incomplete online resume, and he jumped to a conclusion without bothering to check his facts.

Dan lost a lot of his remaining credibility with that cheap shot.

Chris
U
usenet
Nov 26, 2004
Kibo informs me that Timo Autiokari stated that:

wrote:

Timo Autiokari wrote:
Exposure adjustment in fact is a linear
operation, multiplication by a factor,

Incorrect. It’s scaled in F-stops, which are exponential, not linear.

Exposure adjustment by +1 f-stop is the same as multiplication of the data values by 2. Exposure adjustment by +2 f-stops is the same as multiplication of the data values by 4. So it is a linear operation. Only the graduation of the aperture ring and the time dial is logarithmic; both affect the data linearly.

You failed maths at school, didn’t you?

You’ll find the mathematical details in any good textbook on photography.

Then study them.

I have. You clearly haven’t.
I won’t waste my time trying to educate you, as you clearly prefer to be ignorant.


U
usenet
Nov 26, 2004
Kibo informs me that Chris Cox stated that:

In article <Kfgpd.22438$>, Mike Russell wrote:

Chris Cox wrote:
Please do not feed the troll.

In article , Timo Autiokari wrote nothing useful:

I respectfully disagree. Timo’s contributions to the group are certainly of value.

You mean his misinformation campaign?
Get real.

I have no idea whether he has any skill as a photographer, but the comments I’ve seen him make on theoretical concepts are totally clueless. He’s not quite as deluded as The Preddiot, but he’s in the same weight-class.


L
look
Nov 26, 2004
"Linear" is being used in two different senses here. The f-stops are not on a linear scale, but changing the f-stop is a linear operation (multiplication).
TA
timo.autiokari
Nov 26, 2004
wrote

You failed maths at school, didn’t you?

I know this can be confusing, but it is really very easy to understand, and quite basic knowledge too. I'll try to make this as easy as possible:

Say you take a picture using a 2 second exposure time. Each pixel will collect some number of photons depending on the luminance of the corresponding scene area. Now you take another picture with a -1 stop exposure adjustment, e.g. you change the exposure time to 1 second. In this second picture you have collected just half as many photons in each pixel compared to the first picture. Thus the effect is the same as if you had multiplied the captured data of the first picture by 0.5, pixel by pixel.
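In code form, this is a minimal sketch of that relationship (Python with NumPy; the array values are made up for illustration):

    import numpy as np

    # Hypothetical linear sensor data (photon counts per pixel):
    linear = np.array([100.0, 2500.0, 40000.0])

    def adjust_exposure(data, stops):
        # The f-stop scale is logarithmic, but on linear data the
        # operation itself is a plain multiplication by 2**stops.
        return data * (2.0 ** stops)

    print(adjust_exposure(linear, -1))  # half the photons: [50., 1250., 20000.]
    print(adjust_exposure(linear, 2))   # four times: [400., 10000., 160000.]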

Timo Autiokari
D
davem
Nov 26, 2004
writes:

Timo Autiokari wrote:
Exposure adjustment in fact is a linear
operation, multiplication by a factor,

Incorrect. It’s scaled in F-stops, which are exponential, not linear.

Exposure adjustment by +1 f-stop is the same as multiplication of the data values by 2. Exposure adjustment by +2 f-stops is the same as multiplication of the data values by 4. So it is a linear operation. Only the graduation of the aperture ring and the time dial is logarithmic; both affect the data linearly.

You failed maths at school, didn’t you?

Actually, Timo’s right here.

"Linear" is a term that appears all over the place, with somewhat different meanings. In high school math, linear probably refers to a first-degree polynomial.

In signal processing, a linear system is one where the result of applying the system to the sum of two signals is the same as the sum of applying the system to the two signals individually (superposition). (These are not the same; the function y = 2x + 3 is a linear polynomial, but it’s *not* linear in the signal processing sense).
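A quick check of the superposition property makes the distinction concrete (illustrative Python; the function names are invented):

    def poly(x):
        return 2 * x + 3   # "linear" polynomial in the high-school sense

    def scale(x):
        return 2 * x       # linear operator in the signal-processing sense

    a, b = 5, 7
    print(poly(a + b) == poly(a) + poly(b))     # False: 27 != 30
    print(scale(a + b) == scale(a) + scale(b))  # True:  24 == 24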

And in television, the electronics are linear even though the signal being processed is non-linearly related to scene brightness.

In this example, exposure adjustment is a linear operator in the signal processing sense. The amount of adjustment can be specified directly (2X, 0.5X), or as an equivalent number of stops. F-stops are a logarithmic scale used to specify (in this case) a linear transformation scale factor.

You’ll find the mathematical details in
any good textbook on photography.

Then study them.

I have. You clearly haven’t.
I won’t waste my time trying to educate you, as you clearly prefer to be ignorant.

You’re both arguing past each other, unable to see that each is applying "linear" to something different.

Dave
D
davem
Nov 26, 2004
(Dave Martindale) writes:

In this example, exposure adjustment is a linear operator in the signal processing sense. The amount of adjustment can be specified directly (2X, 0.5X), or as an equivalent number of stops. F-stops are a logarithmic scale used to specify (in this case) a linear transformation scale factor.

And, by the way, the fact that the operation is a linear function does *not* imply that it is better done on linearly encoded pixels instead of gamma-corrected ones. This is yet another use of linear.

If you want the effect of 1 stop exposure adjustment, you want to multiply the intensity by 2 (or 0.5, depending on the direction of the adjustment). If you have linearly encoded pixels, you just multiply every pixel value by 2. If you have gamma-2.2 encoded pixels, you multiply by 2^(1/2.2) ≈ 1.37 instead. Either one doubles intensity.

So, you can apply a linear transformation to nonlinearly-encoded pixels when the amount is specified by a nonlinear scale (f stops). That’s three different uses of "linear" in the same sentence.
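That arithmetic, sketched in Python under a pure 2.2 power-law assumption (real sRGB adds a linear segment near black, ignored here):

    GAMMA = 2.2

    intensity = 0.18                       # hypothetical linear intensity
    encoded = intensity ** (1.0 / GAMMA)   # its gamma-encoded value

    # Doubling intensity on linear pixels is *2; on gamma-encoded pixels
    # the same doubling is *2**(1/GAMMA), about 1.37:
    doubled_encoded = (2.0 ** (1.0 / GAMMA)) * encoded

    # Decoding confirms both describe the same doubled intensity:
    assert abs(doubled_encoded ** GAMMA - 2 * intensity) < 1e-12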

Dave
D
davem
Nov 26, 2004
writes:

I have no idea whether he has any skill as a photographer, but the comments I’ve seen him make on theoretical concepts are totally clueless. He’s not quite as deluded as The Preddiot, but he’s in the same weight-class.

I’d have to disagree with that. I had a number of long drawn-out arguments with Timo in this newsgroup several years ago, but I have to say that The Preddiot is worse.

Timo is pretty firmly convinced of his own rightness, and tends to disparage the personal integrity of anyone who disagrees with him. But if you argue with him in minute detail about something he says that is clearly wrong, he will eventually see that. He’ll never say "I was wrong", but he will abandon wrong positions eventually if you prove he’s wrong. It just isn’t worth the work to do so.

Timo does care about telling people the truth; it’s just that he’s sometimes wrong and it’s incredibly hard to demonstrate that to his satisfaction. He’s also put a lot of time and effort into thinking about image processing and setting up his pages; it’s too bad they are sometimes misleading.

But "George Preddy" just makes ridiculous statements, with no reasonable argument to back them up. And he won’t support what he says – he ignores questions from other people. He’s effectively an output-only device with no regard for the truth at all. Nor does he seem capable of much original argument; he mostly just parrots Foveon marketing hype.

Dave
D
davem
Nov 26, 2004
Mike Engles writes:

He did in 1998 and all the articles are on his site.
He certainly is a proponent of linear processing.
ftp://ftp.alvyray.com/Acrobat/17_Nonln.pdf

The thing he doesn’t really address is that you don’t need to *store* data in a linear encoding in order to *process* it in a linear space.

Choosing nonlinear storage but linear processing has its costs (the conversion steps), but choosing linear storage also has its costs (more bits on disk or in memory for the same intensity range and resolution).
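The store-nonlinear/process-linear pattern Dave describes might look like the following sketch, again assuming a pure 2.2 power law rather than any particular file format's curve:

    import numpy as np

    GAMMA = 2.2

    def decode(stored):   # 8-bit nonlinear -> float linear
        return (stored / 255.0) ** GAMMA

    def encode(linear):   # float linear -> 8-bit nonlinear
        v = np.clip(linear, 0.0, 1.0) ** (1.0 / GAMMA)
        return np.round(255.0 * v).astype(np.uint8)

    stored = np.array([10, 100, 200], dtype=np.uint8)  # hypothetical pixels
    linear = decode(stored)     # unpack to a wide linear representation
    linear = 0.5 * linear       # do the arithmetic in linear light
    stored = encode(linear)     # repack to the nonlinear encoding for storage

The conversion steps are the cost; the payoff is that the multiply really does halve intensity.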

Dave
T
toby
Nov 26, 2004
Chris Brown …
In article ,
Toby Thain wrote:
Chris Cox wrote in message
news:<201120041838050385%>…
The color mode doesn’t matter – it’s still 16 bit data (0..32768).

It’s deceptive to characterise that range of values as "16 bit" – it has only 15 bits of dynamic range.

You mean precision. Dynamic range is independent of the number of bits used to represent an image.

Right. Sloppy of me.
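As an aside, the obvious scaling between full-range 16-bit values and a 0..32768 representation would look like the sketch below; the exact rounding rule any given application uses is an assumption here:

    def to_0_32768(v):
        # Map 0..65535 onto 0..32768; the target range is from this
        # thread, the round-to-nearest rule is an assumption.
        return round(v * 32768 / 65535)

    print(to_0_32768(0), to_0_32768(65535))  # 0 32768
    print(to_0_32768(32768))                 # 16384: midpoint maps to half scale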
U
username
Nov 27, 2004
Mike Engles wrote:

Dave Martindale wrote:

Mike Engles writes:

What I have just read chimes with everything I think should happen in digital imaging. It completely contradicts everything that has been written in these and other forums about gamma encoding and the necessity of gamma to maximise the use of available bits.

ftp://ftp.alvyray.com/Acrobat/9_Gamma.pdf

Yet this guy seems to be a pioneer of digital imaging.

First, note that the article was written nearly 10 years ago. Since then, we have the PNG file format that explicitly tells you what non-linear transformation was used in encoding the image. We have colour management systems, with data chunks encoded in a file header telling you even more about the meaning of the data. And I think that even in 1995 TIFF would let you describe the data nonlinearity.
He’s right that a lot of guessing happened in 1995. But things are better now. He also talks a lot about one particular application, Altamira Composer, which apparently assumes PC monitors have a gamma of
1.8 (with the participation of the lookup table in the hardware). To
the best of my knowledge, this value has never been common on PCs, only on Macs, so one could describe this as simply a bad assumption for PC software.

Anyway, it’s now perfectly possible to *store* images using a nonlinear encoding, but unpack them to a wider linear representation before doing arithmetic on them, then convert back to the nonlinear representation for storage again. He recommended linear storage because that avoids conversion operations, and avoids having to store the data to describe the nonlinearity, but that’s not necessary to do linear arithmetic.
Unfortunately, the memo does *not* discuss the cost of linear storage. It’s a simple fact that if you store 8 bits per component (i.e. 24 bit colour), 8-bit linear coding does not provide sufficient intensity resolution to code shadow areas without quantization artifacts. 8-bit "gamma corrected" encoding is used because it provides more resolution in the shadows, where it’s needed, and less in the highlights, where the steps are still small enough not to see. To use linear coding without quantization problems, you’d need 12 or better yet 16 bits per component, and most applications do not want to pay the extra price in file size for no visible benefit.
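The shadow-resolution argument can be put in numbers; a sketch assuming a pure 2.2 power law for the "gamma corrected" encoding:

    GAMMA = 2.2

    # Smallest nonzero intensity each 8-bit encoding can represent,
    # as a fraction of full scale (code value 1 out of 255):
    linear_step = 1 / 255            # linear coding:  ~3.9e-3
    gamma_step = (1 / 255) ** GAMMA  # gamma coding:   ~5.1e-6

    # Near black, the linear coding's first step is coarser by a factor
    # of 255**1.2, roughly 770x -- which is where banding shows up.
    print(linear_step / gamma_step)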

Also, why is image data from spacecraft and astronomy not gamma encoded? It is, after all, digital photography. They must be transmitting/recording in at least 18 bits. That is the bit level that Chris Cox et al say is the minimum necessary for linear images, without gamma encoding.

First, the data from those sources is quantitative data used to make actual measurements of intensity. Producing pretty pictures is somewhat incidental. So it’s worth providing a wide linear data path, and calibrating the whole thing periodically, in order to get numbers that mean something. But consumer cameras are not used as photometers, so the same level of accuracy is not needed.

As for how many linear bits are needed to equal 8 bits gamma encoded: it all depends on the brightness range you want to represent. 16 bits is pretty damned good.

It does seem that what we have today is two types of digital imaging. One is the truly scientific one that uses ALL linear data. The other is a convenient engineering one that delivers the goods simply, by pre compensating the linear data to display on non linear displays.

Or, more accurately, by non-linearly encoding the data in a way that fits human perceptual abilities without wasting bits.

Engineers have always been happy with approximations.

Engineers are happy with what does the job at the lowest cost necessary. For photometry, you need more bits and a calibrated chain. For photography you don’t.

Dave

Hello

Do they use a high number of bits in space imaging? I cannot imagine they do, as storage must be limited for such high volumes of data. After all, the systems in use on, say, the Cassini mission are over 10 years old in technology terms. I can see why accuracy is essential for photometry, but there are also imaging cameras, which should use gamma. I doubt that these are more than 8 bits per colour.

Mike Engles

As a scientist on the Cassini mission, as well as Mars Global Surveyor and several past missions, and as a science team member on multiple planetary and terrestrial missions that defined the science instruments, I think I can shed some light here. The main thing to realize is that spacecraft data are first and foremost about making scientific measurements. Viewing is secondary.

For all the scientific instruments whose design I've been involved with (probably a couple of dozen), the output is digitized directly from the detectors, whatever form that may be. Since modern electronic detectors are inherently linear, the sensor output is digitized linearly. No instrument that I have been involved with has had any transformation applied, and all have had lossless compression. In fact, early on (say the 1980s into the early 1990s) even mentioning lossy compression in a proposal was almost certain death to the whole instrument.

Today, on the Cassini spacecraft, my instrument (VIMS: http://wwwvims.lpl.arizona.edu ) does only lossless or no compression at 12 bits/pixel (note: each pixel has 352 colors, not simply RGB). The camera (ISS) has 12-bit encoding, and it can do lossy or lossless compression, but I'm pretty sure (not 100%) that it is linear only. It has a 1024-pixel-square array with 12-micron pixels. I believe I have heard the ISS scientists say they can do 8-bit encoding as a lossy data compression (couldn't find that on the web site), but they do not like to use it.

Regardless, once you have good scientific data (which ultimately must be calibrated to a known scale, like linear in photons per second), you can transform, and degrade if necessary, the data for viewing.

Roger
S
santa
Nov 27, 2004
"Roger N. Clark (change username to rnclark)" wrote:

[snip description of the Cassini instruments]
Roger

Amazon does not carry this camera. Where is it available? Hate to disappoint those who have it on their lists.
U
username
Nov 27, 2004
wrote:

"Roger N. Clark (change username to rnclark)" wrote:

[snip description of the Cassini instruments, quoted above]
Roger

Amazon does not carry this camera. Where is it available? Hate to disappoint those who have it on their lists.

For ~$35,000,000 you can have one built especially for you. For a few million extra, you can probably customize it too. 😉

Roger
ME
Mike Engles
Nov 27, 2004
Chris Cox wrote:
In article <9D6od.24571$>, Mike
Russell wrote:

Mike Engles wrote:

[re linear encoding of specialized pixel data values]
Is the same true for imaging from spacecraft, interplanetary or otherwise or is gamma encoding done before transmission?

Yes. Gamma encoding compresses some data values, and there is no reason to do this to raw data from a spacecraft.

Here’s an article that may interest you, by Alvy Ray Smith, on the distinction of work and display color spaces.
http://alvyray.com/Memos/MemosMicrosoft.htm#NonlinearAlphaQuestion

Actually, Alvy has a number of mistakes in that paper.
I’m still not sure if he understands gamma encoding…

Chris

Hello

Yes, there does seem to be some confusion about PC gamma, but he is absolutely clear about the need for linear processing. As for not understanding gamma encoding, that is not clear from the article. He has been around a long time, and does know a thing or two. If gamma encoding were that important he would have mentioned it. He is totally clear about not applying any nonlinearity to an image, just to the display device. I assume from this he means the same for storage, but I don't know.

Suffice it to say that he and his collaborator are MAJOR digital graphics imaging authorities who, on the face of it, support Timo Autiokari's lonely stance. His last words in the gamma article are telling.

Mike Engles
MR
Mike Russell
Nov 27, 2004
Mike Engles wrote:
[re alvy gamma article]

Hello

Yes, there does seem to be some confusion about PC gamma, but he [Alvy] is absolutely clear about the need for linear processing. As for not understanding gamma encoding, that is not clear from the article. He has been around a long time, and does know a thing or two. If gamma encoding were that important he would have mentioned it. He is totally clear about not applying any nonlinearity to an image, just to the display device. I assume from this he means the same for storage, but I don't know.

Suffice it to say that he and his collaborator are MAJOR digital graphics imaging authorities who, on the face of it, support Timo Autiokari's lonely stance. His last words in the gamma article are telling.
Mike Engles

Right on. Alvy has another article, written in 1995, that goes into further detail re gamma issues:
http://www.cs.princeton.edu/courses/archive/fall00/cs426/papers/smith95d.pdf . In this 1995 article, Alvy states:
"Nonlinearity should never be stored in an image. Or, if it is, then this nonlinearity must be noted in the storage format in such a way that it is known how to remove it to retrieve linear data."

This comment, as fundamental as it is to graphics algorithms, plus others relating to the concept of working versus display space, came years before Photoshop 5 commercially introduced the concept of working spaces, as part of color management. Not bad for someone who "doesn’t understand gamma".

As for Timo’s lonely stance – he appears to be in good company now, having been debunked together with Alvy Ray Smith and Dan Margulis, all in the space of a few days. 🙂

Mike Russell
www.curvemeister.com
www.geigy.2y.net
ME
Mike Engles
Nov 27, 2004
Mike Russell wrote:
Mike Engles wrote:
[re alvy gamma article]

[snip]

Right on. Alvy has another article, written in 1995, that goes into further detail re gamma issues:
http://www.cs.princeton.edu/courses/archive/fall00/cs426/papers/smith95d.pdf . In this 1995 article, Alvy states:
"Nonlinearity should never be stored in an image. Or, if it is, then this nonlinearity must be noted in the storage format in such a way that it is known how to remove it to retrieve linear data."

This comment, as fundamental as it is to graphics algorithms, plus others relating to the concept of working versus display space, came years before Photoshop 5 commercially introduced the concept of working spaces, as part of color management. Not bad for someone who "doesn’t understand gamma".
As for Timo’s lonely stance – he appears to be in good company now, having been debunked together with Alvy Ray Smith and Dan Margulis, all in the space of a few days. 🙂

Mike Russell
www.curvemeister.com
www.geigy.2y.net

Hello

This, curiously, was the article I was referring to.
I was not sure of the legality of quoting from it. Since you have, I will also.

These are the last words.

"Gamma can be confusing, as the above probably illustrates. Here are the simple rules Altamira Composer uses and what I am advocating that imaging applications do as a matter of course: Images are always assumed to be linear. Gamma is applied only to the display of images and not to the data of the images.The display is assumed to be nonlinear (because it is). Applications separate computation from display cleanly, and gamma correct for the local display only in the display process.

To get compatible results between imaging applications written under the (I trust you believe sensible) “new” guidelines offered here and those written the “old” way: Set the monitor gamma assumption in all the “old” imaging applications to the same (greater than 1) value—presumably to that matching one’s usual display monitor. Most applications provide a way to do this. This transfers the nonlinearity correction in those apps from the computation process to the display process, as it should be, leaving linear data in the images themselves.
A desirable consequence of all this is that it would be very convenient for imaging software if display devices provided gamma correction tables settable by software. That way, each imaging app could work completely in linear space, knowing that the display step would be correctly compensated by the local monitor for its local nonlinearities.

Believe it or not, this was the way it was done
20 years ago, but the idea got lost along the way, leading to the mess described in this memo. Unfortunately, it is probably too late to change. The technique offered here is the best that can be done short of changing all the hardware."
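A sketch of the scheme described in that quote, with the image data kept linear and the nonlinearity confined to a display lookup table (the 2.2 stands in for whatever the local monitor actually needs):

    import numpy as np

    MONITOR_GAMMA = 2.2  # assumed property of the local display

    # One-time, per-monitor: a 256-entry gamma-correction table.
    lut = np.round(
        255.0 * (np.arange(256) / 255.0) ** (1.0 / MONITOR_GAMMA)
    ).astype(np.uint8)

    def display(linear_image):
        # The only nonlinear step: correct for the monitor at display time.
        return lut[linear_image]

    # All computation happens on linear 8-bit data; only display() bends it.
    img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
    shown = display(img)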

It seems that there was a different way, and the only people who use it now are the scientists.

I honestly do not know which is right, and do not know enough to judge, but I feel in my bones that the old way was right.

Mike Engles
D
davem
Nov 27, 2004
Mike Engles writes:

Actually, Alvy has a number of mistakes in that paper.
I’m still not sure if he understands gamma encoding…

Yes, there does seem to be some confusion about PC gamma, but he is absolutely clear about the need for linear processing. As for not understanding gamma encoding, that is not clear from the article.

Actually, he does mention that the RGB data will be stored and transmitted in nonlinear form. The paper is about the debate over whether the alpha data should also be nonlinearly encoded (for uniformity), or not. He also mentions another great debate in computer graphics: whether partially-transparent pixels should have their alpha already multiplied into the RGB values, or whether alpha should be independent.

He has
been around a long time, and does know a thing or two. If gamma encoding were that important he would have mentioned it.

He does mention it applying to the RGB data.

Suffice to say that he and his collaborator are MAJOR digital graphics imaging authorities who on the face of it supports Timo Autiokari’s lonely stance. His last words in the gamma article are telling.

To know whether he supported Timo, you’d have to ask him. He’s talking about using a linear representation in one very particular place, not commenting on Timo’s obsession with everything being linear everywhere.

Linear and nonlinear representations both have their place.

Dave
D
davem
Nov 29, 2004
"Mike Russell" writes:

In this 1995 article, Alvy states:
"Nonlinearity should never be stored in an image. Or, if it is, then this nonlinearity must be noted in the storage format in such a way that it is known how to remove it to retrieve linear data."

Right. All you need to satisfy the above is a way to decode the pixels back to linear space. PNG has that. OpenEXR stores pixels in a version of floating point – but the decoding method is specified. sRGB also specifies how to convert between linear and encoded pixels. Looks like this problem is mostly solved now.
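The sRGB case is fully specified by IEC 61966-2-1; its transfer function, transcribed for scalar floats in 0..1:

    def srgb_to_linear(c):
        # Linear segment near black, 2.4-power curve above it.
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def linear_to_srgb(l):
        return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

    # Round trip: encoded -> linear -> encoded recovers the pixel.
    x = 0.5
    assert abs(linear_to_srgb(srgb_to_linear(x)) - x) < 1e-12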

Of course, you *can* leave data linear, but you need more bits for it. The Pixar Image Computer, developed under Alvy at Pixar, used 12 bits per colour component in memory and in data files, while arithmetic was all 16/32 bit.

As for Timo’s lonely stance – he appears to be in good company now, having been debunked together with Alvy Ray Smith and Dan Margulis, all in the space of a few days. 🙂

Shall we then place George Preddy above all the others you mention? If being debunked is a virtue…

Dave
D
davem
Nov 29, 2004
Mike Engles writes:

Believe it or not, this was the way it was done
20 years ago, but the idea got lost along the way, leading to the mess described in this memo. Unfortunately, it is probably too late to change. The technique offered here is the best that can be done short of changing all the hardware."

There’s more to the history than that. Computer graphics started out using 8-bit linear images, because it was simple and obvious. Television started out using analog gamma-corrected voltages, because there were a bunch of good reasons to put the gamma correction in the camera instead of in the receiver. But along the way, computers started generating television signals, and digitizing television, and television itself became digital, and then photography came along and borrowed from all of these.

It seems that there was a different way, and the only people who use it now are the scientists.

The vast majority of digital images in existence are probably 8-bit "gamma corrected" data, because that’s a particular sweet spot that makes a good tradeoff between cost and results. But there are people working with fixed-point linear data and floating-point linear data, and people who store one way and process the other.

Dave
B
bagal
Dec 1, 2004
what’s RGB?

Aerticeus

"digiboy" wrote in message
How do you color manage when you have perceptual colors like RGB, mixed output colors like CMYK and fixed-by-dye colors like Pantones, all on the same page?
[snip]

DB
BB
Big Bill
Dec 1, 2004
On Wed, 01 Dec 2004 01:39:24 GMT, "Aerticeus" wrote:

what’s RGB?

Aerticeus

Red, Green, Blue?
Like what your monitor uses.


Bill Funk
Change "g" to "a"
