printing 16bit vs. 8bit: printer driver or PSCS?

MA
Posted By
mutator_accessor
Jan 1, 2004
Views
681
Replies
13
Status
Closed
Since it is possible to complete one's entire workflow in 16-bit now, the question is:

Since I am using an Epson 2200 printer (an 8-bit device), somewhere along the way the 16-bit image is going to be converted to 8-bit for printing. The way I see it, I have (at least) two choices:

1) Use CS to convert the image to 8-bit and send it on its merry way…

or

2) Print the 16bit image and allow the print driver to convert it to 8 bits.

I have done this both ways and haven’t been able to detect any differences (at least to the naked eye). One advantage that #1 might have is that if CS does the conversion, it’s still possible to check for problems that may occur due to the conversion, whereas they wouldn’t be visible if the driver converts while printing. I just have to remember NOT to save the file after converting it to 8-bit just before printing.

IL
Ian_Lyons
Jan 1, 2004
So far as I can recall, it’s Photoshop that automatically does the conversion to 8-bit, rather than the actual print driver.
Y
YrbkMgr
Jan 1, 2004
Ian’s right. Word from Chris Cox is that it’s converted by PS before going to the printer driver.
LH
Lawrence_Hudetz
Jan 1, 2004
So, there is no need to convert?

Now that we can work so well in 16-bit, it would be nice to have at least a 12-bit printer. A 12-bit printer would handle a 10-stop B&W image much better; it would print it the way an 8-stop "conventional wisdom" range prints in 8-bit. It appears that a rule of thumb would be 1 bit for each stop of value you are trying to hold.
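Lawrence's rule of thumb can be sanity-checked with a little arithmetic. A rough sketch in Python (the helper name is mine, not anything from the thread): in a linear encoding, each stop down halves the available code values, so an N-bit pipeline runs out of distinct levels after roughly N stops.

```python
def levels_per_stop(bits, stops):
    """Distinct linear code values available in each stop,
    brightest stop first, for a given bit depth."""
    total = 2 ** bits
    counts = []
    for s in range(stops):
        top = total // (2 ** s)           # upper code value of this stop
        bottom = total // (2 ** (s + 1))  # lower bound: half the brightness
        counts.append(top - bottom)
    return counts

# A 10-stop range in an 8-bit pipeline: the deepest stops get
# only 0-1 distinct levels, which is where shadow banding comes from.
print(levels_per_stop(8, 10))   # [128, 64, 32, 16, 8, 4, 2, 1, 1, 0]
print(levels_per_stop(12, 10))  # every stop keeps at least a few levels
```

This is only a linear-encoding sketch; a gamma-encoded file distributes levels differently, but the "1 bit per stop" intuition comes out of the same halving.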

I have seen situations where I can actually see the step from one value to the next, as in a uniform sky gradually changing value. I can see it in an 8-bit generated step tablet, even before the steps themselves are generated.

At least, it seems to me.
MA
mutator_accessor
Jan 2, 2004
OK, if that is the case, how does CS know when and when not to convert? If I wanted to print to another device with greater than 8 bits, how do I tell CS not to do the conversion? Or does it automatically query the device driver for the bit depth? I assume this must be the case, since I don’t recall seeing any controls for changing this.

I agree about the limitations of 8-bit also. I do a lot of landscape shots (i.e. lots of blue sky), and since the 2200 is relatively weak in the blue gamut (compared to some dye inks), I have to be careful about blue transitions – they can be noticeable to the naked eye.

I don’t know if it would do any good to have a 12-bit printer until inks are developed that can handle the gamut it affords. I guess it took evolution at least a couple of thousand years to develop the eye’s range; it might take the printer manufacturers a <few> years to match it 🙂
LH
Lawrence_Hudetz
Jan 2, 2004
Bit depth and gamut are not related. In any case, technology today is still a bootstrap operation.
L
LenHewitt
Jan 2, 2004
Mutator,

> how does CS know when and when not to convert?

As the Windows OS is incapable of sending anything more than 8-bit to ANY device, it doesn’t need to – and do you know of any output device that can handle more than 8 bits/channel???
MA
mutator_accessor
Jan 2, 2004
For now I don’t know of any device that can handle more than 8 bits/channel of color. That doesn’t mean there won’t be one. I was just curious – I am just trying to understand so that I can print the most pleasing pictures I can.

> Bit depth and gamut are not related. In any case, technology today
> is still a bootstrap operation.

From an inkjet perspective I disagree. This is my understanding, but I could be wrong:

Without bit depth, gamut means very little (and vice versa). Take the extreme example: bit depth = 1. Each of R, G, B can have exactly 2 values: 0 or 1. That gives a total of 8 unique colors to represent the whole gamut. I don’t care how big your gamut is – see how accurate a color image you can print with only 8 unique colors from it. You might get a reasonable representation if your dot size is infinitesimally small relative to the image size and viewing distance (B/W has been doing this for over 100 years with dithering), but inkjet printers today have a physical limit on how small they can make the dot without the jet clogging – somewhere around 2–4 picoliters currently, I think. Why do you think manufacturers went to more ink tanks for photographic quality? It’s to get the color combinations (i.e. gamut) without having to dither many different colors adjacent to one another, which would increase the "dot" (or pixel) size – not to mention the problems with color mixing (bleeding) between the individual ink dots as they are laid down on paper. So I believe that, as a practical matter, they are very much related.
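The 1-bit extreme above generalizes to any bit depth. A quick back-of-the-envelope sketch in Python (my own illustration; the function name is made up):

```python
def addressable_colors(bits_per_channel):
    """Total distinct colors an RGB device can address:
    (levels per channel) cubed."""
    levels = 2 ** bits_per_channel
    return levels ** 3

print(addressable_colors(1))  # 8 colors -- the 1-bit extreme above
print(addressable_colors(8))  # 16,777,216 colors at 8 bits/channel
```

The count says nothing about *which* colors those are – that is the gamut – which is exactly the sense in which the two are complementary.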

If I am wrong, please correct me, because I am always willing to learn more about this.

Thanks,
MA
MV
Mathias_Vejerslev
Jan 2, 2004
An obvious exception is pure grayscale printing à la the Piezography system. For this, 8-bit RGB output is not sufficient, and special gray inks must be used for smooth, un-dithered output.
LH
Lawrence_Hudetz
Jan 2, 2004
Gamut existed before the advent of digital processes. When I posted my comment, I went through the same thinking, until I realized that limited bit depth simply means a limited ability to realize the full gamut of the device in question. A color TV screen has a gamut, and it is still analog – it is phosphor dependent. So do Cibachrome, Fujichrome, Ektachrome, etc.

Here’s a definition:
<http://www.hyperdictionary.com/computing/gamut>

Mathias, I also find it true of B&W printing with the Canon and my old Epson 870. Sometimes, I wish I could introduce that dithering there as well.
BD
Brad Dalley
Jan 9, 2004
Can anybody tell me if having my image mode set to 8 bits/channel would make it look pixelated/bitmapped? What’s the difference between 8-bit and 16-bit?

Thanks!
RK
Rob_Keijzer
Jan 9, 2004
Brad,

My experience is that there is no visible difference. I shoot my work in RAW mode, and that imports into PS as 16 bits/channel.
When I change Curves & Levels there is a big advantage: stretching (part of) an 8-bit image’s contrast leaves gaps in the histogram, i.e. like a comb.
This can ultimately cause posterisation, or "banding". Stretching a 16-bit image, however, seems to fill in the gaps in the histogram. My idea is that if the image is really 16 bit there is twice as much.
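The comb effect described above is easy to simulate. A small sketch in Python (the helper is hypothetical, just to illustrate the quantisation): stretching a sub-range of an 8-bit image to full scale occupies only some of the 256 output levels, while the same stretch carried out at 16-bit working precision fills them all.

```python
def stretched_levels(bits, low, high):
    """Map input levels low..high onto the full 0..2**bits-1 range
    and return the set of distinct quantised output values."""
    top = 2 ** bits - 1
    out = set()
    for v in range(low, high + 1):
        out.add(round((v - low) / (high - low) * top))
    return out

# Stretch the middle half of an 8-bit range to full scale:
used = stretched_levels(8, 64, 191)
print(len(used))  # only 128 of 256 output levels occupied -> comb gaps

# Same stretch at 16-bit precision (each 8-bit level has ~256 sub-levels
# of real data, as from a RAW conversion), then reduced to 8-bit:
fine = stretched_levels(16, 64 * 256, 191 * 256)
print(len({v >> 8 for v in fine}))  # all 256 final levels occupied
```

This only models the quantisation; it assumes the 16-bit file genuinely contains distinct sub-levels (as a RAW conversion does) rather than being an 8-bit file promoted to 16-bit.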

16 bit (IMO) is for having room to edit, but not something that is visible on screen. At least I can’t see the difference.

Rob
MV
Mathias_Vejerslev
Jan 9, 2004
> 16 bit there is twice as much.

There’s a lot more than twice as much data in a 16-bit-per-channel file vs. an 8-bit-per-channel file.
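The actual level counts, for the record (quick Python arithmetic, my own aside):

```python
# Levels per channel at each bit depth: each extra bit doubles the
# count, so 8 extra bits multiply it by 2**8 = 256, not by 2.
levels_8 = 2 ** 8     # 256 levels per channel
levels_16 = 2 ** 16   # 65,536 levels per channel
print(levels_16 // levels_8)  # 256x as many levels, not 2x
```

(As I understand it, Photoshop's "16-bit" mode reportedly works on an internal 0–32768 scale, so the practical factor is about 128x – but the point stands either way.)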
RK
Rob_Keijzer
Jan 9, 2004
Hi Mathias,

Yes, you’re right. 9 bits would be twice as much as 8 bits, but I meant to take all the dimensions into account, so that would mean one could stretch the histogram content to twice its original width before it would produce gaps.
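Rob's "twice the width" intuition can be put as a one-liner (Python sketch; the function name is mine): with n extra bits of working precision over the output depth, a stretch of up to 2**n still lands on every output level.

```python
def max_gap_free_stretch(working_bits, output_bits=8):
    """Largest stretch factor that leaves no empty output levels,
    assuming the working-precision data is fully populated."""
    return 2 ** (working_bits - output_bits)

print(max_gap_free_stretch(9))   # 2   -- Rob's 9-bit example
print(max_gap_free_stretch(16))  # 256 -- headroom of a true 16-bit file
```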

Rob