Hi David – this is a good question, and I can see from the responses you have had already that there are some very good replies.
Rather than repeat what has already been written, I would like to add: there are basically two main forms of onscreen image:
a) rasterised and b) vectorised
a) is probably what most people associate with colored pixels. So, for example, in a straight green line across a white background each pixel is independent of its neighbour. The problem used to be that if the image was shown at anything other than 100% of its original size then pixels had to be added or subtracted to display it onscreen (and probably to print as well). That creates problems because there needs to be some way to handle the loss or gain of pixels and still retain image fidelity.
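To make that concrete, here is a minimal sketch in Python of the crudest resampler there is, nearest-neighbour. The image-as-list-of-rows representation is invented for illustration; real software uses smarter filters (bilinear, bicubic and so on), but the pixel loss/gain problem is the same.

def scale_nearest(pixels, new_w, new_h):
    # pixels is a list of rows, each row a list of color values
    old_h = len(pixels)
    old_w = len(pixels[0])
    out = []
    for y in range(new_h):
        src_y = y * old_h // new_h    # nearest source row
        row = []
        for x in range(new_w):
            src_x = x * old_w // new_w    # nearest source column
            row.append(pixels[src_y][src_x])
        out.append(row)
    return out

# Shrinking a 4x4 image to 2x2 this way simply throws 12 of the 16
# pixels away, which is exactly where the loss of fidelity comes from.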
b) these are usually the type of images that can be scaled very easily, because a straight line can be represented as a box with four corners, an inner fill color (green in this example) and an outer fill color (white in this example)
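Again a small sketch, with the names invented for illustration: the shape is stored as geometry plus colors, so scaling is just arithmetic on the corner coordinates and nothing is lost.

from dataclasses import dataclass

@dataclass
class Box:
    # the straight green line, stored as a box rather than as pixels
    x0: float
    y0: float
    x1: float
    y1: float
    inner_fill: str = "green"
    outer_fill: str = "white"

def scale(box, factor):
    # multiply the corners, leave the colors alone; pixels only enter
    # the picture at the final drawing (rasterising) step
    return Box(box.x0 * factor, box.y0 * factor,
               box.x1 * factor, box.y1 * factor,
               box.inner_fill, box.outer_fill)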
I daresay that the same holds true when printing, and this is where smart printer drivers earn their keep.
It gets even more complicated when you start to add methods and algorithms to minimise image size and retain image fidelity at the same time. FWIW, TIF used to be the professional standard for high-end image processing on very high-quality (and high-cost) workstations and monitors.
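If you want to see the size/fidelity trade-off for yourself, here is a sketch assuming the Pillow library in Python ("photo.png" is just a placeholder file name). LZW-compressed TIF is lossless, so every pixel survives; JPEG is lossy, so the file is smaller but the pixels are only approximately preserved.

from PIL import Image

img = Image.open("photo.png")
img.save("photo.tif", compression="tiff_lzw")   # lossless: bigger file, exact pixels
img.save("photo.jpg", quality=85)               # lossy: smaller file, approximate pixels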
There are lots of different image formats because there are lots of different functions attached to those formats.
I hope I have not clouded the issue, but rather explained enough to show that it is complicated, intriguing and fascinating all at the same time.
Artio
"David Habercom" wrote in message
Can anyone explain why TIF is a better format for printing? Or is it? Thanks.