Scan vs digital image

DM
Daniel Masse
Oct 31, 2004
Hello!

Yesterday, I was talking with several advanced photographers, who all claim that there is a significant difference in quality between images from a high-end digital reflex (8 Mpixels) and a high-end scanner (Nikon 4). I have no reason to doubt their ability to use the equipment, yet I cannot figure out where this difference would come from.

The NikonScan maximum resolution is 2900 dpi, which gives about 11 Mpixels for a 24 x 36 mm frame: the image should be at least as good as one from an 8 Mpixel camera. Right? And film is said to have a resolution equivalent to at least 20 Mpixels, so it should not degrade the image…
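As a sanity check on those figures, the arithmetic is easy to run; a minimal sketch in Python, taking the frame as the standard 24 x 36 mm:

# Pixel count of a 2900 dpi scan of a 24 x 36 mm frame
MM_PER_INCH = 25.4
dpi = 2900
w_px = 36 / MM_PER_INCH * dpi   # long side:  ~4110 px
h_px = 24 / MM_PER_INCH * dpi   # short side: ~2740 px
print(f"{w_px:.0f} x {h_px:.0f} px = {w_px * h_px / 1e6:.1f} Mpixels")
# -> 4110 x 2740 px = 11.3 Mpixels, i.e. the "about 11 M" above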

B
Beemer
Oct 31, 2004
Daniel Masse wrote:
Hello!

Yesterday, I was talking with several advanced photographers, who all claim that there is a significant difference in quality between images from a high-end digital reflex (8 Mpixels) and a high-end scanner (Nikon 4).
[snip]

Daniel,

You said high-end scanner "Nikon 4", but I know of no such name. Are you thinking of the Nikon Scan 4 software?

Beemer
MR
Mike Russell
Oct 31, 2004
Daniel Masse wrote:
[snip]
The NikonScan maximum resolution is 2900 dpi, which gives about 11 Mpixels for a 24 x 36 mm frame: the image should be at least as good as one from an 8 Mpixel camera. Right? And film is said to have a resolution equivalent to at least 20 Mpixels, so it should not degrade the image…

Digital images have no film grain, so the better-quality digital SLRs generally do better than the best 35mm film. Larger-format film can still deliver a better image than the best digital cameras.


Mike Russell
www.curvemeister.com
www.geigy.2y.net
N
nomail
Oct 31, 2004
Daniel Masse wrote:

[snip]

The NikonScan maximum resolution is 2900 dpi, which gives about 11 Mpixels for a 24 x 36 mm frame: the image should be at least as good as one from an 8 Mpixel camera. Right?

Wrong. A scan stacks pixels on top of grain. The result is that you will never see detail as small as a single pixel or a single line of pixels. A digital photo, on the other hand, can indeed have detail that small. As a result, you need (many) more pixels in a scan to resolve the same amount of detail, so you cannot compare pixel counts one to one. BTW, a Nikon scanner with 2900 ppi maximum resolution is not a ‘high-end scanner’, it’s a good consumer-grade scanner.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
DM
Daniel Masse
Oct 31, 2004
Beemer wrote:
You said high end scanner "Nikon 4" but I know of no such name. Are you thinking of Nikon Scan 4 software?

Yes, I meant NikonScan 4, used with VueScan.
DM
Daniel Masse
Oct 31, 2004
Mike Russell wrote:
Digital images have no film grain, so the better-quality digital SLRs generally do better than the best 35mm film. Larger-format film can still deliver a better image than the best digital cameras.

What I am really wondering is: if I want to print, say, a 30 x 45 cm image, will I get a significantly different result if I start from a 24 x 36 mm film scanned at 2900 dpi, or from the image obtained with an 8 Mpixel camera – assuming equivalent optical qualities in both cameras?
N
nomail
Oct 31, 2004
Daniel Masse wrote:

Mike Russell wrote:
Digital images have no film grain, so the better-quality digital SLRs generally do better than the best 35mm film. Larger-format film can still deliver a better image than the best digital cameras.

What I am really wondering is: if I want to print, say, a 30 x 45 cm image, will I get a significantly different result if I start from a 24 x 36 mm film scanned at 2900 dpi, or from the image obtained with an 8 Mpixel camera – assuming equivalent optical qualities in both cameras?

Yes, possibly. The DSLR may be sharper and more detailed. It’s not a fair comparison, though. If you scan at 2900 ppi, you do not get all the possible detail. It would be better to compare the DSLR with a 4000 ppi scan.
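To put rough numbers on that comparison, a quick back-of-the-envelope sketch in Python (3504 x 2336 is an assumed frame size, typical of 8 Mpixel DSLRs of the period; the scan side follows from the 2900 dpi figure above):

# Pixels per inch landing on paper for a 30 x 45 cm print
CM_PER_INCH = 2.54
print_w_in = 45 / CM_PER_INCH        # ~17.7 inch long side
scan_ppi = 4110 / print_w_in         # 36 mm frame side scanned at 2900 dpi
dslr_ppi = 3504 / print_w_in         # assumed 8 Mpixel long side
print(f"scan: {scan_ppi:.0f} ppi, DSLR: {dslr_ppi:.0f} ppi")
# -> scan: ~232 ppi, DSLR: ~198 ppi

On raw pixels per inch the scan actually wins, which is why the visible difference people report has to come from the quality of each pixel (grain, noise), not the count.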


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
A7
aka 717
Oct 31, 2004
"Johan W. Elzenga" wrote in message
[snip]

BTW, a Nikon scanner with 2900 ppi maximum resolution is not a ‘high-end scanner’, it’s a good consumer-grade scanner.

It used to be that the best quality, I think, was from
large transparencies scanned on a drum scanner.
Has digital gone beyond that? Is Arizona Highways
using digital, I wonder.
N
nomail
Oct 31, 2004
aka 717 wrote:

[snip]
It used to be that the best quality, I think, was from
large transparencies scanned on a drum scanner.
Has digital gone beyond that? Is Arizona Highways
using digital, I wonder.

No, digital has not yet gone beyond an 8 x 10 inch transparency scanned on a drum scanner. But 35mm is surpassed by DSLR cameras, and 6×7 cm is surpassed by digital backs on medium format cameras. Whether Arizona Highways is using digital is not very relevant IMHO. There will always be people living in the previous century.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
B
bhilton665
Oct 31, 2004
From: "Daniel Masse"

What I am really wondering is: if I want to print, say, a 30 x 45 cm image, will I get a significantly different result if I start from a 24 x 36 mm film scanned at 2900 dpi, or from the image obtained with an 8 Mpixel camera – assuming equivalent optical qualities in both cameras?

I’m using Canon 35 mm film cameras (EOS-3) and also Canon dSLRs, which use the same lenses. I’m scanning the film with a Nikon 8000, which is a 4,000 spi scanner.

With a 6 Mpixel dSLR like the 10D I feel I get slightly better large prints (say 12 x 18") from scanned Velvia or Provia 100F. I’m basing this on shooting the exact same scene with the same lenses in the same light and then printing both.

With our 11 Mpixel Canon 1Ds it’s no contest, the digital files print large much better than the film scans, especially when resampled carefully. We also shoot two medium format sizes (645 and 6×7 cm) and prints from the 1Ds are rivaling 645 at the largest size we can print at home (16×20" on an Epson 4000).

With our 8 Mpixel Canon 1D Mark II (we have two of these) it looks like a rough equivalency between film and digital for prints, and I just resampled one of our grizzly bear images taken recently to print 16×20" and sold four prints almost immediately. This particular image looks better printed 16×20" than any 35 mm shot I’ve printed at that size. On the other hand some other 8 Mpix shots are more difficult to enlarge … so I think there’s a rough equivalency at 8 Mpixels, at least for something with a large sensor size like the Mark II. The 8 Mpixel digicameras with tiny sensors from Sony, Minolta, Canon and others will have more noise and won’t print as well, especially at moderate to high ISOs.

If you haven’t actually tried digital, it may seem to make no sense that an 11 Mpixel file (or 8 Mpixel file) can print as well as or better than a 22 Mpixel scan from 35 mm, but seeing is believing … film actually has a bit higher resolving power if you shoot test charts and measure lines, but digital has zero grain (at lower ISO settings), which makes the prints look smoother and more appealing, and it’s pretty easy to interpolate up to reach the same file size as film without compromising the image quality.

It helps if you know a couple of different techniques for resampling and know how to use USM carefully.

Bill
B
bhilton665
Oct 31, 2004
From: "aka 717"

It used to be that the best quality, I think, was from
large transparencies scanned on a drum scanner.

Still very much true.

Has digital gone beyond that?

Large format scanning backs do a wonderful job but the 35 mm based digitals, even the 11 and 14 Mpixel versions, are still chasing medium format film and are not close to 4×5" sheet film.

Is Arizona Highways using digital, I wonder.

They think 6×7 cm is entry level 🙂

Bill
S
Scruff
Oct 31, 2004
Anything scanned from film becomes second generation. Digital is already there!

"Johan W. Elzenga" wrote in message
[snip]
J
jjs
Oct 31, 2004
Considering that the final goal is a color print about 30 x 45 cm, and presuming a full-frame 35mm digital vs a 35mm film camera, I would go with a very good digital camera. Introducing another step such as scanning a negative or slide cannot help the quality and almost certainly diminishes it. Besides, color correction of a straight digital image offers much greater range in this case than film – in part because you have none of the film’s original color issues to deal with. (And whoever might say his color film is always perfectly balanced is either the 0.1% pro or has very low standards.)
B
bagal
Oct 31, 2004
Personally (all IMHO) it is a matter of taste and kit.

If someone has a preference for film, scanned images – well hey that really is OK.

It makes sense to optimise any investment in kit. A good SLR, a handful of decent lenses, a few years of experience, a good darkroom and (increasingly) a good digital darkroom does produce, or has the potential to produce, fantastic results.

On the other hand, a good digital setup is fantastic too.

My preference is for digital – mainstream, common-standard digital is IMHO far superior to mainstream 35 mm film.

Aerticus

ps – I can understand why some people prefer to remain non-digital but just think of what they are missing

A

"jjs" wrote in message
[snip]

J
jjs
Oct 31, 2004
"Aerticus" wrote in message
Personally (all IMHO) it is a matter of taste and kit.

If someone has a preference for film, scanned images – well hey that really is OK.

If all you have to please is yourself, then you aren’t a professional.
S
Scruff
Oct 31, 2004
Boy, that’s a fact, lol

"jjs" wrote in message
"Aerticus" wrote in message
Personally (all IMHO) it is a matter of taste and kit.

If someone has a preference for film, scanned images – well hey that really is OK.

If all you have to please is yourself, then you aren’t a professional.
B
bagal
Oct 31, 2004
Quite right – I am not a professional photographer 🙂

A mere amateur dabbling in the wonderful world of digital image processing

Aerticus

"jjs" wrote in message
"Aerticus" wrote in message
Personally (all IMHO) it is a matter of taste and kit.

If someone has a preference for film, scanned images – well hey that really is OK.

If all you have to please is yourself, then you aren’t a professional.
DM
Daniel Masse
Oct 31, 2004
Aerticus wrote:
ps – I can understand why some people prefer to remain non-digital but just think of what they are missing

Since you mention it… what you are missing with 100% digital is the wonder of a slide projected on a large screen…

I am trying to get the best of both worlds : slides, slide shows, and, sometimes, a few good quality prints…
B
bhilton665
Oct 31, 2004
Since you mention it… what you are missing with 100% digital is the wonder of a slide projected on a large screen…

These have been largely replaced by digital LCD projectors. Kodak quit making slide projectors some time ago because the demand was so low.
S
Scruff
Oct 31, 2004
lol, yea, those flawless digital projectors really suck.

"Daniel Masse" wrote in message
[snip]
Since you mention it… what you are missing with 100% digital is the wonder of a slide projected on a large screen…
B
bagal
Oct 31, 2004
Daniel – my first image processing software was Harvard Graphics – the version that permitted e-mailing files to photo shops so an image could be "printed" to a slide (the optical variety rather than the digital sort) 🙂

A

"Daniel Masse" wrote in message
[snip]
B
bagal
Oct 31, 2004
It’s good to see a wide range of views

A

"Scruff" wrote in message
lol, yea, those flawless digital projectors really suck.
"Daniel Masse" wrote in message
Aerticus wrote:
ps – I can understand why some people prefer to remain non-digital but just think of what they are missing

Since you mention it… what you are missing with 100 % digital is the wonder of a slide projected on a large screen…

I am trying to get the best of both worlds : slides, slide shows, and, sometimes, a few good quality prints…

DD
David Dyer-Bennet
Oct 31, 2004
"jjs" writes:

"Aerticus" wrote in message
Personally (all IMHO) it is a matter of taste and kit.

If someone has a preference for film, scanned images – well hey that really is OK.

If all you have to please is yourself, then you aren’t a professional.

Important fact!!!!!

David Dyer-Bennet, <mailto:>
RKBA: <http://noguns-nomoney.com/> <http://www.dd-b.net/carry/> Pics: <http://dd-b.lighthunters.net/> <http://www.dd-b.net/dd-b/SnapshotAlbum/> Dragaera/Steven Brust: <http://dragaera.info/>
S
SJB
Nov 1, 2004
"Bill Hilton" wrote in message
From: "Daniel Masse"

What I am really wondering is : if I want to print, say, a 30 x 45 cm image,
will I get a significantly different result if I start from a 24 x 36 film scanned at 2900 dpi, or from the image obtained with a 8 M pixels camera – assuming equivalent optical qualities in both cameras ?

I’m using Canon 35 mm film cameras (EOS-3) and also Canon dSLRs, which use the
same lenses. I’m scanning the film with a Nikon 8000, which is a 4,000 sspi
scanner.

With a 6 Mpixel dSLR like the 10D I feel I get slightly better large prints
(say 12 x 18") from scanned Velvia or Provia 100F. I’m basing this on shooting
the exact same scene with the same lenses in the same light and then printing
both.

With our 11 Mpixel Canon 1Ds it’s no contest, the digital files print large
much better than the film scans, especially when resampled carefully. We also
shoot two medium format sizes (645 and 6×7 cm) and prints from the 1Ds are rivaling 645 at the largest size we can print at home (16×20" on an Epson 4000).

With our 8 Mpixel Canon 1D Mark II (we have two of these) it looks like a rough
equivalency between film and digital for prints, and I just resampled one of
our grizzly bear images taken recently to print 16×20" and sold four prints
almost immediately. This particular image looks better printed 16×20" than any
35 mm shot I’ve printed at that size. On the other hand some other 8 Mpix shots are more difficult to enlarge … so I think there’s a rough equivalency
at 8 Mpixels, at least for something with a large sensor size like the Mark II.
The 8 Mpixel digicameras with tiny sensors from Sony, Minolta, Canon and others will have more noise and won’t print as well, especially at moderate to
high ISOs.

If you haven’t actually tried digital it may seem to make no sense that an 11
Mpixel file (or 8 Mpixel file) can print as well or better than a 22 Mpixel
scan from 35 mm but seeing is believing … film actually has a bit higher resolving power if you shoot test charts and measure lines but digital has zero
grain (at lower ISO settings) which makes the prints look smoother and more
appealing, and it’s pretty easy to interpolate up to reach the same file size
as film without compromising the image quality.

It helps if you know a couple of different techniques for resampling and know
how to use USM carefully.

Bill
This is a very interesting thread and I appreciate your first-hand observations. I currently shoot film and scan with a NikonScan IV (2900 dpi), so I do get the 11 Mpixel or so files. I’m very satisfied with the overall quality but would like the instant gratification of digital … am eyeing a Canon 20D. My question is this, however: can you comment further on your "resampling techniques"? When you print larger pictures, I assume that you "up-sample" your digital file. Can you share your approach(es)?

Thanks in advance.

SB
DM
Daniel Masse
Nov 1, 2004
Bill Hilton wrote:
Since you mention it… what you are missing with 100% digital is the wonder of a slide projected on a large screen…

These have been largely replaced by digital LCD projectors. Kodak quit making slide projectors some time ago because the demand was so low.

Yes, digital projectors are great for professional presentations, but they are still way behind slide projectors if you are looking for quality… With our Photo Club, we make multi-projector slide shows on a 3 x 8 meter screen… There is no way we could do that with digital projectors…
J
JNB0382
Nov 1, 2004
Thanks for putting the comparisons in context. My assumption is that your comparisons are based on some obvious factors:

– The shots were taken with accurate exposure and focus, etc., and the scans were done with "appropriate" settings. In other words, you know how to operate your equipment.

– Digital shots and film scans are edited and sharpened in PS in "equal" amounts.

– The 12 x 18" prints are printed at ~300dpi (printer). At this setting, the 4000dpi scans are not resampled, but the 6 or 8 Meg digital shots are upsampled. (The 11 Meg digital shots are not resampled?) For prints bigger than 12 x 18" printed at ~300dpi (printer), all of the above are upsampled.

Bill Hilton wrote:
From: "Daniel Masse"

What I am really wondering is : if I want to print, say, a 30 x 45 cm image, will I get a significantly different result if I start from a 24 x 36 film scanned at 2900 dpi, or from the image obtained with a 8 M pixels camera – assuming equivalent optical qualities in both cameras ?

I’m using Canon 35 mm film cameras (EOS-3) and also Canon dSLRs, which use the same lenses. I’m scanning the film with a Nikon 8000, which is a 4,000 sspi scanner.

With a 6 Mpixel dSLR like the 10D I feel I get slightly better large prints (say 12 x 18") from scanned Velvia or Provia 100F. I’m basing this on shooting the exact same scene with the same lenses in the same light and then printing both.

With our 11 Mpixel Canon 1Ds it’s no contest, the digital files print large much better than the film scans, especially when resampled carefully. We also shoot two medium format sizes (645 and 6×7 cm) and prints from the 1Ds are rivaling 645 at the largest size we can print at home (16×20" on an Epson 4000).

With our 8 Mpixel Canon 1D Mark II (we have two of these) it looks like a rough equivalency between film and digital for prints, and I just resampled one of our grizzly bear images taken recently to print 16×20" and sold four prints almost immediately. This particular image looks better printed 16×20" than any 35 mm shot I’ve printed at that size. On the other hand some other 8 Mpix shots are more difficult to enlarge … so I think there’s a rough equivalency at 8 Mpixels, at least for something with a large sensor size like the Mark II. The 8 Mpixel digicameras with tiny sensors from Sony, Minolta, Canon and others will have more noise and won’t print as well, especially at moderate to high ISOs.

If you haven’t actually tried digital it may seem to make no sense that an 11 Mpixel file (or 8 Mpixel file) can print as well or better than a 22 Mpixel scan from 35 mm but seeing is believing … film actually has a bit higher resolving power if you shoot test charts and measure lines but digital has zero grain (at lower ISO settings) which makes the prints look smoother and more appealing, and it’s pretty easy to interpolate up to reach the same file size as film without compromising the image quality.

It helps if you know a couple of different techniques for resampling and know how to use USM carefully.

Bill
BP
Barry Pearson
Nov 1, 2004
JNB0382 wrote:
Thanks for putting the comparisons in context. My assumption is that your comparisons are based on some obvious factors:
[snip]
– The 12 x 18" prints are printed at ~300dpi (printer). At this setting, the 4000dpi scans are not resampled, but the 6 or 8 Meg digital shots are upsampled. (The 11 Meg digital shots are not resampled?) For prints bigger than 12 x 18" printed at ~300dpi (printer), all of the above are upsampled.

Be aware that even if the photographer doesn’t resample explicitly, the printer driver is likely to. Most printer drivers (HP, Canon, etc) resample (down or up as necessary) to 600 ppi. Epson resamples to 720 ppi. (Some other types of printer resample to other values).

Bill Hilton wrote:
From: "Daniel Masse"
[snip]


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
DD
David Dyer-Bennet
Nov 1, 2004
"SJB" writes:

This is a very interesting thread and I appreciate your first-hand observations. I currently shoot film and scan with a NikonScan IV (2900 dpi), so I do get the 11 Mpixel or so files.

Scanned-from-film pixels are very much *NOT* worth as much as digital-original pixels. It depends, of course, on scanner, technique, film, subject, etc., but a very rough rule of thumb is something like 2x — a 6 megapixel scanned file is roughly as good as a 3 megapixel digital original.

David Dyer-Bennet, <mailto:>
RKBA: <http://noguns-nomoney.com/> <http://www.dd-b.net/carry/> Pics: <http://dd-b.lighthunters.net/> <http://www.dd-b.net/dd-b/SnapshotAlbum/> Dragaera/Steven Brust: <http://dragaera.info/>
R
RSD99
Nov 1, 2004
Agreed …

FWIW: digital LCD projectors are pretty much limited to relatively low resolution. Something like 1024 x 768 seems to be the norm.
B
bagal
Nov 1, 2004
It is really good to see such diversity of views without falling into the "flame wars" trap.

Look out – I bet there will be an antagonist along in a minute

Aerticus

"Daniel Masse" wrote in message
[snip]
N
nomail
Nov 1, 2004
Barry Pearson wrote:

Be aware that even if the photographer doesn’t resample explicitly, the printer driver is likely to. Most printer drivers (HP, Canon, etc) resample (down or up as necessary) to 600 ppi. Epson resamples to 720 ppi. (Some other types of printer resample to other values).

Resampling is the wrong word, because that is *not* what happens at all. The printer uses more than one drop of ink to simulate a pixel. It needs to, because that is the only way to get millions of colors with only a few ink colors. So even though your Epson printer may use 720 dpi (read: 720 droplets per inch), it does *not* mean that your image is resampled to 720 pixels per inch.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
BP
Barry Pearson
Nov 1, 2004
Johan W. Elzenga wrote:
Barry Pearson wrote:

Be aware that even if the photographer doesn’t resample explicitly, the printer driver is likely to. Most printer drivers (HP, Canon, etc) resample (down or up as necessary) to 600 ppi. Epson resamples to 720 ppi. (Some other types of printer resample to other values).

Resampling is the wrong word, because that is *not* what happens at all. The printer uses more than one drop of ink to simulate a pixel. It needs to, because that is the only way to get millions of colors with only a few ink colors. So even though your Epson printer may use 720 dpi (read: 720 droplets per inch), it does *not* mean that your image is resampled to 720 pixels per inch.

I mean *precisely* that it is resampled to 720 pixels per inch (or 600 ppi for HP & Canon), *before* the driver attempts anything to do with putting ink onto paper.

The driver, in order to prepare itself for driving the printer, takes the pixels that you give it, and resamples (possibly using a resampling algorithm inferior to bicubic or whatever) into its internal buffer, corresponding to 600 ppi or 720 ppi. (Or other values for other printers. For example, 360 ppi for large Epson printers, perhaps 314 ppi for a Kodak 8500 dye sublimation printer, etc).

At the time the printer driver is working out what ink to put on the paper, which colours to use, what error diffusion correction to apply, etc, it is working from its own internal buffer, containing its resampled image, not directly from the image that the photographer supplied.

If the photographer resamples in Photoshop to this resolution, the driver doesn’t have to resample. (And, of course, if you do the resampling first, you can sharpen at the target resolution). Some photographers do this. Some packages claim to do this better than drivers can, for example by Lanczos resampling. Qimage is the obvious example:
http://www.ddisoftware.com/qimage/quality/

(I am not talking about dpi, or dots per inch. I really am talking about ppi, or pixels per inch).
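As an illustration of that pre-print workflow, a minimal sketch with Pillow (the file names are hypothetical, 720 ppi is the Epson desktop figure quoted above, and Pillow's LANCZOS stands in for the Lanczos resampling mentioned):

from PIL import Image

def resample_for_print(img, print_w_in, print_h_in, native_ppi=720):
    # Resample to the driver's native resolution ourselves, so the
    # driver doesn't have to do it with a cruder algorithm.
    target = (round(print_w_in * native_ppi), round(print_h_in * native_ppi))
    return img.resize(target, Image.LANCZOS)

img = Image.open("photo.tif")                 # hypothetical input
out = resample_for_print(img, 17.7, 11.8)     # roughly 45 x 30 cm
out.save("photo_720ppi.tif", dpi=(720, 720))  # record the ppi for the driver

Sharpening would then be applied at this final size, as noted above.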


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
N
nomail
Nov 1, 2004
Barry Pearson wrote:

If the photographer resamples in Photoshop to this resolution, the driver doesn’t have to resample. (And, of course, if you do the resampling first, you can sharpen at the target resolution). Some photographers do this. Some packages claim to do this better than drivers can, for example by Lanczos resampling. Qimage is the obvious example:
http://www.ddisoftware.com/qimage/quality/

Aha, the infamous qimage commercial again. The test on this page is misleading, because it uses a pure black & white image, which obviously doesn’t need halftoning. For such an image it does make sense to resample to higher ppi, and indeed the driver would do that if you didn’t. But a color photograph needs halftoning as I described earlier, so for a color photograph the driver does not resample to 720 ppi, because it needs its extra dots for the halftone raster.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
B
bagal
Nov 1, 2004
ahem

A

"Johan W. Elzenga" wrote in message
[snip]
BP
Barry Pearson
Nov 2, 2004
Johan W. Elzenga wrote:
Barry Pearson wrote:

[snip]

Aha, the infamous qimage commercial again. The test on this page is misleading, because it uses a pure black & white image, which obviously doesn’t need halftoning. For such an image it does make sense to resample to higher ppi, and indeed the driver would do that if you didn’t. But a color photograph needs halftoning as I described earlier, so for a color photograph the driver does not resample to 720 ppi, because it needs its extra dots for the halftone raster.

There are 2 different matters here. One is the resampling of the digital image into the print driver’s own buffer. The other is the means for that image to be put down in ink (or whatever) on the medium.

The resampling into the print driver’s buffer is as I said. Frankly, it would be truly potty to try to design and implement a print driver some other way! The driver has a very complicated task to make just a few inks represent the rich set of colours of the image, and it would be foolish to do this while handling the massive variation in resolutions that photographers could supply. So the driver resamples to a standard resolution ("native resolution") in pixels per inch. Normally, either 600 or 720, but there are some variations. *Then* the driver does its stuff with inks & heads & paper.

This is *not* the number of dots per inch. Or droplets per inch. And, indeed, I think those numbers are really marketing BS. But, have a look at the numbers quoted by the printer manufacturers. They tend to be simple multiples of the native resolution I noted above. HP and Canon printers tend to be quoted with a dpi of multiples in each direction of 600. Epson printers, with a multiple of 720. Sometimes the numbers are much larger, such as 5760. But the drivers still use the resampling to 600 or 720 (unless they have changed these numbers upwards recently, which I doubt).

It should not be assumed that the area of paper represented by one pixel of the image supplied by the photographer is all that is involved in presenting that pixel. There is often simply not the available combinations of ink colours & sizes to represent it. So the driver gets as close as it can, and remembers the error (the difference between the supplied pixel and what it actually told the printer hardware to do). Then, when it presents adjacent pixels, on the same line or the next line, it both calculates what the photographer’s image says should be represented, and the error accumulated so far, and acts accordingly. So the influence of pixels is spread, as "error diffusion". At least – that applies to some printer technologies, such as Epson. Dye sublimation doesn’t need that, but it still involves resampling to a native resolution. I’m really talking about desktop printers used by photographers.

So, you may get a Canon printer driver which resamples to 600 x 600 ppi, then prints at a quoted 4800 x 2400 dpi. Or an Epson printer driver which resamples to 720 x 720 ppi, then prints at a quoted 5760 x 1440 dpi. Note the difference between ppi & dpi.
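The error-diffusion idea is easy to sketch in code. Here is a minimal one-bit version with the classic Floyd-Steinberg weights (one common variant; no claim that any particular driver uses exactly this):

import numpy as np

def floyd_steinberg(gray):
    # gray: 2-D array of 0..255 values. Output is pure black/white,
    # with each pixel's quantisation error spread to its neighbours.
    img = gray.astype(float).copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0   # nearest printable level
            img[y, x] = new
            err = old - new                      # what couldn't be shown here
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img.astype(np.uint8)

A real driver does this per ink channel, with several droplet sizes, but the principle is the same.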


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
T
tacitr
Nov 2, 2004
Yesterday, I was talking with several advanced photographers, who all claim that there is a significant difference in quality between images from a high-end digital reflex (8 Mpixels) and a high-end scanner (Nikon 4). I have no reason to doubt their ability to use the equipment, yet I cannot figure out where this difference would come from.

There are several places the difference would come from.

As many people have already said, when you scan an image from a transparency or a print, you’re scanning the film grain; the resolution of the original being scanned is already inherently limited.

But that’s not the only difference. Another significant difference is the dynamic range of the scanner.

A consumer-grade scanner like the Nikon you talk about has a very limited dynamic range. "Dynamic range" is a measure of the total range of tones, from light to dark, that the scanner can capture. Having a limited dynamic range means that everything lighter than a certain point in the original becomes pure white, with no detail at all, and everything darker than a certain point becomes pure black. Consumer-grade scanners tend to have a very poor dynamic range, and they do a crap job of reproducing shadow detail; any detail in the deep shadows in the original is completely lost.

Now, digital cameras still have a more limited dynamic range than film, but it’s not as poor as a scan from a consumer-grade scanner. You’ll see better rendition and greater detail in the highlights and shadows of a digital image than in a scan from a consumer-grade scanner.

You refer to the Nikon as a "high-end scanner." It’s not; get that thought right out of your head. "High-end" scanners start at about $17,000 US and go up to about $340,000 US; they’re drum scanners, not flatbed scanners, and have a dynamic range and resolution that exceeds that of film. A medium or large format original scanned on a drum scanner will produce a digital image that far exceeds the quality of any consumer flatbed scanner or digital SLR.
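For what it's worth, dynamic range is usually quoted as an optical density, D = log10 of the contrast ratio, so apparently small differences in DMax are large differences in usable range. A tiny illustration (the DMax values are illustrative only, not claims about any specific scanner):

# Convert quoted DMax densities into contrast ratios
for dmax in (3.0, 3.6, 4.0):
    print(f"DMax {dmax}: roughly a {10 ** dmax:,.0f}:1 tonal range")
# -> 1,000:1, then ~3,981:1, then 10,000:1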


Art, literature, shareware, polyamory, kink, and more:
http://www.xeromag.com/franklin.html
J
JNB0382
Nov 2, 2004
David Dyer-Bennet wrote:
"SJB" writes:

This is a very interesting thread and I appreciate your first hand observations. I currently shoot film and scan with a Nikonscan IV (2900 dpi) so I do get the 11 mpixel or so files.

Scanned-from-film pixels are very much *NOT* worth as much as digital-original pixels. Although of course it depends on scanner, technique, film, subject, etc.; but a very very rough rule-of-thumb is something like 2x — a 6 megapixel scanned file is roughly as good as a 3 megapixel digital original.

This is news to me. Where did you get this information from?

What did you mean by "a 6 megapixel scanned file is roughly as good as a 3 megapixel digital original"? Did you mean that a 3 megapixel digital original can be upsampled to a 6 megapixel file that is "as good as" a 6 megapixel scanned file that is not resampled?
B
bhilton665
Nov 2, 2004
From: "SJB"

My question is this, however: can you comment further on your "resampling techniques"? When you print larger pictures, I assume that you "up-sample" your digital file. Can you share your approach(es)?

You can’t really create new data by resampling that’s as good as what you’d get with a higher rez image to start with, but you CAN resample up in Photoshop and get better large prints than you could get by sending a low rez image to an inkjet printer. This is likely because the inkjet will resample anyway to its internal rez (probably 360 ppi) and it uses ‘nearest neighbor’ instead of smarter algorithms, plus there’s no sharpening.

Different images (data structure types) will resize better or worse using different techniques. Three I’d suggest you try are Stair Interpolation (basically resizing with ‘bicubic’ in 110% steps until you reach the right size …. easy to write an action for this) or using ‘bicubic smoother’ in a single step if you have CS or, if you have a free or trial copy, Genuine Fractals. GF seems to do better on test patterns (I got a free copy with a scanner and have tested it a fair bit) but I think I get better results on actual photographs with more random structure using something else. GF is also touted as being better for really big enlargements, say 10x linear, but I rarely need to go more than 2.5x linear (ie, to rezz up 8 Mpixel files to print 16×20" at 360 ppi). I’m not saying it’s bad, just that it’s not that useful for resampling typical digital files to typical print sizes.

This site compares Stair Interpolation to several other algorithms … since the guy who wrote the web page is selling an Action to do Stair Interpolation it’s no great surprise to see that it wins, but it’s easy enough to test it out yourself on your own images … http://www.fredmiranda.com/SI/ (this was written before ‘bicubic smoother’ became available in CS, btw). Stair works well for me but it takes a while to run. Doing ‘bicubic smoother’ in one step may be good enough, try it and see.
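For anyone without the action, the stair-interpolation flow is simple to sketch with Pillow (BICUBIC standing in for Photoshop's bicubic, ~110% per step as described; the file name is hypothetical, and the target size is assumed to match the image's aspect ratio):

from PIL import Image

def stair_upsample(img, target_w, target_h, step=1.10):
    # Upsample in ~110% increments, then land exactly on the target.
    w, h = img.size
    while w * step < target_w and h * step < target_h:
        w, h = round(w * step), round(h * step)
        img = img.resize((w, h), Image.BICUBIC)
    return img.resize((target_w, target_h), Image.BICUBIC)

big = stair_upsample(Image.open("photo.tif"), 5760, 7200)  # 16x20" at 360 ppi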

Then you need to sharpen the resampled image correctly, with finesse. It’s easy to over-sharpen rezzed-up files … I use an edge-sharpening action written by Bill Atkinson of Apple Computer fame, which uses the Stylize > Find Edges filter to generate a mask … you invert it, expand and smooth it out a bit, load it as a selection and sharpen thru the mask at a high Amt (400%) with Threshold 0 (the mask becomes the Threshold, basically) and change the radius to taste. This keeps the noise in smooth areas from getting sharpened (masked out by the ‘find edges’ step) and doesn’t pick up any resizing artifacts and emphasize them. I think McClelland’s "Bible" describes this flow in detail (first time I saw it was in V5) and the "Artistry" series by Haynes has a copy of Bill’s action on the CD. Or you can probably find the steps via Google.
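Roughly that flow, sketched with Pillow stand-ins (FIND_EDGES for Stylize > Find Edges, UnsharpMask for USM; Pillow's FIND_EDGES already gives bright edges on black, so the invert step drops out; file names are hypothetical):

from PIL import Image, ImageFilter

img = Image.open("resized.tif").convert("RGB")
edges = img.convert("L").filter(ImageFilter.FIND_EDGES)
mask = edges.filter(ImageFilter.MaxFilter(5))      # expand the edge mask a bit
mask = mask.filter(ImageFilter.GaussianBlur(2))    # smooth it out
sharp = img.filter(ImageFilter.UnsharpMask(radius=1.5, percent=400, threshold=0))
out = Image.composite(sharp, img, mask)            # sharpen only along edges
out.save("sharpened.tif")

Smooth areas stay masked out, so their noise never gets sharpened.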

Bill
J
JNB0382
Nov 4, 2004
Bill Hilton wrote:

[snip]

McClelland’s new book One on One does go into detail about creating an edge mask for sharpening. After creating the edge mask, instead of using USM, he prefers using the High Pass filter. But creating the edge mask is the bulk of the work, after that experimenting and choosing between USM or High Pass filter is relatively simple. Too bad McClelland did not provide an action for this. Does Atkinson’s action support both USM and High Pass, or only USM?
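For comparison, the High Pass alternative can be reconstructed generically with numpy and Pillow: build a high-pass layer (original minus a blur, offset to mid-grey), then Overlay-blend it onto the original. A sketch of the idea, not McClelland's exact recipe:

import numpy as np
from PIL import Image, ImageFilter

img = Image.open("resized.tif").convert("RGB")     # hypothetical input
base = np.asarray(img, dtype=float) / 255.0
blur = np.asarray(img.filter(ImageFilter.GaussianBlur(2)), dtype=float) / 255.0
hp = np.clip(base - blur + 0.5, 0.0, 1.0)          # mid-grey means "no change"
# Standard Overlay blend of the high-pass layer over the original
out = np.where(base < 0.5, 2 * base * hp, 1 - 2 * (1 - base) * (1 - hp))
Image.fromarray((np.clip(out, 0, 1) * 255).astype(np.uint8)).save("hp_sharp.tif")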
DD
David Dyer-Bennet
Nov 4, 2004
JNB0382 writes:

David Dyer-Bennet wrote:
"SJB" writes:

This is a very interesting thread and I appreciate your first hand observations. I currently shoot film and scan with a Nikonscan IV (2900 dpi) so I do get the 11 mpixel or so files.

Scanned-from-film pixels are very much *NOT* worth as much as digital-original pixels. Although of course it depends on scanner, technique, film, subject, etc.; but a very very rough rule-of-thumb is something like 2x — a 6 megapixel scanned file is roughly as good as a 3 megapixel digital original.

This is news to me. Where did you get this information from?

Experience, though many people have confirmed it on the net over the years. I got a Nikon LS-2000 scanner long ago, and have scanned a lot of my own film. And then in 2000 I got a 2 megapixel digital camera, and was *astounded* at how much cleaner each pixel was than in any scanned image, and in how much bigger than a scanned image the same size I could print it and have it look good.

(The exact factor is not broadly agreed to, and I only present it as a very rough suggestion of how big a difference I’m talking about.)

What did you mean by "a 6 megapixel scanned file is roughly as good as a 3 megapixel digital original"? Did you mean that a 3 megapixel digital original can be upsampled to a 6 megapixel file that is "as good as" a 6 megapixel scanned file that is not resampled?

No point in upsampling; just print the two at the same physical size, and compare.

David Dyer-Bennet, <mailto:>
RKBA: <http://noguns-nomoney.com/> <http://www.dd-b.net/carry/> Pics: <http://dd-b.lighthunters.net/> <http://www.dd-b.net/dd-b/SnapshotAlbum/> Dragaera/Steven Brust: <http://dragaera.info/>
DD
David Dyer-Bennet
Nov 4, 2004
"Daniel Masse" writes:

Bill Hilton wrote:
Since you mention it… what you are missing with 100% digital is the wonder of a slide projected on a large screen…

These have been largely replaced by digital LCD projectors. Kodak quit making slide projectors some time ago because the demand was so low.

Yes, digital projectors are great for professional presentations, but they are still way behind slide projectors if you are looking for quality… With our Photo Club, we make multi-projector slide shows on a 3 x 8 meter screen… There is no way we could do that with digital projectors…

Sorry, but that’s *exactly* what professional presenters are doing with digital projectors. Your "no way we could do that" is just showing your ignorance.

The virtue of real slide projection is the level of detail in the images, which we *don’t* match with digital projectors in general (though a few theaters project movies through digital projectors that *do* surpass slides).

David Dyer-Bennet, <mailto:>
RKBA: <http://noguns-nomoney.com/> <http://www.dd-b.net/carry/> Pics: <http://dd-b.lighthunters.net/> <http://www.dd-b.net/dd-b/SnapshotAlbum/> Dragaera/Steven Brust: <http://dragaera.info/>
DM
Daniel Masse
Nov 5, 2004
David Dyer-Bennet wrote:
Yes, digital projectors are great for professional presentations, but they are still way behind slide projectors if you are looking for quality… With our Photo Club, we make multi-projector slide shows on a 3 x 8 meter screen… There is no way we could do that with digital projectors…

Sorry, but that’s *exactly* what professional presenters are doing with digital projectors. Your "no way we could do that" is just showing your ignorance.

I agree: my wording was not well chosen. What I really mean is that the investment needed to project large images is way beyond the financial means of a Photo Club. From what I understand, we would need three (or four) computers and three projectors to convert our slide shows to digital…

The virtue of real slide projection is the level of detail in the images, which we *don’t* match with digital projectors in general (though a few theaters project movies through digital projectors that *do* surpass slides).

Last weekend, I saw several digital slide shows in an international contest. The image obtained with an XGA projector (1024 x 768 pixels) was fairly good, seen from about 10 meters, but it definitely lacked the deep colors and crispness of a slide projector. I understand that better digital projectors are available, but the price is such that they are out of reach of the general public… and I doubt that the price will ever come down significantly, as the demand for such projectors is going to remain limited to a few commercial uses. This is exactly what happened with slide projectors: top professional Kodak, SIMDA and LEICA projectors always remained very expensive. But slide projectors last forever, and the maintenance costs are limited to a new lamp, once in a while…
J
JNB0382
Nov 5, 2004
David Dyer-Bennet wrote:
JNB0382 writes:

[snip]

Scanned-from-film pixels are very much *NOT* worth as much as digital-original pixels. It depends, of course, on scanner, technique, film, subject, etc., but a very rough rule of thumb is something like 2x — a 6 megapixel scanned file is roughly as good as a 3 megapixel digital original.

This is news to me. Where did you get this information from?

Experience, though many people have confirmed it on the net over the years. I got a Nikon LS-2000 scanner long ago, and have scanned a lot of my own film. And then in 2000 I got a 2 megapixel digital camera, and was *astounded* at how much cleaner each pixel was than in any scanned image, and in how much bigger than a scanned image the same size I could print it and have it look good.

I haven’t made such a comparison. But I would expect a digital camera’s pixels to be "cleaner" than a scanner’s in the sense that they will not have the noise from the film grain and scanner CCD. However, a digital camera can have its own kind of noise.

(The exact factor is not broadly agreed to, and I only present it as a very rough suggestion of how big a difference I’m talking about.)

Point well taken, and the exact factor is unimportant.

What did you mean by "a 6 megapixel scanned file is roughly as good as a 3 megapixel digital original"? Did you mean that a 3 megapixel digital original can be upsampled to a 6 megapixel file that is "as good as" a 6 megapixel scanned file that is not resampled?

No point in upsampling; just print the two at the same physical size, and compare.

Is your comparison based on two same-size prints? Without upsampling and printing at 300 printer dpi, a 6 meg file will produce a print with twice the area (about 1.4× the linear dimensions) of that from a 3 meg file.
W
Waldo
Nov 5, 2004
Are you sure that the printer driver resamples??? I work with a lot of PostScript printers and as far as I know, they DON’T resample at all! That is done by the RIP. The RIP rasterizes fonts and vector art and transforms the bitmaps of the print stream. The resolution of the output of the RIP isn’t necessarily the output resolution of the printer. After the halftoning process (or any other type of screening, stochastic for most inkjets) there will be a number of planes of pure bilevel bitmaps, one for each process color. These bitmaps are used to steer the print engine.

You can easily check this by printing the stream to a file and examining the output (easier for PostScript than for GDI, PCL and HPGL printers…).

Waldo
DD
David Dyer-Bennet
Nov 5, 2004
JNB0382 writes:

[snip]

I haven’t made such a comparison. But I would expect a digital camera’s pixels to be "cleaner" than a scanner’s in the sense that they will not have the noise from the film grain and scanner CCD. However, a digital camera can have its own kind of noise.

Yes, it certainly can (especially a small-sensor camera at high ISO settings). I’ve got some *amazingly* speckly pictures from dark conditions! But then, my work over the years with film to take available-light pictures under those conditions is pretty speckly too :-).

The difference is *much* more than just the grain.

(The exact factor is not broadly agreed to, and I only present it as a very rough suggestion of how big a difference I’m talking about.)

Point well taken, and the exact factor is unimportant.

What did you mean by "a 6 megapixel scanned file is roughly as good as a 3 megapixel digital original"? Did you mean that a 3 megapixel digital original can be upsampled to a 6 megapixel file that is "as good as" a 6 megapixel scanned file that is not resampled?

No point in upsampling; just print the two at the same physical size, and compare.

Is your comparison based on two same-size prints? Without upsampling and printing at 300 printer dpi, a 6 meg file will produce a print with twice the area (about 1.4× the linear dimensions) of that from a 3 meg file.

Yes, same size prints; that’s what I meant by "print the two at the same physical size".

The size a print comes out at has nothing to do with the pixel dimensions in my workflow; it depends on the physical dimensions set in Photoshop when I print it. In inkjet printing, the concept of "pixel" in the actual output doesn’t apply; the data is shared around among many physical locations, and the pixels overlap.
David Dyer-Bennet, <mailto:>
RKBA: <http://noguns-nomoney.com/> <http://www.dd-b.net/carry/> Pics: <http://dd-b.lighthunters.net/> <http://www.dd-b.net/dd-b/SnapshotAlbum/> Dragaera/Steven Brust: <http://dragaera.info/>
B
bhilton665
Nov 5, 2004
From: Waldo

Are you sure that the printer driver resamples?

Dunno for sure … good question. I give a link below that shows where I got the idea but they may be wrong.

I work with a lot of postscript printers and as far as I know, they DON’T resample at all! That is done by the RIP.

Most of us using inkjets don’t have separate RIPs … that function is done by the printer driver, so it’s kind of a moot point.

With the Epson printers you can save the output file (not the input image file, the actual processed output file as sent to the printer). If you take a file with, say, 360 ppi rez and save the output file, and then downsample the input image file drastically (say to 100 ppi at the same print size) and save that one’s printer output file, you’ll see they are almost identical in size. If you downsample using ‘nearest neighbor’ you’ll see that the files *are* identical in size, which is why I deduced that the printer driver is using ‘nn’ instead of better algorithms like bicubic or, with CS, ‘bicubic smoother’.

The same thing happens if you resample the 360 ppi file upward, even to say 900 ppi … the resulting output file from the printer driver is still the same size (if you resample with ‘nearest neighbor’).

Here’s a site that describes how this was tested (and is where I got the idea in the first place) … http://www.inkjetart.com/news/archive/IJN_01-27-04.html …. scroll down to "RESOLUTION BASICS FOR SCANNING AND INKJET PRINTING".

For sure I get better large prints from input files that I resample and (carefully) sharpen myself than from files that are lacking in ppi for the given printer. For example my Canon 1D Mark II has 8 Mpixels and if printed native at 16×24" the input rez would be 146 ppi … carefully upsampling to 360 ppi and running USM gives me much better prints every time.

Bill
BP
Barry Pearson
Nov 6, 2004
Waldo wrote:
Are you sure that the printer driver resamples??? I work with a lot of PostScript printers and as far as I know, they DON’T resample at all! That is done by the RIP.
[snip]

This thread is surely about printing digital images, such as photographs? And not about PostScript or text or vectors? (If it isn’t, I apologise).

Assuming it *is* about digital images, such as photographs, then what the printer driver first sees is an array of pixels. It is a pretty hopeless task to dynamically interpret an array of pixels with some arbitrary resolution and work out how to move the heads, the paper, and what ink size droplets to apply, and where.

So the driver resamples to its native resolution. I’m told this is typically 600 ppi for HP & Canon, 720 ppi for Epson desktops, 360 ppi for large Epsons, and other values for other printers. It now has pixels that precisely match the rate at which it can make decisions about what to do with the heads, the paper, and the ink droplets.

When a photographer prints a file on a desktop printer, the photographer typically doesn’t see separate "drivers" and "RIPs", just a printer driver, which does everything necessary between the photo editor and the paper. Because the photographer supplies an array of pixels, in fact what we are really talking about is the RIP. As far as I can tell, what you get when you buy an Epson 1290 or Canon i9950 or whatever is the equivalent of a RIP. Perhaps an expert could comment on this.


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
W
Waldo
Nov 6, 2004
This thread is surely about printing digital images, such as photographs? And not about postscript or text or vectors? (If it isn’t, I apologise).

That’s true, apologies for going in a different direction.

Assuming it *is* about digital images, such as photographs, then what the printer driver first sees is an array of pixels. It is a pretty hopeless task to dynamically interpret an array of pixels with some arbitrary resolution and work out how to move the heads, the paper, and what ink size droplets to apply, and where.
So the driver resamples to its native resolution. I’m told this is typically 600 ppi for HP & Canon, 720 ppi for Epson desktops, 360 ppi for large Epsons, and other values for other printers. It now has pixels that precisely match the rate at which it can make decisions about what to do with the heads, the paper, and the ink droplets.

I checked a PostScript 3 printer, it just sends the data in a wrapper to the printer. It doesn’t do any resampling. Dunno for GDI printers.

Anyway, it doesn’t make any sense to resample to 720 ppi just because your printer is rated at that resolution. The only type of printer where you will benefit from this is a dye sublimation printer. Even if you have 12 process colors, you won’t notice if you send in a lower resolution (not too low, of course). The print engine is simply not able to translate one pixel of the image into one single drop of ink. It needs subpixels in order to reproduce (actually to "fake") the color.
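For readers who haven't seen it, the textbook form of that "faking" is error diffusion. A tiny Floyd-Steinberg sketch in Python/numpy (real drivers use proprietary stochastic screens, so this shows only the principle that many dots cooperate to fake one tone):

import numpy as np

def floyd_steinberg(gray):
    # Reduce an 8-bit grayscale array to pure on/off dots, pushing each
    # pixel's quantization error onto its unprocessed neighbours.
    img = gray.astype(float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            img[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return img.astype(np.uint8)

print(floyd_steinberg(np.full((6, 6), 128)))  # flat 50% gray -> alternating dot pattern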

Waldo
N
nomail
Nov 6, 2004
Waldo wrote:

I checked a PostScript 3 printer, it just sends the data in a wrapper to the printer. It doesn’t do any resampling. Dunno for GDI printers.

Anyway, it doesn’t make any sense to resample to 720 ppi just because your printer is rated at that resolution. The only type of printer where you will benefit from this is a dye sublimation printer. Even if you have 12 process colors, you won’t notice if you send in a lower resolution (not too low, of course). The print engine is simply not able to translate one pixel of the image into one single drop of ink. It needs subpixels in order to reproduce (actually to "fake") the color.

I use an Epson Stylus Pro 7600, which is supposed to have a native resolution of 360 dpi. So I took an image at 180 dpi and printed this just the way it is. Next, I used interpolation (Photoshop CS, bicubic smoother) to resample this image to 360 ppi and printed it again. According to the ‘native resolution theory’, the second print should be better because bicubic smoother is a better kind of interpolation than the nearest neighbor interpolation that the printer driver is supposed to use. In reality, I do not see any difference, even with a magnifier.

Conclusion: Either the theory is wrong and the driver does not interpolate at all, or the driver does not use nearest neighbor but bicubic as well. In any case, interpolation to 360 ppi seems a useless exercise that only increases the file size and the processing time.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
B
bhilton665
Nov 6, 2004
From: (Johan W. Elzenga)

So I took an image at 180 dpi and printed this just the way it is. Next, I used interpolation (Photoshop CS, bicubic smoother) to resample this image to 360 ppi and printed it again.

Conclusion: Either the theory is wrong and the driver does not interpolate at all, or the driver does not use nearest neighbor but bicubic as well.

Try the test with an odd ppi (not dpi), something like 134, 146 etc … anything that does *not* divide evenly into the printer’s native resolution. It’s easier for the printer to resample at 90, 180, 240 etc.
N
nomail
Nov 6, 2004
Bill Hilton wrote:

From: (Johan W. Elzenga)

So I took an image at 180 dpi and printed this just the way it is. Next, I used interpolation (Photoshop CS, bicubic smoother) to resample this image to 360 ppi and printed it again.

Conclusion: Either the theory is wrong and the driver does not interpolate at all, or the driver does not use nearest neighbor but bicubic as well.

Try the test with an odd ppi (not dpi), something like 134, 146 etc … anything that does *not* divide evenly into the printer’s native resolution. It’s easier for the printer to resample at 90, 180, 240 etc.

That would introduce an extra variable that may have nothing to do with interpolation or no interpolation. It’s obvious that printing at such an intermediate ppi value is more difficult for the driver, even if there is no interpolation at all. It means the printer cannot use a whole number of drops for each pixel, so halftoning becomes more difficult. It would have to use something like 4.2368 drops per pixel, which is impossible.

So the question is: what does it prove if you do see a difference between 195 and 180 ppi? Do you see the difference in resampling quality, or do you see the difference in halftoning effectiveness?…
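The arithmetic behind that objection, with made-up numbers (the 720 figure is the grid pitch assumed elsewhere in this thread, not a measured value):

GRID_DPI = 720  # assumed droplet-grid pitch

for image_ppi in (90, 180, 240, 360, 195, 170):
    print(f"{image_ppi} ppi -> {GRID_DPI / image_ppi:.4f} grid steps per pixel")
# 90, 180, 240 and 360 give whole numbers; 195 gives ~3.6923 and 170 gives
# ~4.2353 -- the kind of fractional drops-per-pixel figure described above.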


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
J
jjs
Nov 6, 2004
"Johan W. Elzenga" wrote in message

[…] It means the printer cannot use a whole number of drops for each pixel, so halftoning becomes more difficult. It would have to use something like 4.2368 drops per pixel, which is impossible.

It seems likely that the driver rounds off to an integer number of drops per pixel.
N
nomail
Nov 6, 2004
Bill Hilton wrote:

With the Epson printers you can save the output file (not the input image file, the actual processed output file as sent to the printer). If you take a file with say 360 ppi rez and save the output file, and then downsample the input image file drastically (say to 100 ppi at the same print size) and save that one’s printer output file you’ll see they are almost identical in size. If you downsample using ‘nearest neighbor’ you’ll see that the files *are* identical in size, which is why I deduced that the printer driver is using ‘nn’ instead of better algorithms like bicubic or, with CS, ‘bicubic smoother’.

There must be something else going on here. No matter what type of interpolation you use, the result is the same in terms of number of pixels. And since the file size only depends on the number of pixels (and the bit depth of course), not their individual RGB values, there shouldn’t be any difference between downsampling with NN or downsampling with bicubic. Both have the same number of pixels, so both should result in identical file sizes (unless you use compression).

Try this in Photoshop (be sure you save without icons, preview, etc and make the length of the file name the same) and you will see that bicubic or nearest neighbor downsizing will indeed give identical file sizes if you save both as uncompressed tiff’s.
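The same check can be scripted. A Python/Pillow sketch (file names are placeholders); since an uncompressed TIFF’s size depends only on pixel count and bit depth, the two sizes should match exactly:

import os
from PIL import Image

src = Image.open("input.tif")
half = (src.width // 2, src.height // 2)
src.resize(half, Image.NEAREST).save("nn.tif")       # Pillow writes TIFF uncompressed by default
src.resize(half, Image.BICUBIC).save("bicubic.tif")
print(os.path.getsize("nn.tif"), os.path.getsize("bicubic.tif"))  # identical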


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
BV
Bart van der Wolf
Nov 6, 2004
"Johan W. Elzenga" wrote in message
SNIP
So the question is: what does it prove if you do see a difference between 195 and 180 ppi? Do you see the difference in resampling quality, or do you see the difference in halftoning effectiveness?…

Resampling quality.

The color dithering algorithm is optimized for a fixed pixel density (e.g. 720 ppi). It is obviously much easier to dither from a known starting point. If the supplied image is not 720 ppi in that case, the driver will internally resample it to a known state before dithering. That resampling usually introduces degradation, and any prior sharpening will need to be restored by yet another unknown algorithm. If you’d started with 720 ppi, that would have allowed you to sharpen at final output size, and the only thing left for the printer driver would be to dither.

And for those that are skeptical about the ‘known state’ pixel density, the printer driver can be interrogated for what the ppi should be, for a given paper/ink combination.

Bart
B
bhilton665
Nov 6, 2004
Bill Hilton wrote:

With the Epson printers you can save the output file (not the input image file, the actual processed output file as sent to the printer). If you take a file with say 360 ppi rez and save the output file, and then downsample the input image file drastically (say to 100 ppi at the same print size) and save that one’s printer output file you’ll see they are almost identical in size. If you downsample using ‘nearest neighbor’ you’ll see that the files *are* identical in size, which is why I deduced that the printer driver is using ‘nn’ instead of better algorithms like bicubic or, with CS, ‘bicubic smoother’.

From: (Johan W. Elzenga)

There must be something else going on here. No matter what type of interpolation you use, the result is the same in terms of number of pixels. And since the file size only depends on the number of pixels (and the bit depth of course), not their individual RGB values, there shouldn’t be any difference between downsampling with NN or downsampling with bicubic. Both have the same number of pixels, so both should result in identical file sizes (unless you use compression).

You completely miss the point … what you say is obviously true for tiff files but irrelevant to this discussion.

What I described is a difference in file sizes if you save the printer OUTPUT file, the actual raster info sent to the printer after the driver has finished with it. If you interpolate using nearest neighbor you’ll find these files are identical. If you interpolate using something smarter you’ll find that the printer OUTPUT files are slightly different, which is evidence that the printer software is using ‘nn’ for its internal resizing.

Bill
WF
Wayne Fulton
Nov 7, 2004
In article <hMVid.462$>,
says…

So the driver resamples to its native resolution. I’m told this is typically 600 ppi for HP & Canon, 720 ppi for Epson desktops, 360 ppi for large Epsons, and other values for other printers. It now has pixels that precisely match the rate at which it can make decisions about what to do with the heads, the paper, and the ink droplets.

Just curious, but told by whom? What authoritative source? We hear many wild tales on the internet, so how can this claim be authenticated? Which printer manufacturer claims they resample all images to these huge sizes? Where do they say this? If the claim can be true, there must be some evidence for it. What is the actual evidence that this is true?

About the same spool file sizes regardless of image resolution – why would we imagine those spool files contained pixel bitmaps? What possible use are image pixels to the printer and print head? Why wouldn’t we assume the purpose of the device driver was to convert our bitmap image data to a spool file containing rasterized passes of four color ink dot patterns suitable for the device (the device is a moving printhead)? Why else is our computer so busy during this printing time? It seems no surprise that all spool files of printhead data are about the same size, assuming everything is the same except resampled resolution – image resolution is no longer a factor at that point. This spool file size is NOT evidence of this claim. So what is the evidence?


Wayne
http://www.scantips.com "A few scanning tips"
BP
Barry Pearson
Nov 7, 2004
Wayne Fulton wrote:
In article <hMVid.462$>,
says…

So the driver resamples to its native resolution. I’m told this is typically 600 ppi for HP & Canon, 720 ppi for Epson desktops, 360 ppi for large Epsons, and other values for other printers. It now has pixels that precisely match the rate at which it can make decisions about what to do with the heads, the paper, and the ink droplets.

Just curious, but told by whom? What authoritative source? We hear many wild tales on the internet, so how can this claim be authenticated? Which printer manufacturer claims they resample all images to these huge sizes? Where do they say this? If the claim can be true, there must be some evidence for it. What is the actual evidence that this is true?

First, I’ll talk about "driver+printer", because the distribution of functionality is irrelevant for my point. I’m talking about printers used for photography, especially inkjet printers. And when I say ppi or pixels per inch, I mean it.

I deduced a year or two ago that driver+printers would resample to a "standard" (for them) ppi value. (Since I sometimes print at 1000 ppi, sometimes the resampling is *downwards*!) I then tried to find out if my deduction was correct. (Google Groups should still have my posts on the topic). I eventually found discussion and even software that supported this deduction, and none that I felt contradicted it. So I am as confident as I can be without access to the design specs.

The reason why I was interested is that I wanted to get the maximum print sharpness, especially for the size of prints I hang on my wall, or enter for competitions. What are the best sharpening parameters? What is the relationship between what I see on the screen and what is printed? Can I get better soft-proofing?

I had assumed up to then (without evidence) that the driver+printer would loop through my pixels, and for each one work out which inks to place where, how to move heads & paper, etc. But I realised that this would be tricky (in fact, silly) design, which would cause all sorts of troubles. For example:

If I provide 1000 ppi, what does the driver+printer do? Take the first pixel, work out what droplets to apply, then go on to the next pixel and … what? What would it do with the next pixels, if it is not able to print detail this fine? Ignore them until it has finished with that first pixel, perhaps (what?) 1/250th or 1/500th of an inch later, hence every 2 or 4 pixels? Treat them as an accumulated error and correct this at the next 1/250th position?

If I provide 10 ppi (for example, 60 by 40 pixels for a 6 x 4 inch print!), what would the driver+printer do, for example for a yellow pixel? Put down one yellow droplet? In fact, why wouldn’t it do approximately the same as it would do for a 10 x 10 pixel square in an image at 100 ppi? After all, the task is approximately the same – print a yellow square of 1/10th x 1/10th inch. (And so on for intermediate and other numbers of ppi).

I deduced that a more sensible design would be to loop at a standard paper-spacing, not loop through my pixels, when working out how to drive the hardware & inks. If that standard paper-spacing for that driver+printer were (I’m deliberately making up these numbers) 1/250th of an inch, then it would identify the colour my photograph wants at that position, take into account the accumulated error from its error diffusion algorithm, perhaps apply a stochastic variation, and work out ink colours, sizes & placement. Then it would do the same 1/250th inch later. And so on. (I’m not talking about dots or droplets per inch).

If it really is looping 250 times per inch, what is the best method of identifying the colour that the photographic image wants at the position? Just find the nearest pixel in the image corresponding to that position on the paper? But that risks amplifying noise. If that is an anomalous pixel, instead of just being one anomalous pixel in (say) a 4 x 4 pixel region of the image, it would now dominate the 1/250th x 1/250th square on the paper. So my deduction was that it would not take just the one pixel at that position, but would ensure that nearby pixels were taken into account. This is equivalent to resampling from (say) 1000 ppi to 250 ppi.

Similar logic applies to a 10 ppi image. It would identify the same colour 25 times round the loop. So that 10 ppi image would be treated by the driver+printer as though it were a 250 ppi image with 25 x 25 pixel squares. That is also equivalent to resampling.

I’m not saying that the driver+printer actually builds a single image in the computer at 250 ppi (or whatever). I don’t know whether it does or not. A plausible optimisation would be to resample in stages, "just in time". But the key deduction is that, every 1/250th of an inch, the driver+printer has to identify a colour from the image, and this should be *equivalent* to what the colour would be by some form of resampling. (And what does "equivalent to resampling" mean other than resampling? I’m concerned with what is printed, not with file sizes).
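A toy version of that deduction in Python/numpy, using the same made-up 250/inch spacing and ignoring error diffusion and dithering entirely:

import numpy as np

NATIVE_PPI = 250  # deliberately made-up paper spacing, as above

def colour_wanted(img, img_ppi, col, row):
    # Colour the image "wants" at native grid position (col, row): the mean
    # of all source pixels lying under that 1/250th-inch square.
    scale = img_ppi / NATIVE_PPI              # source pixels per grid step
    y0, x0 = int(row * scale), int(col * scale)
    y1 = max(y0 + 1, int((row + 1) * scale))
    x1 = max(x0 + 1, int((col + 1) * scale))
    return img[y0:y1, x0:x1].mean(axis=(0, 1))

# At 1000 ppi each step averages a 4 x 4 block (one noisy pixel cannot
# dominate); at 10 ppi the same source pixel is returned for 25 consecutive
# steps. Both are equivalent to resampling the image to 250 ppi.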

Qimage gives the 600 ppi and 720 ppi values. I don’t know whether they are correct. Plenty of people appear to believe that you don’t get better results beyond 360 ppi on an Epson.

About the same spool file sizes regardless of image resolution – why would we imagine those spool files contained pixel bitmaps? What possible use are image pixels to the printer and print head? Why wouldn’t we assume the purpose of the device driver was to convert our bitmap image data to a spool file containing rasterized passes of four color ink dot patterns suitable for the device (the device is a moving printhead)? Why else is our computer so busy during this printing time? It seems no surprise that all spool files of printhead data are about the same size, assuming everything is the same except resampled resolution – image resolution is no longer a factor at that point. This spool file size is NOT evidence of this claim. So what is the evidence?

I make no assumptions about the contents of spoolfiles. I would expect them to be the same size for different image-PPI values whatever the type of content. But I’ve never been concerned either way.


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
BP
Barry Pearson
Nov 7, 2004
Bart van der Wolf wrote:
[snip]
And for those that are skeptical about the ‘known state’ pixel density, the printer driver can be interrogated for what the ppi should be, for a given paper/ink combination.

Is this what Qimage uses?


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
J
JNB0382
Nov 7, 2004
David Dyer-Bennet wrote:
writes:

David Dyer-Bennet wrote:
writes:

David Dyer-Bennet wrote:
"SJB" writes:

This is a very interesting thread and I appreciate your first hand observations. I currently shoot film and scan with a Nikonscan IV (2900 dpi) so I do get the 11 mpixel or so files.

Scanned-from-film pixels are very much *NOT* worth as much as digital-original pixels. Although of course it depends on scanner, technique, film, subject, etc.; but a very very rough rule-of-thumb is something like 2x — a 6 megapixel scanned file is roughly as good as a 3 megapixel digital original.

This is news to me. Where did you get this information from?

Experience, though many people have confirmed it on the net over the years. I got a Nikon LS-2000 scanner long ago, and have scanned a lot of my own film. And then in 2000 I got a 2 megapixel digital camera, and was *astounded* at how much cleaner each pixel was than in any scanned image, and at how much bigger I could print it than a scanned image of the same size and still have it look good.

I haven’t made such a comparison. But I would expect a digital camera’s pixel to be "cleaner" than a scanner’s, in the sense that it will not have the noise from the film grain and scanner CCD. However, a digital camera can have its own kind of noise.

Yes, it certainly can (especially a small-sensor camera at high ISO settings). I’ve got some *amazingly* speckly pictures from dark conditions! But then, my work over the years with film to take available-light pictures under those conditions is pretty speckly too :-).

The difference is *much* more than just the grain.

(The exact factor is not broadly agreed to, and I only present it as a very rough suggestion of how big a difference I’m talking about.)

Point well taken, and the exact factor is unimportant.

What did you mean by "a 6 megapixel scanned file is roughly as good as a 3 megapixel digital original"? Did you mean that a 3 megapixel digital original can be upsampled to a 6 megapixel file that is "as good as" a 6 megapixel scanned file that is not resampled?

No point in upsampling; just print the two at the same physical size, and compare.

Is your comparison based on two same size prints? Without upsampling, and printing at 300 printer dpi, a 6 meg file will produce a print with twice the area (about 1.4 times the linear dimensions) of that from a 3 meg file.

Yes, same size prints; that’s what I meant by "print the two at the same physical size".

What is the actual print size from the 6 and 3 meg files you are comparing?

The size a print comes out at has nothing to do with the pixel dimensions in my workflow; it depends on the physical dimensions set in Photoshop when I print it. In inkjet printing, the concept of "pixel" in the actual output doesn’t apply; the data is shared around among many physical locations, and the pixels overlap.

Now I’m really confused. In my workflow, I set up the print size and the dpi in the PS Image Size window. I would leave Resample unchecked and disabled if the dpi for the desired print size is computed by PS to be around 300 dpi (not exact). If the dpi is significantly less than 300 for a print size, I would check and enable Resample, and enter 300 as the dpi to maintain that size. My understanding is that the file PS sends to an Epson printer driver has the exact pixel count and information as specified by the Image Size window.
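That decision reduces to a few lines; a sketch (the 15% slack is my own guess at what "around 300 dpi (not exact)" means):

def image_size_plan(width_px, print_width_in, target_ppi=300, slack=0.15):
    implied_ppi = width_px / print_width_in
    if implied_ppi >= target_ppi * (1 - slack):
        return f"Resample off: print as-is at {implied_ppi:.0f} ppi"
    return (f"Resample on: {width_px} px -> "
            f"{round(print_width_in * target_ppi)} px at {target_ppi} ppi")

print(image_size_plan(3072, 10))  # ~307 ppi -> leave Resample unchecked
print(image_size_plan(3072, 16))  # 192 ppi -> resample up to 4800 px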

If you use Image Size in your workflow, what are the settings for your 3meg and 6meg files for the same size prints? If you don’t specify your print size in Image Size, how do you get that information to the printer driver?

What the Epson printer driver does with the pixel count and information it receives from PS is a different story, as discussed in the other branch of this thread.
WF
Wayne Fulton
Nov 7, 2004
In article <UDpjd.57$>,
says…

I deduced a year or two ago that driver+printers would resample to a "standard" (for them) ppi value.

So any "evidence" for this theory is only like a hunch? There is no actual evidence? I don’t know the specifics either, but my own hunch is that this is another case where Occam’s Razor is useful. Whatever would an inkjet do with colored 720 dpi image pixels? That excess would make the printhead decision problem much worse, not better. I know a few people can always see about anything, but in my case, I cannot see this.

If I provide 10 ppi

Good example. Then you see a nice printed square pixel at 1/10 inch size, very visible. It is precisely done too, with smooth edges (nearest neighbor, not blended). This known result should affect that hunch too, because if we first resample such an image to, say, 72 times larger, the blending will make it so fuzzy as to be nearly unrecognizable. What would be the advantage of that?

The only goal in printing any image is to reproduce the pixel data provided. On the printed paper, a pixel is a specific dimensioned area of paper. A printer’s only job is to fill that area with the correct color of ink.

The problem is that ink jets are not so great at doing this. The ink jet printing algorithm is very complicated, surely different depending on each individual pixel’s size and color situation. A black pixel is vastly easier than a red pixel, since we have no red ink. The problem is what specific area of paper is to be colored, with which color combination, and how to create that color combination from 4 colors of ink dots? Line art is of course vastly easier, no color issues, only size.

But regardless, the general goal is to fill the pixel’s specified area of paper with several ink dots to simulate the nearest possible color of that pixel. Due to the real world dimension of ink dots (the ink dots are much larger than their so-called spacing grid), it can never do this job precisely, not even for more coarse images (so if you imagine you can print 720 dpi images, don’t look too closely at 150 dpi images). Error diffusion techniques are designed, and randomized dithering is designed, both existing only to hide these extreme difficulties. The issue is the difficulties. The problem is already that too few ink dots can fit in the pixel’s paper area. Excessive image resolution with tiny paper areas really complicates the problem. It does not help things.

I do think we are too quick to judge printed detail by looking only at fine black lines in a color image. That detail is just lineart, not hard to print, and high resolution does help that line art aspect. But we really should instead be looking at detail in green grass or leaves, etc, something involving some multiple ink dots. Then less is often more.

Let’s assume the printer’s actual image capability is in the 300 dpi ballpark, just to have a number. So one plan is to print a 297 dpi image by trying to paint or color the 1/297 inch paper areas with the correct color using the tools at our disposal, which are admittedly crude. I believe the printer has the same computation to do, to fill 1/297 inch or 1/720 inch areas; it’s just a number (and the "printed pixel" will have neither actual dimension).

Or for a second plan, we can first upsample to a 720 dpi image, fuzzier by definition, and then discard at least 3/4 of that worse data to try to get back to the 300 dpi ball park we might have some hope of accomplishing. One of these plans makes no sense to me.

When queried, yes, Windows GDI does return the printer’s ink dot spacing values, like 720 dpi or 600 dpi. However these values are certainly not to be confused with image pixels (I know you know that). (And FWIW, Windows GDI also returns the 96 or 120 dpi value for the video screen, so don’t take it very seriously <g>).
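For anyone who wants to repeat that query, a Windows-only Python sketch via ctypes (the printer name is a placeholder for whatever queue is installed):

import ctypes

LOGPIXELSX = 88  # GetDeviceCaps index for horizontal dpi
user32, gdi32 = ctypes.windll.user32, ctypes.windll.gdi32

dc = user32.GetDC(0)  # the screen: returns the 96 or 120 "logical inch" value
print("screen:", gdi32.GetDeviceCaps(dc, LOGPIXELSX), "dpi")
user32.ReleaseDC(0, dc)

pdc = gdi32.CreateDCW("WINSPOOL", "Your Printer Name", None, None)
if pdc:  # a printer DC returns the driver's dot-spacing rating, e.g. 600 or 720
    print("printer:", gdi32.GetDeviceCaps(pdc, LOGPIXELSX), "dpi")
    gdi32.DeleteDC(pdc)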

A B&W laser printer returns a 600 dpi value too, but we are certain that it is foolish to print a 600 dpi grayscale image on a 600 dpi laser. The meaning of that 600 dpi value is only about black dot spacing capability. It is NOT about gray image pixels, not related to image pixels, except line art. No stretch of the imagination can extend this beyond line art. The grayscale image is no longer a grayscale image, but is instead always a line art halftone image any time the printer sees it (because the laser printer only has black toner, no gray toner exists, etc, etc. You know all this too).

This 600 dpi value is very important in that way, but certainly it is no reason to upsample all our images to 600 dpi. Seems counterproductive.


Wayne
http://www.scantips.com "A few scanning tips"
BP
Barry Pearson
Nov 7, 2004
Wayne Fulton wrote:
In article <UDpjd.57$>,
says…

I deduced a year or two ago that driver+printers would resample to a "standard" (for them) ppi value.

So any "evidence" for this theory is only like a hunch? There is no actual evidence?

I said: "I deduced a year or two ago that driver+printers would resample to a "standard" (for them) ppi value…. I then tried to find if my deduction was correct…. I eventually found discussion and even software that supported this deduction, and none that I felt contradicted it. So I am as confident as I can be without access to the design specs."

That is more than "only like a hunch". (Do you have more than a hunch?)

I don’t know the specifics either, but my own hunch is that this is another case where Occam’s Razor is useful. Whatever would an inkjet do with colored 720 dpi image pixels? That excess would make the printhead decision problem much worse, not better. I know a few people can always see about anything, but in my case, I cannot see this.

I suggest that you examine the principles of what I say, before getting to the specific numbers. That is why I made up numbers such as 1/250th of an inch in the post you are replying to. I didn’t want readers to judge what I said by assuming numbers like 720 ppi. I am confident about the principle of a native resolution, but less confident about the specific numbers such as 720 ppi. But I also have no reason to believe they are wrong.

If I provide 10 ppi

Good example. Then you see a nice printed square pixel at 1/10 inch size, very visible. It is precisely done too, with smooth edges (nearest neighbor, not blended). This known result should affect that hunch too, because if we first resample such an image to, say, 72 times larger, the blending will make it so fuzzy as to be nearly unrecognizable. What would be the advantage of that?

I just took one of my photographs, and resampled it to 57 x 40 pixels. Then I duplicated it twice, and upsized one of these to 570 x 400, and the other to 4104 x 2880 (720 ppi). I used "nearest neighbour", which some believe is what driver+printers use. (I used Photoshop CS throughout). I used the original of: http://www.barry.pearson.name/photography/pictures/eg95/eg95_26_11_3.htm

I printed each of these at 15 cm by about 10.53 cm. I can’t spot the difference between any of those prints. There is nothing fuzzy about any of them. Just very crisp squares of colour for the pixels. I assume that a driver+printer that used "nearest neighbour" to upsample to 720 ppi would give the same result as when I upsample in Photoshop to 720 ppi and pass it to the driver+printer.
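The experiment is easy to repeat outside Photoshop. A Pillow sketch (source file name illustrative):

from PIL import Image

small = Image.open("eg95.tif").resize((57, 40), Image.BICUBIC)
small.resize((570, 400), Image.NEAREST).save("nn_10x.tif")
small.resize((4104, 2880), Image.NEAREST).save("nn_72x.tif")  # 72x, the 720 ppi case
# Nearest neighbour just replicates pixels, so both enlargements keep
# hard-edged squares of colour -- nothing goes fuzzy.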

This doesn’t prove my point, but it is compatible with it and doesn’t disprove it. I’ve never said that driver+printers use bicubic.

[snip]
Let’s assume the printer’s actual image capability is in the 300 dpi ballpark, just to have a number. So one plan is to print a 297 dpi image by trying to paint or color the 1/297 inch paper areas with the correct color using the tools at our disposal, which are admittedly crude. I believe the printer has the same computation to do, to fill 1/297 inch or 1/720 inch areas; it’s just a number (and the "printed pixel" will have neither actual dimension).

Are you talking about dpi or ppi? Are there any inkjet printers that work in the 300 dpi region? I can’t find any in my Jessops catalogue. But, where inkjet printers are concerned, I think dpi numbers are marketing BS anyway. I don’t care about dpi numbers, because I am a photographer, and what matters to me is how a driver+printer displays the pixels I give it on inches of paper, not some esoteric marketing measure of "dots".

(What the heck is a "dot", when you have various ink droplet sizes? And what is "dots per inch", when you have stochastic variation in droplet position? Marketing bovine excrement!)

Or for a second plan, we can first upsample to a 720 dpi image, fuzzier by definition, and then discard at least 3/4 of that worse data to try to get back to the 300 dpi ball park we might have some hope of accomplishing.
One of these plans makes no sense to me.

See above. Sharp, not fuzzy. I have the prints in front of me!

When queried, yes, Windows GDI does return the printer’s ink dot spacing values, like 720 dpi or 600 dpi. However these values are certainly not to be confused with image pixels (I know you know that). (And FWIW, Windows GDI also returns the 96 or 120 dpi value for the video screen, so don’t take it very seriously <g>).

[snip]

Does the Windows GDI return dpi or ppi? Qimage works in ppi, not dpi. As it should, of course.

(And what is an agreed, standardised, definition of "dpi"? Is there one? TIFF talks about ppi, not dpi. Photoshop dialogues are ppi, not dpi. Ditto PSP, I think. The ink-droplet technology is at least as important as the dots per inch, even if that means the same as droplets per inch, which is variable if you have variable droplet sizes).


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
WF
Wayne Fulton
Nov 7, 2004
In article <i%tjd.133$>,
says…

That is more than "only like a hunch". (Do you have more than a hunch?)

No, my view is only a hunch too. Our considered opinions about these hunches simply differ, which is a common situation when there are no facts. My own view is that printing at around 300 dpi works very well in general, and I can’t see any of the supposed improvement at 720 dpi, real or interpolated. A few say they do, but I don’t.

OK, thanks Barry, I was just inquiring if there was any evidence to confirm that story. I understand the answer to be no.

I suggest that you examine the principles of what I say, before getting to the specific numbers. That is why I made up numbers such as 1/250th of an inch in the post you are replying to.

I understood your 250 dpi case, and about aligning ink dots in it. I just couldn’t imagine how much this exactness matters when the situation needs to fit in more ink dots than can possibly fit the printed pixel area. It seems like picking nits when the actual pixel area is not well defined (intentionally randomized all around the limits of where it should be), the ink dot is larger than we imagine it to be, and that vague situation is not about any one ink dot anyway – and I gave it a 297 dpi image file. It is about the averaged color of the pixel space on paper, the best our tools allow creating it, which is not that great for discrete ink dots.

Are you talking about dpi or ppi? Are there any inkjet printers that work in the 300 dpi region? I can’t find any in my Jessops catalogue. But, where inkjet printers are concerned, I think dpi numbers are marketing BS anyway. I don’t care about dpi numbers, because I am a photographer, and what matters to me is how a driver+printer displays the pixels I give it on inches of paper, not some esoteric marketing measure of "dots".

Yes, now that the motor stepping specs are much smaller than the ink dot diameters, the motor specs don’t say much now. I agree, only the final results on paper matter today.

OK, I’ll bite. My use of dpi always means image resolution (pixels per inch) if the usage context is about image resolution. Or it always means printer motor stepping ratings (or so-called ink drops per inch) if the context is about printer motor stepping ratings. Depending on the usage context, my term dpi always means the only thing it can possibly mean… if about images, then it is about images. I suspect we may not find many English words that have only one meaning. 🙂 Context always decides the meaning; we are used to this.

The term PPI is also fine, no problem with it at all, and I will understand ppi too if you prefer to use it. And if you’ve been around the block once, as you have, then I expect you to understand dpi too. Dpi was technical jargon in the older days (a few years ago), and pixels are in fact conceptual colored dots of sorts. Nevertheless, dpi is simply the universal name for image resolution, and always has been, for years and years. PPI is relatively new usage, and it is fine too. Regardless of preference, we must understand it by either name, dpi or ppi, because we are certainly always going to see it both ways.

I know some people either don’t or won’t understand, but sometimes it’s best to just accept how things are. We really do need to understand it either way.

If you want to argue it, you must also deal with the fact that all scanner specifications say dpi, and almost all scanner software (with a few exceptions) does too, and I assure you that a scanner’s use of dpi definitely never means ink dots… a scanner’s dpi will always mean pixels per inch of image resolution, in the classic sense. Even continuous tone printers, dye-subs and Frontier-type printers, are also rated in dpi, and these certainly always mean printed pixels too, not single-color ink dots. I doubt you are saying they are all wrong too? I can’t solve that problem. They do know what they’re doing, so I just go with them, because that is what dpi means, in the context of image resolution.

Does the Windows GDI return dpi or ppi? Qimage works in ppi, not dpi. As it should, of course.

I have not seen the newest Microsoft photo editors, but I’d guess they are using ppi too. But otherwise I really doubt Windows itself or GDI has ever said ppi once. Actually, the GDI printer rating in dpi would be correct for your limited, reserved definition of dpi, and my point was that the GDI printer rating is about your motor dpi, instead of having any meaning in terms of ppi and pixels. Today’s Windows XP advanced video setting still says dpi too (for logical inches), and this is the same video value GDI shows.

(And what is an agreed, standardised, definition of "dpi"? Is there one? TIFF talks about ppi, not dpi. Photoshop dialogues are ppi, not dpi. Ditto PSP, I think. The ink-droplet technology is at least as important as the dots per inch, even if that means the same as droplets per inch, which is variable if you have variable droplet sizes).

The prepress industry has used dpi forever, with both meanings in context, but one never heard ppi until recent years. But there are so many newbies today using scanners and cameras, and they often don’t understand much of the details at first, so the photo editor software has largely changed to say ppi (Irfanview is one exception). So usage is changing slowly, but this is a relatively new thing, from the last few years, to help newbies understand the difference. Certainly I do agree this recent distinction (ppi for images / dpi for printers) is less confusing for the newbies. I have no problem with ppi, and my own book was changed to say ppi a long time ago because it is clearer for newbies. Nevertheless, I always still say dpi myself in person; that’s just how I learned it, and how I think, and what it is, because dpi is simply the name of it and always has been, IMO.

Other preferences are fine too, PPI is good and very descriptive, but the main thing is that we all really need to understand both ways. Those who imagine a law is written in stone saying the term dpi is reserved for printers’ ink dots are simply wrong; they just don’t know how things are, and always were. That is merely their preference.


Wayne
http://www.scantips.com "A few scanning tips"
A7
aka 717
Nov 7, 2004
Generation is not always the most accurate indicator. A large format transparency has huge amounts of detail in the shadows, and when scanned through a drum scanner the details are captured. A small CCD can’t come close, and the better and larger ones may or may not. The previous poster said no. I have no need to do the tests myself, so his word is the best info I have so far…

"Scruff" wrote in message
Anything scanned from film becomes second generation. Digital is already there!

"Johan W. Elzenga" wrote in message
aka 717 wrote:

Wrong. A scan stacks pixels on top of grain. The result is that you will never see detail as small as one pixel only or one line of pixels only. A digital photo on the other hand can indeed have detail that small. As a result, you need (much) more pixels in a scan to resolve the same amount of detail, so you cannot compare the number of pixels one to one. BTW, a Nikon scanner with 2900 ppi maximum resolution is not a ‘high end scanner’, it’s a good consumer grade scanner.
It used to be that the best quality, I think, was from large transparencies scanned on a drum scanner. Has digital gone beyond that? Is Arizona Highways using digital, I wonder.

No, digital has not yet gone beyond an 8 x 10 inch transparency scanned on a drum scanner. But 35mm is surpassed by DSLR cameras, and 6×7 cm is surpassed by digital backs on medium format cameras. Whether Arizona Highways is using digital is not very relevant IMHO. There will always be people living in the previous century.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/

BP
Barry Pearson
Nov 8, 2004
Wayne Fulton wrote:
In article <i%tjd.133$>,
Barry Pearson says…
Wayne Fulton wrote …

That is more than "only like a hunch". (Do you have more than a hunch?)

No, my view is only a hunch too.

Too? Mine was "more than only a hunch". It was an informed deduction, supported by software that I tried and by other information.

Our considered opinions about these hunches simply differ, which is a common situation when there are no facts. My own view is that printing at around 300 dpi works very well in general, and I can’t see any of the supposed improvement at 720 dpi, real or interpolated. A few say they do, but I don’t.

OK, thanks Barry, I was just inquiring if there was any evidence to confirm that story. I understand the answer to be no.

Then you didn’t read what I wrote! I suggest you do some searching for "native resolution", and read about Qimage, etc. http://www.google.co.uk/search?q=printer+native+resolution http://www.google.co.uk/search?q=Qimage+native+resolution http://www.ddisoftware.com/qimage/quality/

I would be interested to know how you would go about designing a driver+printer with the task of taking a digital array of pixels and converting them into ink droplets on paper. Would you base the central loop on the pixels of the supplied image? Or on some paper-spacing based on the characteristics of the hardware, such as ink droplet sizes, nozzles, head movement, paper movement, etc? Or something else?

If you decided to design it around the characteristics of the hardware, (and I suspect you would, because you are a world-class expert in some aspects of digital imaging), how would you then treat the input image? You would not be basing the loop on the PPI of the input image, yet you would be requiring information about the photograph based on the characteristics of the hardware. How would you make them match? That appears to be the origin of "native resolution" – ensuring that the supplied photographic image is able to supply the colour values needed by the driver+printer as required each time round its main loop.

[snip]
Let’s assume the printer’s actual image capability is in the 300 dpi ballpark, just to have a number. So one plan is to print a 297 dpi image by trying to paint or color the 1/297 inch paper areas with the correct color using the tools at our disposal, which are admittedly crude. I believe the printer has the same computation to do, to fill 1/297 inch or 1/720 inch areas; it’s just a number (and the "printed pixel" will have neither actual dimension).

Are you talking about dpi or ppi? Are there any inkjet printers that work in the 300 dpi region? I can’t find any in my Jessops catalogue. But, where inkjet printers are concerned, I think dpi numbers are marketing BS anyway. I don’t care about dpi numbers, because I am a photographer, and what matters to me is how a driver+printer displays the pixels I give it on inches of paper, not some esoteric marketing measure of "dots".
[snip]
Other preferences are fine too, PPI is good and very descriptive, but the main thing is that we all really need to understand both ways. Those who imagine a law is written in stone saying the term dpi is reserved for printers’ ink dots are simply wrong; they just don’t know how things are, and always were. That is merely their preference.

Printing is the dangerous case for being careless about dots versus pixels. And that is because DPI and PPI have different technical meanings, so there is often genuine ambiguity. "Print at 720 dpi" and "print at 720 ppi" could be different – one might be talking about dots of ink, the other about supplied pixels. I have a rule of thumb – if the number is less than 600, it probably means PPI, if it is greater than 720, it probably means DPI, and between those values it might mean either, needing clarification. I guessed that you meant PPI, but still wasn’t sure.
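Stated as a function (the thresholds are Barry's rule of thumb; the code is only an illustration):

def probable_meaning(value):
    if value < 600:
        return "probably ppi"
    if value > 720:
        return "probably dpi"
    return "ambiguous -- needs clarification"

for n in (300, 360, 600, 720, 1440, 2880):
    print(n, probable_meaning(n))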

(CRTs can also be tricky – I once got into a discussion about *phosphor dots* when I thought the discussion was about pixels! So CRTs can also be a dangerous case for being careless about dots versus pixels).

My local photographic society holds sessions for visitors as well as members about digital imaging. Questions have included "what is a sensor?", and "what is the difference between pixels and resolution?". It cannot be assumed that everyone reading about these matters can interpret "dot" and "pixel" confidently according to context. I suggest that old hands like us should clean up our act to help the next generation understand what is going on. Printing is the most obvious case where we should use exactly the right words.

Fortunately, the main photo-editors, and the web standards, agreed several years ago that "pixel" was the proper word for an element of a picture, or a measure for web pages, etc. The TIFF 6.0 standard identifies resolution in terms of pixels per (inch, cm, etc), not dots. A web browser will give the dimensions of a photograph as pixels rather than dots, because that is how web standards are defined. The standards for digital cameras are in terms of pixels. The new "Digital Negative" standard, which we can hope will apply to scanners as well as cameras, is also in terms of pixels, not dots. If there ever was a terminology battle, it was won years ago. People coming across these terms for the first time in the 21st century should just see the modern words. (Modern? Hm! I think "pixel" dates back to the 1960s). We can hope that use of older words (even including "pels", etc) dies away.

I suppose it is a matter of "be tolerant with input, and strict with output". We should speak and write the correct words, but accept that others may not be so careful.


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
DD
David Dyer-Bennet
Nov 8, 2004
writes:

David Dyer-Bennet wrote:
writes:

David Dyer-Bennet wrote:
writes:

David Dyer-Bennet wrote:
"SJB" writes:

This is a very interesting thread and I appreciate your first hand observations. I currently shoot film and scan with a Nikonscan IV (2900 dpi) so I do get the 11 mpixel or so files.

Scanned-from-film pixels are very much *NOT* worth as much as digital-original pixels. Although of course it depends on scanner, technique, film, subject, etc.; but a very very rough rule-of-thumb is something like 2x — a 6 megapixel scanned file is roughly as good as a 3 megapixel digital original.

This is news to me. Where did you get this information from?

Experience, though many people have confirmed it on the net over the years. I got a Nikon LS-2000 scanner long ago, and have scanned a lot of my own film. And then in 2000 I got a 2 megapixel digital camera, and was *astounded* at how much cleaner each pixel was than in any scanned image, and at how much bigger I could print it than a scanned image of the same size and still have it look good.

I haven’t made such a comparison. But I would expect a digital camera’s pixel to be "cleaner" than a scanner’s, in the sense that it will not have the noise from the film grain and scanner CCD. However, a digital camera can have its own kind of noise.

Yes, it certainly can (especially a small-sensor camera at high ISO settings). I’ve got some *amazingly* speckly pictures from dark conditions! But then, my work over the years with film to take available-light pictures under those conditions is pretty speckly too :-).

The difference is *much* more than just the grain.

(The exact factor is not broadly agreed to, and I only present it as a very rough suggestion of how big a difference I’m talking about.)

Point well taken, and the exact factor is unimportant.

What did you mean by "a 6 megapixel scanned file is roughly as good as a 3 megapixel digital original"? Did you mean that a 3 megapixel digital original can be upsampled to a 6 megapixel file that is "as good as" a 6 megapixel scanned file that is not resampled?

No point in upsampling; just print the two at the same physical size, and compare.

Is your comparison based on two same size prints? Without upsampling, and printing at 300 printer dpi, a 6 meg file will produce a print with twice the area (about 1.4 times the linear dimensions) of that from a 3 meg file.

Yes, same size prints; that’s what I meant by "print the two at the same physical size".

What is the actual print size from the 6 and 3 meg files you are comparing?

Whatever; 4×6, 8×10. More likely something not quite so neat, since I mostly crop by eye rather than to standard print aspect ratios these days.

The file size has *absolutely nothing to do* with the print size.

The size a print comes out at has nothing to do with the pixel dimensions in my workflow; it depends on the physical dimensions set in Photoshop when I print it. In inkjet printing, the concept of "pixel" in the actual output doesn’t apply; the data is shared around among many physical locations, and the pixels overlap.

Now I’m really confused. In my workflow, I set up the print size and the dpi in the PS Image Size window. I would leave Resample unchecked and disabled if the dpi for the desired print size is computed by PS to be around 300 dpi (not exact). If the dpi is significantly less than 300 for a print size, I would check and enable Resample, and enter 300 as the dpi to maintain that size. My understanding is that the file PS sends to an Epson printer driver has the exact pixel count and information as specified by the Image Size window.

Similar, except I *never* resample when sizing for printing.

If you use Image Size in your workflow, what are the settings for your 3meg and 6meg files for the same size prints? If you don’t specify your print size in Image Size, how do you get that information to the printer driver?

So I specify the print size I want, with resample NOT checked.
--
David Dyer-Bennet, <mailto:>
RKBA: <http://noguns-nomoney.com/> <http://www.dd-b.net/carry/> Pics: <http://dd-b.lighthunters.net/> <http://www.dd-b.net/dd-b/SnapshotAlbum/> Dragaera/Steven Brust: <http://dragaera.info/>
WF
Wayne Fulton
Nov 8, 2004
In article <A5zjd.1063$>,
says…

Then you didn’t read what I wrote! I suggest you do some searching for "native resolution", and read about Qimage, etc.

Yes, I did read it, and yes, I know about Qimage, which must be the original source of this idea, at least it was the first I heard of it a few years back. But my own notion then and now is that this idea seems to confuse concepts of printer ink dot grids with image pixels. I was just asking if there was actual evidence, instead of another opinion.

I have already agreed that ppi is a fine descriptive term. The only little problem with assuming it has exclusive rights is that the concept of image resolution already has a well accepted universal name, dpi, which is generally much more used. Either is fine, but my point was that we had better understand both terms, because both are in use. For grins, searching Google (adding the word "image" to filter out some unrelated or printer links, and probably scanners too):
ppi image finds 112,000 links
dpi image finds 1,670,000 links

That’s a full order of magnitude, so whether someone wants to say and use the word dpi or not is their personal preference, but not to recognize it is a bit too self-righteous 🙂



Wayne
http://www.scantips.com "A few scanning tips"
BV
Bart van der Wolf
Nov 8, 2004
"Barry Pearson" wrote in message
Bart van der Wolf wrote:
[snip]
And for those that are skeptical about the ‘known state’ pixel density, the printer driver can be interrogated for what the ppi should be, for a given paper/ink combination.

Is this what Qimage uses?

Yep, according to Mike Chaney (= Qimage author).

Bart
BP
Barry Pearson
Nov 8, 2004
Wayne Fulton wrote:
[snip]
I have already agreed that ppi is a fine descriptive term. The only little problem with assuming it has exclusive rights is that the concept of image resolution already has a well accepted universal name, dpi, which is generally much more used. Either is fine, but my point was that we had better understand both terms, because both are in use. For grins, searching Google (adding the word "image" to filter out some unrelated or printer links, and probably scanners too):
ppi image finds 112,000 links
dpi image finds 1,670,000 links

That’s a full order of magnitude, so whether someone wants to say and use the word dpi or not is their personal preference, but not to recognize it is a bit too self-righteous 🙂

As I said:

"I suppose it is a matter of "be tolerant with input, and strict with output". We should speak and write the correct words, but accept that others may not be so careful."

The most important case to get right is probably printing, because technically they mean different things, so there is genuine potential for ambiguity. For example, when I print a photograph, the Image Size dialogue talks about pixels per inch (and means it), while the printer driver talks about dpi (and means it). And I don’t think we should expect relative newcomers to photography, who may be starting with a digital camera, and perhaps a photo-editor, both talking about pixels, to understand that "dots" sometimes means the same as "pixels". Especially when the catalogue they chose their printer from used dpi in a different sense.


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
WF
Wayne Fulton
Nov 8, 2004
In article <HMHjd.39$>,
says…

"I suppose it is a matter of "be tolerant with input, and strict with
output".
We should speak and write the correct words, but accept that others may not
be
so careful."

The most important case to get right is probably printing, because
technically
they mean different things, so there is genuine potential for ambiguity. For example, when I print a photograph, the Image Size dialogue talks about
pixels
per inch (and means it), while the printer driver talks about dpi (and means it). And I don’t think we should expect relative newcomers to photography,
who
may be starting with a digital camera, and perhaps a photo-editor, both talking about pixels, to understand that "dots" sometimes means the same as "pixels". Especially when the catalogue they chose their printer from used
dpi
in a different sense.

So their scanner catalog is therefore wrong? And their continuous tone printer catalog is wrong too? Wrong only because inkjets could have found a better term for motor steps per inch, but didn’t? Or wrong only because Barry says so when he mandates what he feels is "right"? It is egotistical to imagine one’s personal preference is the only acceptable correct answer for others. Especially so when it differs from years of previous professional practice, and the obvious order-of-magnitude-greater current usage, all of which indicates that dpi is in fact the one true way. 🙂

I don’t mean to loop in the same circles, but I assure you that dpi absolutely means pixels per inch of image resolution, and always has, when and if the context relates to image resolution. When and if it is about another context, then it has another meaning too. There is no right or wrong about it; that is simply the definitions, and standard practice. We understand that the English language has many such situations. We can handle it.

My own view is not about promoting which is right or better, but instead I think it is obvious that the newcomers should specifically be taught that both terms dpi and ppi are commonly in use everywhere for image resolution, with the specific goal to help them understand when they see both terms in use, as they definitely will. Real World experience is better than fantasy ideals.


Wayne
http://www.scantips.com "A few scanning tips"
T
tacitr
Nov 8, 2004
So their scanner catalog is therefore wrong? And their continuous tone printer catalog is wrong too?

Yes, and yes, on both counts.

Scanner manufacturers who make consumer-grade scanners for amateur home users use the term "DPI" because amateur home users recognize it, do not recognize the term "pixels per inch," and do not understand the difference; the scanner manufacturers deliberately use incorrect terminology for the purpose of marketing.

Manufacturers of professional high-end scanners–such as drum scanners, not aimed at amateur home users–use the correct terminology.

It’s all about marketing. The scanner manufacturers do not want to have to educate the buying public.


Art, literature, shareware, polyamory, kink, and more:
http://www.xeromag.com/franklin.html
BP
Barry Pearson
Nov 8, 2004
Wayne Fulton wrote:
In article <HMHjd.39$>,
says…

"I suppose it is a matter of "be tolerant with input, and strict with output". We should speak and write the correct words, but accept that others may not be so careful."

The most important case to get right is probably printing, because technically they mean different things, so there is genuine potential for ambiguity. For example, when I print a photograph, the Image Size dialogue talks about pixels per inch (and means it), while the printer driver talks about dpi (and means it). And I don’t think we should expect relative newcomers to photography, who may be starting with a digital camera, and perhaps a photo-editor, both talking about pixels, to understand that "dots" sometimes means the same as "pixels". Especially when the catalogue they chose their printer from used dpi in a different sense.

So their scanner catalog is therefore wrong? And their continuous tone printer catalog is wrong too? Wrong only because inkjets could have found a better term for motor steps per inch, but didn’t? Or wrong only because Barry says so when he mandates what he feels is "right"? It is egotistical to imagine one’s personal preference is the only acceptable correct answer for others. Especially so when it differs from years of previous professional practice, and the obvious order-of-magnitude-greater current usage, all of which indicates that dpi is in fact the one true way. 🙂
[snip]

Please simply read what I say! I was talking above about *printing*. I was pointing out that in printing, people use dpi with 2 very different meanings. (One use is pixels, of the image being printed, per inch, the other is related to droplets per inch). It is sometimes not clear from the context which is meant. Sometimes the writer may not even recognise that there are 2 different meanings.

I don’t mandate. I was pointing out that there is an audience for whom use of dpi in place of ppi when printing is confusing and may sometimes lead to doing the wrong thing. Given that my own printing sequence involves both ppi and dpi (as different things), I assume that this will apply to others too. I have observed ongoing confusion over these terms for years.

A search of the Google Groups archive will show just how much confusion & discussion occurs. There are completely contradictory statements, casual switching from one to the other, advice that doesn’t make sense if the wrong version is used, etc. Try, for example, the following, probably giving just a small proportion of this topic:
http://groups.google.com/groups?q=dpi+ppi+printing

Use of dpi when it is really talking about the droplets, for example a discussion about whether 1440 or 2880 is better with an Epson printer, is unavoidable. Laser printers have dots, too, so it isn’t just a problem with inkjet printers. Presumably no one is suggesting that use of dots is wrong in these cases! So, in printing, the other use can never be "the one true way"!

I don’t recall seeing people get the wrong meaning when ppi is used to mean the mapping of the photograph’s pixels to the paper. So that latter is safe, and doesn’t require readers to judge the meaning from the context. In general, perhaps because it is a made-up word, pixel and ppi tend to be used well, and unambiguously. (Except by Foveon marketing!)

It is use of dpi as an alternative to ppi in the context of printing that appears to lead to confusion. Or at least requires some judgement, which some people apparently don’t know enough to exercise. Whatever you and I do, others will still use dpi and sometimes have an audience that doesn’t understand. But once the ambiguity and confusion is recognised, why add to it?

A way round the ambiguity, of course, is to define one’s usage. For example, earlier in this thread I said "when I say ppi or pixels per inch, I mean it", to emphasise that it wasn’t an accident. Any article that made it clear that "dpi" was being used synonymously with "ppi" would probably avoid confusion. Would that be OK?


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
BP
Barry Pearson
Nov 8, 2004
Tacit wrote:
[snip]
Scanner manufacturers who make consumer-grade scanners for amateur home users use the term "DPI" because amateur home users recognize it, do not recognize the term "pixels per inch," and do not understand the difference; the scanner manufacturers deliberately use incorrect terminology for the purpose of marketing.

Manufacturers of professional high-end scanners–such as drum scanners, not aimed at amateur home users–use the correct terminology.
[snip]

The scanner I used for years, until recently, was the ArtixScan 4000t, with ScanWizard Pro TX software. Not a drum scanner. It used pixels per inch and ppi throughout. If I scanned a 35mm slide, nearly 1" by 1.5", at 4000 ppi, I got nearly 4000 by nearly 6000 pixels in Photoshop. Scanners *can* sometimes get it right, even for scanners bought by amateurs.
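
The arithmetic here recurs throughout the thread, so a minimal sketch in Python may help (the frame size and the 4000 ppi figure are from the example above; the function name is mine):

# Pixel dimensions of a scan: film size (inches) times scan resolution (ppi).
def scan_pixels(width_in, height_in, ppi):
    return round(width_in * ppi), round(height_in * ppi)

# A 35mm frame is 24 x 36 mm, i.e. roughly 0.94" x 1.42" ("nearly 1 by 1.5").
w, h = scan_pixels(24 / 25.4, 36 / 25.4, 4000)
print(w, h, round(w * h / 1e6, 1))   # 3780 5669 21.4 (Mpixels)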

My background is in large-scale computer systems architecture. (Not specifically image processing). My personal "architecture diagram" of digital image processing places the *digital image* itself at the heart of the diagram. This comprises an array of pixels. Then I link it to all the other components of the system – computers with screens, scanners of various types, digital cameras & even mobile phones, photo-editors, the web & other Internet services, filestore with various formats, printers, etc. (This is for my own use – it is not something ready for publication, yet).

Once you centre on arrays of pixels, you can get from anything to anything else, somehow. Specialist applications and subsystems interface to the digital image, and may have their own specialist terminology & standards. The trick is to map the differences locally, not to propagate the differences across the whole system. It becomes important to choose which words & terms to major on for the 21st century, and which ones to accept as historical terms & just map them for the time being.


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
N
nomail
Nov 8, 2004
Barry Pearson wrote:

Tacit wrote:
[snip]
Scanner manufacturers who make consumer-grade scanners for amateur home users use the term "DPI" because amateur home users recognize it, do not recognize the term "pixels per inch," and do not understand the difference; the scanner manufacturers deliberately use incorrect terminology for the purpose of marketing.

Manufacturers of professional high-end scanners–such as drum scanners, not aimed at amateur home users–use the correct terminology.
[snip]

The scanner I used for years, until recently, was the ArtixScan 4000t, with ScanWizard Pro TX software. Not a drum scanner. It used pixels per inch and ppi throughout. If I scanned a 35mm slide, nearly 1" by 1.5", at 4000 ppi, I got nearly 4000 by nearly 6000 pixels in Photoshop. Scanners *can* sometimes get it right, even for scanners bought by amateurs.

Actually, ‘pixels per inch’ is also not completely correct. It should be ‘samples per inch’, because there are many scanners with an uneven resolution for X-axis and Y-axis, such as a "600 x 1200 ppi" scanner.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
WF
Wayne Fulton
Nov 8, 2004
In article <bpNjd.540$>,
says…

A way round the ambiguity, of course, is to define one’s usage. For example, earlier in this thread I said "when I say ppi or pixels per inch, I mean
it",
to emphasise that it wasn’t an accident. Any article that made it clear that "dpi" was being used synonymously with "ppi" would probably avoid confusion. Would that be OK?

Again, I did read what you said, Barry. You do realize that there is a difference between reading and agreeing? 🙂 I am also speaking of printing, and scanning, and all aspects of digital images. Frankly, I do agree with much of your idea, with the major exception that you show this unnecessary prejudice throughout, which appears to have incorrectly imagined and assumed that the term dpi can only be used correctly to reference printer ink drop density. However, this obviously isn't fact at all. (And printer usage really doesn't even mean that, you know; inkjet usage means stepping motor steps per inch of paper. Ink drops are obviously much larger, and a rather different concept in theory and practice.)

It seems to me that your "I mean it" prejudice absolutely refuses the current major usage of the term dpi to mean its long-accepted definition of image resolution in pixels per inch. You're welcome to believe anything you want, but how you wish things were is merely your opinion, and not a fact in this case. Fact is the common practice of the very many others that came before you and me, especially the professional world that defined the term years before inkjets had the ability to print photos. Dpi is everywhere in our literature; pixels per inch as dpi has a long and honored history. Google appears to find over a million links that disagree with your bias, which seems substantially overwhelming in degree. All scanner manufacturers and all continuous-tone printer manufacturers also disagree with your bias that pixels cannot be called conceptual colored dots. Yes, they can; they are; look around. Your ArtixScan 4000t was rated 4000 dpi.

I know you know better, but your words show this prejudice about how you think it should be, instead of accepting the obvious reality of how it is. I really don't think this concealment helps the novices. It would seem better to sync with the real world, and to explain that real world accurately to the novices. The obvious true fact is that both terms are used interchangeably, both meaning pixels per inch when the subject is image resolution. This usage is obvious, overwhelming fact.

So the concepts and differences of image pixels and printer ink dots must be explained to novices. And both terms dpi and ppi for image resolution must be explained to novices, since they are obviously interchangeable in the real world. I do think it is OK to teach a preference for one of them, but it must also be explained to expect both in the real world. One term may be "better" according to preferences, but obviously neither is "wrong". Any notion of wrong is laughable; wake up and look around.

Both terms are real, and the novice really must understand both, because they will see both everywhere. The key word is "interchangeable", because it is the absolute truth. Both terms are in obvious common use with exactly the same meaning, and this must be understood, or there will be confusion. Confusion is cleared by explaining how things actually work, not by trying to conceal the details.


Wayne
http://www.scantips.com "A few scanning tips"
B
bhilton665
Nov 8, 2004
From: (Johan W. Elzenga)

Actually, ‘pixels per inch’ is also not completely correct. It should be ‘samples per inch’, because there are many scanners with an uneven resolution for X-axis and Y-axis, such as a "600 x 1200 ppi" scanner.

Dan Margulis tackles this definition problem in a chapter in his book "Professional Photoshop" … IIRC he goes with dpi for printing, ppi for pixels in a file or printer input files and sspi (scan samples per inch) for scanner rez. Makes a lot of sense but people didn’t pick up on it.
BP
Barry Pearson
Nov 8, 2004
Johan W. Elzenga wrote:
Barry Pearson wrote:
[snip]
The scanner I used for years, until recently, was the ArtixScan 4000t, with ScanWizard Pro TX software. Not a drum scanner. It used pixels per inch and ppi throughout. If I scanned a 35mm slide, nearly 1" by 1.5", at 4000 ppi, I got nearly 4000 by nearly 6000 pixels in Photoshop. Scanners *can* sometimes get it right, even for scanners bought by amateurs.

Actually, ‘pixels per inch’ is also not completely correct. It should be ‘samples per inch’, because there are many scanners with an uneven resolution for X-axis and Y-axis, such as a "600 x 1200 ppi" scanner.

Couldn’t this simply be handled by noting a different "pixels per inch" value in each dimension? Surely it isn’t necessary to change the units just because the XResolution and the YResolution are different? After all, you have just managed to describe the situation using ppi.

The reason I phrased it like that is that the TIFF 6.0 specification does exactly this, using separate XResolution and YResolution values of pixels per ResolutionUnit. Also, many printers work at different dots (sic) per inch in the two dimensions. It isn't a problem to cater for uneven resolutions in dialogues. (I'm assuming this isn't about non-square aspect ratios, which I normally associate with TV).

My scanners take a physical medium that can be measured in inches (or centimetres) and feed an array of pixels into Photoshop (or into a file if I didn’t use "Import"). As far as I, as a photographer, am concerned, that is pixels per inch. I accept that there is history behind alternative terms – but so what? We are getting used to cameras feeding pixels into Photoshop, so why not align our terminology? I suspect that vastly more digital cameras than scanners are being sold nowadays, and they will surely help influence future terminology.

(In case anyone objects to talk of "cameras feeding pixels into Photoshop", I’ll mention that Photoshop can browse a card while it is still in my camera).

I have recognised for some time that "samples per inch" is another measure for scanners, alongside "dots per inch" and "pixels per inch". I wasn’t sure what it meant to a photographer – did a sample map to a pixel in the digital image? If so, what value does the term have? Don’t we already have too many terms?


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
G
gewgle
Nov 8, 2004
(Johan W. Elzenga) wrote in message
The NikonScan maximum definition is 2900 dpi, which gives about 11 M pixels for a 24 x 36 film : the image should be at least as good as that obtained with a 8 M pixels camera. Right ?

Wrong. A scan stacks pixels on top of grain. The result is that you will never see detail as small as one pixel only or one line of pixels only. A digital photo on the other hand can indeed have detail that small. As

On the other hand, if one takes a photo where artsy lighting is colored with red gels, the 11 MP digital camera will have 8.25M of its CCD sensors showing pure black. The image will be captured only on the 2.75 million red sensors. Take that image using Astia 100F or similar film with very low grain. A quality scan of that film should be able to blow away the DSLR's image in terms of detail. My Minolta 5400 scanner is 5400 dpi, so one ends up with about 40 Megapixels (of red) scanned off the film (and little to no grain showing, particularly if the hardware grain-dissolver is turned on, but with that film it may not matter). The film scanning method will have about 14 times the number of image capture points compared to the 11MP DSLR. That same ratio also holds for blue-only pictures (and about 7x for green-light only). I'm using single-color light images (which are indeed used sometimes in artsy photos) to simplify the point and the math.

Mike

P.S. – The Minolta grain dissolver reduces or eliminates the grain aliasing that some scanners can have. That film probably helps too by not having much grain to speak of to begin with. Scanners in the 2800 dpi range seem to be most noted for the aliasing problem (especially without grain-dissolver solutions, which by the way are NOT a blur-the-image method).
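
The arithmetic above can be checked in a few lines. This is a sketch under the same idealised assumptions (a perfect red gel on an RGGB Bayer layout, which a later reply disputes); the variable names are mine:

# Idealised red-gel comparison: Bayer red photosites vs film-scan samples.
# Assumes a perfect red filter (zero signal on green/blue photosites),
# which real Bayer filters do not achieve.
camera_px = 11e6
red_sites = camera_px / 4                  # RGGB: one photosite in four is red
frame_in = (24 / 25.4, 36 / 25.4)          # 35mm frame, in inches
scan_ppi = 5400                            # Minolta 5400
scan_samples = frame_in[0] * scan_ppi * frame_in[1] * scan_ppi
print(round(red_sites / 1e6, 2))           # 2.75 Mpixels of red
print(round(scan_samples / 1e6, 1))        # ~39.1 Mpixels scanned
print(round(scan_samples / red_sites, 1))  # ~14.2x more capture points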
BP
Barry Pearson
Nov 8, 2004
Bill Hilton wrote:
From: (Johan W. Elzenga)

Actually, ‘pixels per inch’ is also not completely correct. It should be ‘samples per inch’, because there are many scanners with an uneven resolution for X-axis and Y-axis, such as a "600 x 1200 ppi" scanner.

Dan Margulis tackles this definition problem in a chapter in his book "Professional Photoshop" … IIRC he goes with dpi for printing, ppi for pixels in a file or printer input files and sspi (scan samples per inch) for scanner rez. Makes a lot of sense but people didn’t pick up on it.

1. Was his dpi for printing a measure of the number of droplets per inch, or a measure of the number of pixels in the supplied digital image per inch of paper? (Use of ppi for "printer input files" suggests he meant the former).

2. Did each of his scan samples become a pixel in the digital image?

Question 1 suggests that we should use different terms for different concepts, to avoid ambiguity. Question 2 suggests that we should use the same term for the same concept (but if we don't, perhaps it is easy enough to provide a mapping).

(When is ppi needed in a file, other than for printer input files?)


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
BP
Barry Pearson
Nov 8, 2004
Wayne Fulton wrote:
In article <bpNjd.540$>,
says…

A way round the ambiguity, of course, is to define one’s usage. For example, earlier in this thread I said "when I say ppi or pixels per inch, I mean it", to emphasise that it wasn’t an accident. Any article that made it clear that "dpi" was being used synonymously with "ppi" would probably avoid confusion. Would that be OK?

Again, I did read what you said, Barry. You do realize that there is a difference between reading and agreeing? 🙂 I am also speaking of printing, and scanning, and all aspects of digital images. Frankly, I do agree with much of your idea, with the major exception that you show this unnecessary prejudice throughout, which appears to have incorrectly imagined and assumed that the term dpi can only be used correctly to reference printer ink drop density.
[snip]

I’ll cut it short there, because I believe you have missed the point of what I am saying. I’ll say it again, as clearly as I can.

I am *not* trying to dictate what you say. I did *not* choose the terms concerned. I am simply the messenger.

The message is: there is confusion and ambiguity in some of the uses of these terms. That is evident from the number of dialogues visible in forums and newsgroups, and also in face-to-face discussions on these topics. Anyone who wants to avoid causing confusion in their audience should take this into account. But it is up to each individual writer whether they care enough. I do.


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/
B
bhilton665
Nov 8, 2004
Dan Margulis tackles this definition problem in a chapter in his book "Professional Photoshop" … IIRC he goes with dpi for printing, ppi for pixels in a file or printer input files and sspi (scan samples per inch) for scanner rez. Makes a lot of sense but people didn’t pick up on it.

From: "Barry Pearson"

1. Was his dpi for printing a measure of the number of droplets per inch, or a measure of the number of pixels in the supplied digital image per inch of paper? (Use of ppi for "printer input files" suggests he meant the former).

He defines dpi as "dots per inch, in a halftone screen" …

2. Did each of his scan samples become a pixel in the digital image?

Ch 15 ("Resolving the Resolution Issue") in his book (4th edition) … I read it once, I’m not willing to read it again 🙂

Parts of it are online, dunno about this chapter.
O
Odysseus
Nov 9, 2004
In article ,
(Bill Hilton) wrote:

Dan Margulis tackles this definition problem in a chapter in his book "Professional Photoshop" … IIRC he goes with dpi for printing, ppi for pixels in a file or printer input files and sspi (scan samples per inch) for scanner rez. Makes a lot of sense but people didn’t pick up on it.

From: "Barry Pearson"

1. Was his dpi for printing a measure of the number of droplets per inch, or a measure of the number of pixels in the supplied digital image per inch of paper? (Use of ppi for "printer input files" suggests he meant the former).

He defines dpi as "dots per inch, in a halftone screen" …
For that measure lpi (lines per inch) is preferable, to distinguish between halftone cells and printer dots. "Run that 300-ppi image at 3556 dpi, using a 150-lpi screen — but for proofing on the 300-dpi laser printer you can downsample it to 96 ppi, because there the halftoning will be done at only 53 lpi."
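
The rules of thumb behind those numbers, assuming conventional halftone screening, are roughly 2x the screen ruling for image resolution, and (dpi/lpi)^2 + 1 reproducible tones per halftone cell. A quick sketch in Python:

# Halftone rules of thumb (conventional screening assumed).
def needed_ppi(lpi, quality_factor=2.0):
    # Image pixels per inch wanted for a given screen ruling.
    return lpi * quality_factor

def gray_levels(device_dpi, lpi):
    # Distinct tones one halftone cell can render: (dpi/lpi)^2 + 1.
    return int((device_dpi / lpi) ** 2) + 1

print(needed_ppi(150))           # 300.0 ppi for a 150-lpi screen
print(gray_levels(3556, 150))    # 563 tones on a 3556-dpi imagesetter
print(gray_levels(300, 53))      # 33 tones on a 300-dpi laser at 53 lpi

The same formula bears on the later question of how many grays a 1200 dpi inkjet could render per cell, if it screened conventionally, which inkjets generally do not.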


Odysseus
MR
Mike Russell
Nov 9, 2004
Anoni Moose wrote:
(Johan W. Elzenga) wrote in message
The NikonScan maximum definition is 2900 dpi, which gives about 11 M pixels for a 24 x 36 film : the image should be at least as good as that obtained with a 8 M pixels camera. Right ?

Wrong. A scan stacks pixels on top of grain. The result is that you will never see detail as small as one pixel only or one line of pixels only.
A digital photo on the other hand can indeed have detail that small. As

On the other hand, if one takes a photo where artsy lighting is colored with red gels, the 11 MP digital camera will have 8.25M of its CCD sensors showing pure black. The image will be captured only on the 2.75 million red sensors.

This assumes pure filters, which is not the case. All of the pixels would pick up significant signal, and this is why use of a Bayer pattern, contrary to the "conventional wisdom", does not sacrifice resolution. —

Mike Russell
www.curvemeister.com
www.geigy.2y.net
A7
aka 717
Nov 9, 2004
It really should be CCD cells per inch, since that would be more descriptive, or whatever the correct term is. And if the scan is interpolated, there should be a second field: 600 CPI / 900 bicubic.

"Tacit" wrote in message
So their scanner catalog is therefore wrong? And their continuous tone printer catalog is wrong too?

Yes, and yes, on both counts.

Scanner manufacturers who make consumer-grade scanners for amateur home users
use the term "DPI" because amateur home users recognize it, do not recognize
the term "pixels per inch," and do not understand the difference; the scanner
manufacturers deliberately use incorrect terminology for the purpose of marketing.

Manufacturers of professional high-end scanners–such as drum scanners, not
aimed at amateur home users–use the correct terminology.
It’s all about marketing. The scanner manufacturers do not want to have to educate the buying public.


Art, literature, shareware, polyamory, kink, and more:
http://www.xeromag.com/franklin.html
A7
aka 717
Nov 9, 2004
Does DPI mean the same thing as it used to with the inkjet printers? I can't imagine a printer with 1200 dpi rivaling an old imagesetter at 1200 dpi. How many shades of gray can you get out of an inkjet at 1200 dpi? How many lines? Is that the right way to say that?

"Barry Pearson" wrote in message
Tacit wrote:
[snip]
Scanner manufacturers who make consumer-grade scanners for amateur home users use the term "DPI" because amateur home users recognize it, do not recognize the term "pixels per inch," and do not understand the difference; the scanner manufacturers deliberately use incorrect terminology for the purpose of marketing.

Manufacturers of professional high-end scanners–such as drum scanners, not aimed at amateur home users–use the correct terminology.
[snip]

The scanner I used for years, until recently, was the ArtixScan 4000t, with
ScanWizard Pro TX software. Not a drum scanner. It used pixels per inch and
ppi throughout. If I scanned a 35mm slide, nearly 1" by 1.5", at 4000 ppi, I
got nearly 4000 by nearly 6000 pixels in Photoshop. Scanners *can* sometimes
get it right, even for scanners bought by amateurs.

My background is in large-scale computer systems architecture. (Not specifically image processing). My personal "architecture diagram" of digital
image processing places the *digital image* itself at the heart of the diagram. This comprises an array of pixels. Then I link it to all the other
components of the system – computers with screens, scanners of various types,
digital cameras & even mobile phones, photo-editors, the web & other Internet
services, filestore with various formats, printers, etc. (This is for my own
use – it is not something ready for publication, yet).

Once you centre on arrays of pixels, you can get from anything to anything else, somehow. Specialist applications and subsystems interface to the digital
image, and may have their own specialist terminology & standards. The trick is
to map the differences locally, not to propagate the differences across the
whole system. It becomes important to choose which words & terms to major on
for the 21st century, and which ones to accept as historical terms & just map
them for the time being.


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/

A7
aka 717
Nov 9, 2004
"Barry Pearson" wrote in message
Bill Hilton wrote:
From: (Johan W. Elzenga)

Actually, ‘pixels per inch’ is also not completely correct. It should be ‘samples per inch’, because there are many scanners with an uneven resolution for X-axis and Y-axis, such as a "600 x 1200 ppi" scanner.

Dan Margulis tackles this definition problem in a chapter in his book "Professional Photoshop" … IIRC he goes with dpi for printing, ppi for pixels in a file or printer input files and sspi (scan samples per inch) for scanner rez. Makes a lot of sense but people didn’t pick up on it.

1. Was his dpi for printing a measure of the number of droplets per inch, or a
measure of the number of pixels in the supplied digital image per inch of paper? (Use of ppi for "printer input files" suggests he meant the former).

2. Did each of his scan samples become a pixel in the digital image?
Question 1 suggests that we should use different terms for different concepts, to avoid ambiguity. Question 2 suggests that we should use the same term for the same concept (but if we don't, perhaps it is easy enough to provide a mapping).

(When is ppi needed in a file, other than for printer input files?)

PPI would be useful in estimating the smallest pic for an internet presentation, no? And it could be helpful in creating a file with a resolution that is more easily modified, as in one that is in a multiple of 4, 8, 16… At least that seems right to me.


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/

A7
aka 717
Nov 9, 2004
"Bill Hilton" wrote in message
Dan Margulis tackles this definition problem in a chapter in his book "Professional Photoshop" … IIRC he goes with dpi for printing, ppi for pixels in a file or printer input files and sspi (scan samples per inch) for scanner rez. Makes a lot of sense but people didn’t pick up on it.

From: "Barry Pearson"

1. Was his dpi for printing a measure of the number of droplets per inch, or
a measure of the number of pixels in the supplied digital image per inch of
paper? (Use of ppi for "printer input files" suggests he meant the former).

He defines dpi as "dots per inch, in a halftone screen" …
2. Did each of his scan samples become a pixel in the digital image?

Ch 15 ("Resolving the Resolution Issue") in his book (4th edition) … I read
it once, I’m not willing to read it again 🙂

Parts of it are online, dunno about this chapter.

You can print grayscale in ways other than a halftone. By using patterns you can get some great grays, really beautiful. I forget the name of the patterns… diffusion dither.
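
For the curious, diffusion dither (error diffusion) is easy to sketch. A minimal Floyd-Steinberg pass in Python, purely illustrative and not any printer driver's actual algorithm:

# Minimal Floyd-Steinberg error diffusion: gray values (0..255) to 1-bit dots.
def floyd_steinberg(img):
    h, w = len(img), len(img[0])
    px = [[float(v) for v in row] for row in img]   # working copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            new = 255 if px[y][x] >= 128 else 0     # nearest printable level
            err = px[y][x] - new                    # quantisation error
            out[y][x] = new
            if x + 1 < w:                           # push error to neighbours
                px[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    px[y + 1][x - 1] += err * 3 / 16
                px[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    px[y + 1][x + 1] += err * 1 / 16
    return out

# A gray ramp: dot density follows the tone, with no visible screen angle.
ramp = [[int(255 * x / 31) for x in range(32)] for _ in range(8)]
for row in floyd_steinberg(ramp):
    print("".join("#" if v else "." for v in row))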
A7
aka 717
Nov 9, 2004
"Odysseus" wrote in message
In article ,
(Bill Hilton) wrote:

Dan Margulis tackles this definition problem in a chapter in his book "Professional Photoshop" … IIRC he goes with dpi for printing, ppi for pixels in a file or printer input files and sspi (scan samples per inch) for scanner rez. Makes a lot of sense but people didn’t pick up on it.

From: "Barry Pearson"

1. Was his dpi for printing a measure of the number of droplets per inch, or
a measure of the number of pixels in the supplied digital image per inch of
paper? (Use of ppi for "printer input files" suggests he meant the former).

He defines dpi as "dots per inch, in a halftone screen" …
For that measure lpi (lines per inch) is preferable, to distinguish between halftone cells and printer dots. "Run that 300-ppi image at 3556 dpi, using a 150-lpi screen — but for proofing on the 300-dpi laser printer you can downsample it to 96 ppi, because there the halftoning will be done at only 53 lpi."


Odysseus

Dpi and linescreen are two different things. The more dpi, the more linescreen you can get, but you don't have to use it. You can have a 300 ppi file at the size you want to print, print at a low lpi, and get a crappy picture. You could have a smaller ppi file, print at a higher lpi, and get a better result.
A7
aka 717
Nov 9, 2004
"Barry Pearson" wrote in message
Johan W. Elzenga wrote:
Barry Pearson wrote:
[snip]
The scanner I used for years, until recently, was the ArtixScan 4000t, with ScanWizard Pro TX software. Not a drum scanner. It used pixels per inch and ppi throughout. If I scanned a 35mm slide, nearly 1" by 1.5", at 4000 ppi, I got nearly 4000 by nearly 6000 pixels in Photoshop. Scanners *can* sometimes get it right, even for scanners bought by amateurs.

Actually, ‘pixels per inch’ is also not completely correct. It should be ‘samples per inch’, because there are many scanners with an uneven resolution for X-axis and Y-axis, such as a "600 x 1200 ppi" scanner.

Couldn’t this simply be handled by noting a different "pixels per inch" value
in each dimension? Surely it isn’t necessary to change the units just because
the XResolution and the YResolution are different? After all, you have just
managed to describe the situation using ppi.

The reason I phrased it like that is because the TIFF 6.0 specification does
exactly this, using separate XResolution and YResolution values of pixels per
ResolutionUnit. Also, many printers work at different dots (sic) per inch in
the two dimensions. It isn't a problem to cater for uneven resolutions in dialogues. (I'm assuming this isn't about non-square aspect ratios, which
I normally associate with TV).

My scanners take a physical medium that can be measured in inches (or centimetres) and feed an array of pixels into Photoshop (or into a file if I
didn’t use "Import"). As far as I, as a photographer, am concerned, that is
pixels per inch. I accept that there is history behind alternative terms – but
so what? We are getting used to cameras feeding pixels into Photoshop, so why
not align our terminology? I suspect that vastly more digital cameras than scanners are being sold nowadays, and they will surely help influence future
terminology.

(In case anyone objects to talk of "cameras feeding pixels into Photoshop",
I’ll mention that Photoshop can browse a card while it is still in my camera).

I have recognised for some time that "samples per inch" is another measure for
scanners, alongside "dots per inch" and "pixels per inch". I wasn’t sure what
it meant to a photographer – did a sample map to a pixel in the digital image?
If so, what value does the term have? Don’t we already have too many terms?


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/

The scanner resolution means that any scans done
at a higher resolution than the lowest dimension are
interpolated. That’s what I’ve always believed anyway.
A7
aka 717
Nov 9, 2004
"Barry Pearson" wrote in message
Wayne Fulton wrote:
In article <HMHjd.39$>,
says…

"I suppose it is a matter of "be tolerant with input, and strict with output". We should speak and write the correct words, but accept that others may not be so careful."

The most important case to get right is probably printing, because technically they mean different things, so there is genuine potential for ambiguity. For example, when I print a photograph, the Image Size dialogue talks about pixels per inch (and means it), while the printer driver talks about dpi (and means it). And I don’t think we should expect relative newcomers to photography, who may be starting with a digital camera, and perhaps a photo-editor, both talking about pixels, to understand that "dots" sometimes means the same as "pixels". Especially when the catalogue they chose their printer from used dpi in a different sense.

So their scanner catalog is therefore wrong? And their continuous tone printer catalog is wrong too? Wrong only because inkjets could have found a better term for motor steps per inch, but didn't? Or wrong only because Barry says so when he mandates what he feels is "right"? It is egotistical to imagine one's personal preference is the only acceptable correct answer for others. Especially so when it differs from years of previous professional practice, and from the obvious order-of-magnitude-greater current usage, all of which indicates that dpi is in fact the one true way. 🙂
[snip]

Please simply read what I say! I was talking above about *printing*. I was pointing out that in printing, people use dpi with 2 very different meanings.
(One use is pixels, of the image being printed, per inch, the other is related
to droplets per inch). It is sometimes not clear from the context which is meant. Sometimes the writer may not even recognise that there are 2 different
meanings.

I don’t mandate. I was pointing out that there is an audience for whom use of
dpi in place of ppi when printing is confusing and may sometimes lead to doing
the wrong thing. Given that my own printing sequence involves both ppi and dpi
(as different things), I assume that this will apply to others too. I have observed ongoing confusion over these terms for years.

A search of the Google Groups archive will show just how much confusion & discussion occurs. There are completely contradictory statements, casual switching from one to the other, advice that doesn’t make sense if the wrong
version is used, etc. Try, for example, the following, probably giving just a
small proportion of this topic:
http://groups.google.com/groups?q=dpi+ppi+printing

Use of dpi when it is really talking about the droplets, for example a discussion about whether 1440 or 2880 is better with an Epson printer, is unavoidable. Laser printers have dots, too, so it isn’t just a problem with
inkjet printers. Presumably no one is suggesting that use of dots is wrong in
these cases! So, in printing, the other use can never be "the one true way"!

I don’t recall seeing people get the wrong meaning when ppi is used to mean
the mapping of the photograph’s pixels to the paper. So that latter is safe,
and doesn’t require readers to judge the meaning from the context. In general,
perhaps because it is a made-up word, pixel and ppi tend to be used well, and
unambiguously. (Except by Foveon marketing!)

It is use of dpi as an alternative to ppi in the context of printing that appears to lead to confusion. Or at least requires some judgement, which some
people apparently don’t know enough to exercise. Whatever you and I do, others
will still use dpi and sometimes have an audience that doesn’t understand. But
once the ambiguity and confusion is recognised, why add to it?
A way round the ambiguity, of course, is to define one’s usage. For example,
earlier in this thread I said "when I say ppi or pixels per inch, I mean it",
to emphasise that it wasn’t an accident. Any article that made it clear that
"dpi" was being used synonymously with "ppi" would probably avoid confusion.
Would that be OK?


Barry Pearson
http://www.Barry.Pearson.name/photography/
http://www.BirdsAndAnimals.info/

I'm not sure who said what, but while I used to be bothered by PPI, it now seems right to be as precise as possible.

No one has mentioned variable dot sizes. Is that what inkjets use to claim such high resolution? Are they almost continuous tone? Can you look at the output and see a linescreen?
N
nomail
Nov 9, 2004
Barry Pearson wrote:

Actually, ‘pixels per inch’ is also not completely correct. It should be ‘samples per inch’, because there are many scanners with an uneven resolution for X-axis and Y-axis, such as a "600 x 1200 ppi" scanner.

Dan Margulis tackles this definition problem in a chapter in his book "Professional Photoshop" … IIRC he goes with dpi for printing, ppi for pixels in a file or printer input files and sspi (scan samples per inch) for scanner rez. Makes a lot of sense but people didn’t pick up on it.

1. Was his dpi for printing a measure of the number of droplets per inch, or a measure of the number of pixels in the supplied digital image per inch of paper? (Use of ppi for "printer input files" suggests he meant the former).
2. Did each of his scan samples become a pixel in the digital image?

If the scanner has an even resolution for X- and Y-axis, then each sample becomes a pixel, so a 600×600 samples per inch scanner gives a 600 ppi image. But a ‘600×1200 samples per inch’ scanner can only give a 600 ppi image (or a 1200 ppi image with interpolation of one side), because a ‘600×1200 ppi image’ does not exist. That’s exactly the reason why ‘samples per inch’ would be better than pixels per inch.
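
That mapping is simple to sketch with made-up numbers: the doubled axis collapses back to square pixels, for instance by averaging each pair of overlapping rows (whether a given driver averages or simply drops alternate rows is a guess on my part):

# Collapse a 600 x 1200 samples/inch scan to a square-pixel 600 ppi image
# by averaging each pair of rows along the 1200-steps/inch direction.
def to_square_pixels(rows):
    return [[(a + b) / 2 for a, b in zip(r1, r2)]
            for r1, r2 in zip(rows[0::2], rows[1::2])]

scan = [[10, 20], [12, 22], [30, 40], [34, 44]]   # four rows of samples
print(to_square_pixels(scan))                     # [[11.0, 21.0], [32.0, 42.0]]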


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
N
nomail
Nov 9, 2004
Barry Pearson wrote:

Johan W. Elzenga wrote:
Barry Pearson wrote:
[snip]
The scanner I used for years, until recently, was the ArtixScan 4000t, with ScanWizard Pro TX software. Not a drum scanner. It used pixels per inch and ppi throughout. If I scanned a 35mm slide, nearly 1" by 1.5", at 4000 ppi, I got nearly 4000 by nearly 6000 pixels in Photoshop. Scanners *can* sometimes get it right, even for scanners bought by amateurs.

Actually, ‘pixels per inch’ is also not completely correct. It should be ‘samples per inch’, because there are many scanners with an uneven resolution for X-axis and Y-axis, such as a "600 x 1200 ppi" scanner.

Couldn’t this simply be handled by noting a different "pixels per inch" value in each dimension? Surely it isn’t necessary to change the units just because the XResolution and the YResolution are different? After all, you have just managed to describe the situation using ppi.

That's what they do now, but that is not correct. Pixels are square by definition, so a 600×1200 ppi image does not exist. In reality the scanner takes 1/600 inch samples in both directions, but in one direction the samples overlap by half a sample. That makes the "1200 ppi" claim doubtful at the least. You do not get 1200 distinct samples per inch. You get 600 samples per inch, plus another 600 samples of the same information, shifted by half a sample.
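
The point can be illustrated numerically. If the aperture is modelled as averaging two adjacent 1/1200-inch cells of a hypothetical one-dimensional target, a feature one step wide always lands in two overlapping samples, so the extra steps do not deliver 1200 ppi of optical resolution:

# Samples 1/600" wide taken every 1/1200": a 1/1200"-wide line is never
# resolved as a single sample; it always smears across two apertures.
target = [0, 0, 0, 1, 0, 0, 0, 0]   # one bright line, 1/1200 inch wide
samples = [(target[i] + target[i + 1]) / 2 for i in range(len(target) - 1)]
print(samples)                      # [0.0, 0.0, 0.5, 0.5, 0.0, 0.0, 0.0]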


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
N
nomail
Nov 9, 2004
Anoni Moose wrote:

On the other hand, if one takes a photo where artsy lighting is colored with red gels, the 11 MP digital camera will have 8.25M of its CCD sensors showing pure black. The image will be captured only on the 2.75 million red sensors. Take that image using Astia 100F or similar film with very low grain. A quality scan of that film should be able to blow away the DSLR's image in terms of detail. My Minolta 5400 scanner is 5400 dpi, so one ends up with about 40 Megapixels (of red) scanned off the film (and little to no grain showing, particularly if the hardware grain-dissolver is turned on, but with that film it may not matter). The film scanning method will have about 14 times the number of image capture points compared to the 11MP DSLR. That same ratio also holds for blue-only pictures (and about 7x for green-light only). I'm using single-color light images (which are indeed used sometimes in artsy photos) to simplify the point and the math.

You are comparing apples and oranges. Your film scanner does not record reality, it records how that reality was recorded on film. So your scan is the multiplication of two interpretations, while a DSLR photo is one interpretation. That is why you cannot compare pixels or samples one to one. The reality is that all the tests say that the 11 MP digital camera beats your 35mm film scan hands down.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
X
Xalinai
Nov 9, 2004
Johan W. Elzenga wrote:

Barry Pearson wrote:

Johan W. Elzenga wrote:
Barry Pearson wrote:
[snip]
The scanner I used for years, until recently, was the ArtixScan 4000t, with ScanWizard Pro TX software. Not a drum scanner. It used pixels per inch and ppi throughout. If I scanned a 35mm slide, nearly 1" by 1.5", at 4000 ppi, I got nearly 4000 by nearly 6000 pixels in Photoshop. Scanners *can* sometimes get it right, even for scanners bought by amateurs.
Actually, ‘pixels per inch’ is also not completely correct. It should be ‘samples per inch’, because there are many scanners with an uneven resolution for X-axis and Y-axis, such as a "600 x 1200 ppi" scanner.

Couldn’t this simply be handled by noting a different "pixels per inch" value in each dimension? Surely it isn’t necessary to change the units just because the XResolution and the YResolution are different? After all, you have just managed to describe the situation using ppi.

That’s what they do now, but that is not correct. Pixels are square by definition,

On most computer screens and in most resolutions they are. At 1280×1024 on the usual 4:3 screen they aren't; the same is true for TV pixels or those of a standard-resolution fax.

so a 600×1200 ppi image does not exist. In reality the
scanner takes 1/600 inch samples in both directions, but in one direction the samples overlap 0.5 sample. That makes the "1200 ppi" claim doubtful at the least. You do not get 1200 distinct samples per inch. You get 600 samples per inch, plus another 600 samples of the same information, but shifted 0.5 sample.

The scanner has enough CCD cells for 600dpi and moves at a stepping precision of 1200 steps per inch, so it can take 1200 samples per inch along its moving direction with its 5100 CCD-cells.

Michael
G
gewgle
Nov 9, 2004
(Johan W. Elzenga) wrote in message
You are comparing apples and oranges. Your film scanner does not record reality, it records how that reality was recorded on film. So your scan is the multiplication of two interpretations, while a DSLR photo is one interpretation. That is why you cannot compare pixels or samples one to

Doesn’t each layer of software the image passes through provide another "interpretation" (Bayer layer, color-balance layer, etc)?

one. The reality is that all the tests say that the 11 MP digital camera beats your 35mm film scan hands down.

That's good to hear. I'm currently on my third generation of digital cameras, but I'm only at the 5-something MP stage, where my film scans still completely blow away my digital images; they're not even in the same class. The seemingly small doubling of digital camera megapixels must make a really dramatic difference. That may move up the timing for my fourth generation of digital camera! 🙂

In the meanwhile, I'll stick to scans; there aren't any digital cameras yet available that will touch the performance and size of my current film camera (an RBT X3-b 3D stereo camera) for its first "interpretation".

Mike
G
gewgle
Nov 9, 2004
"Mike Russell" …

This assumes pure filters, which is not the case. All of the pixels would pick up significant signal, and this is why use of a Bayer pattern, contrary to the "conventional wisdom", does not sacrifice resolution.

Interesting. That means it can’t tell the difference between an image where the light source was pure-red only and another image where there was a bit of blue and green light added to the red light source? The two images would come out the same?

Got my curiosity up. I think testing this should be simple. I’ve some lens resolution charts and some color filters that I can put over the digital camera lens. You’re saying that my shots of the resolution charts (where the lens isn’t being pushed) will make no difference using a strong red or strong blue filter over it.

I won’t have time tonight, but I think I’ll try it later this week.

Mike
MR
Mike Russell
Nov 9, 2004
Anoni Moose wrote:
"Mike Russell" wrote in message
news:<d6Wjd.6470$>…

This assumes pure filters, which is not the case. All of the pixels would pick up significant signal, and this is why use of a Bayer
pattern,
contrary to the "conventional wisdom", does not sacrifice resolution.

Interesting. That means it can’t tell the difference between an image where the light source was pure-red only and another image where there was a bit of blue and green light added to the red light source? The two images would come out the same?

The images would differ, but the resolution of the two images would be the same. In particular, the rez would not drop by a factor of three or four, as someone else suggested. The filters used in the Bayer pattern do not absorb 100% of any spectrum of light, and in fact allow a substantial amount of all wavelengths to pass. In an RGGB Bayer sensor, pure red light will still give more signal for the red-filtered pixels than it will for the green or blue ones, but there will still be substantial signal for the green and blue. A CMY-based Bayer sensor would have an even more uniform response.

Here’s an interesting document that includes a chart of the spectral responses of the different filters, including both RGB and CMY. (Note that this is not the combined filter-sensor response, just the filter) In particular, note that the lowest spectral response is for red and green filters in the 400-475 nm range, but my basic point is that nowhere does the response fall to zero, and therefore the resolution of a Bayer sensor is not necessarily compromised by colored objects.
http://www.kodak.com/global/plugins/acrobat/en/digital/ccd/papersArticles/kacBetterColorCMY.pdf

Got my curiosity up. I think testing this should be simple. I’ve some lens resolution charts and some color filters that I can put over the digital camera lens. You’re saying that my shots of the resolution charts (where the lens isn’t being pushed) will make no difference using a strong red or strong blue filter over it.

Basically, yes, with the caveat that your camera is color, and therefore a color filter will of course change the image, but signal will be nonzero for all pixels. From the chart, it looks as if a pure blue filter would tend to cut out the R and G sensors of an RGB based bayer pattern, but even in that case it would not be complete cutoff. For normal colored objects in normal illumination, there would be plenty of signal on all the sensors, and therefore little or no decrease in resolution due to the bayer pattern.
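
A toy version of the leakage argument, with pass fractions invented for the example (not Kodak's measured curves):

# Made-up filter transmissions: fraction of (R, G, B) light each filter passes.
filters = {
    "red":   (0.90, 0.15, 0.10),
    "green": (0.20, 0.85, 0.15),
    "blue":  (0.10, 0.20, 0.90),
}
light = (1.0, 0.0, 0.0)   # a strongly red-lit scene
for name, (r, g, b) in filters.items():
    signal = r * light[0] + g * light[1] + b * light[2]
    print(name, round(signal, 2))   # red 0.9, green 0.2, blue 0.1 - all nonzero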

I won’t have time tonight, but I think I’ll try it later this week.

This would be interesting. Do be careful of your camera's color balance settings – a raw capture would be best to eliminate that from the equation. If your camera does not support raw mode, use a daylight setting for a blue filter, to prevent your camera's color balance firmware from attenuating the red and green channels, and conversely a tungsten color balance when using the red filter.


Mike Russell
www.curvemeister.com
www.geigy.2y.net
N
nomail
Nov 10, 2004
Xalinai wrote:

Couldn’t this simply be handled by noting a different "pixels per inch" value in each dimension? Surely it isn’t necessary to change the units just because the XResolution and the YResolution are different? After all, you have just managed to describe the situation using ppi.

That’s what they do now, but that is not correct. Pixels are square by definition,

On most computer screens and in most resolutions they are. At 1280×1024 on the usual 4:3 screen they aren't; the same is true for TV pixels or those of a standard-resolution fax.

True, but totally irrelevant in this discussion. We are not discussing TVs or faxes, we are discussing scanners.

The scanner has enough CCD cells for 600dpi and moves at a stepping precision of 1200 steps per inch, so it can take 1200 samples per inch along its moving direction with its 5100 CCD-cells.

You’ll get 2x 600 square samples of 1/600 inch in size, overlapping half a sample. That may indeed give you 1200 SAMPLES per inch, but not an optical resolution of 1200 PIXELS per inch as the manufacturer wants you to believe.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
N
nomail
Nov 10, 2004
Anoni Moose wrote:

(Johan W. Elzenga) wrote in message
You are comparing apples and oranges. Your film scanner does not record reality, it records how that reality was recorded on film. So your scan is the multiplication of two interpretations, while a DSLR photo is one interpretation. That is why you cannot compare pixels or samples one to

Doesn’t each layer of software the image passes through provide another "interpretation" (Bayer layer, color-balance layer, etc)?

Yes, but that’s included in the end result. A scanner has software and so has a digital camera.

one. The reality is that all the tests say that the 11 MP digital camera beats your 35mm film scan hands down.

That’s good to hear. I’m currently on my third generation of digital cameras, but I’m only to the 5.something MP stage where my film scans still completely blow away my digital images, not even in the same class. The seemingly small doubling of digital camera megapixels must make a really dramatic difference.

Don’t forget that it’s not just the number of pixels. I don’t know what camera you currently have, but a 5 or 6 Mpixel compact camera cannot be compared to a DSLR with the same number of pixels, let alone a DSLR with 11 Mpixels.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
X
Xalinai
Nov 11, 2004
Johan W. Elzenga wrote:

Xalinai wrote:

Couldn’t this simply be handled by noting a different "pixels per inch" value in each dimension? Surely it isn’t necessary to change the units just because the XResolution and the
YResolution are different? After all, you have just managed to describe the situation using ppi.

That’s what they do now, but that is not correct. Pixels are square by definition,

On most computer screens and in most resolutions they are. At 1280×1024 on the usual 4:3 screen they aren't; the same is true for TV pixels or those of a standard-resolution fax.

True, but totally irrelevant in this discussion. We are not discussing TVs or faxes, we are discussing scanners.

That depends on your *use* of the pixels you scan.

The scanner has enough CCD cells for 600dpi and moves at a stepping precision of 1200 steps per inch, so it can take 1200 samples per inch along its moving direction with its 5100 CCD-cells.

You’ll get 2x 600 square samples of 1/600 inch in size, overlapping half a sample. That may indeed give you 1200 SAMPLES per inch, but not an optical resolution of 1200 PIXELS per inch as the manufacturer wants you to believe.

The manufacturer specifies that the device can take 600 samples per inch in one direction and 1200 samples per inch in the other. He specifies neither the mechanical overlap nor the optical overlap (i.e. lack of sharpness from bad lenses).

The *real* optical resolution that can be measured by scanning a test document is usually a lot less (like 1600 ppi from a big brand scanner specified as 4800×4800).

But don't we all know that? Aren't we used to marketeers' songs and lore? You are discussing the use of "dpi", and IMO there are two valid positions: one says that each purpose deserves a special measuring unit, and conversion rules must be set up between them; the other assumes that an intelligent user will understand that dpi has different meanings when applied to different things – as ounce means different things when applied to solids or fluids.

I think this is all a question of getting informed on the user's side; only those who do not care will fall for the marketing sirens' song.

BTW: Getting informed is a voluntary thing. Pushing information at people makes them resistant to learning.

Michael
N
nomail
Nov 11, 2004
Xalinai wrote:

The scanner has enough CCD cells for 600dpi and moves at a stepping precision of 1200 steps per inch, so it can take 1200 samples per inch along its moving direction with its 5100 CCD-cells.

You’ll get 2x 600 square samples of 1/600 inch in size, overlapping half a sample. That may indeed give you 1200 SAMPLES per inch, but not an optical resolution of 1200 PIXELS per inch as the manufacturer wants you to believe.

The manufacturer specifies that the device can take 600 samples per inch in one and 1200 samples per inch in another direction.

NO, that is exactly my point. The manufacturer *DOES NOT* specify that the device can take 600 samples per inch in one and 1200 samples per inch in another direction. The manufacturer just claims that the scanner has an "optical resolution of 600 x 1200 ppi" or even "600 x 1200 dpi". You and I may know what that really means, but the average consumer does not.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
X
Xalinai
Nov 11, 2004
Johan W. Elzenga wrote:

Xalinai wrote:

The scanner has enough CCD cells for 600dpi and moves at a stepping precision of 1200 steps per inch, so it can take 1200 samples per inch along its moving direction with its 5100 CCD-cells.

You’ll get 2x 600 square samples of 1/600 inch in size,
overlapping half a sample. That may indeed give you 1200 SAMPLES per inch, but not an optical resolution of 1200 PIXELS per inch as the manufacturer wants you to believe.

The manufacturer specifies that the device can take 600 samples per inch in one and 1200 samples per inch in another direction.

NO, that is exactly my point. The manufacturer *DOES NOT* specify that the device can take 600 samples per inch in one and 1200 samples per inch in another direction. The manufacturer just claims that the scanner has an "optical resolution of 600 x 1200 ppi" or even "600 x 1200 dpi". You and I may know what that really means, but the average consumer does not.

The average consumer has the same options to get informed as you and me.

The average consumer can read lots of electronic and paper documents and can ask people who know better and usually does so for cars, houses, household equipment and even clothing – why should this be different with computer equipment?

If people do not want to get information it is utterly useless to push it in their face, they will reject it.

If you tell people that the cheap product they'd so like to buy is a bad buy, they won't listen, and later they'll tell /you/ that you gave them bad advice.

The average consumer gets what he deserves – good products if he requests advice and acts accordingly, else ….

Michael
PR
Phil Rose
Nov 12, 2004
Wayne Fulton wrote:
In article <hMVid.462$>,
says…

So the driver resamples to its native resolution. I’m told this is typically 600 ppi for HP & Canon, 720 ppi for Epson desktops, 360 ppi for large Epsons, and other values for other printers. It now has pixels that precisely match the rate at which it can make decisions about what to do with the heads, the paper, and the ink droplets.

Just curious, but told by whom? What authoritative source? We hear many wild tales on the internet, so how can this claim be authenticated? Which printer manufacturer claims they resample all images to these huge sizes? Where do they say this? If the claim is true, there must be some evidence for it. What is the actual evidence that this is true?

The one place in which I’ve seen printer manufacturer information that pertains to so-called native resolution is an Epson Product Service Bulletin from 2001 (PSB.2001.01.001). In this document (actually tucked away in Note #2 on page 3) the following is stated:

"All Epson large format printers use 360 dpi [note terminology!] as the input resolution (this is the resolution data is rasterized at)…As for Epson desktop products, they rasterize data at 720 dpi…"

Unfortunately I’ve misplaced the URL for downloading this document.
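
If those figures are right, the consequence for a print job is easy to state. A sketch that merely restates the bulletin's claim (the function and numbers are illustrative, not a documented driver interface):

# If the driver rasterizes at a fixed input resolution, the image is
# resampled to print-size-in-inches times that native figure, whatever
# the file's own ppi tag says.
def raster_size(width_in, height_in, native_ppi=720):   # 720: Epson desktop
    return round(width_in * native_ppi), round(height_in * native_ppi)

print(raster_size(8, 10))        # (5760, 7200) pixels for a desktop model
print(raster_size(8, 10, 360))   # (2880, 3600) on a large-format model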

Phil
N
nomail
Nov 12, 2004
Xalinai wrote:

The manufacturer specifies that the device can take 600 samples per inch in one and 1200 samples per inch in another direction.

NO, that is exactly my point. The manufacturer *DOES NOT* specify that the device can take 600 samples per inch in one and 1200 samples per inch in another direction. The manufacturer just claims that the scanner has an "optical resolution of 600 x 1200 ppi" or even "600 x 1200 dpi". You and I may know what that really means, but the average consumer does not.

The average consumer has the same options to get informed as you and me.

Yeah, right. If even an expert like you claims that "The manufacturer specifies that the device can take 600 samples per inch in one and 1200 samples per inch in another direction", how could they really? Where do you live? Utopia? In theory you can become an expert on everything, but in real life it doesn’t work that way.

In the normal world, consumers expect a manufacturer to tell them the truth, or at least not to lie. If a car manufacturer says the engine has four cylinders and delivers 100 horsepower, you do not open the hood to count them and hire 100 horses to check those claims. You could, but you don't.

Scanner manufacturers, OTOH, make a lot of dubious claims, counting on the fact that the average consumer does not have the knowledge or the resources to verify those claims.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
S
Scruff
Nov 12, 2004
"Johan W. Elzenga" wrote in message
Xalinai wrote:

In the normal world, consumers expect a manufacturer to tell them the truth, or at least not to lie. If a car manufacturer says the engine has four cylinders and delivers 100 horsepower, you do not open the hood to count them and hire 100 horses to check those claims. You could, but you don't.
Yep, just try and get the gas mileage that car manufacturers claim!
J
jytzel
Nov 12, 2004
"Daniel Masse" …
Hello !

Yesterday, I was talking with several advanced photographers, who all claim that there is a significant difference in quality between images from a high-end digital reflex (8 M pixels) and a high-end scanner (Nikon 4). I have no reason to doubt their ability to use the equipment, yet I cannot figure out where this difference would come from ?

The NikonScan maximum definition is 2900 dpi, which gives about 11 M pixels for a 24 x 36 film : the image should be at least as good as that obtained with a 8 M pixels camera. Right ? And the film is said to have a definition equivalent to at least 20 M pixels, so it should not degrade the image…

I send my images out for high-end drum scans (a real drum scanner, not like the Nikon film scanner). I can detect a big difference. Images from a drum scanner have richer colors. I do a lot of color editing; an image from a digital camera can easily get out of hand during manipulation, while that of a drum scan offers wider latitude and richer color and tonality. Images are also sharper.

J.
N
nomail
Nov 12, 2004
Jytzel wrote:

I send my images out for high-end drum scans (a real drum scanner, not like the Nikon film scanner). I can detect a big difference. Images from a drum scanner have richer colors. I do a lot of color editing; an image from a digital camera can easily get out of hand during manipulation, while that of a drum scan offers wider latitude and richer color and tonality. Images are also sharper.

As long as you do not specify which film size you compare with which digital camera, this is totally useless information.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
X
Xalinai
Nov 12, 2004
Johan W. Elzenga wrote:

Xalinai wrote:

The manufacturer specifies that the device can take 600 samples per inch in one and 1200 samples per inch in another direction.

NO, that is exactly my point. The manufacturer *DOES NOT* specify that the device can take 600 samples per inch in one and 1200 samples per inch in another direction. The manufacturer just claims that the scanner has an "optical resolution of 600 x 1200 ppi" or even "600 x 1200 dpi". You and I may know what that really means, but the average consumer does not.

The average consumer has the same options to get informed as you and me.

Yeah, right. If even an expert like you claims that "The manufacturer specifies that the device can take 600 samples per inch in one and 1200 samples per inch in another direction", how could the average consumer possibly know better? Where do you live? Utopia? In theory you can become an expert on everything, but in real life it doesn’t work that way.

The scanner takes a sample whenever the stepper motor stops. It does so at a minimum distance of 1/1200th of an inch.

It is /your/ definition that samples are not allowed to overlap.

In the normal world, consumers expect a manufacturer to tell them the truth, or at least not to lie. If a car manufacturer says the engine has four cylinders and delivers 100 horsepower, you do not open the hood to count them and hire 100 horses to check those claims. You could, but you don’t.

Scanner manufacturers OTOH, make a lot of dubious claims, counting on the fact that the average consumer does not have the knowledge or the resources to verify those claims.

Scanner manufacturers – like other people making complex products – specify what is consistent over a series. Usually that is the stepper motor used and the sensor. They do not specify the quality of the plastic lenses or the mounting precision. But as with cars, getting additional information before buying is essential.

If you don’t get additional information, you’ll believe that eating hamburgers and fries is healthy because only the best raw materials are used – nobody willing to sell a product will specify the things that really matter and can vary at the same time.

This is life – if your average customer is a moron, advertise something that is not relevant but sounds sexy.

Michael
N
nomail
Nov 12, 2004
Xalinai wrote:

Yeah, right. If even an expert like you claims that "The manufacturer specifies that the device can take 600 samples per inch in one and 1200 samples per inch in another direction", how could the average consumer possibly know better? Where do you live? Utopia? In theory you can become an expert on everything, but in real life it doesn’t work that way.

The scanner takes a sample whenever the stepper motor stops. It does so at a minimum distance of 1/1200th of an inch.

But that does not change the fact that the sample SIZE is 1/600 inch.

It is /your/ definition that samples are not allowed to overlap.

I’m not saying they are not "allowed" to overlap, but it does mean that you cannot possibly get an optical resolution of 1200 ppi. At best, you can get an optical resolution of 600 ppi, because that is your physical sample size. In reality it’s even worse because of lens quality, etc, as you say. Suppose the stepping motor wouldn’t move at all, so you take samples at a minimum distance of 0 inch. Would that mean you now have a scanner with infinite resolution?…

That’s all I’m saying. And that’s how I’ll end my part of this discussion.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
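
The sample-size versus step-distance argument above can be checked numerically. Here is a toy one-dimensional simulation (a sketch assuming numpy; the fine grid size is arbitrary): a square-wave target with detail at the 1200 ppi Nyquist limit, box-averaged over a 1/600 inch aperture and sampled every 1/1200 inch, comes out as uniform grey, while a true 1/1200 inch aperture at the same pitch keeps the pattern.

import numpy as np

FINE = 28800                       # fine simulation grid, samples per inch
inch = np.arange(FINE) / FINE      # one inch of "film"

def line_pattern(lp_per_inch):
    # Square-wave test target: alternating black/white lines.
    return (np.floor(inch * lp_per_inch * 2) % 2).astype(float)

def scan(signal, aperture_in, pitch_in):
    # Average over the sensor aperture, then sample at the step pitch.
    ap = int(round(aperture_in * FINE))
    step = int(round(pitch_in * FINE))
    blurred = np.convolve(signal, np.ones(ap) / ap, mode="same")
    # Sample at pixel centres, away from the convolution edges.
    return blurred[ap + step // 2 : -ap : step]

def contrast(samples):
    return (samples.max() - samples.min()) / (samples.max() + samples.min())

target = line_pattern(600)   # detail right at the 1200 ppi Nyquist limit
print(contrast(scan(target, 1/600, 1/1200)))   # ~0: averaged to uniform grey
print(contrast(scan(target, 1/1200, 1/1200)))  # ~1: pattern clearly kept

In other words, halving the step without shrinking the aperture raises the sampling rate, but not the resolving power.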
A7
aka 717
Nov 13, 2004
"Xalinai" wrote in message
Johan W. Elzenga wrote:

Xalinai wrote:

The manufacturer specifies that the device can take 600 samples per inch in one and 1200 samples per inch in another direction.

NO, that is exactly my point. The manufacturer *DOES NOT* specify that the device can take 600 samples per inch in one and 1200 samples per inch in another direction. The manufacturer just claims that the scanner has an "optical resolution of 600 x 1200 ppi" or even "600 x 1200 dpi". You and I may know what that really means, but the average consumer does not.

The average consumer has the same options to get informed as you and me.

Yeah, right. If even an expert like you claims that "The manufacturer specifies that the device can take 600 samples per inch in one and 1200 samples per inch in another direction", how could the average consumer possibly know better? Where do you live? Utopia? In theory you can become an expert on everything, but in real life it doesn’t work that way.

The scanner takes a sample whenever the stepper motor stops. It does so at a minimum distance of 1/1200th of an inch.

It is /your/ definition that samples are not allowed to overlap.
In the normal world, consumers expect a manufacturer to tell them the truth, or at least not to lie. If a car manufacturer says the engine has four cilinders and delivers 100 horse power, you do not open the hood to count them and hire 100 horses to check those claims. You could, but you don’t.

Scanner manufacturers OTOH, make a lot of dubious claims, counting on the fact that the average consumer does not have the knowledge or the resources to verify those claims.

Scanner manufacturers – like other people making complex products – specify what is consistent over a series. Usually that is the stepper motor used and the sensor. They do not specify the quality of the plastic lenses or the mounting precision. But as with cars, getting additional information before buying is essential.

If you don’t get additional information, you’ll believe that eating hamburgers and fries is healthy because only the best raw materials are used – nobody willing to sell a product will specify the things that really matter and can vary at the same time.

This is life – if your average customer is a moron, advertise something that is not relevant but sounds sexy.

Michael

Michael,
How does a drum scanner work differently? Since it is
spinning there is no step. If you aren’t familiar, that’s ok. :-)
A7
aka 717
Nov 13, 2004
"Johan W. Elzenga" wrote in message
Xalinai wrote:

Yeah, right. If even an expert like you claims that "The manufacturer specifies that the device can take 600 samples per inch in one and 1200 samples per inch in another direction", how could the average consumer possibly know better? Where do you live? Utopia? In theory you can become an expert on everything, but in real life it doesn’t work that way.

The scanner takes a sample whenever the stepper motor stops. It does so at a minimum distance of 1/1200th of an inch.

But that does not change the fact that the sample SIZE is 1/600 inch.
It is /your/ definition that samples are not allowed to overlap.

I’m not saying they are not "allowed" to overlap, but it does mean that you cannot possibly get an optical resolution of 1200 ppi. At best, you can get an optical resolution of 600 ppi, because that is your physical sample size. In reality it’s even worse because of lens quality, etc, as you say. Suppose the stepping motor wouldn’t move at all, so you take samples at a minimum distance of 0 inch. Would that mean you now have a scanner with infinite resolution?…

That’s all I’m saying. And that’s how I’ll end my part of this discussion.


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/

At first thought, it would seem that the only way to measure true resolution is a relative one, where scanners are measured against each other, since there is no way to tell except by scanning – well, maybe by magnification with a powerful loupe or a microscope. I wonder.
X
Xalinai
Nov 13, 2004
aka 717 wrote:

"Xalinai" wrote in message
Johan W. Elzenga wrote:

Xalinai wrote:

The manufacturer specifies that the device can take 600 samples per inch in one and 1200 samples per inch in another direction.

NO, that is exactly my point. The manufacturer *DOES NOT* specify that the device can take 600 samples per inch in one and 1200 samples per inch in another direction. The manufacturer just claims that the scanner has an "optical resolution of 600 x 1200 ppi" or even "600 x 1200 dpi". You and I may know what that really means, but the average consumer does not.

The average consumer has the same options to get informed as you and me.

Yeah, right. If even an expert like you claims that "The manufacturer specifies that the device can take 600 samples per inch in one and 1200 samples per inch in another direction", how could the average consumer possibly know better? Where do you live? Utopia? In theory you can become an expert on everything, but in real life it doesn’t work that way.

The scanner takes a sample whenever the stepper motor stops. It does so at a minimum distance of 1/1200th of an inch.

It is your definition that samples are not allowed to overlap.

In the normal world, consumers expect a manufacturer to tell them the truth, or at least not to lie. If a car manufacturer says the engine has four cylinders and delivers 100 horsepower, you do not open the hood to count them and hire 100 horses to check those claims. You could, but you don’t.

Scanner manufacturers OTOH, make a lot of dubious claims, counting on the fact that the average consumer does not have the knowledge or the resources to verify those claims.

Scanner manufacturers – like other people making complex products – specify what is consistent over a series. Usually that is the stepper motor used and the sensor. They do not specify the quality of the plastic lenses or the mounting precision. But as with cars, getting additional information before buying is essential.

If you don’t get additional information, you’ll believe that eating hamburgers and fries is healthy because only the best raw materials are used – nobody willing to sell a product will specify the things that really matter and can vary at the same time.

This is life – if your average customer is a moron, advertise something that is not relevant but sounds sexy.

Michael

Michael,
How does a drum scanner work differently? Since it is
spinning there is no step. If you aren’t familiar that’s ok. : -)

You have only one sensor that is precisely focused on the object surface.

Sampling along the rotation direction can be done at arbitrary intervals, resulting in any sampling rate (aka "dpi") you want.

Sensor movement sideways is done via a stepping mechanism that limits the possible resolutions somewhat – some values result in different x and y resolutions. The data can be internally interpolated to give the desired symmetrical resolution (at a high bit depth, and on the very fine steps those machines use), so for best results you had better choose a resolution that fits the actual stepping used.

For all scanners, the maximum resolution depends not only on the basic design but also on the device at hand, and it can be measured with one simple scan of a measuring sheet with line patterns and Siemens stars at the scanner’s highest resolution.

If you need maximum detail resolution, get a calibration sheet, scan it with different scanners, and buy the one with the best result – not something still in a box.

Michael
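
For the drum scanner case, the sampling rate along the rotation is set by timing rather than a stepper. A back-of-the-envelope Python sketch (all numbers here are hypothetical, for illustration only):

DRUM_CIRCUMFERENCE_IN = 12.0    # drum circumference in inches (assumed)
DRUM_RPM = 1200                 # rotation speed (assumed)
SAMPLE_CLOCK_HZ = 1_000_000     # PMT/ADC sample rate (assumed)

surface_speed = DRUM_CIRCUMFERENCE_IN * DRUM_RPM / 60   # inches per second
samples_per_inch = SAMPLE_CLOCK_HZ / surface_speed

print(f"surface speed: {surface_speed:.0f} in/s")                     # 240 in/s
print(f"max sampling rate along rotation: {samples_per_inch:.0f} spi") # ~4167 spi

Any desired spi below that limit can be had simply by dividing the sample clock, which is why the rotation direction imposes no fixed step grid.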
C
Chris
Nov 13, 2004
In article ,
"Scruff" wrote:

"Johan W. Elzenga" wrote in message
Xalinai wrote:

In the normal world, consumers expect a manufacturer to tell them the truth, or at least not to lie. If a car manufacturer says the engine has four cylinders and delivers 100 horsepower, you do not open the hood to count them and hire 100 horses to check those claims. You could, but you don’t.

Yep, just try and get the gas mileage that car manufacturers claim!

Except the manufacturers don’t set the estimated mileage rating … the feds do. Extending the analogy, it would be interesting if scanner manufacturers had to submit their hardware to an independent (non-government) body for testing, and get rated on their optical resolution and Dmax.


C
U
Uni
Nov 13, 2004
Johan W. Elzenga wrote:
Xalinai wrote:

Yeah, right. If even an expert like you claims that "The manufacturer specifies that the device can take 600 samples per inch in one and 1200 samples per inch in another direction", how could the average consumer possibly know better? Where do you live? Utopia? In theory you can become an expert on everything, but in real life it doesn’t work that way.

The scanner takes a sample whenever the stepper motor stops. It does so at a minimum distance of 1/1200th of an inch.

But that does not change the fact that the sample SIZE is 1/600 inch.

It is /your/ definition that samples are not allowed to overlap.

I’m not saying they are not "allowed" to overlap, but it does mean that you cannot possibly get an optical resolution of 1200 ppi. At best, you can get an optical resolution of 600 ppi, because that is your physical sample size. In reality it’s even worse because of lens quality, etc, as you say. Suppose the stepping motor wouldn’t move at all, so you take samples at a minimum distance of 0 inch. Would that mean you now have a scanner with infinite resolution?…

That’s all I’m saying. And that’s how I’ll end my part of this discussion.

I hope so – rather boring.

🙂

Uni
U
Uni
Nov 13, 2004
Daniel Masse wrote:
Hello !

Yesterday, I was talking with several advanced photographers, who all claim that there is a significant difference in quality between images from a high-end digital reflex (8 M pixels) and a high-end scanner (Nikon 4). I have no reason to doubt their ability to use the equipment, yet I cannot figure out where this difference would come from ?

Scanners have circular dots, while digital cameras have square dots. That is why you never have moiré problems with digital cameras.

🙂

Uni

The NikonScan maximum definition is 2900 dpi, which gives about 11 M pixels for a 24 x 36 film : the image should be at least as good as that obtained with a 8 M pixels camera. Right ? And the film is said to have a definition equivalent to at least 20 M pixels, so it should not degrade the image…
S
Scruff
Nov 13, 2004
"Chris Havel" wrote in message
In article ,
"Scruff" wrote:

"Johan W. Elzenga" wrote in message
Xalinai wrote:

In the normal world, consumers expect a manufacturer to tell them the truth, or at least not to lie. If a car manufacturer says the engine has four cylinders and delivers 100 horsepower, you do not open the hood to count them and hire 100 horses to check those claims. You could, but you don’t.

Yep, just try and get the gas mileage that car manufacturers claim!

Except the manufacturers don’t set the estimated mileage rating … the feds do.

Yes, and the end result is the same.
Here’s an interesting article about that
http://www.csmonitor.com/2004/0715/p12s01-wmgn.html
J
jytzel
Nov 14, 2004

Jytzel wrote:

I send my images out for high-end drum scans - a real drum scanner, not like the Nikon film scanner. I can detect a big difference. Images from a drum scanner have richer colors. I do a lot of color editing; an image from a digital camera can easily get out of hand during manipulation, while a drum scan offers wider latitude and richer color and tonality. Images are also sharper.

As long as you do not specify which film size you compare with which digital camera, this is totally useless information.

6×7 and even 35mm compared to 10D and Fuji S2
T
tacitr
Nov 14, 2004
Scanners have circular dots, while digital cameras have square dots. That is why you never have moiré problems with digital cameras.

What???!!

Consumer-grade scanners use CCDs identical to the CCDs in digital cameras; the sampling elements are square. What on Earth are you talking about? —
Art, literature, shareware, polyamory, kink, and more:
http://www.xeromag.com/franklin.html
U
Uni
Nov 14, 2004
Tacit wrote:
Scanners have circular dots, while digital cameras have square dots. That is why you never have moiré problems with digital cameras.

What???!!

Consumer-grade scanners use CCDs identical to the CCDs in digital cameras; the sampling elements are square. What on Earth are you talking about?

How on earth do you fit a square peg in a round monitor hole?

Uni
X
Xalinai
Nov 14, 2004
Jytzel wrote:

(Johan W. Elzenga) wrote in message
news:<1gn5dgi.14e9ejgcxuox8N%>…
Jytzel wrote:

I send my images out for high-end drum scans - a real drum scanner, not like the Nikon film scanner. I can detect a big difference. Images from a drum scanner have richer colors. I do a lot of color editing; an image from a digital camera can easily get out of hand during manipulation, while a drum scan offers wider latitude and richer color and tonality. Images are also sharper.

As long as you do not specify which film size you compare with which digital camera, this is totally useless information.

6×7 and even 35mm compared to 10D and Fuji S2

6×7 film scanned at the usual film-grain visibility limit of 2800 spi will result in 6600×7700 pixels, or almost 51 megapixels; 35mm film will give roughly a 2650×3970 pixel / 10.5 MP image.

You compare this to images from a 6 MP camera.

The drum scanner usually uses the full range of 48 bits for the saved image while the digital camera starts with a 12bit sensor so only 36bits are used in a tiff created from the raw image.

In each case the two or three lowest used bits are only noise, so the digital camera comes very near to the eight bits per channel of the monitor where the image is shown, while the image from the drum scanner still has 11 bits per channel of significant data.

But then: you compare an absolute high-end process that takes several hours in the best case, using devices that cost several tens of thousands of dollars, to a single camera – what do you expect?

Michael
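
Michael's arithmetic, spelled out as a short Python sketch (frame sizes are nominal; an actual 6×7 frame is nearer 56 × 70 mm):

MM_PER_INCH = 25.4

def scan_megapixels(width_mm, height_mm, spi):
    # Pixel count across each dimension at the given samples per inch.
    w = width_mm / MM_PER_INCH * spi
    h = height_mm / MM_PER_INCH * spi
    return w * h / 1e6

print(f"6x7 at 2800 spi:  {scan_megapixels(60, 70, 2800):.0f} MP")   # ~51 MP
print(f"35mm at 2800 spi: {scan_megapixels(24, 36, 2800):.1f} MP")   # ~10.5 MP

# Bit depth, per the same argument: a 12-bit sensor minus ~3 noisy bits
# leaves ~9 significant bits per channel, close to a monitor's 8, while
# a 16-bit-per-channel scan retains several more significant bits.
print(f"camera: ~{12 - 3} significant bits/channel")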
X
Xalinai
Nov 14, 2004
Tacit wrote:

Scanners have circular dots, while digital cameras have square dots. That is why you never have moiré problems with digital cameras.

What???!!

Consumer-grade scanners use CCDs identical to the CCDs in digital cameras; the sampling elements are square. What on Earth are you talking about?

Don’t feed the PSP-group troll.
C
Chris
Nov 15, 2004
In article ,
"Scruff" wrote:

"Chris Havel" wrote in message
In article ,
"Scruff" wrote:

"Johan W. Elzenga" wrote in message
Xalinai wrote:

In the normal world, consumers expect a manufacturer to tell them the truth, or at least not to lie. If a car manufacturer says the engine has four cylinders and delivers 100 horsepower, you do not open the hood to count them and hire 100 horses to check those claims. You could, but you don’t.

Yep, just try and get the gas mileage that car manufacturers claim!

Except the manufacturers don’t set the estimated mileage rating … the feds do.

Yes, and the end result is the same.
Here’s an interesting article about that
http://www.csmonitor.com/2004/0715/p12s01-wmgn.html

Cool. Thanks.

Note that the mileage estimate isn’t wrong because the EPA is lying to sell more product, but rather just wrong because they’re dolts.


C
U
Uni
Nov 15, 2004
Xalinai wrote:
Tacit wrote:

Scanners have circular dots, while digital cameras have square dots. That is why you never have moiré problems with digital cameras.

What???!!

Consumer-grade scanners use CCDs identical to the CCDs in digital cameras; the sampling elements are square. What on Earth are you talking about?

Don’t feed the PSP-group troll.

Most people try their damnedest to prove me wrong, but seldom do.

🙂

Uni
MR
Mike Russell
Nov 15, 2004
Uni wrote:
….
Scanners have circular dots, while digital cameras have square dots. That is why you never have moiré problems with digital cameras.

Hee Hee. That makes my day.
N
nomail
Nov 15, 2004
Xalinai wrote:

6×7 film scanned at the usual film-grain visibility limit of 2800 spi will result in 6600×7700 pixels, or almost 51 megapixels; 35mm film will give roughly a 2650×3970 pixel / 10.5 MP image.

You compare this to images from a 6 MP camera.

The drum scanner usually uses the full range of 48 bits for the saved image while the digital camera starts with a 12bit sensor so only 36bits are used in a tiff created from the raw image.

In each case the two or three lowest used bits are only noise, so the digital camera comes very near to the eight bits per channel of the monitor where the image is shown, while the image from the drum scanner still has 11 bits per channel of significant data.

But then: you compare an absolute high-end process that takes several hours in the best case, using devices that cost several tens of thousands of dollars, to a single camera – what do you expect?

Exactly. Take the 35mm slide again, but now compare it with a Canon 1Ds MkII, and your conclusions may be entirely different…


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/
U
Uni
Nov 16, 2004
Mike Russell wrote:
Uni wrote:


Scanners have circular dots, while digital cameras have square dots. That is why you never have moiré problems with digital cameras.
🙂

Uni

Hee Hee. That makes my day.

Sort of like some usenet user challenging everyone, claiming 16 bits per color channel was no better than 8 bits. However, every software vendor is now offering 16-bit editing.

🙂

Uni
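
The 16-bit point is easy to demonstrate: push a tone curve one way and back at each bit depth and count how many distinct grey levels survive. A small numpy experiment (illustrative only, not from the thread; the gamma value is arbitrary):

import numpy as np

ramp = np.linspace(0, 1, 256)                  # a smooth greyscale ramp

def round_trip(values, bits, gamma=2.2):
    scale = 2**bits - 1
    q = lambda x: np.round(x * scale) / scale  # quantize to the bit depth
    darkened = q(q(values) ** gamma)           # strong curve, then store
    return q(darkened ** (1 / gamma))          # inverse curve, then store

levels_8  = len(np.unique(np.round(round_trip(ramp, 8)  * 255)))
levels_16 = len(np.unique(np.round(round_trip(ramp, 16) * 255)))
print(f"8-bit round trip keeps {levels_8} of 256 levels")    # noticeably fewer
print(f"16-bit round trip keeps {levels_16} of 256 levels")  # nearly all 256

At 8 bits the shadow values collapse onto each other and leave gaps (posterization); at 16 bits the quantization error is too small to matter after the round trip.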
A7
aka 717
Nov 22, 2004
"Johan W. Elzenga" wrote in message
Xalinai wrote:

6×7 film scanned at the usual film-grain visibility limit of 2800 spi will result in 6600×7700 pixels, or almost 51 megapixels; 35mm film will give roughly a 2650×3970 pixel / 10.5 MP image.

You compare this to images from a 6 MP camera.

The drum scanner usually uses the full range of 48 bits for the saved image while the digital camera starts with a 12bit sensor so only 36bits are used in a tiff created from the raw image.

In each case the two or three lowest used bits are only noise, so the digital camera comes very near to the eight bits per channel of the monitor where the image is shown, while the image from the drum scanner still has 11 bits per channel of significant data.

But then: you compare an absolute high-end process that takes several hours in the best case, using devices that cost several tens of thousands of dollars, to a single camera – what do you expect?

Exactly. Take the 35mm slide again, but now compare it with a Canon 1Ds MkII, and your conclusions may be entirely different…


Johan W. Elzenga johan<<at>>johanfoto.nl Editor / Photographer http://www.johanfoto.nl/

Drum scanners don’t take that long.
