Archiving: TIFF or PSP, 16 bit or 8 bit?

Posted by Robert A, Apr 4, 2004. Views: 1165. Replies: 56. Status: Closed.
Two questions, same subject:

How do you permanently archive your Vuescan files? Do you leave them in the native TIFF format, or convert them to PSP? Moreover, do you leave them in the original 16-bit, or convert them to 8-bit to save space?

-Robert Ades

Mike Russell
Apr 4, 2004
Robert A wrote:
Two questions, same subject:

How do you permanently archive your Vuescan files? Do you leave them in the native TIFF format, or convert them to PSP? Moreover, do you leave them in the original 16-bit, or convert them to 8-bit to save space?

Robert,

CDs and hard drive space are cheap. Archive your original raw scans, and save your corrected images on the same disk as well. The amount of work required to do each scan is much more important than the amount of storage required.

If you have so many scanned images that the number of CDs is oppressive, either switch to DVD, or consider saving as 8-bit TIFF.


Mike Russell
www.curvemeister.com
www.geigy.2y.net
Robert A
Apr 4, 2004
Mike, my 4000-dpi 16-bit scans take up more than 100 MB each. Normally, when working in Photoshop, I convert to 8-bit right away. The reason I scan in 16-bit is for greater dynamic range and shadow detail. Once the scan is done, is there any useful reason to keep that 16-bit data?

-Robert
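
(Editorial aside, for scale: a quick sketch in Python of the arithmetic behind that figure, assuming a full 36×24 mm frame scanned edge to edge in RGB with no compression; the exact crop will vary.)

    # Rough size of a 4000 dpi, 16-bit RGB scan of a full 35 mm frame.
    # Assumptions: 36 x 24 mm frame, 3 channels, no compression.
    dpi = 4000
    width_px = round(36 / 25.4 * dpi)    # ~5669
    height_px = round(24 / 25.4 * dpi)   # ~3780
    pixels = width_px * height_px        # ~21.4 million
    size_16 = pixels * 3 * 2             # 3 channels x 2 bytes each
    print(f"{pixels / 1e6:.1f} Mpx")
    print(f"{size_16 / 2**20:.0f} MB at 16 bpc, {size_16 / 2 / 2**20:.0f} MB at 8 bpc")
    # -> 21.4 Mpx; about 123 MB at 16 bpc, 61 MB at 8 bpc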

"Mike Russell" wrote in message
Robert A wrote:
Two questions, same subject:

How do you permanently archive your Vuescan files? Do you leave them in the native TIFF format, or convert them to PSP? Moreover, do you leave them in the original 16-bit, or convert them to 8-bit to save space?

Robert,

CD’s and hard drive space are cheap. Archive your original raw scans, and save your corrected images as well on the same disk. The amount of work required to do each scan is much more important than the amount of storage required.

If you have so many scanned images that the number of CD’s is oppressive, either switch to DvD, or consider saving as 8 bit tiff.


Mike Russell
www.curvemeister.com
www.geigy.2y.net

Mike Russell
Apr 4, 2004
Robert A wrote:
Mike, my 4000-dpi 16-bit scans take up more than 100 MB each. Normally, when working in Photoshop, I convert to 8-bit right away. The reason I scan in 16-bit is for greater dynamic range and shadow detail. Once the scan is done, is there any useful reason to keep that 16-bit data?

Are you scanning medium format? I wonder if you are getting any additional resolution over, say, a 25 or 50 meg scan. Have you experimented and can you see the difference on your prints? You may even be losing sharpness by scanning at too high a resolution. Scanning at a high resolution introduces softness, which you must then compensate for by sharpening.

But back to your question. Volumes have been written on the topic of 8 bits versus 16, and I have contributed some bulk to that discussion.

My personal conclusion is that 8 bits per channel is plenty for today’s technology, and the evidence I offer is that (for a gamma 1.8 or greater image) it is impossible to tell by looking, and looking, after all, is what we do with photographs.

But there are those for whom that argument is not convincing, and for whom throwing away any image data is not something they can justify. Whether or not I agree with the technical reasons for keeping this extra data, I have to say many of these people do create better photographs and prints than I do.

So, pick which side of the fence you want to be on. Above all, keep your originals in a safe place – scanners can only continue to get better and better.


Mike Russell
www.curvemeister.com
www.geigy.2y.net
Uni
Apr 4, 2004
Robert A wrote:
Two questions, same subject:

How do you permanently archive your Vuescan files? Do you leave them in the native TIFF format, or convert them to PSP? Moreover, do you leave them in the original 16-bit, or convert them to 8-bit to save space?

If you prefer to discard critical color information, reduce them to 8-bit.

Uni


Robert A
Apr 5, 2004
I’m scanning 35mm. I scan based on the intended print size: for 13×19 I use 4000 dpi; for smaller prints, I adjust accordingly. I always scan at 16-bit (actually 14-bit with my Canon FS4000US) to a TIFF file.

Once in Photoshop, I adjust levels in 16-bit if necessary, then convert to 8-bit and make all the remaining adjustments for final output in a PSP file. I know that my Epson 2200 printer only uses 8 bits/channel, so I see no point in preserving 16-bit data for current prints. But I still retain the original 16-bit TIFF file, unretouched and unmodified in Photoshop, for the archive.

My understanding is that it’s important to SCAN at the highest bit depth possible, but once you have the file in your computer, there’s little if any use for the extra bits in terms of archiving. Is there any general agreement on this?

Robert Ades

"Mike Russell" wrote in message
Robert A wrote:
Mike, my 4000-dpi 16-bit scans take up more than 100Mb each. Normally when working in Photoshop, I convert to 8-bit right away. The reason I scan in 16-bit is for greater dynamic range and shadow detail. Once the scan is done, is there any useful reason to keep that 16-bit data?

Are you scanning medium format? I wonder if you are getting any
additional
resolution over, say, a 25 or 50 meg scan. Have you experimented and can you see the difference on your prints? You may even be losing sharpness
by
scanning at too high a resolution. Scanning at a high resolution
introduces
softness, which you must then compensate for by sharpening.
But back to your question. Volumes have been written on the topic of 8
bits
versus 16, and I have contributed some bulk to that discussion.
My personal conclusion is that 8 bits per channel is plenty for today’s technology, and the evidence I offer is that (for a gamma 1.8 or greater image) it is impossible to tell by looking, and looking, after all, is
what
we do with photographs.

But there are those for whom that argument is not convincing, and the act
of
throwing away any image data is not something they can justify. Whether I agree with the technical reasons for this extra data, I have to say many
of
these people do create better photographs and prints than I do.
So, pick which side of the fence you want to be on. Above all keep your originals in a safe place – scanners can only continue to get better and better.


Mike Russell
www.curvemeister.com
www.geigy.2y.net

Kennedy McEwen
Apr 5, 2004
In article, Robert A writes:
Two questions, same subject:

How do you permanently archive your Vuescan files? Do you leave them in the native TIFF format, or convert them to PSP? Moreover, do you leave them in the original 16-bit, or convert them to 8-bit to save space?
Certainly don’t even consider PSP as an archive format; it is a proprietary coding scheme which may not be supported in the future. TIFF is an open coding scheme which is supported by virtually all image processing applications, and it will not only continue to be supported but continue to develop.

The 16/8-bpc argument continues, but I have yet to see any evidence in favour of 16bpc archiving, despite Dan Margulis issuing an open challenge, more than three years ago, for anyone to demonstrate an image that could be achieved with 16bpc processing but could not also be achieved with 8bpc processing. Given that failure, the generally accepted principle is to scan in 16bpc (or the greatest available bit depth of the scanner) and optimise the image’s gamma and levels before archiving in 8bpc format.

No doubt such sacrilegious advice will elicit much consternation and opposing views amongst the collective. 😉

Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
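
(Editorial sketch of the principle, using numpy; not from the thread. Stretch the levels on the 16-bit data and the 8-bit archive keeps a smooth histogram; convert to 8-bit first and the same stretch leaves comb-like gaps.)

    import numpy as np

    # Simulate an underexposed scan: a smooth ramp in the lowest quarter
    # of the 16-bit range.
    x16 = (np.linspace(0, 0.25, 100_000) * 65535).astype(np.uint16)

    def stretch(a, maxval):
        # Linear levels stretch: map [0, max(a)] onto [0, maxval].
        return (a.astype(np.float64) * (maxval / a.max())).round().astype(np.uint16)

    adjusted_then_8bit = stretch(x16, 65535) >> 8     # adjust in 16-bit, archive 8-bit
    eight_bit_then_adjusted = stretch(x16 >> 8, 255)  # archive 8-bit, adjust later

    print(np.unique(adjusted_then_8bit).size)       # 256 levels: smooth histogram
    print(np.unique(eight_bit_then_adjusted).size)  # 64 levels: gappy histogram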
Kennedy McEwen
Apr 5, 2004
In article <sX_bc.32629$>, Mike Russell writes:
Scanning at a high resolution introduces softness, which you must then compensate for by sharpening.
On what equipment are you experiencing this particular kind of madness? Scanning at an increased sampling density may not offer any more resolution in the image, but it certainly cannot make it any less sharp or softer!

Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
Hecate
Apr 5, 2004
On Mon, 5 Apr 2004 01:55:07 +0100, Kennedy McEwen wrote:

No doubt such sacrilegious advice will solicit much consternation and opposing views amongst the collective. 😉

😉 Only slightly..

I prefer to scan at the highest bit depth possible, do nothing to the file, and then archive it to either/both DVD and a firewire hard disk.

Then, I work on a copy of that file. It means I always have the 16 bit file to fall back on should something disastrous happen. I used to work in computer support and I’ve seen too many screw-ups which resulted in original files being trashed with nothing to fall back on.

Anyway, that’s my excuse – maybe I’m just paranoid ;-0



Hecate

veni, vidi, reliqui
Mike Russell
Apr 5, 2004
Kennedy McEwen wrote:
On what equipment are you experiencing this particular kind of madness? Scanning at an increased sampling density may not offer any more resolution in the image, but it certainly cannot make it any less sharp or softer!

As you approach the Nyquist frequency, certain frequencies are reduced in a very predictable way. This is softness.

Artificially boosting those frequencies yields a better approximation to the original image’s frequency distribution, and a more natural appearance. This is sharpening.



Mike Russell
www.curvemeister.com
www.geigy.2y.net
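
(Editorial sketch: the attenuation Mike describes can be put in numbers. This models only the box-shaped sampling aperture of an idealised 100%-fill-factor sensor; real scanners add lens and film MTF on top.)

    import numpy as np

    # MTF of a square sampling aperture is |sinc(f/fs)|, where fs is the
    # sampling frequency. At the Nyquist limit, f = fs/2, the response has
    # fallen to 2/pi ~ 0.64: the predictable "softness" that unsharp
    # masking later tries to compensate for.
    for f in (0.1, 0.25, 0.5):            # spatial frequency as fraction of fs
        print(f"f = {f:.2f} fs -> MTF = {np.sinc(f):.2f}")  # sinc(x) = sin(pi x)/(pi x)
    # f = 0.10 fs -> MTF = 0.98
    # f = 0.25 fs -> MTF = 0.90
    # f = 0.50 fs -> MTF = 0.64  (Nyquist)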
Robert A
Apr 5, 2004
But is there any benefit to having a 16-bit backup?

"Hecate" wrote in message
On Mon, 5 Apr 2004 01:55:07 +0100, Kennedy McEwen
wrote:

No doubt such sacrilegious advice will solicit much consternation and opposing views amongst the collective. 😉

😉 Only slightly..

I prefer to scan at the highest bit depth possible, do nothing to the file, and then archive it to either/both DVD and a firewire hard disk.
Then, I work on a copy of that file. It means I always have the 16 bit file to fall back on should something disastrous happen. I used to work in computer support and I’ve seen too many screw-ups which resulted in original files being trashed with nothing to fall back on.
Anyway, that’s my excuse – maybe I’m just paranoid ;-0



Hecate

veni, vidi, reliqui
Wayne Fulton
Apr 5, 2004
In article, Robert A says…
But is there any benefit to having a 16-bit backup?

Not likely, if it already appears as a halfway decent image.

16 bits may be useful for the extreme tone-shifting adjustments – gamma specifically, but also histogram B&W points or Curves (the latter is of debatable benefit, but it is popularly done in 16-bit).

If we are scanning and saving RAW data (no adjustments done), then 16 bits is good, since all of these operations are still to come. This would be the purpose of 16 bit data.

But if these operations are already generally done (and archive seems to imply that), then there would be no point in saving 16 bits. Additional fine adjustments don’t need 16 bits at all.

Printers and video are 8 bit devices.


Wayne
http://www.scantips.com "A few scanning tips"
Uni
Apr 5, 2004
Wayne Fulton wrote:

Printers and video are 8 bit devices.

http://www.aja.com/kona.htm

🙂

Uni

Kennedy McEwen
Apr 5, 2004
In article <3H2cc.32736$>, Mike Russell writes:
As you approach the Nyquist frequency, certain frequencies are reduced in a very predictable way. This is softness.
This is a contradiction of your earlier statement, since scanning at a higher resolution (ie. increased sampling density) results in a higher Nyquist limit and thus, by your latter argument, shifts the onset of this "softness" to higher spatial frequencies in the image. In short, your latter argument indicates that scanning at a higher resolution results in *less* softness, not more, in an image scaled at the same size! Which argument are you making?

The reproduction of spatial frequencies is reduced by the MTF of the scanner, which decreases not only as you approach Nyquist, but throughout the spatial frequency range, usually monotonically from a maximum at zero cy/mm. Sampling density merely determines where on that MTF curve the Nyquist limit sits. As sampling density increases, more of the total MTF range is included in the spatial frequency range that can be unambiguously reproduced – so more information is resolved, not less. Clearly this is an issue of diminishing returns, but the result is always positive – more total detail resolved, not less and certainly not more "softness". In some systems it is possible to sample such that the Nyquist limit lies beyond the limiting MTF of the scanner (eg. most flatbed scanners) thus meeting the criteria for total elimination of aliasing. In such cases, increasing the sampling density will not gain resolution, but neither will it increase image softness – the end result is just more data representing the same image information content.

Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
Kennedy McEwen
Apr 5, 2004
In article, Robert A writes:
But is there any benefit to having a 16-bit backup?
That is the $64k question. As mentioned, the general consensus is that after level adjustments are made there is little point in retaining the additional bits. As mentioned, the challenge is still out there to provide examples where this is not the case, but I am not aware of anyone having successfully done that (though quite a few have tried).
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
Kennedy McEwen
Apr 5, 2004
In article, Uni writes:
http://www.aja.com/kona.htm
I have built systems (monochrome as it happens, but that shouldn’t influence the result) with 12-bit ADCs on the video channel output. The difference cannot be perceived, but it is something the marketing folks like to exploit. 😉

Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
Mike Russell
Apr 5, 2004
Kennedy McEwen wrote:
[...] As sampling density increases, more of the total MTF range is included in the spatial frequency range that can be unambiguously reproduced – so more information is resolved, not less. [...]

Certainly I agree that a higher scan rez extracts more information.

My point is that sharpening is an indispensable step after resampling. That resampling may be explicit, as when you resize an image in Photoshop, or it could be implicit, as when a large image is printed at a small size.

For example, the original poster scans to 150 meg – if that is printed at 8×10 without sharpening, it will be softer than an image scanned at a lower ppi.

I believe this is supported by theory, as I described, and by common practice in the industry. If you disagree, or if you believe that sharpening is otherwise not needed, I’m interested in your explanation.



Mike Russell
www.curvemeister.com
www.geigy.2y.net
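
(Editorial sketch of the resample-then-sharpen step Mike describes, using the modern Pillow library; the file names and the unsharp-mask settings are placeholder assumptions to be tuned per image.)

    from PIL import Image, ImageFilter

    # Downsample a large scan to print size, then restore the edge contrast
    # lost to resampling with a modest unsharp mask.
    scan = Image.open("scan_4000dpi.tif")          # hypothetical file name
    small = scan.resize((3000, 2400), Image.LANCZOS)
    sharp = small.filter(ImageFilter.UnsharpMask(radius=1.0, percent=80, threshold=2))
    sharp.save("print_8x10.tif")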
toby
Apr 5, 2004
Kennedy McEwen wrote:

Certainly don’t even consider PSP as an archive format, it is a proprietary coding scheme which may not be supported in the future.

Not true; the Paint Shop Pro file format (at least through v8) is documented[1], as are its compression schemes (RLE and LZ77[3]). As proof of this, I have written a PSP format plugin for Photoshop[2] which is interoperable with PSP 5-8 *and released as open source under the GPL*. It is a more open format, for example, than Photoshop PSD or PSB; documentation for those is not freely available.

Arguments for or against archiving in PSP format might perhaps take into account issues such as metadata and colour profiling. For interoperability with non-proprietary tools, standardisation and "future-proofing", TIFF or JPEG seem very good choices.

Toby

[1] http://www.jasc.com/support/kb/articles/pspspec.asp
[2] http://www.telegraphics.com.au/sw/#pspformat
[3] A PSP-compatible FREE implementation of LZ77 exists: http://www.gzip.org/zlib/
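
(Editorial aside: Toby's point that the compression side is freely implementable is easy to check, since the LZ77/deflate family he cites via zlib ships in Python's standard library. A minimal sketch:)

    import zlib

    # Round-trip some bytes through zlib's LZ77-based deflate, the freely
    # available implementation Toby cites for PSP compatibility.
    raw = bytes(range(256)) * 1000
    packed = zlib.compress(raw, 9)
    assert zlib.decompress(packed) == raw
    print(len(raw), "->", len(packed), "bytes")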
Kennedy McEwen
Apr 5, 2004
In article <oS9cc.18633$>, Mike Russell writes:
Scanning at a high resolution introduces softness, which you must then compensate for by sharpening.
[...]
Certainly I agree that a higher scan rez extracts more information.
My point is that sharpening is an indispensable step after resampling. That resampling may be explicit, as when you resize an image in Photoshop, or it could be implicit, as when a large image is printed at a small size.
But as you can see, from your first quoted line above which you have conveniently retained throughout this thread, your statement concerned high resolution scanning, not resampling.


Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
Kennedy McEwen
Apr 5, 2004
In article, Toby Thain writes:
Not true; the Paint Shop Pro file format (at least through v8) is documented[1], as are its compression schemes (RLE and LZ77[3]).

As were the technical details of Betamax!

Its specification may be available, but it is still a proprietary format that is, in the main, only supported by JASC software and owned by them. Remember the GIF format and the Unisys debacle?

Nobody else really bothers with PSP simply because it offers little that is not already available or bettered in industry standard formats. TIFF, in particular, supports a wide variety of compression techniques or, indeed, no compression at all, making it completely immune from Unisys type action.

Folks will be using TIF and JPG formats long after JASC have gone bust or attempted to revoke the licences for PSP.

Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
Preston Earle
Apr 5, 2004
"Kennedy McEwen" wrote: "Scanning at an increased sampling density may not offer any more resolution in the image, but it certainly cannot make it any less sharp or softer!" and, later, "But as you can see, from your first quoted line above which you have conveniently retained throughout this thread, your statement concerned high resolution scanning, not resampling."
———————

I haven’t followed this thread closely, and I don’t know anything about Nyquist frequencies, but aren’t you quibbling a little? <g> If an image is printed at a particular size from two otherwise similar files of different resolutions, the image from the higher-res original will be softer and appear less sharp. Whether from scanning or resizing, the higher-res image will be softer, will it not? [See page 12 of the PDF at http://www.ledet.com/margulis/PP7_Ch15_Resolution.pdf (or page 306 of the book).]

Preston Earle
Kennedy McEwen
Apr 5, 2004
In article <sgicc.24215$>, Preston Earle writes:
"Kennedy McEwen" wrote: "Scanning at an increased sampling density may not offer any more resolution in the image, but it certainly cannot make it any less sharp or softer!" and, later, "But as you can see, from your first quoted line above which you have conveniently retained throughout this thread, your statement concerned high resolution scanning, not resampling."
———————

I haven’t followed this thread closely, and I don’t know anything about Nyquist frequencies, but aren’t you quibbling a little? <g>

Obviously not! 🙂

If an image is printed at a particular size from two otherwise similar files of different resolutions, the image from the higher-res original will be softer and appear less sharp.

That is complete rubbish – the higher resolution image will always, unless you have had to degrade it in some way to meet the constraints of your printer, be much sharper than the lower resolution image (assuming that the image contains adequate fine detail with which to observe the difference in the first place).

Where on earth did you ever get the idea that higher resolution meant less sharp results?

Whether from scanning or resizing, the higher-res image will be softer, will it not?

Definitely not. A higher resolution scan will contain finer detail and sharper edges than a lower resolution scan, if the information is present in the image in the first place. Even when it is not, it cannot be less sharp than the lower resolution scan, only as sharp. A resized image can never contain any more detail or sharper edges than it already has. How much the resized image softens depends on the algorithm used. For example, nearest neighbour interpolation will retain apparent edge sharpness completely, whilst bicubic will soften it slightly and bilinear more so. None of these algorithms will provide an image as sharp as a higher resolution scan – again assuming that the image has higher resolution content to bring out in the first place.

[See page 12 of the PDF at http://www.ledet.com/margulis/PP7_Ch15_Resolution.pdf (or page 306 of the book).]
Not the same thing at all. Both images have been resampled to exactly the same resolution for presentation on the page. This is fairly obvious if you zoom into the images in the pdf file you referenced – although the upper image has been scanned at 3x the resolution it has exactly the same pixel dimensions as the lower image.

What you are looking at here is 3rd (and higher, odd) harmonic distortion caused by reproducing each sample as a square pixel on the page. (Recall harmonic distortion in audio – well, you get it in images too!) Each sample, however, only represents the image at an infinitely small point in space, called a delta function, which has a volume equal to the average light incident on the CCD sensor centred at that point. The sample, in reality, does not exist anywhere else; however, an array of delta functions is not a particularly useful thing to look at – for one thing, they require an infinite video bandwidth to reproduce on your monitor, and an infinite dpi printer to represent them. So each delta function is represented instead by a pixel, which has a finite dimension but is, in fact, a completely false representation. What should occur between the samples depends on how the user chooses to reproduce the delta function in pixel terms – how he *interpolates* between the samples. Block pixels are simply a uniform square interpolation – introducing every odd harmonic spatial frequency above what is possible for the samples to contain, which is simply false information. However, they do make the image look artificially sharp. It is important to draw a distinction right away between the use of the term interpolation here and what is normally referred to by the same term in upscaling – this is simply how each sample is represented by a pixel, in terms of its size, shape and intensity profile, in the final image.

That conventional square uniform pixel reconstruction process is no more valid than a linear interpolated pixel, where each pixel is represented by an intensity at its centre proportional to the volume of the delta function it represents, and which linearly merges to reach the intensity of the neighbouring pixels at their centres. In the simplest, bilinear, case each pixel is effectively a square-based pyramid (height representing intensity), with the corners incident on the centres of the neighbouring pixels. Although bilinear is the simplest version of this and implements the linear merging only in the horizontal and vertical axes, you can imagine octagonal interpolation, where the intensity of each pixel merges to the 8 nearest neighbours, or even circularly symmetric interpolation. Clearly such interpolation schemes cannot be linear, since the sum of the uniform samples must also be a uniform illumination field but, nevertheless, such interpolation is possible. Similarly, there are higher order profile pixels which have intensities which are polynomial curves, even pixels which extend their intensity profile well beyond their nearest neighbours, although being constrained to zero at them. Indeed, the ideal pixel reproduction, which introduces no spatial harmonic distortion on the image at all, would have just such a profile, extending to infinity in all directions.

What you are doing when you upscale an image using bilinear, bicubic or any other interpolation method is *simulating* that pixel representation by using higher resolution pixels to create the intermediate samples. What you therefore perceive as a reduction in sharpness through bilinear upscaling is merely the effect of a different pixel reproduction, not a loss in sharpness over the original lower sampling density original.

The proof of this? Simply upscale using nearest neighbour interpolation. That gives you a simulation of the square uniform pixel reproduction using several new pixels to represent each old one but now, of course, the effect of sharpness is retained.

In short, the effective sharpness of a scaled image is nothing to do with the resolution, simply how each pixel is represented in the first place. Quite different from increased scanning resolution, where information conveying true image sharpness is pulled off of the original medium, rather than synthetic odd harmonic distortions.

Mike was not referring to how the image was printed or reproduced in his comments, merely what happened when an image was scanned – hence my original question.

Nevertheless, since you clearly believe that higher resolution scans are softer and thus, by default, that lower resolution scans are sharper, can we expect to see your Minolta film scanner appearing on e-bay whilst you "trade up" to a sharper 300, perhaps only 100ppi, piece of antiquity? Why don’t you just go the whole hog and flash a single photodiode at your slides to get an ultrasharp 1×1 pixel rendition of the entire image on each slide? The next step is just to remove the sensor completely and type a random character into a file called "image.raw" and observe the infinite sharpness of it all. 😉
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
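
(Editorial sketch: Kennedy's nearest-neighbour versus bilinear comparison is easy to reproduce with Pillow; the file names are placeholders.)

    from PIL import Image

    # Upscale the same scan two ways. Nearest neighbour replicates each
    # sample as a hard-edged block (apparent sharpness kept, plus the
    # odd-harmonic "blockiness" described above); bilinear interpolates
    # between samples and looks softer. Neither adds real detail.
    img = Image.open("lowres_scan.tif")        # hypothetical file name
    w, h = img.size
    img.resize((4 * w, 4 * h), Image.NEAREST).save("up_nearest.tif")
    img.resize((4 * w, 4 * h), Image.BILINEAR).save("up_bilinear.tif")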
Hecate
Apr 6, 2004
On Sun, 04 Apr 2004 21:59:06 -0500, Wayne Fulton wrote:

But if these operations are already generally done (and archive seems to imply that), then there would be no point in saving 16 bits.
When I say archive I mean scan – no adjustments whatever – archive. I don’t mean archiving after adjustments. 🙂



Hecate

veni, vidi, reliqui
Hecate
Apr 6, 2004
On Mon, 5 Apr 2004 08:13:48 +0100, Kennedy McEwen wrote:

In article, Robert A writes:
But is there any benefit to having a 16-bit backup?
That is the $64k question. As mentioned, the general consensus is that after level adjustments are made there is little point in retaining the additional bits. As mentioned, the challenge is still out there to provide examples where this is not the case, but I am not aware of anyone having successfully done that (though quite a few have tried).

I agree. However, there is good reason to retain the file if you do so *without making any adjustments*. Which is what I do.

It’s a bit like archaeology, where they never excavate a whole site because the technology will improve, giving the opportunity to find out more about a given site. If you save at maximum bit depth without making adjustments, you will then always have a raw (or even RAW <g>) file which may produce better, or different, results at a later date because

a. the technology may have improved and,
b. you may want to make different adjustments to the file than those you originally thought of.



Hecate

veni, vidi, reliqui
Wayne Fulton
Apr 6, 2004
In article, Hecate says…
When I say archive I mean scan – no adjustments whatever – archive. I don’t mean archiving after adjustments. 🙂

Unless you have some way to specify a RAW scan (I can’t imagine wanting RAW), then by definition the scanner software has already done gamma. Gamma does need more than 8 bits, which is why scanners are built that way, but the scanners do this. And the scanner software will also have generally made a first try at the histogram end points, at least the coarse adjustment, so to speak. So there have been adjustments, and that is what I meant by "if it already appears as a halfway decent image".

Other than non-photo contrivances, it has never been convincingly demonstrated that 16-bit output actually helps photos. Can’t hurt, however, other than time and space, and some people do it anyway. Me too, at times. But keep in mind that other than one of the few 16-bit editor programs, there is no other use for a 16-bit image.


Wayne
http://www.scantips.com "A few scanning tips"
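
(Editorial sketch: Wayne's point that gamma needs more than 8 bits at the scanner stage can be seen numerically. Apply gamma 2.2 to evenly spaced linear data and count how many of the 256 output codes are actually reachable; the function name is illustrative.)

    import numpy as np

    def gamma_codes(levels, gamma=2.2):
        # Encode 'levels' evenly spaced linear values to 8-bit gamma output
        # and count the distinct codes that survive the rounding.
        x = np.arange(levels) / (levels - 1)
        return np.unique(np.round(255 * x ** (1 / gamma))).size

    print(gamma_codes(256))     # from  8-bit linear: ~184 codes, gaps in shadows
    print(gamma_codes(65536))   # from 16-bit linear:  256 codes, no gaps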
toby
Apr 6, 2004
Kennedy McEwen wrote:

As were the technical details of Betamax!

Its specification may be available, but it is still a proprietary format that is, in the main, only supported by JASC software and owned by them. Remember the GIF format and the Unisys debacle?

Unlike GIF, unencumbered PSP readers and writers exist, as I have shown.

Nobody else really bothers with PSP simply because it offers little that is not already available or bettered in industry standard formats. TIFF, in particular, supports a wide variety of compression techniques or, indeed, no compression at all, making it completely immune from Unisys type action.

Folks will be using TIF and JPG formats long after JASC have gone bust or attempted to revoke the licences for PSP.

All true… which is why I suggested them in my posting.

The rest of my post was merely intended to correct some wild misconceptions. (In that vein, understand that Jasc cannot "revoke licenses" for free code that reads and writes PSP as it has been documented to date. Unisys’ patent was on the specific LZW compression method. Jasc has no such hold on LZ77, for instance.)

Toby
Kennedy McEwen
Apr 6, 2004
In article, Hecate writes:

If you save at maximum bit depth without making adjustments you will then always have a raw (or even RAW <g>) file which may produce better, or different results at a later date because

a. the technology may have improved and,
b. you may want to make different adjustments to the file than those you originally thought of.
Unless you evolve some new technology to replace your eyes then "a" is irrelevant. However, if you are going to save without making any adjustments at all then do so at the highest bit depth available.
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
Kennedy McEwen
Apr 6, 2004
In article, Toby Thain writes:

Unlike GIF, unencumbered PSP readers and writers exist, as I have shown.
As did many readers and writers for GIF, until Unisys decided to enforce their IPR which they had previously been quite happy for everyone to use openly.

In that vein, understand that Jasc cannot "revoke licenses" for free code that reads and writes PSP as it has been documented to date. Unisys’ patent was on the specific LZW compression method. Jasc has no such hold on LZ77, for instance.

Patents are not the only form of IPR and Jasc certainly do own IPR in the PSP format. What they choose to do with that in the future is anyone’s guess, just as nobody would have predicted Unisys enforcing their IPR in the GIF format.

Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
Bart van der Wolf
Apr 6, 2004
"Kennedy McEwen" wrote in message
In article, Hecate writes:

If you save at maximum bit depth without making adjustments you will then always have a raw (or even RAW <g>) file which may produce better, or different results at a later date because

a. the technology may have improved and,
b. you may want to make different adjustments to the file than those you originally thought of.
Unless you evolve some new technology to replace your eyes then "a" is irrelevant. However, if you are going to save without making any adjustments at all then do so at the highest bit depth available.

Just guessing, but if "Hectate" was thinking of a VueScan Raw (64-bits), it is possible to benefit from e.g. improvements that Ed Hamrick makes to his IR-cleaning method, and he’s working on his "Curves adjustment option". It happened in the past: IR-cleaning improved, and all I had to do was rerun VueScan on the Raw file; I never had to touch the film until I got a better scanner.
I am also thinking about improved tonescaling or High Dynamic Range compression based on Raw scan data.

But you are right, once adjustments have been applied, and no significant new adjustments are anticipated, there’s little benefit in keeping more than 24-bits color.

Bart
RSD99
Apr 7, 2004
You might take a look at both the TIF and PNG formats. I think they’ll both "be around" for quite a while.

PNG has many of the capabilities found in the TIF / TIFF format, but not (yet) the usage in the photo and publishing worlds. Additionally, by design and because of the Unisys fiasco, it has a compression scheme that is not covered by patents. Its usage can easily extend past the ‘web only’ realm, and it might be a good candidate for archiving. The only thing is, I don’t think it can handle CMYK (yet).

"Kennedy McEwen" wrote in message
In article , Toby Thain
writes
Kennedy McEwen wrote in message
news:…

Remember the GIF format and the Unisys debacle?

Unlike GIF, unencumbered PSP readers and writers exist, as I have shown.
As did many readers and writers for GIF, until Unisys decided to enforce their IPR which they had previously been quite happy for everyone to use openly.

In that vein, understand that Jasc cannot "revoke
licenses" for free code that reads and writes PSP as it has been documented to date. Unisys’ patent was on the specific LZW compression method. Jasc has no such hold on LZ77, for instance.

Patents are not the only form of IPR and Jasc certainly do own IPR in the PSP format. What they choose to do with that in the future is anyone’s guess, just as nobody would have predicted Unisys enforcing their IPR in the GIF format.

Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
Hecate
Apr 7, 2004
On Mon, 05 Apr 2004 22:17:30 -0500, Wayne Fulton wrote:

But keep in mind that other than one of the few 16-bit editor programs, there is no other use for a 16-bit image.

Understood. It’s purely for image editor use, i.e. Photoshop.



Hecate

veni, vidi, reliqui
Hecate
Apr 7, 2004
On Tue, 6 Apr 2004 17:54:53 +0100, Kennedy McEwen wrote:

Unless you evolve some new technology to replace your eyes then "a" is irrelevant. However, if you are going to save without making any adjustments at all then do so at the highest bit depth available.

I meant the technology for making image adjustments. <shrug>

And yes, that is why I save them at the highest bit depth – it makes no sense to "degrade" your "negative".



Hecate

veni, vidi, reliqui
Hecate
Apr 7, 2004
On Tue, 6 Apr 2004 20:01:29 +0200, "Bart van der Wolf" wrote:

Just guessing, but if "Hectate" was thinking of a VueScan Raw (64-bits), it is possible to benefit from e.g. improvements that Ed Hamrick makes to his IR-cleaning method, and he’s working on his "Curves adjustment option". I am also thinking about improved tonescaling or High Dynamic Range compression based on Raw scan data.

Thanks, yes. That’s exactly the sort of improvements I mean. Software improvements may make it possible to make the image look better and so forth. Even look different. You never know what you might want to do with an image.

But you are right, once adjustments have been applied, and no significant new adjustments are anticipated, there’s little benefit in keeping more than 24-bits color.
Yes.



Hecate

veni, vidi, reliqui
Uni
Apr 7, 2004
Kennedy McEwen wrote:
In article, Uni writes:

http://www.aja.com/kona.htm
I have built systems (monochrome as it happens, but that shouldn’t influence the result) with 12-bit ADCs on the video channel output. The difference cannot be perceived, but it is something the marketing folks like to exploit. 😉

I believe humans should never be limited to what dumb computers typically provide.
My eyesight (and/or hearing) isn’t digital and never will be.

🙂

Uni
Roger Halstead
Apr 7, 2004
On Sun, 04 Apr 2004 21:17:12 GMT, "Mike Russell" wrote:

Robert A wrote:
Mike, my 4000-dpi 16-bit scans take up more than 100Mb each. Normally when working in Photoshop, I convert to 8-bit right away. The reason I scan in 16-bit is for greater dynamic range and shadow detail. Once the scan is done, is there any useful reason to keep that 16-bit data?

Are you scanning medium format? I wonder if you are getting any additional resolution over, say, a 25 or 50 meg scan. Have you experimented and can you see the difference on your prints? You may even be losing sharpness by scanning at too high a resolution. Scanning at a high resolution introduces

My experience has been that higher resolutions, such as 4000 vs 2000, can introduce a grain effect, while the lower resolution gives a softer image with less detail.

I had some aerial photos of the big bridge between the lower and upper peninsulas of Michigan. I could do far more to the lower resolution scan before the grain effect turned up.

It doesn’t matter if it’s E6 or Kodachrome, that grain effect is there and annoying as can be.

softness, which you must then compensate for by sharpening.
But back to your question. Volumes have been written on the topic of 8 bits versus 16, and I have contributed some bulk to that discussion.

Scanning at 16 bits on the LS5000 ED appears to give a better image than at 8 bits when it comes to post processing. This was readily apparent in some extreme enlargements of tiny parts of the photograph. The grain effect was noticeably less at 16 bits.

That is just my experience from some experimenting today. I normally scan at 8 bits except for some problem slides.

Were I scanning just my present-day work I’d scan at 16 and then go to 8, but with the volume and age of the slides I’m doing it’s strictly 8-bit, and those create 66 meg uncropped images and about 53 cropped to a rectangle (to get rid of the round corners of the slide mounts). 16-bit is twice the 8-bit, or 106 to 132 megs each.
My personal conclusion is that 8 bits per channel is plenty for today’s technology, and the evidence I offer is that (for a gamma 1.8 or greater image) it is impossible to tell by looking, and looking, after all, is what we do with photographs.

But there are those for whom that argument is not convincing, and the act of throwing away any image data is not something they can justify. Whether I agree with the technical reasons for this extra data, I have to say many of these people do create better photographs and prints than I do.
So, pick which side of the fence you want to be on. Above all keep your originals in a safe place – scanners can only continue to get better and better.

I figure at their age a lot of the originals will be useless in a few more years.

Roger Halstead (K8RI & ARRL life member)
(N833R, S# CD-2 Worlds oldest Debonair)
www.rogerhalstead.com
Kennedy McEwen
Apr 7, 2004
In article <4072f07c$0$565$>, Bart van der Wolf writes:

But you are right, once adjustments have been applied, and no significant new adjustments are anticipated, there’s little benefit in keeping more than 24-bits color.
Yes – that is specifically why I did not respond to Hectate’s reason "b".

Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
Kennedy McEwen
Apr 7, 2004
In article, Hecate writes:
I meant the technology for making image adjustments. <shrug>
The latest couple of versions of Photoshop have all of the processing options for 16bpc that I believe I need. Sure, there are some plug-ins available which automate some of those functions for pulling detail out of the deep shadows, but the capability is there in the application itself if you are prepared to do it. The only improvement likely in the technology is true 16bpc processing as opposed to PS’s 15-bit approximation, but 1 bit isn’t gonna make a great deal of difference.
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
Kennedy McEwen
Apr 7, 2004
In article, Uni writes:

I believe humans should never be limited to what dumb computers typically provide. My eyesight (and/or hearing) isn’t digital and never will be.
That isn’t a problem even with 8-bpc computer graphics, because your analogue eyes have less SNR than is available in an 8bpc image. Most studies have indicated that the SNR of your retina is somewhere between 6 & 7 equivalent bits. Even with the requirement to convert from a linear encoding scheme onto a display gamma, 8-bpc is still more than you can visibly discern.

Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
Preston Earle
Apr 7, 2004
"Kennedy McEwen" wrote: "Nevertheless, since you clearly believe that higher resolution scans are softer and thus, by default, that lower resolution scans are sharper, can we expect to see your Minolta film scanner appearing on e-bay whilst you "trade up" to a sharper 300, perhaps only 100ppi, piece of antiquity? Why don’t you just go the whole hog and flash a single photodiode at your slides to get an ultrasharp 1×1 pixel rendition of the entire image on each slide. the next step is just to remove the sensor completely and type a random character into a file called "image.raw" and observe the infinite sharpness of it all. ;-)"
—————————

Let’s remember that images don’t have resolution until they are displayed/printed. A 3800×2500-pixel image is just a 3800×2500-pixel image. If printed at a 12×8-inch size, it is "hi-res" (~300ppi). If it is printed at 12×8-feet, it is "lo-res" (25ppi). When displayed on a monitor of 1024×768-pixel dimensions, it will have more detail when viewed at 100% (pixel-for-pixel, showing only a small portion of the image) than at 25%, showing the full image on the screen.

As Dan Margulis points out, it is the pixels in an image that are *not* the main subject that provide the detail in an image. For example, in the horse-picture in the reference cited, it is the non-grass pixels that provide detail/texture in the grassy area. If all pixels are "grass", the texture is a uniform carpet with little detail.

There is an optimum range for print resolution: below a certain level, the print appears "pixelated"; above a certain level, the "detail" pixels get overwhelmed by the "main subject" pixels and disappear, yielding an image lacking detail (i.e. "softer").

I could go on, but I suspect Kennedy knows this difference, and I don’t want to be repetitive. *Large* images generally have more detail than *small* ones, but very "hi-res" images (above 600ppi) will generally appear softer and less detailed than their lower-res (say, 225ppi) brothers.

Preston Earle
Wayne Fulton
Apr 7, 2004
In article <A9Ucc.764$>, Preston Earle says…

*Large* images generally have more detail than *small* ones, but very "hi-res" images (above 600ppi) will generally appear softer and less detailed than their lower-res (say, 225ppi) brothers.

You initially presented your argument in terms of the subjective qualities of High Resolution Images vs Low Resolution Images (whatever that means), but your conclusion seems rather backwards when you claim "high resolution images are soft".

Dan Margulis was discussing undesirable results due to attempts to print images at an EXCESSIVE RESOLUTION, greater than the specific media screen can handle, but you failed to make that distinction. He did use the word HIGH, but he made the context very clear. There is a conceptual difference between high and excessive.


Wayne
http://www.scantips.com "A few scanning tips"
toby
Apr 7, 2004
Kennedy McEwen wrote:

As did many readers and writers for GIF, until Unisys decided to enforce their IPR which they had previously been quite happy for everyone to use openly.

I’ll say it again. The PSP reading and writing code is unencumbered; Jasc does not have a patent on LZ77 and cannot patent RLE. While its enforcement was something of a surprise at the time, Unisys did actually have a patent (now expired, apparently), where Jasc does not. So it is not the same situation at all.

Kennedy McEwen also wrote:

Patents are not the only form of IPR and Jasc certainly do own IPR in the PSP format. What they choose to do with that in the future is anyone’s guess, just as nobody would have predicted Unisys enforcing their IPR in the GIF format.

There is no enforceable patent etc. in the PSP format through v8. I’ve read it and implemented it and my implementation is under GPL (with acknowledgement to Adler & Gailly, of course, for zlib).
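
For the curious, the interoperability claim is easy to test in practice: if (as the zlib acknowledgement above suggests) the PSP "LZ77" channel compression is ordinary zlib deflate, then any stock zlib binding can round-trip that data. A minimal sketch in Python, assuming plain deflate and ignoring the surrounding chunk structure – illustrative only, not Toby's actual implementation:

    import zlib

    # Stand-in for one channel's raw pixel data (illustrative only).
    raw = bytes(range(256)) * 64

    # What a PSP-style writer would store, assuming plain zlib deflate.
    compressed = zlib.compress(raw, 9)

    # Any stock zlib can read it back -- no vendor code required.
    assert zlib.decompress(compressed) == raw
    print(len(raw), "bytes ->", len(compressed), "bytes compressed")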

Toby
KM
Kennedy McEwen
Apr 7, 2004
In article <A9Ucc.764$>, Preston Earle
writes
"Kennedy McEwen" wrote: "Nevertheless, since you clearly believe that higher resolution scans are softer and thus, by default, that lower resolution scans are sharper, can we expect to see your Minolta film scanner appearing on e-bay whilst you "trade up" to a sharper 300, perhaps only 100ppi, piece of antiquity? Why don’t you just go the whole hog and flash a single photodiode at your slides to get an ultrasharp 1×1 pixel rendition of the entire image on each slide. the next step is just to remove the sensor completely and type a random character into a file called "image.raw" and observe the infinite sharpness of it all. ;-)"
—————————

Let’s remember that images don’t have resolution until they are displayed/printed.

On the contrary – some of the systems I design provide data to equipment designed by colleagues which is totally dependent on resolution, yet an actual image may never be produced at any stage of the process. At most, all that is ever output by the system after photons enter the lens is an angular coordinate! However none of the system would function without resolution.

Indeed, the very measurement of resolution does not even depend on an image being produced!

There is an optimum range for print resolution: below a certain level, the print appears "pixelated"; above a certain level, the "detail" pixels get overwhelmed by the "main subject" pixels and disappear, yielding an image lacking detail (i.e. "softer").
No, and that is NOT what Dan is saying either. As I mentioned in my previous response, Dan is specifically addressing how the image is decimated by the rendering process. There is no resolution beyond which an image becomes apparently softer. The absolute proof of this is to take your best low resolution unsharpened image that you consider to be the most sharp reproduction you can create and prop it up on a stand so that it can be viewed next to the original scene at the same relative scale. The original scene has effectively infinite ppi, with a resolution limited only by your eyes. Guess which will be sharper!

I could go on, but I suspect Kennedy knows this difference, and I don’t want to be repetitive. *Large* images generally have more detail than *small* ones, but very *hi-res* images (above 600ppi) will generally appear softer and less detailed than their lower-res (say, 225ppi) brothers.
Sorry Earle, but that is complete rubbish. I have in front of me at this moment, two identical images. One printed at 240ppi, selected because it fits integrally with the resampling density of the Epson printer I used to create it. The other is scanned and printed at 720ppi. You might want to try to explain why the latter print not only looks sharper to the naked eye, but considerably sharper when examined under a x4 magnifier. (If you search through the archives on this group you will find a previous thread where I addressed this and my normal practice of printing all of my contact sheets at this resolution specifically for this reason.)

Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
KM
Kennedy McEwen
Apr 7, 2004
In article , Kennedy McEwen
writes
That isn’t a problem even with 8-bpc computer graphics because your analogue eyes have more SNR than is available on an 8bpc image.

There should, obviously, be a "no" after "your analogue eyes have" there.

Work is just so enjoyable I was clearly rushing to get off there before posting that at breakfast this morning! 😉

Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
KM
Kennedy McEwen
Apr 7, 2004
In article , Toby Thain
writes
I’ll say it again. The PSP reading and writing code is unencumbered; Jasc does not have a patent on LZ77 and cannot patent RLE. While its enforcement was something of a surprise at the time, Unisys did actually have a patent (now expired, apparently), where Jasc does not. So it is not the same situation at all.
And I’ll say it again too – patents are not the only form of IPR!
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
H
Hecate
Apr 8, 2004
On Wed, 7 Apr 2004 09:04:14 +0100, Kennedy McEwen
wrote:

I meant the technology for making image adjustments. <shrug>
The latest couple of versions of Photoshop have all of the processing options for 16bpc that I believe I need. Sure, there are some plug-ins available which automate some of those functions for pulling detail out of the deep shadows, but the capability is there in the application itself if you are prepared to do it. The only improvement likely in the technology is true 16bpc processing as opposed to PS’s 15-bit approximation, but 1 bit isn’t gonna make a great deal of difference.
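
The arithmetic behind that last point is easy to sketch. Taking the post's description at face value (Photoshop's "16-bit" mode holding roughly 15 bits of precision), the tonal levels per channel at each depth work out as follows – a rough Python sketch, not a statement about Photoshop internals:

    # Tonal levels per channel at each working depth. The 15-bit figure
    # approximates Photoshop's mode (actually 0..32768, i.e. 32769 levels).
    for label, bits in [("8 bpc", 8), ("PS 16-bit mode (~15 bits)", 15), ("true 16 bpc", 16)]:
        levels = 2 ** bits
        print(f"{label}: {levels} levels, smallest step = 1/{levels - 1} of full scale")

    # Going from 15 to 16 bits doubles the level count, but halves a step
    # size that is already far below anything the eye can distinguish.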

Yes, I agree that it is unlikely that further improvements in the capability of Photoshop will make much difference. Then again, the head of IBM said in 1943 (?) that the world would only need 5 computers. I never say never.



Hecate

veni, vidi, reliqui
U
Uni
Apr 8, 2004
Preston Earle wrote:
"Kennedy McEwen" wrote: "Nevertheless, since you clearly believe that higher resolution scans are softer and thus, by default, that lower resolution scans are sharper, can we expect to see your Minolta film scanner appearing on e-bay whilst you "trade up" to a sharper 300, perhaps only 100ppi, piece of antiquity? Why don’t you just go the whole hog and flash a single photodiode at your slides to get an ultrasharp 1×1 pixel rendition of the entire image on each slide. the next step is just to remove the sensor completely and type a random character into a file called "image.raw" and observe the infinite sharpness of it all. ;-)"
—————————

Let’s remember that images don’t have resolution until they are displayed/printed. A 3800×2500-pixel image is just a 3800×2500-pixel image. If printed at a 12×8-inch size, it is "hi-res" (~300ppi). If it is printed at 12×8-feet, it is "lo-res" (25ppi). When displayed on a monitor of 1024×768-pixel dimensions, it will have more detail when viewed at 100% (pixel-for-pixel, showing only a small portion of the image) than at 25%, showing the full image on the screen.

I agree. A very pixelated image will show a moiré pattern when zoomed. This does not mean a moiré pattern exists in the image.

Uni

As Dan Margulis points out, it is the pixels in an image that are *not* the main subject that provide the detail in an image. For example, in the horse-picture in the reference cited, it is the non-grass pixels that provide detail/texture in the grassy area. If all pixels are "grass", the texture is a uniform carpet with little detail.
There is an optimum range for print resolution: below a certain level, the print appears "pixelated"; above a certain level, the "detail" pixels get overwhelmed by the "main subject" pixels and disappear, yielding an image lacking detail (i.e. "softer").
I could go on, but I suspect Kennedy knows this difference, and I don’t want to be repetitive. *Large* images generally have more detail than *small* ones, but very *hi-res* images (above 600ppi) will generally appear softer and less detailed than their lower-res (say, 225ppi) brothers.

Preston Earle

T
toby
Apr 8, 2004
Kennedy McEwen …
In article , Toby Thain
writes
I’ll say it again. The PSP reading and writing code is unencumbered; Jasc does not have a patent on LZ77 and cannot patent RLE. While its enforcement was something of a surprise at the time, Unisys did actually have a patent (now expired, apparently), where Jasc does not. So it is not the same situation at all.

And I’ll say it again too – patents are not the only form of IPR!

In order that I can correct my understanding, please be *explicit*: how can Jasc "revoke" existing free implementations of the PSP format without holding any relevant patents? Since I have published one, this is of more than academic interest. What attack should I expect? Which parts of the implementation are encumbered?

Toby
D
davidjl
Apr 8, 2004
"Toby Thain" wrote:
Kennedy McEwen wrote:
In article , Toby Thain
writes
I’ll say it again. The PSP reading and writing code is unencumbered; Jasc does not have a patent on LZ77 and cannot patent RLE. While its enforcement was something of a surprise at the time, Unisys did actually have a patent (now expired, apparently), where Jasc does not. So it is not the same situation at all.

And I’ll say it again too – patents are not the only form of IPR!

In order that I can correct my understanding, please be *explicit*: how can Jasc "revoke" existing free implementations of the PSP format without holding any relevant patents? Since I have published one, this is of more than academic interest. What attack should I expect? Which parts of the implementation are encumbered?

For starters, they could change the specs, leaving other implementations non-functional with respect to the current version of the program.

David J. Littleboy
Tokyo, Japan
KM
Kennedy McEwen
Apr 8, 2004
In article , Toby Thain
writes
Kennedy McEwen wrote in message
news:…
In article , Toby Thain
writes
I’ll say it again. The PSP reading and writing code is unencumbered; Jasc does not have a patent on LZ77 and cannot patent RLE. While its enforcement was something of a surprise at the time, Unisys did actually have a patent (now expired, apparently), where Jasc does not. So it is not the same situation at all.

And I’ll say it again too – patents are not the only form of IPR!

In order that I can correct my understanding, please be *explicit*: how can Jasc "revoke" existing free implementations of the PSP format without holding any relevant patents? Since I have published one, this is of more than academic interest. What attack should I expect? Which parts of the implementation are encumbered?
I don’t think you should expect any attack at all, but it is naive to believe that you will be able to use their proprietary standard indefinitely. That may well be the case but, since it is proprietary, it is not guaranteed – unless you have some written agreement from Jasc specifically stating otherwise.

How sure are you that every aspect of their format has indeed been openly published on the reference you provided? Before answering, you would be well advised to read again the disclaimers at the start of their publication, which would appear to provide Jasc with specific legal protection from anyone making claims in just such an event. Do you know that the next version of Jasc software will not search for a specific byte sequence which only their software has written to the files since the format was first used?

Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
PE
Preston Earle
Apr 8, 2004
"Wayne Fulton" wrote: "You initially presented your argument as the subjective qualities of High Resolution Images vs Low Resolution Images (whatever that means), but your conclusion seems rather backwards when you claim "high resolution images are soft".

"Dan Margulis was discussing undesirable results due to attempts to print images at an EXCESSIVE RESOLUTION, greater than the specific media screen can handle, but you failed to make that distinction. He did use the word HIGH, but he made the context be very clear. There is a conceptual difference between high and excessive."
—————————

Point taken. If a 3900×2600-pixel image is printed to 6"x4", it will be at 650ppi RESOLUTION, an excessive amount for generally-used output methods. If a 1800×1200-pixel image of the same scene is printed at 6"x4", it will be at 300ppi, generally considered hi-res but not excessive.

In the context of Archiving, particularly when the object is making prints of modest size (4"x6"), I thought it was important to make the point that bigger is not necessarily better. I find it difficult not to scan even casual images at "Maximum Resolution", even when this results in files four times bigger than I really need and clean-up times four times longer than for files of Optimum Resolution. I scan everything at full frame, even though I know I won’t be needing the unnecessary parts of the image. I know this is a waste of time and resources, yet I find it hard not to do.
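
As a sanity check on the arithmetic in the two paragraphs above (a quick Python sketch – the pixel counts are the ones quoted, everything else follows from them):

    # ppi at print time is just pixel dimension over print dimension.
    def ppi(pixels, inches):
        return pixels / inches

    print(ppi(3900, 6))   # 650.0 ppi -- excessive for common output methods
    print(ppi(1800, 6))   # 300.0 ppi -- "hi-res" but not excessive

    # Doubling the linear scan resolution quadruples pixel count,
    # file size, and (roughly) clean-up time:
    w, h = 1800, 1200
    print((2 * w) * (2 * h) // (w * h))   # 4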

Preston Earle
PE
Preston Earle
Apr 8, 2004
"Kennedy McEwen" wrote: "I have in front of me at this moment, two identical images. One printed at 240ppi, selected because it fits integrally with the resampling density of the Epson printer I used to create it. The other is scanned and printed at 720ppi. You might want to try to explain why the latter print not only looks sharper to the naked eye, but considerably sharper when examined under a x4 magnifier. (If you search through the archives on this group you will find a previous thread where I addressed this and my normal practice of printing all of my contact sheets at this resolution specifically for this reason.)"
—————————-

Well, are they "identical" or not? If identical, how do you tell them apart? Do you mean they are images of the same scene or scans of the same piece of film? It would be hard to get two "identical images" without duplicating one to make the other.

If you are talking about duplicated images from the same file, one printed at one print-resolution and one at another, I’m not sure what this shows. There is an optimum pixels-per-output-dot range, usually 1.5 to 2, for conventional and stochastic offset screens, and I haven’t seen any discussion that this is not also true for ink-jet printers. If not duplicate files, then scanner software and image-manipulation software can introduce all sorts of unknown manipulations to the file data. Print drivers can also do "funky" things to particular images. And some image issues are subject-related, moiré being the first to come to mind.

If you believe that more pixels-per-output-dot is always better than fewer, I guess we disagree. If I’m misunderstanding your points, perhaps you need to explain them in words of fewer syllables for this simple printer. (I’m trying to be cute, not making light of you or your position. I apologize if this reads the wrong way.)

Preston Earle
F
franktlcc
Apr 8, 2004
zzzzz…..
WF
Wayne Fulton
Apr 9, 2004
In article <AAjdc.10449$>,
says…
In the context of Archiving, particularly when the object is making prints of modest size (4"x6"), I thought it was important to make the point that bigger is not necessarily better. I find it difficult not to scan even casual images at "Maximum Resolution", even when this results in files four times bigger than I really need and clean-up times four times longer than for files of Optimum Resolution. I scan everything at full frame, even though I know I won’t be needing the unnecessary parts of the image. I know this is a waste of time and resources, yet I find it hard not to do.

I think it is the typical behavior of human male animals <g>

Margulis was specifically discussing 150 lpi prepress screens, and he begins that chapter saying "it’s silly to assume greater resolution is better".

Prepress conventional wisdom says dpi should be in the range of 1.4 to 2.0 times lpi. These are intended to be limits, but we can find some writers calling this multiplier a "quality factor". We all assume 2.0 must be better than 1.4. But Margulis is saying, and showing in his printed book, that the low end may give sharper final images.
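
In code form, the rule of thumb reduces to one multiplication – a sketch with illustrative numbers, using the 150 lpi screen Margulis discusses:

    # Required image resolution at final size = quality factor x screen ruling.
    def needed_ppi(lpi, quality_factor):
        return lpi * quality_factor

    print(needed_ppi(150, 1.4))   # 210.0 ppi -- the low end Margulis shows can be sharper
    print(needed_ppi(150, 2.0))   # 300.0 ppi -- the conventional upper limit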

(Note that PDF files can only show print-size images on the video screen resampled to about 1/4 size, and the RGB JPG images in this file are not even screened. We need the printed book to see what he shows. It is about printing.)

But the automatic thinking of most males is that if the maximum of 2.0 is good, think how great 4.0 must be. <g> We males don’t always grasp the concept of appropriate, and we understand maximum as a goal to be exceeded. <g>


Wayne
http://www.scantips.com "A few scanning tips"
KM
Kennedy McEwen
Apr 9, 2004
In article <XAjdc.10451$>, Preston
Earle writes
"Kennedy McEwen" wrote: "I have in front of me at this moment, two identical images. One printed at 240ppi, selected because it fits integrally with the resampling density of the Epson printer I used to create it. The other is scanned and printed at 720ppi. You might want to try to explain why the latter print not only looks sharper to the naked eye, but considerably sharper when examined under a x4 magnifier. (If you search through the archives on this group you will find a previous thread where I addressed this and my normal practice of printing all of my contact sheets at this resolution specifically for this reason.)"
—————————-

Well, are they "identical" or not? If identical, how do you tell them apart? Do you mean they are images of the same scene or scans of the same piece of film? It would be hard to get two "identical images" without duplicating one to make the other.

I think I explained the situation quite clearly, but obviously not clearly enough. The images are "as identical" as the examples you cited in Margulis’ text.

These images are contact sheets – small thumbnail images of negative scans which are roughly 1.25x the actual size of the negatives themselves. They are stored interleaved with my negatives in archive files for ease of identification of negatives and search purposes. Of the first several pages I have two sets. The initial set was made by printing the composite sheets of 4000ppi scans at exactly 240ppi on the page. The second set was printed at 720ppi on the page. This latter format I now use as a standard because the images are *much* sharper under normal viewing conditions.

If you are talking about duplicated images from the same file, one printed at one print-resolution and one at another, I’m not sure what this shows.

What it shows is exactly the topic of this sub-thread: the statement that higher resolution results in softer images is simply not true.

There is an optimum pixels-per-output-dot range, usually 1.5 to 2, for conventional and stochastic offset screens, and I haven’t seen any discussion that this is not also true for ink-jet printers. If not duplicate files, then scanner software and image-manipulation software can introduce all sorts of unknown manipulations to the file data. Print drivers can also do "funky" things to particular images. And some image issues are subject-related, moiré being the first to come to mind.
As I wrote previously, one image is printed at 240ppi *on the page*, the other at 720ppi *on the page*, both sizes selected specifically to match the native resampling of the Epson desktop printer range. There is *no* issue here of the driver doing "funky things to particular images"; the images were resampled using known algorithms. In addition, these being several contact sheets, there are approximately 100 images in the selection – not massive, but a significant enough quantity to demonstrate that this is the norm.

If you believe that more pixels-per-output-dot is always better than fewer, I guess we disagree. If I’m misunderstanding your points, perhaps you need to explain them in words of fewer syllables for this simple printer. (I’m trying to be cute, not making light of you or your position. I apologize if this reads the wrong way.)
Well, by way of explanation, you partially address the issue in your paragraph above. Epson desktop photo printers all resample to a native resolution of 720ppi before the stochastic dot rendering process, which can be anywhere from 1440x720dpi to 5760x1440dpi depending on the printer. Since all such cases are well above your nominal 2 output dots per pixel, it is clear that all of the Epson range of printers (the very wide format range resamples to 360ppi) operate perfectly well in the region where it is not possible to even encounter the decimation artefacts that Margulis is referring to. Consequently the situation simply never arises where pixel decimation results in higher resolution images being printed with softer results than lower resolution equivalents.
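
The "fits integrally" point is simple divisibility: 720/240 = 3, so every image pixel maps to an exact 3x3 block of the printer's resampling grid, with no uneven interpolation. A quick sketch (the 720ppi native figure is the one stated above, and should not be assumed for other printer families):

    NATIVE_PPI = 720   # Epson desktop photo printers, per the post above

    def fits_integrally(image_ppi, native=NATIVE_PPI):
        # True when each image pixel maps to a whole number of native cells.
        return native % image_ppi == 0

    for p in (225, 240, 300, 360, 720):
        print(p, "ppi:", "integral fit" if fits_integrally(p) else "uneven resampling")
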
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)
T
toby
Apr 9, 2004
Kennedy McEwen …
In article , Toby Thain
writes
Kennedy McEwen wrote in message
news:…
In article , Toby Thain
writes
I’ll say it again. The PSP reading and writing code is unencumbered; Jasc does not have a patent on LZ77 and cannot patent RLE. While its enforcement was something of a surprise at the time, Unisys did actually have a patent (now expired, apparently), where Jasc does not. So it is not the same situation at all.

And I’ll say it again too – patents are not the only form of IPR!

In order that I can correct my understanding, please be *explicit*: how can Jasc "revoke" existing free implementations of the PSP format without holding any relevant patents? Since I have published one, this is of more than academic interest. What attack should I expect? Which parts of the implementation are encumbered?
I don’t think you should expect any attack at all, but it is naive to believe that you will be able to use their proprietary standard indefinitely. That may well be the case but, since it is proprietary, it is not guaranteed – unless you have some written agreement from Jasc specifically stating otherwise.

How sure are you that every aspect of their format has indeed been openly published on the reference you provided? Before answering, you would be well advised to read again the disclaimers at the start of their publication, which would appear to provide Jasc with specific legal protection from anyone making claims in just such an event. Do you know that the next version of Jasc software will not search for a specific byte sequence which only their software has written to the files since the format was first used?

I think it is obvious that in *future* versions they reserve the right to make an incompatible spec and/or not make it public (as Adobe has withdrawn theirs), but I am only discussing the versions through v8, which are published.

Since the version 8 application is also released, it is easily confirmed that no such retrospective "revocation" of interoperability has occurred. Likewise, I still insist that they have no legal power over *existing* 3rd party implementations.

To date Jasc shows more common sense than Adobe, by choosing to encourage 3rd party developers and opening their specifications.

Toby
R
RSD99
Apr 9, 2004
Wayne Fulton posted "…
Prepress conventional wisdom says dpi should be in the range of 1.4 to 2.0 times lpi. These are intended to be limits, but we can find some writers calling this multiplier a "quality factor". We all assume 2.0 must be better than 1.4. But Margulis is saying, and showing in his printed book, that the low end may give sharper final images. …"

That discounts the cases where the scan shows a moiré pattern (as in textured cloth), or fine lines that have "the jaggies."
T
toby
Apr 9, 2004
"RSD99" …
Wayne Fulton posted "…
Prepress conventional wisdom says dpi should be in the range of 1.4 to 2.0 times lpi. These are intended to be limits, but we can find some writers calling this multiplier a "quality factor". We all assume 2.0 must be better than 1.4. But Margulis is saying, and showing in his printed book, that the low end may give sharper final images. …"

That discounts the cases where the scan shows a moiré pattern (as in textured cloth), or fine lines that have "the jaggies."
[snip]

One also has to consider the problem of moiré between the image (e.g. fabric pattern) and the halftone dot screen itself – the only solution to which is to use a stochastic or screenless halftone. (The other step usually necessary to reduce moiré is to scan at a higher resolution.)
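
The beat-frequency arithmetic makes that concrete. Two superimposed regular patterns interfere at (roughly) the difference of their spatial frequencies, so a fabric weave near the screen ruling produces a coarse, highly visible beat – a sketch with illustrative numbers, for the parallel-grating case only:

    # Moire between two parallel regular patterns appears at the
    # difference of their spatial frequencies (illustrative numbers).
    def moire_lpi(pattern_lpi, screen_lpi):
        return abs(pattern_lpi - screen_lpi)

    print(moire_lpi(160, 150))   # 10 lpi -- a coarse, very visible beat
    # A stochastic screen has no single screen frequency, so there is
    # no regular pattern for the image to beat against.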

T
