In article <sgicc.24215$>, Preston Earle writes:
"Kennedy McEwen" wrote: "Scanning at an increased sampling density may not offer any more resolution in the image, but it certainly cannot make it any less sharp or softer!" and, later, "But as you can see, from your first quoted line above which you have conveniently retained throughout this thread, your statement concerned high resolution scanning, not resampling."
———————
I haven’t followed this thread closely, and I don’t know anything about Nyquist frequencies, but aren’t you quibbling a little? <g>
Obviously not! 🙂
If an image is printed at a particular size from two otherwise similar files of different resolutions, the image from the higher-res original will be softer and appear less sharp.
That is complete rubbish – the higher resolution image will always, unless you have had to degrade it in some way to meet the constraints of your printer, be much sharper than the lower resolution image (assuming that the image contains adequate fine detail with which to observe the difference in the first place).
Where on earth did you ever get the idea that higher resolution meant less sharp results?
Whether from scanning or resizing, the higher-res image will be softer, will it not?
Definitely not. A higher resolution scan will contain finer detail and sharper edges than a lower resolution scan, if the information is present in the image in the first place. Even when it is not, it cannot be less sharp than the lower resolution scan, only as sharp. A resized image can never contain any more detail or sharper edges than it already has. How much the resized image softens depends on the algorithm used. For example, nearest neighbour interpolation will retain apparent edge sharpness completely, whilst bicubic will soften it slightly and bilinear more so. None of these algorithms will provide an image as sharp as a higher resolution scan – again assuming that the image has higher resolution content to bring out in the first place.
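To make the difference between those algorithms concrete, here is a minimal 1-D sketch (my own illustration, not from the original discussion; the function names are invented): it upscales a hard edge by 3x with nearest neighbour and with linear interpolation, and shows that only the latter invents intermediate intensities, which is exactly the perceived softening.

```python
# Hypothetical demo: upscale a 1-D hard edge by a factor of 3 and compare
# nearest-neighbour with linear interpolation.

def upscale_nearest(samples, factor):
    """Repeat each sample 'factor' times (square-pixel style)."""
    return [s for s in samples for _ in range(factor)]

def upscale_linear(samples, factor):
    """Insert 'factor - 1' linearly interpolated values between samples."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(samples[-1])
    return out

edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]   # a hard 0 -> 1 edge

nn  = upscale_nearest(edge, 3)
lin = upscale_linear(edge, 3)

# Nearest neighbour keeps the step abrupt: only 0 and 1 ever appear.
print(sorted(set(nn)))   # → [0.0, 1.0]
# Linear interpolation introduces the intermediate values 1/3 and 2/3,
# visually softening the edge.
print(sorted(set(lin)))
```

The same reasoning extends to bicubic, which softens the edge slightly less aggressively than bilinear but still more than nearest neighbour.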
[See page 12 of the PDF at http://www.ledet.com/margulis/PP7_Ch15_Resolution.pdf (or page 306 of the book).]
Not the same thing at all. Both images have been resampled to exactly the same resolution for presentation on the page. This is fairly obvious if you zoom into the images in the PDF file you referenced – although the upper image has been scanned at 3x the resolution, it has exactly the same pixel dimensions as the lower image.
What you are looking at here is 3rd (and higher, odd) harmonic distortion caused by reproducing each sample as a square pixel on the page. (Recall harmonic distortion in audio – well, you get it in images too!) Each sample, however, only represents the image at an infinitely small point in space, called a delta function, which has a volume equal to the average light incident on the CCD sensor centred at that point. The sample, in reality, does not exist anywhere else. However, an array of delta functions is not a particularly useful thing to look at – for one thing, they require an infinite video bandwidth to reproduce on your monitor, and an infinite-dpi printer to represent on paper. So each delta function is represented instead by a pixel, which has a finite dimension but is, in fact, a completely false representation.
What should occur between the samples depends on how the user chooses to reproduce the delta function in pixel terms – how he *interpolates* between the samples. Block pixels are simply a uniform square interpolation, introducing every odd harmonic spatial frequency above what it is possible for the samples to contain, which is simply false information. However, they do make the image look artificially sharp. It is important to draw a distinction right away between the use of the term interpolation here and what is normally referred to by the same term in upscaling – here it means only how each sample is represented by a pixel, in terms of its size, shape and intensity profile, in the final image.
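The odd-harmonic point can be made precise with a standard Fourier identity (my addition, not part of the original post): an ideal square profile of unit amplitude decomposes entirely into its fundamental plus odd harmonics,

```latex
\mathrm{sq}(x) \;=\; \frac{4}{\pi}\sum_{n=1,3,5,\dots}\frac{\sin(nx)}{n}
\;=\; \frac{4}{\pi}\left(\sin x + \frac{\sin 3x}{3} + \frac{\sin 5x}{5} + \cdots\right),
```

so rendering each sample as a hard-edged square block necessarily adds 3rd, 5th, 7th, ... harmonic content that the samples themselves cannot carry.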
That conventional square uniform pixel reconstruction process is no more valid than a linearly interpolated pixel, where each pixel is represented by an intensity at its centre proportional to the volume of the delta function it represents, and which linearly merges to reach the intensity of the neighbouring pixels at their centres. In the simplest, bilinear, case each pixel is effectively a square-based pyramid (height representing intensity), with the corners incident on the centres of the neighbouring pixels. Although bilinear is the simplest version of this, implementing the linear merging only in the horizontal and vertical axes, you can imagine octagonal interpolation, where the intensity of each pixel merges to the 8 nearest neighbours, or even circularly symmetric interpolation. Clearly such interpolation schemes cannot be linear, since the sum of uniform samples must still reproduce a uniform illumination field but, nevertheless, such interpolation is possible.
Similarly, there are higher-order profile pixels whose intensities are polynomial curves, and even pixels which extend their intensity profile well beyond their nearest neighbours, although constrained to zero at them. Indeed, the ideal pixel reproduction, which introduces no spatial harmonic distortion on the image at all, would have just such a profile, extending to infinity in all directions.
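For the curious, here is a small sketch of that reconstruction idea (my own demo, assuming a sample pitch of 1; the kernel names are invented): the same samples are rebuilt either as uniform "box" pixels or as triangular profiles that merge linearly to the neighbouring centres, and both reproduce a uniform field exactly.

```python
# Hypothetical demo: reconstruct a continuous profile from samples using
# two different "pixel shapes" (reconstruction kernels).

def box_kernel(t):
    """Uniform square pixel: 1 within half a sample pitch of the centre."""
    return 1.0 if -0.5 <= t < 0.5 else 0.0

def triangle_kernel(t):
    """Linear pixel: peaks at its centre, zero at the neighbours' centres."""
    return max(0.0, 1.0 - abs(t))

def reconstruct(samples, x, kernel):
    """Value of the reconstructed profile at position x (sample pitch = 1)."""
    return sum(s * kernel(x - i) for i, s in enumerate(samples))

samples = [0.0, 0.0, 1.0, 1.0]

# Midway between samples 1 and 2 the box reconstruction has already
# jumped to 1, while the triangle reconstruction passes through 0.5.
print(reconstruct(samples, 1.5, box_kernel))       # → 1.0
print(reconstruct(samples, 1.5, triangle_kernel))  # → 0.5

# Both kernels sum to 1 at every interior point, so a uniform field is
# reproduced as a uniform field – no illumination is added or removed.
flat = [1.0] * 5
assert all(abs(reconstruct(flat, x / 4, triangle_kernel) - 1.0) < 1e-9
           for x in range(4, 13))
```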
What you are doing when you upscale an image using bilinear, bicubic or any other interpolation method is *simulating* that pixel representation by using higher resolution pixels to create the intermediate samples. What you therefore perceive as a reduction in sharpness through bilinear upscaling is merely the effect of a different pixel reproduction, not a loss in sharpness relative to the original, lower sampling density, image.
The proof of this? Simply upscale using nearest neighbour interpolation. That gives you a simulation of the square uniform pixel reproduction, using several new pixels to represent each old one, but now, of course, the apparent sharpness is retained.
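That claim is easy to check with a throwaway sketch (the helper name is mine, not an established API): nearest-neighbour upscaling just replicates each original sample as a constant block of new pixels, so no intermediate intensities appear and the block-pixel look survives intact.

```python
# Hypothetical demo: 2-D nearest-neighbour upscaling as pure block
# replication of each original sample.

def upscale_nn_2d(img, factor):
    """Upscale a 2-D image (list of rows) by an integer factor per axis."""
    return [[img[r // factor][c // factor]
             for c in range(len(img[0]) * factor)]
            for r in range(len(img) * factor)]

img = [[0, 1],
       [1, 0]]

big = upscale_nn_2d(img, 3)

# Every 3x3 block of the result is a constant copy of one original pixel,
# so no new intermediate intensities appear anywhere.
assert all(big[r][c] == img[r // 3][c // 3]
           for r in range(6) for c in range(6))
print(len(big), len(big[0]))   # → 6 6
```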
In short, the effective sharpness of a scaled image has nothing to do with the resolution, only with how each pixel is represented in the first place. Quite different from increased scanning resolution, where information conveying true image sharpness is pulled off the original medium, rather than synthesised as odd harmonic distortion.
Mike was not referring to how the image was printed or reproduced in his comments, merely what happened when an image was scanned – hence my original question.
Nevertheless, since you clearly believe that higher resolution scans are softer and thus, by default, that lower resolution scans are sharper, can we expect to see your Minolta film scanner appearing on eBay whilst you "trade up" to a sharper 300ppi, or perhaps only 100ppi, piece of antiquity? Why don't you just go the whole hog and flash a single photodiode at your slides to get an ultra-sharp 1×1 pixel rendition of the entire image on each slide? The next step is just to remove the sensor completely, type a random character into a file called "image.raw", and observe the infinite sharpness of it all. 😉
--
Kennedy
Yes, Socrates himself is particularly missed;
A lovely little thinker, but a bugger when he’s pissed.
Python Philosophers (replace ‘nospam’ with ‘kennedym’ when replying)