Determining the ideal resolution automatically?

Posted by Tim.Ahrens · Apr 30, 2008
Views: 909 · Replies: 17 · Status: Closed
Some of the images I have (some photos, some scans) are slightly blurred, some more, some less. So, I can reduce the resolution practically without loss of information. To be sure, I typically reduce the resolution, then increase the resolution back to the original and if there is no visible loss of information compared to the original image I know that the resolution was not too low. It typically takes a few attempts to find out how far I can go.
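
In script form, the round-trip test might look something like this (a sketch in Python, assuming Pillow and NumPy are available; the file name, the scale factor and the idea of an RMS threshold are illustrative, not a fixed recipe):

    import numpy as np
    from PIL import Image

    def roundtrip_error(path, scale):
        # Downsample by `scale`, upsample back to the original size,
        # and return the RMS difference in grey levels (0 = identical).
        original = Image.open(path).convert("L")
        w, h = original.size
        small = original.resize((max(1, int(w * scale)), max(1, int(h * scale))),
                                Image.BICUBIC)
        restored = small.resize((w, h), Image.BICUBIC)
        a = np.asarray(original, dtype=np.float64)
        b = np.asarray(restored, dtype=np.float64)
        return float(np.sqrt(np.mean((a - b) ** 2)))

    # 300 dpi -> 200 dpi corresponds to a scale factor of 2/3; an RMS error
    # of a grey level or two is often invisible, but where to draw the line
    # is exactly the judgment call discussed below.
    print(roundtrip_error("photo.jpg", 2 / 3))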

My question is: is there a tool that does this job automatically? I mean, in a blurry image, determining what the "real" resolution is and then downsample automatically? Sorry if this has been asked before, I couldn’t find anything.

Thanks!
Tim


Joe
Apr 30, 2008
Tim.Ahrens wrote:

Some of the images I have (some photos, some scans) are slightly blurred, some more, some less. So, I can reduce the resolution practically without loss of information. To be sure, I typically reduce the resolution, then increase the resolution back to the original and if there is no visible loss of information compared to the original image I know that the resolution was not too low. It typically takes a few attempts to find out how far I can go.
My question is: is there a tool that does this job automatically? I mean, in a blurry image, determining what the "real" resolution is and then downsample automatically? Sorry if this has been asked before, I couldn’t find anything.

Thanks!
Tim

As far as I know, there is no magical or psychic tool like that available, in any shape or form.

– In general, reducing and then increasing the resolution won't do you any good; it just wastes time and degrades the image.

Yes, you can use tools like Sharpen to trick your eyes, or tools that remove or reduce the blurry areas (or a specific color around the edges), etc. Some companies give these a fancy name like "FOCUS" to make $$$.

– Automatically? The only "automatic" fix I know of is learning the basics of photography.

– For blur, learn to set the shutter speed.

– For sharpness/softness, learn to set up the camera to get good image quality.

– And you may need a GOOD LENS to capture good image quality.

And while you wait to improve your photography and save $$$ for a top-of-the-line LENS, I may be able to give you a few hints to try.

1. Don't mess with the resolution, especially reducing and then increasing it, which does you no good and wastes valuable time that I think you should spend learning the real tricks.

2. You may try using

– "Contrast" to boost the colors

– A combination of "Levels", "Curves", "Color Balance" and similar. I forget the names of the commands, but there are a few basic commands with options to adjust individual colors which you may be able to use to reduce the blurry edges.

– Layers and Masking; these are a really handy combination, and I almost never work without a layer and a mask.

That's about it, and you don't see me mention anything about RESOLUTION, which to me isn't that important. I usually work only with good hi-res images, which give me the option to do real retouching instead of fixing a poor or damaged image.
Tim.Ahrens
Apr 30, 2008
Thanks for your explanation, Joe.

– In general, reducing and then increasing the resolution won't do you any good; it just wastes time and degrades the image.

I should have explained that a bit better. I am doing this only to find out how far down I can go. Once I know that, I only downsample the image, without increasing the resolution again. This is purely a test of whether I can downsample to a certain resolution without loss of visual information. I believe this could be done automatically.

I am not trying to improve the quality of a blurred image. I only want to save disk space by not storing slightly blurred images at an unnecessarily high resolution.
Roy G
Apr 30, 2008
Tim.Ahrens wrote in message
Thanks for your explanation, Joe.

– In general, reducing and then increasing the resolution won't do you any good; it just wastes time and degrades the image.

I should have explained that a bit better. I am doing this only to find out how far down I can go. Once I know that, I only downsample the image, without increasing the resolution again. This is purely a test of whether I can downsample to a certain resolution without loss of visual information. I believe this could be done automatically.
I am not trying to improve the quality of a blurred image. I only want to save disk space by not storing slightly blurred images at an unnecessarily high resolution.

Hi.

Exactly why do you want to store these "blurry" images at a low res?

I presume that making them low-res means you do not intend to use them again.

It strikes me that if they are not worth keeping at a usable resolution, they are not worth keeping.

Do remember that storage is at an all-time low price. 500 GB external HDDs can be had for very little.

Roy G
Joe
May 1, 2008
Tim.Ahrens wrote:

Thanks for your explanation, Joe.

– In general, reducing and then increasing the resolution won't do you any good; it just wastes time and degrades the image.

I should have explained that a bit better. I am doing this only to find out how far down I can go. Once I know that, I only downsample the image, without increasing the resolution again. This is purely a test of whether I can downsample to a certain resolution without loss of visual information. I believe this could be done automatically.
I am not trying to improve the quality of a blurred image. I only want to save disk space by not storing slightly blurred images at an unnecessarily high resolution.

For displaying, sure, you may be able to reduce the size while keeping the quality acceptable for viewing; for printing, you may be able to boost the pixels (using some trick, *not* by increasing W, H, or PPI) for a *little* better or larger print, etc. Other than that, we have to live with whatever the original may be.
Tim.Ahrens
May 1, 2008
On 30 Apr, 15:35, "Roy G" wrote:
Exactly why do you want to store these "blurry" images at a low res?

As I said, in order to save disc space. It is also about memory and processor usage. Of course, many things can be solved by using hardware power but that is really not elegant. Why should I store and work with images at an inappropriately high resolution?

I presume that making them low-res means you do not intend to use them again.

Yes, I am going to use them again. I am not storing them at low res (whatever that means) but at lower res. Simply at the highest resolution that makes sense. This has nothing to do with the intended use or output technology. It is simply a matter of what information is actually contained within the image.

It strikes me that if they are not worth keeping at a usable resolution, they are not worth keeping.

They are not totally blurry, just a bit. In fact, almost all images that come out of a digital camera or scanner at a high resolution are slightly blurry (in relation to the pixel size). This means that they can be downsampled without any loss of visual information.

On 1 May, 03:57, Joe wrote:
For displaying, sure, you may be able to reduce the size while keeping the quality acceptable for viewing; for printing, you may be able to boost the pixels (using some trick, *not* by increasing W, H, or PPI) for a *little* better or larger print, etc. Other than that, we have to live with whatever the original may be.

Sorry, I am not able to make any sense of that.

I am a bit surprised that it is so difficult to explain what my concern is. To me it is only natural that you do not want to work with images that are at an unreasonably high resolution. Here "unreasonably" means that the resolution does not correspond to the actual sharpness of the image. I do not mean unreasonable in respect to the intended use. This is purely a matter of information processing. Even though we have pretty big hard drives and fast computers now, I believe it is wrong to have too much junk data on your system. If I have an 8 megapixel image on my computer that I can downsample to 2 megapixels without a loss then I should definitely do so. This does not mean that the image is so blurry that it is totally useless; sharp 2 megapixels are quite useful and preferable to slightly blurred 8 megapixels.
Owen Ransen
May 1, 2008
On Wed, 30 Apr 2008 03:46:10 -0700 (PDT), Tim.Ahrens wrote:

My question is: is there a tool that does this job automatically? I mean, in a blurry image, determining what the "real" resolution is and then downsample automatically? Sorry if this has been asked before, I couldn’t find anything.

Going highly technical, but I suppose you'd need a program which did a 2D Fourier Transform on the image to find out the highest spatial frequencies which exist in the image. That would tell you the maximum sampling size you'd need in the x and y axes, and hence the number of pixels in x and y.

Unfortunately it may go gah gah if you have dust or specks in the image; the FFT would not "know" you wanted to get rid of those.
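
As a rough sketch of what such a program might do (Python with NumPy and Pillow; the 99.9% energy cutoff and the function name are arbitrary choices, not something Owen specifies):

    import numpy as np
    from PIL import Image

    def effective_bandwidth(path, energy_fraction=0.999):
        # Returns the fraction of the Nyquist band (per axis) that holds
        # almost all of the image's spectral energy; a result of 0.5 would
        # suggest half the current resolution carries the same information.
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
        img = img - img.mean()  # drop DC so mean brightness doesn't dominate
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        h, w = spectrum.shape
        yy, xx = np.ogrid[:h, :w]
        # Radial frequency, normalised so r = 1.0 at each axis's Nyquist limit.
        r = np.hypot((yy - h // 2) / (h // 2), (xx - w // 2) / (w // 2))
        order = np.argsort(r, axis=None)
        cumulative = np.cumsum(spectrum.ravel()[order])
        idx = np.searchsorted(cumulative, energy_fraction * cumulative[-1])
        return float(r.ravel()[order][idx])

    print(effective_bandwidth("scan.tif"))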

I understand your reasoning because I've thought about this myself with a scanning application I am writing. Why store at 1200 DPI an image which has exactly the same data at 200 DPI (scanned DPI, I mean)?

Exaggerating: a white piece of paper looks the same at 1 DPI as at 1000 DPI!

The FFT won’t be in the first version though!

Easy to use graphics effects:
http://www.ransen.com/
Tim.Ahrens
May 1, 2008
Very interesting thoughts, Owen. FFT sounds good, I hadn't thought of that. I had in mind the rather primitive method of internally doing the aforementioned downsample-upsample thing and then comparing the result to the original. That would probably lead to similar results anyway.

Unfortunately it may go gah gah if you have dust or specks in the image; the FFT would not "know" you wanted to get rid of those.

Yes, there are some issues. A single grain of dust should not spoil the system. You would probably need some sort of tolerance/threshold value. Maybe one that represents the accumulation of frequencies (power?) filtered away. But then you might have small, very acute zones you want to preserve within a largely blurry image. They might fall within a global threshold. Then you would have to do some sort of "local" thing and introduce another parameter. Or, could dust be detected by its frequency spectrum?
It’s not a piece of cake but I believe the problems could be tackled with a system that is not overly complicated internally or for the user.
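
A sketch of that "local" variant (same Python/NumPy setting as the earlier FFT sketch; tile size and percentile are arbitrary knobs, and it stays a heuristic: the percentile tolerates a few dust-speck tiles, but it can also discard genuinely sharp detail confined to very few tiles):

    import numpy as np

    def tile_bandwidth(patch, energy_fraction=0.999):
        # Smallest normalised radius that holds `energy_fraction` of the
        # patch's spectral energy (same idea as the global version).
        patch = patch - patch.mean()  # ignore mean brightness
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
        h, w = spectrum.shape
        yy, xx = np.ogrid[:h, :w]
        r = np.hypot((yy - h // 2) / (h // 2), (xx - w // 2) / (w // 2))
        order = np.argsort(r, axis=None)
        cumulative = np.cumsum(spectrum.ravel()[order])
        idx = np.searchsorted(cumulative, energy_fraction * cumulative[-1])
        return float(r.ravel()[order][idx])

    def local_bandwidth(img, tile=128, percentile=95):
        # img: 2D float array. Analyse tile by tile, then take a high
        # percentile rather than the maximum, so that a single dusty
        # tile cannot dictate the resolution of the whole image.
        bands = [tile_bandwidth(img[y:y + tile, x:x + tile])
                 for y in range(0, img.shape[0] - tile + 1, tile)
                 for x in range(0, img.shape[1] - tile + 1, tile)]
        return float(np.percentile(bands, percentile))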

Exaggerating: a white piece of paper looks the same at 1 DPI as at 1000 DPI!

Good example!

The FFT won’t be in the first version though!

Well, if you are including it in a future version I will be glad to do some beta testing. 🙂

Cheers,
Tim
Joe
May 1, 2008
Tim.Ahrens wrote:

On 1 May, 03:57, Joe wrote:
For displaying, sure, you may be able to reduce the size while keeping the quality acceptable for viewing; for printing, you may be able to boost the pixels (using some trick, *not* by increasing W, H, or PPI) for a *little* better or larger print, etc. Other than that, we have to live with whatever the original may be.

Sorry, I am not able to make any sense of that.

I am a bit surprised that it is so difficult to explain what my concern is. To me it is only natural that you do not want to work with images that are at an unreasonably high resolution. Here "unreasonably" means that the resolution does not correspond to the actual sharpness of the image. I do not mean unreasonable in respect to the intended use. This is purely a matter of information processing. Even though we have pretty big hard drives and fast computers now, I believe it is wrong to have too much junk data on your system. If I have an 8 megapixel image on my computer that I can downsample to 2 megapixels without a loss then I should definitely do so. This does not mean that the image is so blurry that it is totally useless; sharp 2 megapixels are quite useful and preferable to slightly blurred 8 megapixels.

Well, as I said, I think it would require some kind of magic wand to get more (without loss) for less (reducing from 8 MP to 2 MP means keeping 1/4 of the original). And I don't think anyone really cares what you want to do with your stuff, but *if* you want to see the *difference* between

– 2 MP vs. 8 MP

– 2 MP without quality loss vs. blurry 8 MP

– 2 MP without quality loss vs. the original blurry 8 MP

... and so on, then it won't be about what you want to do with your 2 MP images, but about what others think of the RESULT. And I don't think people care much about how big your hard drive is (my 3 internal and 2 external drives give me 2.5 TB here) or how you store your images. I have hundreds of thousands of hi-res images (tens of thousands, counting the RAW, the original, and the retouched versions as a single image).

Hahaha, and don't ask me why I keep the RAW and original JPG alongside the final retouched images. And just in case someone asks why RAW + original JPG: well, because I don't yet trust RAW files and RAW converters enough to discard the original JPG.
Owen Ransen
May 2, 2008
On Thu, 1 May 2008 07:15:39 -0700 (PDT), Tim.Ahrens wrote:

The FFT won’t be in the first version though!

Well, if you are including it in a future version I will be glad to do some beta testing. 🙂

I’ll contact you today!

Easy to use graphics effects:
http://www.ransen.com/
Joe
May 2, 2008
Owen Ransen wrote:

On Wed, 30 Apr 2008 03:46:10 -0700 (PDT), Tim.Ahrens wrote:

My question is: is there a tool that does this job automatically? I mean, in a blurry image, determining what the "real" resolution is and then downsample automatically? Sorry if this has been asked before, I couldn’t find anything.

Going highly technical, but I suppose you'd need a program which did a 2D Fourier Transform on the image to find out the highest spatial frequencies which exist in the image. That would tell you the maximum sampling size you'd need in the x and y axes, and hence the number of pixels in x and y.

Unfortunately it may go gah gah if you have dust or specks in the image; the FFT would not "know" you wanted to get rid of those.
I understand your reasoning because I've thought about this myself with a scanning application I am writing. Why store at 1200 DPI an image which has exactly the same data at 200 DPI (scanned DPI, I mean)?

Exaggerating: a white piece of paper looks the same at 1 DPI as at 1000 DPI!

The FFT won’t be in the first version though!

Hmmm, I don't think 1200 PPI vs. 200 PPI is the same thing, or has anything to do with turning a blurry 8 MP image into a sharp 2 MP image without loss. That said, most people would agree with you that it's kind of crazy for an average person to keep the 1200 PPI version when 200 PPI, or sometimes even 150 PPI, may be just fine or more than good enough, and depending on the image it may even be better with less than 100 PPI.

Some people may even think it's crazy to scan at 1200 PPI to begin with. And I hope you understand why I agree that smaller may be better in some cases while disagreeing in others. That's also why I never care to pin my images to 300 PPI or 150 PPI, why not enjoy more than 300 PPI when it's there, and why increasing the PPI beyond its real value does no good.
Tim.Ahrens
May 2, 2008
On 2 May, 05:13, Joe wrote:
Hmmm, I don't think 1200 PPI vs. 200 PPI is the same thing, or has anything to do with turning a blurry 8 MP image into a sharp 2 MP image without loss. That said, most people would agree with you that it's kind of crazy for an average person to keep the 1200 PPI version when 200 PPI, or sometimes even 150 PPI, may be just fine or more than good enough, and depending on the image it may even be better with less than 100 PPI.

Some people may even think it's crazy to scan at 1200 PPI to begin with. And I hope you understand why I agree that smaller may be better in some cases while disagreeing in others. That's also why I never care to pin my images to 300 PPI or 150 PPI, why not enjoy more than 300 PPI when it's there, and why increasing the PPI beyond its real value does no good.

Joe,

thanks for your effort, but I have the impression that it has still not become clear what I mean in the first place.

Let me explain with a concrete example. This is a fairly random photo from flickr’s most interesting: http://www.flickr.com/photos/21301000@N03/2453519084/sizes/o / This is an excellent shot but it is slightly blurry.

To prove this, do the following test:

1. Download the original size and open the image in PS
2. Reduce the resolution from 300 dpi to 200 dpi using normal bicubic
3. Increase the resolution to 300 dpi using normal bicubic
4. Select all
5. Copy
6. Revert
7. Paste

Now you can switch that layer on and off to compare it to the original. You will see no degradation.
Then do the same test, but instead of going down to 200 dpi, try 100 dpi. This time you _will_ notice degradation compared to the original.

What does this test tell us? We can work with this image at 200 dpi without any loss of quality, but we cannot go down much further than that. If I had taken the photo and wanted to rework it and store it, say, as TIFF, then I would downsample it to 200 dpi (without going back to 300, of course!), saving more than half the data size. There is absolutely no reason to keep it at 300 dpi, no matter what the intended output is, no matter how big my hard drive and no matter how fast my computer. Note that this does not give me "more for less" but "the same for less".

But how far exactly can we go down in resolution? If you have lots of time you could try 150 dpi, then maybe a bit more or a bit less, and so on until you have found out how far you can go. Don't you agree that it would be handy to have this done by the computer? There are issues, as described in the earlier posts, but it is not impossible and it does not require magic.
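
For what it's worth, once you accept some numeric stand-in for "no visible degradation", the search itself is easy to automate, for instance with a binary search over the scale factor (a sketch reusing the hypothetical roundtrip_error() helper from the first post; the tolerance of 2 grey levels RMS is an arbitrary placeholder):

    def minimal_scale(path, tolerance=2.0, steps=8):
        # Binary search over the scale factor, assuming the round-trip
        # error only grows as the image is scaled down further.
        lo, hi = 0.1, 1.0
        for _ in range(steps):
            mid = (lo + hi) / 2
            if roundtrip_error(path, mid) <= tolerance:
                hi = mid   # still indistinguishable; try going lower
            else:
                lo = mid   # visible loss; back off
        return hi

    # e.g. a result of about 0.66 on a 300 dpi scan means 200 dpi
    # would have carried the same visual information.
    print(minimal_scale("photo.jpg"))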

In the case of photos it may not be crucial to reduce the data size because they are typically not that large, but when I am working with scans I really do not want to work at an unnecessarily high resolution. If I have a scan whose pixel dimensions make it 300 MB and I can reduce it to 100 MB without a loss in quality, I will definitely do so because it speeds things up and frees resources on my computer.
Pico
May 3, 2008
Tim.Ahrens wrote in message

As I said, in order to save disc space. It is also about memory and processor usage. Of course, many things can be solved by using hardware power but that is really not elegant.

That’s what they said in the days of $20,000 30 MB disc drives and 16K of RAM.
Pico
May 3, 2008
Tim.Ahrens wrote in message

Let me explain with a concrete example. This is a fairly random photo from flickr’s most interesting:
http://www.flickr.com/photos/21301000@N03/2453519084/sizes/o / This is an excellent shot but it is slightly blurry.

That picture is blurry for lots of reasons. First, it’s not in proper focus. Second, the backlight is too much for the lens, which is no stellar performer in the best case. The Coolpix P4 can shoot much smaller image sizes, so why don’t you set your camera properly and be done with this bullshit? Please do not use the expression ‘resolution’ when all one is concerned about is pixels.

It’s an 8 MP camera, so where does all this talk of monstrously large files come from? If you are talking about larger files, then post the larger files. Be pertinent.
Leo Lichtman
May 7, 2008
Tim.Ahrens wrote: Some of the images I have (some photos, some scans) are slightly blurred, some more, some less. So, I can reduce the resolution practically without loss of information. To be sure, I typically reduce the resolution, then increase the resolution back to the original and if there is no visible loss of information compared to the original image I know that the resolution was not too low. It typically takes a few attempts to find out how far I can go.
My question is: is there a tool that does this job automatically?
^^^^^^^^^^^^^^^^^^^^^^
Your objective is very clear, and I don’t see why it is so difficult for some people to grasp it.

The first obstacle I see to your goal is in the fact that you are doing this visually. Your determination of the least acceptable resolution is based on your ability to see and judge when a picture has been degraded. Some people will be better at this than others, but in every case it will be a judgment call. Can it be done by a computer?

Secondly, you recognize that the blurriness of the image comes from two separate sources: 1) the image itself, as created by the lens and limited by the steadiness of the camera, movement of the subject, and even waviness in the air; 2) the way the image is divided into pixels, either by the camera or by the way you store it. I believe these two effects interact. It's not enough just to exceed the sharpness of the original image with your pixel density. As you cut the pixel density looking for the most economical value, there will not be a sharp cutoff; there will be a curve, which causes the image to get blurrier over a range.
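
Leo's point is easy to check numerically: sweep the scale factor and the round-trip error rises smoothly instead of jumping at one "true" resolution (again a sketch using the hypothetical roundtrip_error() helper from the first post; the scale values are arbitrary):

    for scale in (0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3):
        err = roundtrip_error("photo.jpg", scale)
        print(f"scale {scale:.1f}: RMS error {err:.2f}")
    # The error typically grows gradually, so the "minimal" resolution
    # depends on where you place the visibility threshold.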
Roberto
May 8, 2008
Some of the images I have (some photos, some scans) are slightly blurred, some more, some less. So, I can reduce the resolution practically without loss of information. To be sure, I typically reduce the resolution, then increase the resolution back to the original and if there is no visible loss of information compared to the original image I know that the resolution was not too low. It typically takes a few attempts to find out how far I can go.
My question is: is there a tool that does this job automatically? I mean, in a blurry image, determining what the "real" resolution is and then downsample automatically? Sorry if this has been asked before, I couldn’t find anything.

Adobe ImageReady is supposed to help users optimize images for web use, but I don't think it would figure out the minimum resolution required to represent a photo.

Nevertheless, it may be better than nothing. I think you can download a free trial.
Tim.Ahrens
May 12, 2008
It's not enough just to exceed the sharpness of the original image with your pixel density. As you cut the pixel density looking for the most economical value, there will not be a sharp cutoff; there will be a curve, which causes the image to get blurrier over a range.

Good point. Do you mean Nyquist is not valid for 2D data? I don't know; I am not expert enough in this area. In sound processing and storage, choosing the ideal (lowest acceptable) sample rate is a rather trivial thing, so I was assuming the same holds for images.
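
(For reference, the sampling theorem does carry over to two dimensions; it simply applies separately along each axis. A band-limited image can be reconstructed exactly if

    f_s,x > 2 · f_max,x   and   f_s,y > 2 · f_max,y

i.e. each axis is sampled at more than twice the highest spatial frequency present along it. The catch, as Leo points out, is that lens and camera blur are not ideal low-pass filters, so a real image is never strictly band-limited and the cutoff is soft rather than sharp.)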

I guess it also depends on the downsampling principle. I used bicubic, but that is not the only possibility, at least if you are not restricted to Photoshop. Choosing a different downsampling method would probably make the automation behave differently. Maybe I should simply give it a try and see whether I can get it automated in some useful way.
Pico
May 12, 2008
Tim.Ahrens wrote in message

Good point. Do you mean Nyquist is not valid for 2D data? I don't know; I am not expert enough in this area. In sound processing and storage, choosing the ideal (lowest acceptable) sample rate is a rather trivial thing, so I was assuming the same holds for images.

There is a quality called acutance in imaging. It is how sharp and clear an image appears to the human eye. Eventually someone will claim they have found the algorithm to make it automatic, and the rest of us, the experienced, will be relieved of yet another 100 million clueless would-be photographers. To date, auto-everything else has done a marvelous job of that.
