"Golden" wrote:
How much difference is there between sRGB and adobeRGB, or even Wide?
It took some effort to answer that question. First, here is what I did:
Calculations were performed in double-precision floating point in Excel. For the smaller of the two gamuts being compared I created 29791 (31^3) sample RGB patches: each channel takes 31 equally spaced values from 0.001 to 1. This gives a range of 1000:1 (the 0.001 floor avoids zero channel values).
Then I converted this RGB dataset to the larger gamut.
Then I took the ratios Rs/Rl, Gs/Gl and Bs/Bl for each of the 29791 RGB pairs, where subscript "s" denotes the smaller-gamut value and subscript "l" the larger-gamut value. Because both spaces have the same number of digital levels per channel, these ratios are in essence the multipliers by which the per-channel quantisation steps of the smaller gamut are scaled for that particular color when viewed in the larger gamut.
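The sampling and ratio computation can be sketched in Python as a cross-check (a sketch, not the original Excel worksheet; the two matrices below are the standard published D65 linear-RGB-to-XYZ matrices for sRGB and AdobeRGB (1998), and gamma is 1.0 throughout, as in the evaluation):

```python
import numpy as np

# Standard D65 linear-RGB -> XYZ matrices (published primaries, 4 decimals).
M_SRGB = np.array([[0.4124, 0.3576, 0.1805],
                   [0.2126, 0.7152, 0.0722],
                   [0.0193, 0.1192, 0.9505]])
M_ADOBE = np.array([[0.5767, 0.1856, 0.1882],
                    [0.2974, 0.6273, 0.0753],
                    [0.0270, 0.0707, 0.9911]])

# Composite matrix: linear sRGB -> linear AdobeRGB (1998). Same white point,
# so no chromatic adaptation is needed.
M = np.linalg.inv(M_ADOBE) @ M_SRGB

# 31 equally spaced values per channel from 0.001 to 1 -> 31^3 = 29791 patches.
v = np.linspace(0.001, 1.0, 31)
grid = np.array(np.meshgrid(v, v, v)).reshape(3, -1).T  # shape (29791, 3)

# Convert every patch to the larger gamut and take the per-channel ratios s/l.
large = grid @ M.T
ratios = grid / large

print("max =", ratios.max())
print("average =", ratios.mean())
print("stdev =", ratios.std())
```

Run this way, the statistics should land close to the sRGB-to-AdobeRGB row of the results below.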
I then took max, average, variance and stdev over these multipliers (over the 3*29791 result values).
An example (using 8-bit notation for simplicity): suppose RGBsmall = 100,50,25 converts to RGBlarge = 50,80,30 (the color we see is the same in both spaces; only the RGB values differ). Then for this color the red channel in the larger gamut has 100/50 = 2 times the quantization step that the R channel of the same color has in RGBsmall; the green channel has 50/80 = 0.625 times the quantization step of the G channel; and the blue channel has 25/30 = 0.833 times the quantization step of the B channel.
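In code, the worked example above is just three divisions (restating the same figures):

```python
# The example pair: same color, different channel values in the two spaces.
rgb_small = (100, 50, 25)
rgb_large = (50, 80, 30)

# Per-channel multipliers s/l: >1 means coarser steps in the larger gamut.
multipliers = [s / l for s, l in zip(rgb_small, rgb_large)]
print(multipliers)  # [2.0, 0.625, 0.8333...]
```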
In addition I used a D65 whitepoint and absolute colorimetry, and since we are interested in the effect of the gamut volume alone, gamma was set to 1.0 for all the color spaces.
Now to the results, which are quite interesting:
sRGB to AdobeRGB (1998):
max=1.3978, average=0.9585, variance=0.049, stdev=0.2213
sRGB to Wide Gamut RGB:
max=3.1208, average=0.932, variance=0.0934, stdev=0.3056
sRGB to CIE XYZ:
max=2.3017, average=0.9323, variance=0.1311, stdev=0.3621
AdobeRGB (1998) to Wide Gamut RGB:
max=3.4334, average=0.9245, variance=0.0578, stdev=0.2404
AdobeRGB (1998) to CIE XYZ:
max=1.6471, average=0.9229, variance=0.1058, stdev=0.3252
(A multiplier of 0.5 means one bit gained, 1 means no change, 2 means one bit lost, and 4 means two bits lost in the gradation.)
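The multiplier-to-bits relation in the note above is simply a base-2 logarithm; a minimal illustration:

```python
import math

# Gradation change in bits: positive = bits lost, negative = bits gained.
for m in (0.5, 1.0, 2.0, 4.0):
    print(m, "->", math.log2(m), "bits")
```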
So, when going from a smaller gamut to a larger one there are some colors (some subvolumes inside the whole gamut volume) that suffer a little, since the max value is > 1. However, in all the cases the average is slightly below 1, so on average there is a small benefit: the gradation steps are, on average, a little finer. The small variance means the multipliers sit close to the average, so the subvolumes of the gamut where the gradation approaches the max are very small.
Now, these results seem impossible, until one realizes that the dataset was limited to the smaller gamut. This is the only way the gradation in the RGB channels can be examined, since the rest of the colors that the larger gamut holds are naturally out-of-gamut for the smaller gamut (and so cannot be defined there at all). It follows that the gradation of those colors that can only be defined in the larger gamut must be coarser, so that the larger gamut volume is accounted for.
So this evaluation shows that, as regards quantization, there is nothing to worry about when converting from a smaller-gamut RGB working space to a larger one, as long as the data is kept in 16-bit/channel mode. Even the very large "CIE 1931 D65 Gamma 1.0" profile that I have been using as my RGB working space for over a year now does not introduce any problems.
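To put the worst measured multipliers in 16-bit/channel terms (a rough sketch using the max values reported above, illustrating the idea rather than the original worksheet):

```python
import math

# Worst-case multipliers from the results above.
worst = {"sRGB -> AdobeRGB (1998)": 1.3978,
         "sRGB -> Wide Gamut RGB": 3.1208,
         "AdobeRGB (1998) -> Wide Gamut RGB": 3.4334}

for name, m in worst.items():
    bits_lost = math.log2(m)
    print(f"{name}: {bits_lost:.2f} bits lost, "
          f"~{16 - bits_lost:.1f} effective bits remain")
```

Even the worst case costs under 2 bits, leaving about 14 effective bits in the worst-affected subvolumes.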
I did this evaluation using my AIM.XLA, a freeware Excel Add-In for colorimetric and spectral color calculations, available at
http://www.aim-dtp.net/aim/technology/aim_xla/index.htm

"My scanner is nominally 14-bits"

So it has a 14-bit analog-to-digital converter. However, the sensor in your scanner has about 10 bits of effective dynamic range at best, so the extra 4 bits do not contain useful image data.
Timo Autiokari
http://www.aim-dtp.net/