If we ever find out what an "infrared" channel is, I will be glad
to rename the images back to SANE_FRAME_RGBI.
> > Right now the only link of the 4th channel of a Coolscan scanned
> > image to "infrared" is that it has been scanned with an infrared LED as light
> > source. After that I'm doing all kinds of transformation on it
> > which mix R,G,B and I (and I will do even more in the future)
> > to make it show only the dust in the image, and not the
> > color information. That is to say: when you get the 4th channel
> > out of the backend it is no longer an "infrared" image but a
> > "dust" image, so we might as well define SANE_FRAME_RGBD(ust).
>
> I hope it can still output the pure RGBI as well.
I don't see much point in sending out the "infrared" channel.
This opens up so many questions with no reasonable answers:
What LUT should I apply?
Should I invert it if I scan negatives?
...
As there has been no use for an infrared channel so far - other
than dust removal - it is difficult to imagine what to do
with it.
Once you have converted the RGB channels with the user gamma LUT,
there is almost no way to use the infrared channel for dust removal.
(Actually, I am thinking of a method, but it makes a lot of assumptions
about the image content.)
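To see why, remember what a user gamma LUT does: it maps the raw
samples non-linearly down to 8 bit, so several raw values collapse
onto the same output value and the mapping cannot be inverted.
A minimal sketch (12 bit input and gamma 2.2 are just example values):

#include <math.h>
#include <stdint.h>

/* Build a 12-bit -> 8-bit gamma LUT. Once applied, many distinct
 * raw values map to the same 8-bit value, so the original raw
 * data cannot be recovered from the output. */
static void build_gamma_lut(uint8_t lut[4096], double gamma)
{
    for (int v = 0; v < 4096; v++)
        lut[v] = (uint8_t)(255.0 * pow(v / 4095.0, 1.0 / gamma) + 0.5);
}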
Unless you scan with the maximum bit depth (10/12 bit) and no LUT,
you cannot easily reconstruct the dust image from the
infrared channel. That's why I think the conversion from infrared
to "dust" should be done by the backend. It is a very scanner-dependent
operation, as it depends a lot on the
light source and the CCD sensitivity at the different wavelengths.
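To give an idea of what such a conversion could look like, here is a
minimal sketch that only compensates for red-channel crosstalk into
the infrared channel. The coefficient k_red is a made-up placeholder,
not a real Coolscan value; the real correction depends on the hardware
as described above:

#include <stdint.h>

/* Hypothetical sketch: derive a "dust" value from raw 12-bit samples.
 * k_red models red crosstalk into the IR channel; it is an assumed,
 * per-scanner calibration constant. Low output values mark dust,
 * which blocks the infrared light. */
static uint16_t ir_to_dust(uint16_t ir_raw, uint16_t red_raw)
{
    double k_red = 0.25;            /* assumed crosstalk coefficient */
    double d = ir_raw - k_red * red_raw;
    if (d < 0.0)
        d = 0.0;                    /* clamp: no negative dust signal */
    return (uint16_t)d;
}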
Maybe I'll add a raw format option that writes all data
without transformation to a 64 bit image (4*16), to
be used by a program that can do the Coolscan-specific dust removal
and apply the LUT afterwards.
This type of data flow is basically optimized for speed - meaning
you don't have to wait for the scanner and the scanner doesn't
have to wait for the dust removal - not because I hope
there will be a general way to treat these images.
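The frame layout I have in mind would be something like the following
(interleaved R,G,B,I with 16 bit per sample is my assumption here, not
an existing SANE frame type; byte order and header are left open):

#include <stdint.h>
#include <stdio.h>

/* One raw pixel: 4 samples * 16 bit = 64 bit, no LUT applied. */
typedef struct {
    uint16_t r, g, b, i;
} RawPixel;

/* Write one scanline of raw RGBI data. A real format would also
 * have to fix the byte order and carry a small header. */
static int write_raw_line(FILE *fp, const RawPixel *line, size_t width)
{
    return fwrite(line, sizeof(RawPixel), width, fp) == width ? 0 : -1;
}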
> Just curious, but why would your production code do the 4x4 color
> transform in the backend, and the defect interpolation elsewhere?
> It would seem more logical to do it in the same place.
Not really. The dust removal can be done on the image with the
gamma LUT applied (and therefore on an 8 bit image),
but not the 4*4 transformation.
For the 4*4 transformation we need the "raw" values
of the scanned image, because we know how to transform them -
which is no longer the case once the user has applied a LUT.
While the 4*4 transformation can be done "on the fly", pixel by pixel,
without having the whole image available, the dust removal needs
the whole image (or parts of it) to do spatial interpolation.
Unless I store the whole image in the backend,
dust removal inside the backend is not optimal.
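For illustration, the on-the-fly part is as simple as this (the matrix
coefficients are placeholders, not real Coolscan data - the real ones
depend on the scanner as described above):

#include <stdint.h>

/* Apply a 4*4 color matrix to one raw RGBI pixel, "on the fly".
 * Each output pixel depends only on the corresponding input pixel,
 * so no buffering of the image is needed. */
static void transform_pixel(const double m[4][4],
                            const uint16_t in[4], uint16_t out[4])
{
    for (int row = 0; row < 4; row++) {
        double acc = 0.0;
        for (int col = 0; col < 4; col++)
            acc += m[row][col] * in[col];
        if (acc < 0.0)     acc = 0.0;
        if (acc > 65535.0) acc = 65535.0;   /* clamp to 16 bit */
        out[row] = (uint16_t)acc;
    }
}

Dust removal, in contrast, has to look at neighbouring pixels and
scanlines, so it cannot run in such a streaming loop.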
I guess the 4*4 transform is what they call "Color Management" in
the TWAIN backend. It may be interesting even for RGB scanning.
I am currently investigating the idea of doing everything inside the
backend (including dust removal), but this requires up to 80 MB
of memory.
> As more manufacturers will take a licence on Digital ICE, we can
> expect more scanners that have an IR channel. Therefore, at some point
> in time, it will make sense to do the defect removal in the frontend
> instead of in all the backends and a SANE_FRAME_RGBI will be essential.
I just don't believe there will be an infrared channel that is
sufficiently standard that a dust-removal algorithm can work with it,
unless it is already transformed to a "dust" image.
The raw infrared channel is mixed too much with the red channel
to be useful.
Greetings from Paris
Andreas