Re: [Announce] WinSANE 0.1.0.0 Release

Nick Lamb (njl98r@ecs.soton.ac.uk)
Tue, 11 May 1999 15:11:59 +0100 (GMT)

On Tue, 11 May 1999, Ewald R. de Wit wrote:

> A month ago there was some talk on this list about a standard 16 bit
> format. But that thread abruptly ended and as far as I know nothing
> concrete transpired out of it. That would mean that the SANE standard
> still does not define any standard >8 bit format, and the deep bit data
> is packed as it comes out of the scanner (i.e. vendor specific).

Go and look back at that thread. It ends abruptly because it was pointed
out that the SANE standards documents explain all this very thoroughly.

> > This is neither of those two ways, so it's not possible for a properly
> > written frontend to correctly understand that format.
>
> My still-under-construction frontend has no problems with it; it
> checks the vendor string to see what packing to use.

Your frontend is broken. Simple as that. It sounds as though the HP
backend is broken too. Didn't you even think to read the documentation
when you first encountered this sort of problem?

One last time: here is the section of the SANE standard which explains
what you're supposed to do if your samples aren't a clean 8 or 16 bits.
It could probably be explained better, and I'm sure a clarification from
someone on this list would be accepted into a future version, but it's
already pretty clear what is meant.

----------
Valid bit depths are 1, 8, or 16 bits per sample. If a device's
natural bit depth is something else, it is up to the driver to scale
the sample values appropriately (e.g., a 4 bit sample could be scaled
by a factor of four to represent a sample value of depth 8).
----------
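
For a scanner whose natural depth is, say, 12 bits, that means the
backend, not the frontend, widens the samples before they ever reach
sane_read(). A minimal sketch of the idea in C (not taken from any real
backend; the function name is made up):

#include <sane/sane.h>

/* Sketch: map a raw 12-bit sample (0x000..0xFFF) onto the full 16-bit
   range (0x0000..0xFFFF) by shifting left and replicating the top bits
   into the freed-up low bits.  This is what "scale the sample values
   appropriately" boils down to for a 12-bit device reporting depth 16. */
static SANE_Word
scale_12_to_16 (SANE_Word raw)
{
  return (raw << 4) | (raw >> 8);
}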

No doubt someone who's been asleep for the past three months will now
ask, "Uh, how does machine byte order affect this?" So I'll include the
paragraph which explains that too.

----------
Conceptually, each frame is transmitted a byte at a time. Each byte
may contain 8 sample values (for an image bit depth of 1), one full
sample value (for an image bit depth of 8), or a partial sample value
(for an image bit depth of 16 or bigger). In the latter case, the
bytes of each sample value are transmitted in the machine's native
byte order.
----------
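
Which is why a portable frontend never has to know which machine it is
running on, let alone which scanner produced the data. Something along
these lines is enough for reading a depth-16 frame (just a sketch;
handle_sample() stands in for whatever your frontend does with a
sample, and error handling and short reads are skipped):

#include <sane/sane.h>

/* Stand-in for whatever the frontend does with one 16-bit sample.  */
extern void handle_sample (unsigned short sample);

static void
read_deep_frame (SANE_Handle handle)
{
  /* 16-bit buffer so the reinterpretation below is properly aligned. */
  unsigned short buf[16384];
  SANE_Int len, i, n;

  while (sane_read (handle, (SANE_Byte *) buf, sizeof (buf), &len)
         == SANE_STATUS_GOOD)
    {
      /* Native byte order: the bytes in the buffer already are 16-bit
         samples on this machine, so there is nothing to unpack.
         (Assumes the backend hands back whole samples per read.)  */
      n = len / 2;
      for (i = 0; i < n; i++)
        handle_sample (buf[i]);
    }
}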

And although it's not terribly clear in the standards document as
currently written (this should probably be cleared up), I think
consensus was reached that it's acceptable to use the "depth" of a
frame to express the true depth (e.g. 12 bits) for applications which
really care about the actual depth of the original samples. An
application which doesn't care can always round the depth up to the
nearest multiple of 8, treating lineart as the one special exception.
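
In code terms, something as small as this is all an uncaring
application needs (a hypothetical helper, not lifted from any real
frontend):

#include <sane/sane.h>

/* Hypothetical helper: the depth the frontend actually has to handle,
   whatever true depth the backend reports.  Depth 1 (lineart, eight
   samples per byte) stays special; everything else rounds up to 8 or
   16, so a reported depth of 12 is laid out exactly like 16.  */
static SANE_Int
effective_depth (const SANE_Parameters *p)
{
  if (p->depth == 1)
    return 1;
  return (p->depth <= 8) ? 8 : 16;
}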

Nick.

--
Source code, list archive, and docs: http://www.mostang.com/sane/
To unsubscribe: echo unsubscribe sane-devel | mail majordomo@mostang.com