Oliver Rauch wrote:
>
> Hi,
>
> I thought a bit about the way most (all?) backends shift the data
> from the scsi buffer to the sane frontend:
>
> a fork creates a reader_process that gets the data from the scsi buffer
> and prepares it for sane_read. For that, it writes it into a pipe.
>
> The main idea behind that is that sane_read() can be called with
> any request size, independently of the size of one scanline.
>
> But for the scanner we do it totally wrong here:
> we write the data into a pipe whose buffer size is about 4 KB on most systems.
> When the pipe buffer is full, the write call does not return until
> it has been able to write the data into the pipe buffer.
>
> So we try to press the scsi buffer data (32-128 KB) into a 4 KB buffer,
> which slows down scanning (creates a lot of pauses), and on top of that we
> create a second process that also needs lots of memory.
>
> I think the pipe is the only reason why scanning large images with sane
> produces backtracking of the scanhead and makes scanning very slow.
That the write() to the pipe blocks can indeed be an important reason
for scan head stops. But some scanners are even more "sensitive": the
Sharp JX250, for example, needs a "read data" command to be sent to the
device as fast as possible after the previous command has finished in
order to avoid scan head stops. For that reason, the Sharp backend uses
the functions sanei_scsi_req_enter and sanei_scsi_req_wait. While the
implementation of these functions in Sane 1.0.1 helped to avoid scan
head stops in many situations, it was not fast enough under certain
conditions (slower host processors and/or higher scan resolutions).
Speeding up the implementation of these functions by using the queueing
capabilities of the newer version of the Linux SG driver avoided scan
head stops under most circumstances.
>
> What can we do?
>
> I see two possibilities.
> 1) We do not write the data into a pipe. Instead we write it into a file.
> The disadvantage is that we have to store the image twice while scanning.
> The advantage is that a slow frontend (e.g. network scanning) does not
> slow down the scan speed (Backtracking is very slow).
Using a file for buffering is indeed a kind of "last resort". But we can
leave this, for example, to Brian's patch to saned. I tried it with the
Sharp backend, and it works very well.
>
> 2) We work with two buffers that have the same size as the scsi buffer
> (at the moment we use only one such buffer). sane_read reads the data from
> one buffer while the other buffer is filled with the scsi/scanner data.
> As far as I can see, the memory usage is no greater than at present,
> because the existing fork already produces a second buffer of 32-128 KB,
> but only one of the two buffers is needed/used at a time.
>
> There are two ways to do this:
> a) A forked reader_process writes the data into shared memory
> (this does not work on all systems; on systems where no shared memory
> is available we could keep the pipe, or use b).
> b) There is no separate reader process; the backend itself calls sanei_scsi_read()
> when one buffer is empty. This is a bit slower but does not need shared memory.
> I am not sure whether scanning really slows down when we do not use a separate reader process.
> All we need is a sanei_scsi_read routine that can operate non-blocking (returning if no data is available).
> Copying the scsi buffer into the backend buffer does not take much time if the routine does not
> wait until data is available.
> If this works as I expect, we do not need any fork or thread.
>
> All this could be hidden in a sanei_ routine so the backend does not see any of it.
> A backend would then not call sanei_scsi_read(); instead it would call sanei_scsi_read_buffered().
I think sanei_scsi_read_buffered() should also allow command queueing.
In fact, command queueing has similar capabilities even without a
function like sanei_scsi_read_buffered(). I suspect that the
Sharp backend is a little bit "over-tuned" for Linux: it forks a reader
process, uses command queueing _and_ shared memory. I left it in this
state mainly because real command queueing with the Sane SCSI API is at
present only available for Linux, so that other OSes can at least profit
from the fact that the reader process does not need much time to forward
the data to the parent process. It would be worth trying to remove the
reader process while keeping command queueing. I was simply too lazy to
try that, but could do it -- though that might take some time (quite a
lot of other work to do during the next weeks...).
>
> Maybe it would be a good idea to implement both (or all three, with the existing) routines
> and let the user select the one that works best for him.
That could result in a quite complicated configuration -- your proposals
are not mutually exclusive :)
Abel
--
Source code, list archive, and docs: http://www.mostang.com/sane/
To unsubscribe: echo unsubscribe sane-devel | mail majordomo@mostang.com
This archive was generated by hypermail 2b29 : Mon Feb 21 2000 - 06:01:11 PST