backend fork/reader-process <-> sane_read

From: Oliver Rauch (oliver.rauch@Wolfsburg.DE)
Date: Sat Feb 19 2000 - 09:33:19 PST


    Hi,

    I thought a bit about the way most (all?) backends shift the data
    from the SCSI buffer to the SANE frontend:

    a fork creates a reader_process that gets the data from the SCSI buffer
    and prepares it for sane_read(). For that, it writes the data into a pipe.

    The main idea behind that is that sane_read() can be called with
    any request size, independent of the size of one scanline.

    But for the scanner we do it totally wrong here:
    we write the data into a pipe whose buffer is about 4 KB on most systems.
    When the pipe buffer is full, write() does not return until it has been
    able to push the remaining data into the pipe buffer.

    So we try to press the SCSI buffer data (32-128 KB) through a 4 KB buffer,
    which slows down scanning (creates a lot of pauses), and on top of that we
    create a second process that also needs lots of memory.

    I think the pipe is the only reason why scanning large images with SANE
    causes backtracking of the scan head and makes scanning very slow.

    What can we do?

    I see two possibilities.
    1) We do not write the data into a pipe; instead we write it into a file.
    The disadvantage is that we have to store the image twice while scanning.
    The advantage is that a slow frontend (e.g. network scanning) does not
    slow down the scan speed (backtracking is very slow).

    2) We work with two buffers of the same size as the SCSI buffer
    (at the moment we use one such buffer). sane_read() reads the data from
    one buffer while the other buffer is filled with the SCSI/scanner data.
    As far as I can see, the memory usage is no greater than at present,
    because the existing fork already produces a second buffer of 32-128 KB,
    but only one of the two buffers is needed/used at a time.

    There are two ways to do this:
    a) a forked reader_process writes the data into shared memory
    (this does not work on all systems; on systems where no shared memory
    is available we could keep the pipe, or use b).
    b) there is no separate reader process; the backend itself calls sanei_scsi_read()
    when one buffer is empty. This is a bit slower but does not need shared memory.
    I am not sure whether scanning really slows down when we do not use a separate reader process.
    All we need is a sanei_scsi_read routine that can do a non-blocking read (returns if no data is available).
    Copying the SCSI buffer into the backend buffer does not take much time if the routine does not
    wait until data is available.
    If this works as I expect, we do not need any fork or thread.

    All this could be hidden in a sanei_ routine so the backend does not see any of it:
    a backend would not call sanei_scsi_read() directly; instead it would call sanei_scsi_read_buffered().

    Maybe it would be a good idea to implement both routines (or all three, including the existing one)
    and let the user select the one that works best for him.

    Comments welcome.

    Bye
    Oliver

    --
    Homepage:       http://www.wolfsburg.de/~rauch
    sane-umax:      http://www.wolfsburg.de/~rauch/sane/sane-umax.html
    xsane:          http://www.wolfsburg.de/~rauch/sane/sane-xsane.html
    E-Mail:         mailto:Oliver.Rauch@Wolfsburg.DE
    

    -- Source code, list archive, and docs: http://www.mostang.com/sane/
    To unsubscribe: echo unsubscribe sane-devel | mail majordomo@mostang.com



    This archive was generated by hypermail 2b29 : Sat Feb 19 2000 - 09:33:37 PST