Re: memory leak: Microtek E6 / aha152x / RH 5.0

Leigh Orf (orf@mailbag.com)
Tue, 17 Mar 1998 11:11:36 -0600

OK, I'm following up my own post about this memory leak. I applied the
unofficial memleak-deluxe patch to the 2.0.33 kernel. The patch exposes
a file called /proc/memleak, which can be analyzed with a few enclosed
scripts.

Quoting the FAQ that comes with the patch:

the first number is the 'allocation count', the number of memory objects
allocated at a certain FILE:LINE. If some allocation point shows a
constantly increasing allocation count, it's probably a memory leak.

The FAQ goes on to say:

NOTE: the VM subsystems usually have very fluctuating allocation counts,
think twice before calling them a memory leak.
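
The enclosed scripts do the bookkeeping, but the idea is simple enough
to sketch stand-alone. Here's a minimal, hypothetical version in C: it
assumes /proc/memleak holds one "count FILE:LINE" pair per line, as the
FAQ describes, and the program name, fixed table size, and output format
are my own invention, not part of the patch.

/* leakdiff.c - compare two saved snapshots of /proc/memleak and print
 * every allocation point whose count grew between them. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct entry {
        char site[256];         /* "FILE:LINE" */
        unsigned long count;    /* allocation count */
};

static int load(const char *path, struct entry *e, int max)
{
        FILE *f = fopen(path, "r");
        int n = 0;

        if (!f) {
                perror(path);
                exit(1);
        }
        while (n < max && fscanf(f, "%lu %255s", &e[n].count, e[n].site) == 2)
                n++;
        fclose(f);
        return n;
}

int main(int argc, char **argv)
{
        static struct entry before[8192], after[8192];
        int nb, na, i, j;

        if (argc != 3) {
                fprintf(stderr, "usage: %s snap1 snap2\n", argv[0]);
                return 1;
        }
        nb = load(argv[1], before, 8192);
        na = load(argv[2], after, 8192);

        /* report any allocation point whose count increased */
        for (i = 0; i < na; i++) {
                for (j = 0; j < nb; j++)
                        if (strcmp(after[i].site, before[j].site) == 0)
                                break;
                if (j < nb && after[i].count > before[j].count)
                        printf("%8lu -> %8lu  %s\n",
                               before[j].count, after[i].count,
                               after[i].site);
        }
        return 0;
}

Save /proc/memleak to a file before and after each scan and diff
adjacent snapshots; anything that climbs monotonically across several
scans is a suspect.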

buffer.c is part of the VM code the FAQ is warning about, but since its
counts grow consistently (rather than fluctuating) and are the largest
by far, I'm pretty sure that's where the trouble is. I ran the dosum
script six times, once after each scan, and concatenated the output to
a file; it appears that the memory allocated at lines 1004 and 1359 of
buffer.c is never deallocated (especially at 1004, judging by the
magnitude of the leak; I'm less sure about 1359). This steady growth in
the buffer cache matches what I see in xsysinfo. Had I run about 20
more scans, I would have been out of memory altogether.

home[1006]:/home/orf/memleak-deluxe% grep buffer out | grep 1004
22337 buffer.c:1004
25750 buffer.c:1004
29923 buffer.c:1004
33732 buffer.c:1004
37766 buffer.c:1004
42143 buffer.c:1004

home[1007]:/home/orf/memleak-deluxe% grep buffer out | grep 1359
5590 buffer.c:1359
6448 buffer.c:1359
7476 buffer.c:1359
8443 buffer.c:1359
9440 buffer.c:1359
10542 buffer.c:1359
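
Back-of-the-envelope, from those numbers (five intervals between six
runs; assuming PAGE_SIZE is 4K, i.e. i386):

  buffer.c:1004: (42143 - 22337) / 5 =~ 3960 more buffer heads per scan
  buffer.c:1359: (10542 - 5590) / 5  =~ 990 more pages per scan,
                 i.e. 990 * 4K =~ 3.9 MB of buffer memory per scan

At roughly 4 MB lost per scan, running out of memory after another 20
or so scans is about what you'd expect.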

Here are the code snippets where these allocations occur (the suspect
lines are marked with ***):

986 static void get_more_buffer_heads(void)
987 {
988 struct wait_queue wait = { current, NULL };
989 struct buffer_head * bh;
990
991 while (!unused_list) {
992 /*
993 * This is critical. We can't swap out pages to get
994 * more buffer heads, because the swap-out may need
995 * more buffer-heads itself. Thus GFP_ATOMIC.
996 *
997 * This is no longer true, it is GFP_BUFFER again, the
998 * swapping code now knows not to perform I/O when that
999 * GFP level is specified... -DaveM
1000 */
1001 /* we now use kmalloc() here instead of gfp as we want
1002 to be able to easily release buffer heads - they
1003 took up quite a bit of memory (tridge) */
***1004 bh = (struct buffer_head *) kmalloc(sizeof(*bh),GFP_BUFFER);
1005 if (bh) {
1006 put_unused_buffer_head(bh);
1007 nr_buffer_heads++;
1008 return;
1009 }
1010
1011 /*
1012 * Uhhuh. We're _really_ low on memory. Now we just
1013 * wait for old buffer heads to become free due to
1014 * finishing IO..
1015 */
1016 run_task_queue(&tq_disk);
.
.
.

1345 static int grow_buffers(int pri, int size)
1346 {
1347 unsigned long page;
1348 struct buffer_head *bh, *tmp;
1349 struct buffer_head * insert_point;
1350 int isize;
1351
1352 if ((size & 511) || (size > PAGE_SIZE)) {
1353 printk("VFS: grow_buffers: size = %d\n",size);
1354 return 0;
1355 }
1356
1357 isize = BUFSIZE_INDEX(size);
1358
***1359 if (!(page = __get_free_page(pri)))
1360 return 0;
1361 bh = create_buffers(page, size);
1362 if (!bh) {
1363 free_page(page);
1364 return 0;
1365 }
1366
1367 insert_point = free_list[isize];
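
If I'm reading this right, the asymmetry at line 1004 is that
get_more_buffer_heads() kmalloc()s a head and put_unused_buffer_head()
pushes it onto unused_list, bumping nr_buffer_heads, but I can't find
anything that ever kfree()s heads back, even though the tridge comment
says kmalloc() was chosen precisely so they could be released. Purely
as a sketch of the missing half (2.0.33 has no such function; the name
and the 'keep' threshold are invented, and b_next_free is the free-list
link that put_unused_buffer_head() uses):

/* Hypothetical only: trim the unused buffer-head pool back down. */
static void release_unused_buffer_heads(int keep)
{
        struct buffer_head *bh;

        while (nr_buffer_heads > keep && unused_list) {
                bh = unused_list;
                unused_list = bh->b_next_free;  /* pop the free list */
                nr_buffer_heads--;
                kfree(bh);
        }
}

Line 1359 may be less damning: the pages grabbed by __get_free_page()
there become buffer-cache pages that are supposed to come back via
free_page() when buffers are reclaimed, so some growth under heavy scan
I/O could just be the cache expanding, as the FAQ warns. That it never
shrinks afterwards still looks suspicious, though.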

Anyhow, I am not a kernel hacker and could be barking up the wrong
tree... let me know if any of you have ideas on this, or if you can
point me in the right direction. I figure I'm not the only one with this
problem.

Leigh Orf
