From: Yuri Pudgorodsky <yur@asplinux.ru>
To: "André Dahlqvist" <andre_dahlqvist@post.netlink.se>
Cc: Rik van Riel <riel@conectiva.com.br>,
Molnar Ingo <mingo@debella.ikk.sztaki.hu>,
"David S. Miller" <davem@redhat.com>,
torvalds@transmeta.com, linux-kernel@vger.kernel.org,
linux-mm@kvack.org
Subject: test9-pre3+t9p2-vmpatch VM deadlock during socket I/O
Date: Fri, 22 Sep 2000 20:38:43 +0400
Message-ID: <39CB8B13.391C067D@asplinux.ru>
In-Reply-To: <20000922161055.A1088@post.netlink.se>
I also encountered an instant lockup of test9-pre3 + t9p2-vmpatch on SMP (two CPUs)
under high I/O via UNIX domain sockets:
- running 10 simple tasks, each executing the loop below:
#include <sys/socket.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>
#define BUFFERSIZE 204800
static char crap[BUFFERSIZE];   /* junk payload */

int main(void)
{
        int p[2], j;
        for (j = 0; ; j++) {
                /* the pairs are never closed, so socket buffers pile up */
                if (socketpair(PF_LOCAL, SOCK_STREAM, 0, p) == -1)
                        exit(1);
                fcntl(p[0], F_SETFL, O_NONBLOCK);
                fcntl(p[1], F_SETFL, O_NONBLOCK);
                write(p[0], crap, BUFFERSIZE);
                write(p[1], crap, BUFFERSIZE);
        }
}
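Something like the following trivial fork/exec launcher can be used to start
the 10 copies (the ./sockhammer name is just whatever the loop above is
compiled to; the original setup is not shown here):

#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
        int i;
        for (i = 0; i < 10; i++)
                if (fork() == 0) {
                        execl("./sockhammer", "sockhammer", (char *)0);
                        _exit(1);       /* exec failed */
                }
        while (wait(NULL) > 0)          /* children loop forever */
                ;
        return 0;
}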
So it looks like swap_out() cannot obtain lock_kernel(), which is held by
swap_out() running on the second CPU... see the trace below.
Call trace (looks very similar on both CPUs):
Trace; c020aa3e <stext_lock+18a6/8848>
(called from c0133eb4 <swap_out+0x28>)
Trace; c0133eb4 <swap_out+28/228> args (6, 3, 0)
Trace; c0134e50 <refill_inactive+c8/170> args (3, 1)
Trace; c0134f75 <do_try_to_free_pages+7d/9c> args (3,1)
Trace; c0135168 <wakeup_kswapd+84/bc>
Trace; c0135d72 <__alloc_pages+1d6/264>
Trace; c0135e17 <__get_free_pages+17/28>
Trace; c01322ce <kmem_cache_grow+e2/264>
....
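To illustrate what I suspect is happening: every allocating CPU funnels
through the big kernel lock into a page scan that never frees anything, so
both CPUs spin at 100% system time without making progress. A minimal
userspace sketch of that pattern, with a pthread mutex standing in for
lock_kernel() (big_lock, scan_for_freeable_pages and alloc_path are
illustrative names, not actual kernel code):

#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER; /* plays lock_kernel() */

static int scan_for_freeable_pages(void)
{
        return 0;       /* nothing ever becomes freeable: the counts stay frozen */
}

static void *alloc_path(void *arg)
{
        (void)arg;
        /* roughly __alloc_pages -> try_to_free_pages -> swap_out */
        for (;;) {
                pthread_mutex_lock(&big_lock);   /* the other CPU waits here (stext_lock) */
                int freed = scan_for_freeable_pages();
                pthread_mutex_unlock(&big_lock);
                if (freed)
                        return NULL;             /* never reached: livelock */
        }
}

int main(void)
{
        pthread_t t[2];                          /* two CPUs, as in this report */
        int i;
        for (i = 0; i < 2; i++)
                pthread_create(&t[i], NULL, alloc_path, NULL);
        for (i = 0; i < 2; i++)
                pthread_join(t[i], NULL);        /* never returns */
        return 0;
}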
Under the lockup, the memory map looks like:
Active: 121 Inactive_dirty: 12217 Inactive_clean: 0 free: 12210 (256 512 768)
and does not change over time.
Most frequent EIP locations (from Alt-SysRq+P):
Trace; c0133f74 <swap_out+e8/228>
Trace; c0133f23 <swap_out+97/228>
Trace; c0134039 <swap_out+1ad/228>
Trace; c020aa37 <stext_lock+189f/8848>
Trace; c020aa3e <stext_lock+18a6/8848>
Hoping for a quick fix,
Yuri Pudgorodsky