From: Hillf Danton <hdanton@sina.com>
To: David Howells <dhowells@redhat.com>
Cc: willy@infradead.org, trond.myklebust@primarydata.com, hch@lst.de,
linux-nfs@vger.kernel.org, linux-mm@kvack.org,
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/2] mm: Fix NFS swapfiles and use DIO read for swapfiles
Date: Fri, 13 Aug 2021 10:59:14 +0800
Message-ID: <20210813025914.2762-1-hdanton@sina.com>
In-Reply-To: <162876946134.3068428.15475611190876694695.stgit@warthog.procyon.org.uk>
On Thu, 12 Aug 2021 12:57:41 +0100 David Howells wrote:
>
> Hi Willy, Trond,
>
> Here's a change to make reads from the swapfile use async DIO rather than
> readpage(), as requested by Willy.
>
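Just to make sure I follow the approach, below is a minimal, untested sketch of what a ->direct_IO based swap read could look like. The helper name and the synchronous error handling are mine, not taken from the patch:

/*
 * Hypothetical sketch only (not the actual patch): read one swap page
 * through the filesystem's ->direct_IO() instead of ->readpage().
 * Synchronous, and without short-read or async handling, for brevity.
 */
#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/swap.h>
#include <linux/uio.h>
#include <linux/bvec.h>

static int swap_readpage_via_direct_IO(struct swap_info_struct *sis,
				       struct page *page)
{
	struct file *swap_file = sis->swap_file;
	struct address_space *mapping = swap_file->f_mapping;
	struct bio_vec bv = {
		.bv_page	= page,
		.bv_len		= PAGE_SIZE,
		.bv_offset	= 0,
	};
	struct iov_iter iter;
	struct kiocb kiocb;
	ssize_t ret;

	/* Point a bvec iterator at the page being swapped in. */
	iov_iter_bvec(&iter, READ, &bv, 1, PAGE_SIZE);

	/* Synchronous kiocb positioned at the page's offset in the swapfile. */
	init_sync_kiocb(&kiocb, swap_file);
	kiocb.ki_pos = page_file_offset(page);

	ret = mapping->a_ops->direct_IO(&kiocb, &iter);
	if (ret != PAGE_SIZE)
		return ret < 0 ? ret : -EIO;

	SetPageUptodate(page);
	return 0;
}

The cover letter says async DIO, so the real swap_readpage() change presumably sets up ki_complete and unlocks the page from the completion handler instead of waiting like the sketch does.
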
> Whilst trying to make this work, I found that NFS's support for swapfiles
> seems to have been non-functional since Aug 2019 (I think), so the first
> patch fixes that. Question is: do we actually *want* to keep this
> functionality, given that it seems that no one's tested it with an upstream
> kernel in the last couple of years?
>
> I tested this using the procedure and program outlined in the first patch.
>
> I also encountered occasional instances of the following warning, so I'm
> wondering if there's a scheduling problem somewhere:
>
> BUG: workqueue lockup - pool cpus=0-3 flags=0x5 nice=0 stuck for 34s!
> Showing busy workqueues and worker pools:
> workqueue events: flags=0x0
> pwq 6: cpus=3 node=0 flags=0x0 nice=0 active=1/256 refcnt=2
> in-flight: 1565:fill_page_cache_func
> workqueue events_highpri: flags=0x10
> pwq 3: cpus=1 node=0 flags=0x1 nice=-20 active=1/256 refcnt=2
> in-flight: 1547:fill_page_cache_func
> pwq 1: cpus=0 node=0 flags=0x0 nice=-20 active=1/256 refcnt=2
> in-flight: 1811:fill_page_cache_func
> workqueue events_unbound: flags=0x2
> pwq 8: cpus=0-3 flags=0x5 nice=0 active=3/512 refcnt=5
> pending: fsnotify_connector_destroy_workfn, fsnotify_mark_destroy_workfn, cleanup_offline_cgwbs_workfn
> workqueue events_power_efficient: flags=0x82
> pwq 8: cpus=0-3 flags=0x5 nice=0 active=4/256 refcnt=6
> pending: neigh_periodic_work, neigh_periodic_work, check_lifetime, do_cache_clean
> workqueue writeback: flags=0x4a
> pwq 8: cpus=0-3 flags=0x5 nice=0 active=1/256 refcnt=4
> in-flight: 433(RESCUER):wb_workfn
Is this a memory-tight scenario that got the rescuer active?
> workqueue rpciod: flags=0xa
> pwq 8: cpus=0-3 flags=0x5 nice=0 active=38/256 refcnt=40
> in-flight: 7:rpc_async_schedule, 1609:rpc_async_schedule, 1610:rpc_async_schedule, 912:rpc_async_schedule, 1613:rpc_async_schedule, 1631:rpc_async_schedule, 34:rpc_async_schedule, 44:rpc_async_schedule
> pending: rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule, rpc_async_schedule
> workqueue ext4-rsv-conversion: flags=0x2000a
> pool 1: cpus=0 node=0 flags=0x0 nice=-20 hung=59s workers=2 idle: 6
> pool 3: cpus=1 node=0 flags=0x1 nice=-20 hung=43s workers=2 manager: 20
> pool 6: cpus=3 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 498 29
> pool 8: cpus=0-3 flags=0x5 nice=0 hung=34s workers=9 manager: 1623
> pool 9: cpus=0-3 flags=0x5 nice=-20 hung=0s workers=2 manager: 5224 idle: 859
>
> Note that this is due to DIO writes to NFS only, as far as I can tell, and
> that no reads had happened yet.
>
> David
> ---
> David Howells (2):
> nfs: Fix write to swapfile failure due to generic_write_checks()
> mm: Make swap_readpage() for SWP_FS_OPS use ->direct_IO() not ->readpage()
>
>
> mm/page_io.c | 73 +++++++++++++++++++++++++++++++++++++++++++++++-----
> 1 file changed, 67 insertions(+), 6 deletions(-)
Printing memory info could help explain the busy rescuer:
+++ x/kernel/workqueue.c
@@ -4710,12 +4710,16 @@ static void show_pwq(struct pool_workque
 	}
 	if (has_in_flight) {
 		bool comma = false;
+		bool rescuer = false;
 
 		pr_info("    in-flight:");
 		hash_for_each(pool->busy_hash, bkt, worker, hentry) {
 			if (worker->current_pwq != pwq)
 				continue;
 
+			if (worker->rescue_wq)
+				rescuer = true;
+
 			pr_cont("%s %d%s:%ps", comma ? "," : "",
 				task_pid_nr(worker->task),
 				worker->rescue_wq ? "(RESCUER)" : "",
@@ -4725,6 +4729,11 @@ static void show_pwq(struct pool_workque
 			comma = true;
 		}
 		pr_cont("\n");
+		if (rescuer) {
+			pr_cont("\n");
+			show_free_areas(0, NULL);
+			pr_cont("\n");
+		}
 	}
 
 	list_for_each_entry(work, &pool->worklist, entry) {
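If I read it right, show_free_areas(0, NULL) prints the same Mem-Info summary the OOM reporter emits, so whenever a rescuer turns up in the in-flight list the dump right below it should show whether free pages were actually tight.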