From: Mel Gorman <mgorman@techsingularity.net>
To: Chuck Lever <chuck.lever@oracle.com>
Cc: Linux NFS Mailing List <linux-nfs@vger.kernel.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"kuba@kernel.org" <kuba@kernel.org>
Subject: Re: [PATCH RFC] SUNRPC: Refresh rq_pages using a bulk page allocator
Date: Mon, 22 Feb 2021 17:43:49 +0000
Message-ID: <20210222174349.GJ3697@techsingularity.net>
In-Reply-To: <33A16CEA-24CA-447A-AE8C-824771E9B3E1@oracle.com>
On Mon, Feb 22, 2021 at 02:58:04PM +0000, Chuck Lever wrote:
> > There is a conflict at the end where rq_page_end gets updated. The 5.11
> > code assumes that the loop around the allocator definitely gets all
> > the required pages. What tree is this patch based on and is it going in
> > during this merge window? While the conflict is "trivial" to resolve,
> > it would be buggy because on retry, "i" will be pointing to the wrong
> > index and pages potentially leak. Rather than guessing, I'd prefer to
> > base a series on code you've tested.
>
> I posted this patch as a proof of concept. There is a clean-up patch
> that goes before it to deal properly with rq_page_end. I can post
> both if you really want to apply this and play with it.
>
That would be best. The clean-up does not belong in this series as
such, but it may affect what the bulk allocator usage looks like.
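To make the retry hazard concrete, here is a rough sketch of the kind
of refill loop I mean. The helper name and its return convention are
placeholders for illustration, not the real allocator API:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/sched/signal.h>

/*
 * Hypothetical helper for illustration only: fills NULL slots in
 * pages[0..want-1] and returns the total number of populated
 * entries. The real bulk allocator interface may differ.
 */
unsigned int svc_alloc_pages_bulk(gfp_t gfp, unsigned int want,
				  struct page **pages);

/*
 * Sketch: refill a page array, resuming safely on retry. The
 * resume point comes from the helper's return value, not from a
 * loop index carried over from a previous pass, which is how a
 * stale "i" would leak or overwrite pages.
 */
static int refill_rq_pages(struct page **pages, unsigned int want)
{
	unsigned int filled = 0;

	while (filled < want) {
		filled = svc_alloc_pages_bulk(GFP_KERNEL, want, pages);
		if (filled == want)
			break;
		if (signal_pending(current))
			return -EINTR;
		cond_resched();
	}
	return 0;
}

The point is only that the resume index is derived from the array
contents (via the helper's return value) rather than carried across
retries.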
>
> > The slowpath for the bulk allocator also sucks a bit for the semantics
> > required by this caller. As the bulk allocator does not walk the zonelist,
> > it can return failures prematurely -- fine for an optimistic bulk allocator
> > that can return a subset of pages but not for this caller which really
> > wants those pages. The allocator may need NOFAIL-like semantics to walk
> > the zonelist if the caller really requires success or at least walk the
> > zonelist if the preferred zone is low on pages. This patch would also
> > need to preserve the schedule_timeout behaviour so it does not use a lot
> > of CPU time retrying allocations in the presence of memory pressure.
>
> Waiting half a second before trying again seems like overkill, though.
>
It is both overkill, and the elapsed time is not directly correlated
with memory pressure. However, I would suggest removing the timeout as
a separate patch, as it is not related to the bulk allocator, in case
someone does encounter a high CPU usage problem and bisects it to the
patch that uses the bulk allocator for the first time.
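As a sketch of the alternative (same placeholder helper as above, and
assuming the array starts empty), falling back to the single-page
allocator covers the premature-failure case without the fixed sleep:

/*
 * Sketch only: when the optimistic bulk call comes back short,
 * fall back to alloc_page(), which does walk the zonelist and can
 * enter reclaim, instead of sleeping for a fixed interval.
 */
static void refill_all(struct page **pages, unsigned int want)
{
	unsigned int filled;

	/* Optimistic first pass with the (placeholder) bulk helper. */
	filled = svc_alloc_pages_bulk(GFP_KERNEL, want, pages);

	while (filled < want) {
		struct page *page = alloc_page(GFP_KERNEL);

		if (!page) {
			/* Genuine pressure: yield, do not sleep. */
			cond_resched();
			continue;
		}
		pages[filled++] = page;
	}
}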
--
Mel Gorman
SUSE Labs