From: Kent Overstreet <kent.overstreet@linux.dev>
To: NeilBrown <neilb@suse.de>
Cc: Matthew Wilcox <willy@infradead.org>,
	 Amir Goldstein <amir73il@gmail.com>,
	paulmck@kernel.org, lsf-pc@lists.linux-foundation.org,
	 linux-mm@kvack.org,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	 Jan Kara <jack@suse.cz>
Subject: Re: [Lsf-pc] [LSF/MM/BPF TOPIC] Reclamation interactions with RCU
Date: Thu, 29 Feb 2024 21:39:17 -0500
Message-ID: <l25hltgrr5epdzrdxsx6lgzvvtzfqxhnnvdru7vfjozdfhl4eh@xvl42xplak3u>
In-Reply-To: <170925937840.24797.2167230750547152404@noble.neil.brown.name>

On Fri, Mar 01, 2024 at 01:16:18PM +1100, NeilBrown wrote:
> On Thu, 29 Feb 2024, Matthew Wilcox wrote:
> > On Tue, Feb 27, 2024 at 09:19:47PM +0200, Amir Goldstein wrote:
> > > On Tue, Feb 27, 2024 at 8:56 PM Paul E. McKenney <paulmck@kernel.org> wrote:
> > > >
> > > > Hello!
> > > >
> > > > Recent discussions [1] suggest that greater mutual understanding between
> > > > memory reclaim on the one hand and RCU on the other might be in order.
> > > >
> > > > One possibility would be an open discussion.  If it would help, I would
> > > > be happy to describe how RCU reacts and responds to heavy load, along with
> > > > some ways that RCU's reactions and responses could be enhanced if needed.
> > > >
> > > 
> > > Adding fsdevel, as this should probably be a cross-track session.
> > 
> > Perhaps broaden this slightly.  On the THP Cabal call we just had a
> > conversation about the requirements on filesystems in the writeback
> > path.  We currently tell filesystem authors that the entire writeback
> > path must avoid allocating memory in order to prevent deadlock (or use
> > GFP_MEMALLOC).  Is this appropriate?  It's a lot of work to ensure that
> > writing pagecache back will not allocate memory in, e.g., the network
> > stack, the device driver, and any other layers the write must traverse.
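> > 
> > (For concreteness, a minimal sketch of the "or use GFP_MEMALLOC" option
> > via the memalloc_noreclaim API, which sets PF_MEMALLOC; the function and
> > the do_writeback() helper are made-up names, not real kernel code:)
> > 
> > #include <linux/sched/mm.h>
> > #include <linux/writeback.h>
> > 
> > static int example_writepages(struct address_space *mapping,
> >                               struct writeback_control *wbc)
> > {
> >         /*
> >          * PF_MEMALLOC is now set: every allocation below this point
> >          * may dip into the emergency reserves, as if it had passed
> >          * __GFP_MEMALLOC explicitly.
> >          */
> >         unsigned int noreclaim_flags = memalloc_noreclaim_save();
> >         int ret;
> > 
> >         ret = do_writeback(mapping, wbc);  /* hypothetical helper */
> > 
> >         memalloc_noreclaim_restore(noreclaim_flags);
> >         return ret;
> > }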
> > 
> > With the removal of ->writepage from vmscan, perhaps we can make
> > filesystem authors' lives easier by relaxing this requirement, as pagecache
> > should be cleaned long before we get to reclaiming it.
> > 
> > I don't think there's anything to be done about swapping anon memory.
> > We probably don't want to proactively write anon memory to swap, so by
> > the time we're in ->swap_rw we really are low on memory.
> > 
> > 
> 
> While we are considering revising mm rules, I would really like to
> revise the rule that GFP_KERNEL allocations are allowed to fail.
> I'm not at all sure that they ever do (except for large allocations, so
> maybe we could leave that exception in, or warn if large allocations
> are attempted without a MAY_FAIL flag).
> 
> Given that GFP_KERNEL can wait, and that the mm can kill off processes
> and drop caches to free memory, there should be no case where failure is
> needed: simply waiting will eventually result in success.  And if there
> is such a case, the machine is a goner anyway.
> 
> Once upon a time, user-space pages could not be ripped out of a process
> by the oom killer until the process actually exited, which meant that
> GFP_KERNEL allocations made by a process being oom-killed could not be
> allowed to block indefinitely in the allocator.  I *think* that isn't
> the case any more.
> 
> Insisting that GFP_KERNEL allocations never return NULL would allow us
> to remove a lot of untested error handling code....
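> 
> For concreteness, the pattern in question is the ordinary idiom below
> (a generic sketch, not any particular call site); under a never-fails
> rule the NULL check and its unwind path could simply be deleted:
> 
>         struct foo *foo = kmalloc(sizeof(*foo), GFP_KERNEL);
> 
>         if (!foo)               /* the untested branch in question */
>                 return -ENOMEM;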

If memcg ever gets enabled for all kernel-side allocations, we might
start seeing GFP_KERNEL allocations fail.
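
To be concrete, opt-in accounting already behaves that way today. A
sketch using the existing GFP_KERNEL_ACCOUNT (GFP_KERNEL | __GFP_ACCOUNT)
flag; the call site is generic, not real code:

        /*
         * __GFP_ACCOUNT charges the allocation to the current memcg; if
         * the cgroup is at its limit and memcg reclaim makes no progress,
         * the allocation can fail where plain GFP_KERNEL would have kept
         * retrying.
         */
        ptr = kmalloc(size, GFP_KERNEL_ACCOUNT);
        if (!ptr)
                return -ENOMEM;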

I've got better fault injection code coming; I'll be posting it right
after memory allocation profiling gets merged.  That'll help with the
testing situation.
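
(For reference, the fault injection mechanism the kernel already has; a
sketch of the usual fault_attr wiring, with made-up names, tunable via
debugfs. The new code mentioned above isn't shown here:)

        #include <linux/fault-inject.h>
        #include <linux/slab.h>

        static DECLARE_FAULT_ATTR(fail_example_alloc);

        static void *example_alloc(size_t size)
        {
                /* fault injection decides whether to force a failure */
                if (should_fail(&fail_example_alloc, size))
                        return NULL;
                return kmalloc(size, GFP_KERNEL);
        }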

The big blocker on enabling memcg for all kernel allocations is
performance overhead, but I hear that's getting worked on as well.

We'd probably want to add a gfp flag to annotate which allocations we
want to allow to fail because of memcg, though...
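
Something like this, purely hypothetical (no such flag exists today):

        /* hypothetical flag: only annotated allocations are allowed to
         * fail when the memcg limit is hit */
        ptr = kmalloc(size, GFP_KERNEL_ACCOUNT | __GFP_MEMCG_MAYFAIL);
        if (!ptr)
                return -ENOMEM;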

