From: Dave Chinner <david@fromorbit.com>
To: Goldwyn Rodrigues <rgoldwyn@suse.de>
Cc: Matthew Wilcox <willy@infradead.org>,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	penguin-kernel@i-love.sakura.ne.jp
Subject: Re: Removing GFP_NOFS
Date: Sat, 10 Mar 2018 09:38:12 +1100
Message-ID: <20180309223812.GW7000@dastard>
In-Reply-To: <e461128e-6724-3c7f-0f62-860ac4071357@suse.de>

On Fri, Mar 09, 2018 at 08:48:32AM -0600, Goldwyn Rodrigues wrote:
> 
> 
> On 03/08/2018 10:06 PM, Dave Chinner wrote:
> > On Fri, Mar 09, 2018 at 12:35:35PM +1100, Dave Chinner wrote:
> >> On Thu, Mar 08, 2018 at 03:46:18PM -0800, Matthew Wilcox wrote:
> >>>
> >>> Do we have a strategy for eliminating GFP_NOFS?
> >>>
> >>> As I understand it, our intent is to mark the areas in individual
> >>> filesystems that can't be reentered with memalloc_nofs_save()/restore()
> >>> pairs.  Once they're all done, then we can replace all the GFP_NOFS
> >>> users with GFP_KERNEL.
> >>
> >> Won't be that easy, I think.  We recently came across user-reported
> >> allocation deadlocks in XFS where we were doing allocation with
> >> pages held in the writeback state that lockdep has never triggered
> >> on.
> >>
> >> https://www.spinics.net/lists/linux-xfs/msg16154.html
> >>
> >> IOWs, GFP_NOFS isn't a solid guide to where
> >> memalloc_nofs_save/restore need to cover in the filesystems because
> >> there's a surprising amount of code that isn't covered by existing
> >> lockdep annotations to warn us about unintended recursion
> >> problems.
> >>
> >> I think we need to start with some documentation of all the generic
> >> rules for where these will need to be set, then the per-filesystem
> >> rules can be added on top of that...
> > 
> > So thinking a bit further here:
> > 
> > * page writeback state gets set and held:
> > 	->writepage should be under memalloc_nofs_save
> > 	->writepages should be under memalloc_nofs_save
> > * page cache write path is often under AOP_FLAG_NOFS
> > 	- should probably be under memalloc_nofs_save
> > * metadata writeback that uses page cache and page writeback flags
> >   should probably be under memalloc_nofs_save
> > 
> > What other generic code paths are susceptible to allocation
> > deadlocks?
> > 
> 
> AFAIU, these are callbacks into the filesystem from the mm code which
> are executed in case of low memory.

Except that many filesystems reject such attempts at writeback from
direct reclaim because they are a problem:

        /*
         * Refuse to write the page out if we are called from reclaim context.
         *
         * This avoids stack overflows when called from deeply used stacks in
         * random callers for direct reclaim or memcg reclaim.  We explicitly
         * allow reclaim from kswapd as the stack usage there is relatively low.
         *
         * This should never happen except in the case of a VM regression so
         * warn about it.
         */
        if (WARN_ON_ONCE((current->flags & (PF_MEMALLOC|PF_KSWAPD)) ==
                        PF_MEMALLOC))
                goto redirty;

But writeback is also called on demand - the filemap_fdatawrite() and
friends interfaces. This means it can be called from anywhere in
the kernel....

> So, the calls of memory allocation
> from filesystem code are the ones that should be under
> memalloc_nofs_save() in order to avoid recursion.

I don't think you understand the problem here - the problem is not
recursing into the writeback path - it's being in the writeback path
and doing an allocation that then triggers memory reclaim which then
recurses into the same filesystem while we hold pages in writeback
state.

i.e. the writeback path is a source of allocation deadlocks no matter
where it is called from.

> OTOH (contradicting myself here), writepages (in essence, writeback) is
> performed by per-BDI flusher threads which are kicked by the mm code in
> low-memory situations, as opposed to the thread performing the allocation.
> 
> As Tetsuo pointed out, direct reclaims are the real problematic scenarios.

Sure, but I've been saying for more than 10 years we need to get rid
of direct reclaim because it's horribly inefficient when there's
lots of concurrent allocation pressure, not to mention it's full of
deadlock scenarios like this.

Really, though, I'm tired of having the same arguments over and over
again about architectural problems that people just don't seem to
understand or want to fix.

> Also the shrinkers registered by filesystem code. However, there are no
> shrinkers that I know of which allocate memory or perform locking,
> thanks to smartly swapping entries onto a temporary local list variable.

Go look at the XFS shrinkers that will lock inodes, dquots, buffers,
etc, run transactions, issue IO, block waiting for IO to complete,
etc.

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

Thread overview: 7+ messages
2018-03-08 23:46 Matthew Wilcox
2018-03-09  1:35 ` Dave Chinner
2018-03-09  4:06   ` Dave Chinner
2018-03-09 11:14     ` Tetsuo Handa
2018-03-09 14:48     ` Goldwyn Rodrigues
2018-03-09 22:38       ` Dave Chinner [this message]
2018-03-10  2:44         ` Tetsuo Handa
