From: peter enderborg <peter.enderborg@sony.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	David Rientjes <rientjes@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>, <linux-mm@kvack.org>,
	<linux-kernel@vger.kernel.org>
Subject: Re: [patch] mm, oom: stop reclaiming if GFP_ATOMIC will start failing soon
Date: Mon, 27 Apr 2020 10:20:32 +0200
Message-ID: <7726e8a8-8390-cee8-3480-4e68bf26f08a@sony.com>
In-Reply-To: <20200425172706.26b5011293e8dc77b1dccaf3@linux-foundation.org>

On 4/26/20 2:27 AM, Andrew Morton wrote:
> On Fri, 24 Apr 2020 13:48:06 -0700 (PDT) David Rientjes <rientjes@google.com> wrote:
>
>> If GFP_ATOMIC allocations will start failing soon because the amount of
>> free memory is substantially under per-zone min watermarks, it is better
>> to oom kill a process rather than continue to reclaim.
>>
>> This intends to significantly reduce the number of page allocation
>> failures that are encountered when the demands of user and atomic
>> allocations overwhelm the ability of reclaim to keep up.  We can see this
>> with a high ingress of networking traffic where memory allocated in irq
>> context can overwhelm the ability to reclaim fast enough such that user
>> memory consistently loops.  In that case, we have reclaimable memory, and
> "user memory allocation", I assume?  Or maybe "blockable memory
> allocations".
>
>> reclaiming is successful, but we've fully depleted memory reserves that
>> are allowed for non-blockable allocations.
>>
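As background for why the reserves get depleted: GFP_ATOMIC allocations
are allowed to dip below the min watermark. Below is a simplified
user-space sketch of the __zone_watermark_ok() logic from kernels of
this era (lowmem_reserve and order handling omitted; illustrative, not
the exact kernel code), fed with the Node 0 numbers from the dump
further down:

/* Simplified sketch of __zone_watermark_ok(): __GFP_HIGH callers such
 * as GFP_ATOMIC may use half of the min-watermark reserve, and
 * ALLOC_HARDER callers another quarter of what remains. */
#include <stdbool.h>
#include <stdio.h>

#define ALLOC_HIGH   0x1   /* __GFP_HIGH, e.g. GFP_ATOMIC     */
#define ALLOC_HARDER 0x2   /* atomic or rt callers try harder */

static bool zone_watermark_ok(unsigned long free_pages,
                              unsigned long mark, int alloc_flags)
{
    unsigned long min = mark;

    if (alloc_flags & ALLOC_HIGH)
        min -= min / 2;            /* may use half the reserve      */
    if (alloc_flags & ALLOC_HARDER)
        min -= min / 4;            /* and a quarter of what is left */

    return free_pages > min;
}

int main(void)
{
    /* Node 0 Normal below: free 87356kB, min 221984kB, 4KiB pages. */
    unsigned long free = 87356 / 4, min = 221984 / 4;

    printf("blockable ok:  %d\n", zone_watermark_ok(free, min, 0));
    printf("GFP_ATOMIC ok: %d\n",
           zone_watermark_ok(free, min, ALLOC_HIGH | ALLOC_HARDER));
    return 0;
}

With free at 87356kB against a min of 221984kB, a blockable allocation
already fails the check while GFP_ATOMIC still barely passes, which is
the nearly-depleted-reserves situation described here.
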
>> Commit 400e22499dd9 ("mm: don't warn about allocations which stall for
>> too long") removed evidence of user allocations stalling because of this,
>> but the situation can apply anytime we get "page allocation failures"
>> where reclaim is happening but per-zone min watermarks are starved:
>>
>> Node 0 Normal free:87356kB min:221984kB low:416984kB high:611984kB active_anon:123009936kB inactive_anon:67647652kB active_file:429612kB inactive_file:209980kB unevictable:112348kB writepending:260kB present:198180864kB managed:195027624kB mlocked:81756kB kernel_stack:24040kB pagetables:11460kB bounce:0kB free_pcp:940kB local_pcp:96kB free_cma:0kB
>> lowmem_reserve[]: 0 0 0 0
>> Node 1 Normal free:105616kB min:225568kB low:423716kB high:621864kB active_anon:122124196kB inactive_anon:74112696kB active_file:39172kB inactive_file:103696kB unevictable:204480kB writepending:180kB present:201326592kB managed:198174372kB mlocked:204480kB kernel_stack:11328kB pagetables:3680kB bounce:0kB free_pcp:1140kB local_pcp:0kB free_cma:0kB
>> lowmem_reserve[]: 0 0 0 0
>>
>> Without this patch, there is no guarantee that user memory allocations
>> will ever be successful when non-blockable allocations overwhelm the
>> ability to get above per-zone min watermarks.
>>
>> This doesn't solve page allocation failures entirely since it's a
>> preemptive measure based on watermarks that requires concurrent blockable
>> allocations to trigger the oom kill.  To completely solve page allocation
>> failures, it would be possible to do the same watermark check for non-
>> blockable allocations and then queue a worker to asynchronously oom kill
>> if it finds watermarks to be sufficiently low as well.
>>
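To make the proposed cutoff concrete, the decision being described
might look roughly like the sketch below. The function name and the
min/2 threshold are assumptions made here for illustration; they are
not lifted from the actual patch.

/* Illustrative user-space sketch: once free memory is far enough below
 * the per-zone min watermark in every eligible zone, stop retrying
 * reclaim and oom kill up front. */
#include <stdbool.h>
#include <stdio.h>

struct zone_state {
    unsigned long free_pages;    /* NR_FREE_PAGES          */
    unsigned long min_wmark;     /* per-zone min watermark */
};

static bool should_oom_instead_of_reclaim(const struct zone_state *z,
                                          int nr)
{
    for (int i = 0; i < nr; i++)
        if (z[i].free_pages >= z[i].min_wmark / 2)
            return false;        /* a zone still has usable reserves */
    return true;                 /* reserves nearly gone everywhere  */
}

int main(void)
{
    /* Page counts taken from the Node 0/1 Normal dumps above (kB/4). */
    struct zone_state zones[] = {
        { 87356 / 4, 221984 / 4 },     /* Node 0 Normal */
        { 105616 / 4, 225568 / 4 },    /* Node 1 Normal */
    };

    printf("oom instead of reclaim: %s\n",
           should_oom_instead_of_reclaim(zones, 2) ? "yes" : "no");
    return 0;
}

The point is only the shape of the check: once every eligible zone is
that far under its min watermark, further reclaim retries are unlikely
to outrun the atomic consumers, so the oom kill happens up front.
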
> Well, what's really going on here?
>
> Is networking potentially consuming an unbounded amount of memory?  If
> so, then killing a process will just cause networking to consume more
> memory and then hit the same thing again.  So presumably the answer is
> "no, the watermarks are inappropriately set for this workload".
>
> So would it not be sensible to dynamically adjust the watermarks in
> response to this condition?  Maintain a larger pool of memory for these
> allocations?  Or possibly push back on networking and tell it to reduce
> its queue sizes?  So that stuff doesn't keep on getting oom-killed?
>
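For reference on tuning: the watermarks in those dumps come out of
__setup_per_zone_wmarks(), driven by vm.min_free_kbytes and
vm.watermark_scale_factor. A simplified user-space rendering of that
arithmetic (single zone assumed, default watermark_scale_factor of 10)
approximately reproduces the Node 0 low/high values above:

/* Simplified single-zone rendering of __setup_per_zone_wmarks().
 * The real kernel splits min_free_kbytes across zones in proportion
 * to their managed pages; here the Node 0 per-zone min is fed in
 * directly for illustration. */
#include <stdio.h>

#define PAGE_SHIFT 12                              /* 4KiB pages */

int main(void)
{
    unsigned long min_kbytes    = 221984;          /* Node 0 min, kB      */
    unsigned long managed_pages = 195027624 >> 2;  /* managed kB -> pages */
    unsigned long scale_factor  = 10;              /* vm.watermark_scale_factor */

    unsigned long min = min_kbytes >> (PAGE_SHIFT - 10);   /* kB -> pages */
    unsigned long tmp = managed_pages * scale_factor / 10000;
    if (tmp < min / 4)
        tmp = min / 4;             /* kernel takes the larger of the two */

    printf("min:  %lukB\n", min << (PAGE_SHIFT - 10));
    printf("low:  %lukB\n", (min + tmp) << (PAGE_SHIFT - 10));
    printf("high: %lukB\n", (min + 2 * tmp) << (PAGE_SHIFT - 10));
    return 0;
}

So raising vm.min_free_kbytes (or vm.watermark_scale_factor) is the
existing knob for maintaining a larger pool, at the cost of keeping
that much memory permanently free.
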
I think I have seen similar issues when dma-buf allocates a lot, but that was on older kernels with out-of-tree code,
so networking is maybe not the only cause. dma-buf is used a lot for camera stuff in Android.


