From: Michal Hocko <mhocko@suse.com>
To: mateusznosek0@gmail.com
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	akpm@linux-foundation.org
Subject: Re: [RFC PATCH] mm/page_alloc.c: micro-optimization reduce oom critical section size
Date: Mon, 14 Sep 2020 16:22:33 +0200
Message-ID: <20200914142233.GT16999@dhcp22.suse.cz>
In-Reply-To: <20200914100654.21746-1-mateusznosek0@gmail.com>

On Mon 14-09-20 12:06:54, mateusznosek0@gmail.com wrote:
> From: Mateusz Nosek <mateusznosek0@gmail.com>
> 
> Most operations in '__alloc_pages_may_oom' do not require holding the
> oom_lock; the exception is 'out_of_memory'. This patch refactors
> '__alloc_pages_may_oom' to reduce the critical section size and improve
> overall system performance.

This is a real slow path. What is the point of optimizing it? Do you
have any numbers?

Also I am not convinced the patch is entirely safe. At least the last
allocation attempt is meant to be done under the lock to allow only one
task to perform it. I have forgotten the complete reasoning behind that,
but at the very least the changelog should argue why the change is ok.
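
For reference, the serialization the current code relies on looks
roughly like this (a simplified sketch of the pre-patch flow, not the
verbatim mm/page_alloc.c code):

	/*
	 * Whoever wins the trylock becomes the single task allowed to
	 * proceed; everybody else backs off, reports progress and
	 * retries, assuming the lock holder acts on their behalf.
	 */
	if (!mutex_trylock(&oom_lock)) {
		*did_some_progress = 1;
		schedule_timeout_uninterruptible(1);
		return NULL;
	}

	/*
	 * The last allocation attempt is made while holding oom_lock,
	 * so it can observe memory freed by a previous lock holder's
	 * OOM kill before deciding to kill again.
	 */
	page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
				      ~__GFP_DIRECT_RECLAIM, order,
				      ALLOC_WMARK_HIGH|ALLOC_CPUSET, ac);
	if (page)
		goto out;
	/* ... bail-out checks ... */
	if (out_of_memory(&oc) || WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL))
		*did_some_progress = 1;
out:
	mutex_unlock(&oom_lock);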

> Signed-off-by: Mateusz Nosek <mateusznosek0@gmail.com>
> ---
>  mm/page_alloc.c | 45 ++++++++++++++++++++++++---------------------
>  1 file changed, 24 insertions(+), 21 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index b9bd75cacf02..b07f950a5825 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3935,18 +3935,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
>  		.order = order,
>  	};
>  	struct page *page;
> -
> -	*did_some_progress = 0;
> -
> -	/*
> -	 * Acquire the oom lock.  If that fails, somebody else is
> -	 * making progress for us.
> -	 */
> -	if (!mutex_trylock(&oom_lock)) {
> -		*did_some_progress = 1;
> -		schedule_timeout_uninterruptible(1);
> -		return NULL;
> -	}
> +	bool success;
>  
>  	/*
>  	 * Go through the zonelist yet one more time, keep very high watermark
> @@ -3959,14 +3948,17 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
>  				      ~__GFP_DIRECT_RECLAIM, order,
>  				      ALLOC_WMARK_HIGH|ALLOC_CPUSET, ac);
>  	if (page)
> -		goto out;
> +		return page;
> +
> +	/* Check if somebody else is making progress for us. */
> +	*did_some_progress = mutex_is_locked(&oom_lock);
>  
>  	/* Coredumps can quickly deplete all memory reserves */
>  	if (current->flags & PF_DUMPCORE)
> -		goto out;
> +		return NULL;
>  	/* The OOM killer will not help higher order allocs */
>  	if (order > PAGE_ALLOC_COSTLY_ORDER)
> -		goto out;
> +		return NULL;
>  	/*
>  	 * We have already exhausted all our reclaim opportunities without any
>  	 * success so it is time to admit defeat. We will skip the OOM killer
> @@ -3976,12 +3968,12 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
>  	 * The OOM killer may not free memory on a specific node.
>  	 */
>  	if (gfp_mask & (__GFP_RETRY_MAYFAIL | __GFP_THISNODE))
> -		goto out;
> +		return NULL;
>  	/* The OOM killer does not needlessly kill tasks for lowmem */
>  	if (ac->highest_zoneidx < ZONE_NORMAL)
> -		goto out;
> +		return NULL;
>  	if (pm_suspended_storage())
> -		goto out;
> +		return NULL;
>  	/*
>  	 * XXX: GFP_NOFS allocations should rather fail than rely on
>  	 * other request to make a forward progress.
> @@ -3992,8 +3984,20 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
>  	 * failures more gracefully we should just bail out here.
>  	 */
>  
> +	/*
> +	 * Acquire the oom lock.  If that fails, somebody else is
> +	 * making progress for us.
> +	 */
> +	if (!mutex_trylock(&oom_lock)) {
> +		*did_some_progress = 1;
> +		schedule_timeout_uninterruptible(1);
> +		return NULL;
> +	}
> +	success = out_of_memory(&oc);
> +	mutex_unlock(&oom_lock);
> +
>  	/* Exhausted what can be done so it's blame time */
> -	if (out_of_memory(&oc) || WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL)) {
> +	if (success || WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL)) {
>  		*did_some_progress = 1;
>  
>  		/*
> @@ -4004,8 +4008,7 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
>  			page = __alloc_pages_cpuset_fallback(gfp_mask, order,
>  					ALLOC_NO_WATERMARKS, ac);
>  	}
> -out:
> -	mutex_unlock(&oom_lock);
> +
>  	return page;
>  }
>  
> -- 
> 2.20.1
> 
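
With the patch applied the ordering becomes (again a simplified sketch
of the diff above):

	/* The high watermark retry now happens outside the lock. */
	page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
				      ~__GFP_DIRECT_RECLAIM, order,
				      ALLOC_WMARK_HIGH|ALLOC_CPUSET, ac);
	if (page)
		return page;
	/* ... bail-out checks ... */
	if (!mutex_trylock(&oom_lock)) {
		*did_some_progress = 1;
		schedule_timeout_uninterruptible(1);
		return NULL;
	}
	success = out_of_memory(&oc);
	mutex_unlock(&oom_lock);

One scenario the old ordering plausibly covers: task A holds oom_lock
and kills a victim; task B has already failed its watermark retry, so
once A drops the lock B can acquire it and call out_of_memory() again
without noticing the memory A's victim just freed. The retry under the
lock is what used to catch that. If that reasoning no longer applies,
the changelog should explain why.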

-- 
Michal Hocko
SUSE Labs

