From: Michal Hocko <mhocko@suse.com>
To: Suren Baghdasaryan <surenb@google.com>
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, pmladek@suse.com,
	peterz@infradead.org, guro@fb.com, shakeelb@google.com,
	minchan@kernel.org, timmurray@google.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH v3 1/1] mm: count time in drain_all_pages during direct reclaim as memory pressure
Date: Thu, 24 Feb 2022 09:53:11 +0100
Message-ID: <YhdHd+dXf91FP+K0@dhcp22.suse.cz>
In-Reply-To: <20220223194812.1299646-1-surenb@google.com>

On Wed 23-02-22 11:48:12, Suren Baghdasaryan wrote:
> When page allocation in direct reclaim path fails, the system will
> make one attempt to shrink per-cpu page lists and free pages from
> high alloc reserves. Draining per-cpu pages into buddy allocator can
> be a very slow operation because it's done using workqueues and the
> task in direct reclaim waits for all of them to finish before
> proceeding. Currently this time is not accounted as psi memory stall.
> 
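For reference, the waiting comes from the drain being funneled through the
per-cpu workqueue: a work item is queued on each CPU that has pages on its
pcp lists, and the direct reclaimer then flushes them one by one. A
simplified sketch of __drain_all_pages() as of v5.17 (the cpumask
construction and the mutex serializing concurrent callers are omitted):

static void __drain_all_pages(struct zone *zone, bool force_all_cpus)
{
	int cpu;

	/* Queue drain work on every CPU with populated pcp lists. */
	for_each_cpu(cpu, &cpus_with_pcps) {
		struct pcpu_drain *drain = per_cpu_ptr(&pcpu_drain, cpu);

		drain->zone = zone;
		INIT_WORK(&drain->work, drain_local_pages_wq);
		queue_work_on(cpu, mm_percpu_wq, &drain->work);
	}

	/*
	 * The direct reclaimer blocks here until every worker has run;
	 * on a loaded system this wait dominates the allocation latency.
	 */
	for_each_cpu(cpu, &cpus_with_pcps)
		flush_work(&per_cpu_ptr(&pcpu_drain, cpu)->work);
}
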
> While testing mobile devices under extreme memory pressure, when
> allocations are failing during direct reclaim, we noticed that psi
> events which would be expected in such conditions were not triggered.
> After profiling these cases it was determined that the reason for
> missing psi events was that a big chunk of time spent in direct
> reclaim is not accounted as memory stall, therefore psi would not
> reach the levels at which an event is generated. Further investigation
> revealed that the bulk of that unaccounted time was spent inside
> drain_all_pages call.
> 
> A typical captured case when the drain_all_pages path gets activated:
> 
> __alloc_pages_slowpath  took 44,644,613ns
>     __perform_reclaim   took    751,668ns (1.7%)
>     drain_all_pages     took 43,887,167ns (98.3%)

Although the draining is done in the slow path, these numbers suggest
that we should really reconsider the use of WQ both for draining and
other purposes (like vmstats).

> PSI in this case records the time spent in __perform_reclaim but
> ignores drain_all_pages, IOW it misses 98.3% of the time spent in
> __alloc_pages_slowpath.
> 
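To make the gap concrete, the pre-patch structure looks roughly like this
(reconstructed from the hunks below; the highatomic unreserve and the
in-tree comments are elided):

static struct page *
__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
			     unsigned int alloc_flags,
			     const struct alloc_context *ac,
			     unsigned long *did_some_progress)
{
	struct page *page = NULL;
	bool drained = false;

	/* psi_memstall_enter()/leave() only bracket this call: */
	*did_some_progress = __perform_reclaim(gfp_mask, order, ac);
	if (unlikely(!(*did_some_progress)))
		return NULL;

retry:
	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
	if (!page && !drained) {
		/* ~98% of the observed stall, invisible to psi: */
		drain_all_pages(NULL);
		drained = true;
		goto retry;
	}

	return page;
}
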
> Annotate __alloc_pages_direct_reclaim in its entirety so that delays
> from handling page allocation failure in the direct reclaim path are
> accounted as memory stall.
> 
> Reported-by: Tim Murray <timmurray@google.com>
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!

> ---
> changes in v3:
> - Moved psi_memstall_leave after the "out" label
> 
>  mm/page_alloc.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3589febc6d31..029bceb79861 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4595,13 +4595,12 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
>  					const struct alloc_context *ac)
>  {
>  	unsigned int noreclaim_flag;
> -	unsigned long pflags, progress;
> +	unsigned long progress;
>  
>  	cond_resched();
>  
>  	/* We now go into synchronous reclaim */
>  	cpuset_memory_pressure_bump();
> -	psi_memstall_enter(&pflags);
>  	fs_reclaim_acquire(gfp_mask);
>  	noreclaim_flag = memalloc_noreclaim_save();
>  
> @@ -4610,7 +4609,6 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
>  
>  	memalloc_noreclaim_restore(noreclaim_flag);
>  	fs_reclaim_release(gfp_mask);
> -	psi_memstall_leave(&pflags);
>  
>  	cond_resched();
>  
> @@ -4624,11 +4622,13 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>  		unsigned long *did_some_progress)
>  {
>  	struct page *page = NULL;
> +	unsigned long pflags;
>  	bool drained = false;
>  
> +	psi_memstall_enter(&pflags);
>  	*did_some_progress = __perform_reclaim(gfp_mask, order, ac);
>  	if (unlikely(!(*did_some_progress)))
> -		return NULL;
> +		goto out;
>  
>  retry:
>  	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
> @@ -4644,6 +4644,8 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
>  		drained = true;
>  		goto retry;
>  	}
> +out:
> +	psi_memstall_leave(&pflags);
>  
>  	return page;
>  }
> -- 
> 2.35.1.473.g83b2b277ed-goog
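
As an aside for anyone reproducing the missing-events observation: the
userspace side registers a trigger by writing a threshold to
/proc/pressure/memory and polling for POLLPRI, as described in
Documentation/accounting/psi.rst. A minimal sketch (the 150ms/1s
threshold is made up for illustration):

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	/* Wake up on >=150ms of memory stall time within any 1s window. */
	const char trig[] = "some 150000 1000000";
	struct pollfd fds;

	fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
	if (fds.fd < 0 || write(fds.fd, trig, strlen(trig) + 1) < 0) {
		perror("/proc/pressure/memory");
		return 1;
	}
	fds.events = POLLPRI;

	for (;;) {
		if (poll(&fds, 1, -1) < 0) {
			perror("poll");
			return 1;
		}
		if (fds.revents & POLLERR) {
			fprintf(stderr, "trigger fd error\n");
			return 1;
		}
		if (fds.revents & POLLPRI)
			printf("memory pressure event\n");
	}
}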

-- 
Michal Hocko
SUSE Labs


Thread overview: 5+ messages
2022-02-23 19:48 Suren Baghdasaryan
2022-02-24  7:10 ` Shakeel Butt
2022-02-24  8:53 ` Michal Hocko [this message]
2022-02-24 16:28   ` Suren Baghdasaryan
2022-02-25  1:31     ` Suren Baghdasaryan
