From: Ryan Roberts <ryan.roberts@arm.com>
To: Yin Fengwei <fengwei.yin@intel.com>,
	linux-mm@kvack.org, akpm@linux-foundation.org,
	willy@infradead.org, yuzhao@google.com
Subject: Re: [PATCH 0/2] Reduce lock contention related with large folio
Date: Mon, 17 Apr 2023 11:33:06 +0100
Message-ID: <34bf85ce-681b-ebb0-de31-7afc7aa9c5b2@arm.com>
In-Reply-To: <20230417075643.3287513-1-fengwei.yin@intel.com>

On 17/04/2023 08:56, Yin Fengwei wrote:
> Ryan tried to enable large folios for anonymous mappings [1].
> 
> Unlike large folios for the page cache, which don't trigger frequent page
> allocation/free, large folios for anonymous mappings are allocated and
> freed much more frequently, so they expose some lock contention.
> 
> Ryan mentioned the deferred queue lock in [1]. We also hit contention
> on two other locks: the lru lock and the zone lock.
> 
> This series tries to mitigate the deferred queue lock contention and
> reduce the lru lock contention to some extent.
> 
> Patch 1 reduces the deferred queue lock contention by not acquiring the
> queue lock when checking whether the folio is on the deferred list. The
> page_fault1 test of will-it-scale showed a 60% reduction in deferred
> queue lock contention.
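
For anyone skimming the thread, here is a rough, userspace-only sketch of
the idea as I read it. The type and function names below (folio_stub,
deferred_queue, folio_remove_from_deferred) are made up for illustration
and are not the kernel code; the real change is in mm/huge_memory.c per
the diffstat.

/*
 * Illustrative double-checked pattern: skip the contended lock entirely
 * when the folio was never queued for deferred split, and re-check under
 * the lock before deleting.
 */
#include <pthread.h>
#include <stdbool.h>

struct list_head { struct list_head *next, *prev; };

static inline void list_init(struct list_head *h) { h->next = h->prev = h; }
static inline bool list_empty(const struct list_head *h) { return h->next == h; }
static inline void list_del_init(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
	list_init(e);
}

struct deferred_queue {
	pthread_spinlock_t lock;
	long len;
};

struct folio_stub {
	struct list_head deferred_list;	/* stands in for folio->_deferred_list */
};

/* Called on the folio free path, where we hold the last reference. */
void folio_remove_from_deferred(struct deferred_queue *q, struct folio_stub *f)
{
	/*
	 * Unlocked fast path: a folio that was never added to the deferred
	 * split queue has an empty list node, and nobody else can queue it
	 * while we are freeing it, so the check is stable without the lock.
	 */
	if (list_empty(&f->deferred_list))
		return;

	pthread_spin_lock(&q->lock);
	/* Re-check under the lock in case a racing path already removed it. */
	if (!list_empty(&f->deferred_list)) {
		list_del_init(&f->deferred_list);
		q->len--;
	}
	pthread_spin_unlock(&q->lock);
}

The win comes from the common case, where the folio was never queued for
deferred split, no longer touching the shared lock cacheline at all.
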
> 
> Patch 2 reduces the lru lock contention by allowing large folios to be
> added to the lru list in batches. The page_fault1 test of will-it-scale
> showed a 20% reduction in lru lock contention.
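
Same disclaimer as above: a rough userspace sketch of the batching idea,
with made-up names (lru_list, lru_batch, folio_stub, BATCH_SIZE); the
actual change is in include/linux/pagevec.h and mm/swap.c per the
diffstat.

#include <pthread.h>

#define BATCH_SIZE 15	/* illustrative; kernel pagevecs have their own size */

struct folio_stub {
	unsigned int nr_pages;	/* 1 for a base page, >1 for a large folio */
};

struct lru_batch {
	unsigned int nr;
	struct folio_stub *folios[BATCH_SIZE];
};

struct lru_list {
	pthread_mutex_t lock;
	/* the list itself is omitted; only the locking pattern matters here */
};

/* Flush the whole batch under a single lock acquisition. */
static void lru_batch_flush(struct lru_list *lru, struct lru_batch *b)
{
	pthread_mutex_lock(&lru->lock);
	for (unsigned int i = 0; i < b->nr; i++) {
		/* splice b->folios[i] onto the lru list here */
	}
	b->nr = 0;
	pthread_mutex_unlock(&lru->lock);
}

/*
 * Add a folio, large or not, to the per-CPU batch. The lru lock is only
 * taken when the batch fills up, instead of once per large folio.
 */
void lru_batch_add(struct lru_list *lru, struct lru_batch *b,
		   struct folio_stub *folio)
{
	b->folios[b->nr++] = folio;
	if (b->nr == BATCH_SIZE)
		lru_batch_flush(lru, b);
}

As I understand it, large folios previously forced an immediate flush and
so took the lru lock once per folio; letting them sit in the batch like
base pages is where the ~20% contention reduction comes from.
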
> 
> The zone lock contention happens on the large folio free path. It is
> related to commit f26b3fa04611 ("mm/page_alloc: limit number of
> high-order pages on PCP during bulk free") and is not addressed by this
> series.

I applied this series on top of mine and did some quick perf tests. See
https://lore.kernel.org/linux-mm/d9987135-3a8a-e22c-13f9-506d3249332b@arm.com/.
The change is certainly reducing time spent in the kernel, but there are other
problems I'll need to investigate. So:

Tested-by: Ryan Roberts <ryan.roberts@arm.com>

Thanks,
Ryan

> 
> 
> [1]
> https://lore.kernel.org/linux-mm/20230414130303.2345383-1-ryan.roberts@arm.com/
> 
> Yin Fengwei (2):
>   THP: avoid lock when check whether THP is in deferred list
>   lru: allow large batched add large folio to lru list
> 
>  include/linux/pagevec.h | 19 +++++++++++++++++--
>  mm/huge_memory.c        | 19 ++++++++++++++++---
>  mm/swap.c               |  3 +--
>  3 files changed, 34 insertions(+), 7 deletions(-)
> 


