From: Huang Ying <ying.huang@intel.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org,
Arjan Van De Ven <arjan@linux.intel.com>,
Huang Ying <ying.huang@intel.com>,
Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@techsingularity.net>,
Vlastimil Babka <vbabka@suse.cz>,
David Hildenbrand <david@redhat.com>,
Johannes Weiner <jweiner@redhat.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Michal Hocko <mhocko@suse.com>,
Pavel Tatashin <pasha.tatashin@soleen.com>,
Matthew Wilcox <willy@infradead.org>,
Christoph Lameter <cl@linux.com>
Subject: [PATCH 01/10] mm, pcp: avoid to drain PCP when process exit
Date: Wed, 20 Sep 2023 14:18:47 +0800 [thread overview]
Message-ID: <20230920061856.257597-2-ying.huang@intel.com> (raw)
In-Reply-To: <20230920061856.257597-1-ying.huang@intel.com>
In commit f26b3fa04611 ("mm/page_alloc: limit number of high-order
pages on PCP during bulk free"), the PCP (Per-CPU Pageset) is drained
when the PCP is mostly used for freeing high-order pages, to improve
the reuse of cache-hot pages between the page-allocating and
page-freeing CPUs.

However, the PCP draining mechanism may be triggered unexpectedly
when a process exits.  With a customized trace point, it was found
that PCP draining (free_high == true) was triggered by an order-1
page freeing with the following call stack:
=> free_unref_page_commit
=> free_unref_page
=> __mmdrop
=> exit_mm
=> do_exit
=> do_group_exit
=> __x64_sys_exit_group
=> do_syscall_64
Checking the source code, this is the freeing of the page table
PGD (mm_free_pgd()).  It is an order-1 page freeing if
CONFIG_PAGE_TABLE_ISOLATION=y, which is a common configuration for
security.
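For context, on x86 the PGD becomes an order-1 allocation when page
table isolation is enabled; roughly (paraphrased from
arch/x86/include/asm/pgalloc.h, details may differ by kernel version
and architecture):

  #ifdef CONFIG_PAGE_TABLE_ISOLATION
  /*
   * With PTI, the user and kernel PGDs live in one order-1 (8k,
   * 8k-aligned) allocation, so freeing the PGD frees an order-1 page.
   */
  #define PGD_ALLOCATION_ORDER 1
  #else
  #define PGD_ALLOCATION_ORDER 0
  #endif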
Just before that, page freeing with the following call stack was
observed:
=> free_unref_page_commit
=> free_unref_page_list
=> release_pages
=> tlb_batch_pages_flush
=> tlb_finish_mmu
=> exit_mmap
=> __mmput
=> exit_mm
=> do_exit
=> do_group_exit
=> __x64_sys_exit_group
=> do_syscall_64
So, when a process exits,

- a large number of the process's user pages are freed without any
  page allocation, so it is highly likely that pcp->free_factor
  becomes > 0;

- after all user pages have been freed, the PGD is freed, which is an
  order-1 page freeing, so the PCP is drained.

All in all, when a process exits, it is highly likely that the PCP
will be drained.  This is unexpected behavior.

To avoid this, with this patch, PCP draining is only triggered after
two consecutive high-order page freeings.  This is done by recording
in a new PCP flag (PCPF_PREV_FREE_HIGH_ORDER) whether the previous
page freed on the CPU was a high-order page.
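In other words, a single stray high-order free (such as the exit-time
PGD free above) no longer drains the PCP on its own.  A minimal,
illustrative sketch of the decision rule implemented by the hunk in
free_unref_page_commit() below (not the kernel code itself; names are
for illustration only):

  #include <stdbool.h>

  /*
   * Illustration only: a drain is considered when bulk freeing is in
   * progress (pcp->free_factor != 0) and both the previous and the
   * current free are high-order (0 < order <= PAGE_ALLOC_COSTLY_ORDER).
   */
  bool consider_free_high(bool bulk_freeing,
                          bool prev_free_was_high_order,
                          bool cur_free_is_high_order)
  {
          return bulk_freeing && prev_free_was_high_order &&
                 cur_free_is_high_order;
  }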
On a 2-socket Intel server with 224 logical CPUs, we tested kbuild on
one socket with `make -j 112`.  With the patch, the build time
decreases by 3.4% (from 206s to 199s).  The cycles% of spinlock
contention (mostly for the zone lock) decreases from 43.6% to 40.3%
(with PCP size == 361).  The number of PCP drainings for high-order
page freeing (free_high) decreases by 50.8%.

This helps network workloads too, via reduced zone lock contention.
On a 2-socket Intel server with 128 logical CPUs, with the patch, the
network bandwidth of the UNIX (AF_UNIX) test case of the lmbench test
suite with 16 pairs of processes increases by 17.1%.  The cycles% of
spinlock contention (mostly for the zone lock) decreases from 50.0%
to 45.8%.  The number of PCP drainings for high-order page
freeing (free_high) decreases by 27.4%.  The cache miss rate stays at
0.3%.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
---
include/linux/mmzone.h | 5 ++++-
mm/page_alloc.c | 11 ++++++++---
2 files changed, 12 insertions(+), 4 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4106fbc5b4b3..64d5ed2bb724 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -676,12 +676,15 @@ enum zone_watermarks {
#define high_wmark_pages(z) (z->_watermark[WMARK_HIGH] + z->watermark_boost)
#define wmark_pages(z, i) (z->_watermark[i] + z->watermark_boost)

+#define PCPF_PREV_FREE_HIGH_ORDER 0x01
+
struct per_cpu_pages {
spinlock_t lock; /* Protects lists field */
int count; /* number of pages in the list */
int high; /* high watermark, emptying needed */
int batch; /* chunk size for buddy add/remove */
- short free_factor; /* batch scaling factor during free */
+ u8 flags; /* protected by pcp->lock */
+ u8 free_factor; /* batch scaling factor during free */
#ifdef CONFIG_NUMA
short expire; /* When 0, remote pagesets are drained */
#endif
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0c5be12f9336..828dcc24b030 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2370,7 +2370,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
{
int high;
int pindex;
- bool free_high;
+ bool free_high = false;

__count_vm_events(PGFREE, 1 << order);
pindex = order_to_pindex(migratetype, order);
@@ -2383,8 +2383,13 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
* freeing without allocation. The remainder after bulk freeing
* stops will be drained from vmstat refresh context.
*/
- free_high = (pcp->free_factor && order && order <= PAGE_ALLOC_COSTLY_ORDER);
-
+ if (order && order <= PAGE_ALLOC_COSTLY_ORDER) {
+ free_high = (pcp->free_factor &&
+ (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER));
+ pcp->flags |= PCPF_PREV_FREE_HIGH_ORDER;
+ } else if (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) {
+ pcp->flags &= ~PCPF_PREV_FREE_HIGH_ORDER;
+ }
high = nr_pcp_high(pcp, zone, free_high);
if (pcp->count >= high) {
free_pcppages_bulk(zone, nr_pcp_free(pcp, high, free_high), pcp, pindex);
--
2.39.2