From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mail-pf0-f199.google.com (mail-pf0-f199.google.com [209.85.192.199])
	by kanga.kvack.org (Postfix) with ESMTP id 2CE2A6B025E
	for ; Tue, 19 Dec 2017 01:41:42 -0500 (EST)
Received: by mail-pf0-f199.google.com with SMTP id 3so14067912pfo.1
	for ; Mon, 18 Dec 2017 22:41:42 -0800 (PST)
Received: from mga02.intel.com (mga02.intel.com. [134.134.136.20])
	by mx.google.com with ESMTPS id k91si10570919pld.115.2017.12.18.22.41.40
	for (version=TLS1_2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 18 Dec 2017 22:41:41 -0800 (PST)
From: Kemi Wang
Subject: [PATCH v2 3/5] mm: enlarge NUMA counters threshold size
Date: Tue, 19 Dec 2017 14:39:24 +0800
Message-Id: <1513665566-4465-4-git-send-email-kemi.wang@intel.com>
In-Reply-To: <1513665566-4465-1-git-send-email-kemi.wang@intel.com>
References: <1513665566-4465-1-git-send-email-kemi.wang@intel.com>
Sender: owner-linux-mm@kvack.org
List-ID:
To: Greg Kroah-Hartman, Andrew Morton, Michal Hocko, Vlastimil Babka,
	Mel Gorman, Johannes Weiner, Christopher Lameter, YASUAKI ISHIMATSU,
	Andrey Ryabinin, Nikolay Borisov, Pavel Tatashin, David Rientjes,
	Sebastian Andrzej Siewior
Cc: Dave, Andi Kleen, Tim Chen, Jesper Dangaard Brouer, Ying Huang,
	Aaron Lu, Aubrey Li, Kemi Wang, Linux MM, Linux Kernel

We have seen significant overhead from cache bouncing caused by NUMA
counter updates in multi-threaded page allocation. See commit 1d90ca897cb0
("mm: update NUMA counter threshold size") for details.

This patch raises the threshold for the NUMA counters to a fixed size of
(S16_MAX - 2), while the remaining node page stats keep using the existing
per-CPU stat_threshold when folding into the global counters.
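For illustration, the deferred-update scheme being tuned here can be
modelled in user space as below. This is only a sketch: struct pcp_counter
and fold_inc() are made-up names standing in for the kernel's per-CPU
vm_node_stat_diff and __inc_node_state(), and the threshold is passed as a
parameter so both the small per-CPU stat_threshold and the enlarged
VM_NUMA_STAT_THRESHOLD cases can be exercised.

```c
#include <stdint.h>

#define S16_MAX 32767
/* The fixed, enlarged threshold this patch applies to NUMA counters. */
#define VM_NUMA_STAT_THRESHOLD (S16_MAX - 2)

/* Toy stand-in for one CPU's view of a node counter (not a kernel API). */
struct pcp_counter {
	int16_t diff;	/* per-CPU delta, folded into global on overflow */
	long global;	/* node-wide counter, expensive to touch */
};

/*
 * Increment the cheap per-CPU delta; only when it crosses the threshold
 * t is the shared global counter touched. Overstepping by t/2 and
 * leaving the delta at -t/2 spreads out subsequent folds, so a larger t
 * directly means fewer writes to the cache line shared between CPUs.
 */
static void fold_inc(struct pcp_counter *c, int16_t t)
{
	int16_t v = ++c->diff;

	if (v > t) {
		int16_t overstep = t >> 1;

		c->global += v + overstep;
		c->diff = -overstep;
	}
}
```

With t = VM_NUMA_STAT_THRESHOLD, roughly 32K increments accumulate locally
before the shared cache line is written once, which is the cache-bouncing
reduction the patch is after; the trade-off is that the global counter can
lag the true value by up to the threshold per CPU.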
Signed-off-by: Kemi Wang
---
 mm/vmstat.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 9c681cc..64e08ae 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -30,6 +30,8 @@
 #include "internal.h"

+#define VM_NUMA_STAT_THRESHOLD (S16_MAX - 2)
+
 #ifdef CONFIG_NUMA
 int sysctl_vm_numa_stat = ENABLE_NUMA_STAT;
@@ -394,7 +396,11 @@ void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 	s16 v, t;

 	v = __this_cpu_inc_return(*p);
-	t = __this_cpu_read(pcp->stat_threshold);
+	if (item >= NR_VM_NUMA_STAT_ITEMS)
+		t = __this_cpu_read(pcp->stat_threshold);
+	else
+		t = VM_NUMA_STAT_THRESHOLD;
+
 	if (unlikely(v > t)) {
 		s16 overstep = t >> 1;
@@ -549,7 +555,10 @@ static inline void mod_node_state(struct pglist_data *pgdat,
 	 * Most of the time the thresholds are the same anyways
 	 * for all cpus in a node.
 	 */
-	t = this_cpu_read(pcp->stat_threshold);
+	if (item >= NR_VM_NUMA_STAT_ITEMS)
+		t = this_cpu_read(pcp->stat_threshold);
+	else
+		t = VM_NUMA_STAT_THRESHOLD;

 	o = this_cpu_read(*p);
 	n = delta + o;
--
2.7.4