From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 31 Mar 2021 12:01:37 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Thomas Gleixner
Cc: Linux-MM, Linux-RT-Users, LKML, Chuck Lever,
	Jesper Dangaard Brouer, Matthew Wilcox,
	Sebastian Andrzej Siewior
Subject: Re: [PATCH 2/6] mm/page_alloc: Convert per-cpu list protection to local_lock
Message-ID: <20210331110137.GA3697@techsingularity.net>
References: <20210329120648.19040-1-mgorman@techsingularity.net>
	<20210329120648.19040-3-mgorman@techsingularity.net>
	<877dln640j.ffs@nanos.tec.linutronix.de>
In-Reply-To: <877dln640j.ffs@nanos.tec.linutronix.de>

On Wed, Mar 31, 2021 at 11:55:56AM +0200, Thomas Gleixner wrote:
> On Mon, Mar 29 2021 at 13:06, Mel Gorman wrote:
> > There is a lack of clarity of what exactly local_irq_save/local_irq_restore
> > protects in page_alloc.c. It conflates the protection of per-cpu page
> > allocation structures with per-cpu vmstat deltas.
> >
> > This patch protects the PCP structure using local_lock which
> > for most configurations is identical to IRQ enabling/disabling.
> > The scope of the lock is still wider than it should be but this is
> > decreased in later patches. The per-cpu vmstat deltas are protected by
> > preempt_disable/preempt_enable where necessary instead of relying on
> > IRQ disable/enable.
>
> Yes, this goes into the right direction and I really appreciate the
> scoped protection for clarity sake.
>

Thanks.

> > #ifdef CONFIG_MEMORY_HOTREMOVE
> > diff --git a/mm/vmstat.c b/mm/vmstat.c
> > index 8a8f1a26b231..01b74ff73549 100644
> > --- a/mm/vmstat.c
> > +++ b/mm/vmstat.c
> > @@ -887,6 +887,7 @@ void cpu_vm_stats_fold(int cpu)
> >
> >  		pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
> >
> > +		preempt_disable();
>
> What's the reason for the preempt_disable() here? A comment would be
> appreciated.
>

Very good question because it's protecting vm_stat_diff and
vm_numa_stat_diff in different contexts and not quite correctly at this
point in the series. By the end of the series, vm_numa_stat_diff is a
simple counter and does not need special protection.

Right now, it's protecting against a read and clear of vm_stat_diff in
two contexts -- cpu_vm_stats_fold and drain_zonestat -- but it's only
defensive. cpu_vm_stats_fold is only called when a CPU is going dead
and drain_zonestat is called from memory hotplug context. The
protection is necessary only if a new drain_zonestat caller were added
without taking the RMW of vm_stat_diff into account, which may never
happen.
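To make that window concrete, here is a minimal userspace sketch of the
two read-and-clear patterns (an illustration with invented names using
C11 atomics, not the kernel code; "counter" stands in for a
pzstats->vm_stat_diff[] entry):

#include <stdatomic.h>
#include <stdio.h>

static atomic_long counter;	/* stands in for a vm_stat_diff[] entry */
static atomic_long total;	/* stands in for zone->vm_stat[] */

/*
 * Racy pattern: read and clear are two separate steps. If the folder
 * is preempted after the load, an update that lands before the store
 * of 0 is wiped out and never added to the total.
 */
static void fold_racy(void)
{
	long v = atomic_load(&counter);
	/* an update landing here would be lost by the store below */
	atomic_store(&counter, 0);
	atomic_fetch_add(&total, v);
}

/*
 * xchg pattern: read-and-clear is a single atomic step, so there is
 * no window in which a concurrent update can be discarded.
 */
static void fold_xchg(void)
{
	long v = atomic_exchange(&counter, 0);
	atomic_fetch_add(&total, v);
}

int main(void)
{
	atomic_fetch_add(&counter, 5);
	fold_racy();
	atomic_fetch_add(&counter, 5);
	fold_xchg();
	printf("total folded: %ld\n", atomic_load(&total));	/* prints 10 */
	return 0;
}

The preempt_disable() above closes that window for the racy pattern by
preventing the folder and an updater from interleaving on the same CPU;
an atomic exchange removes the window entirely.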
This whole problem with preemption could be avoided altogether if
this_cpu_xchg was used, similar to what is done elsewhere in vmstat,
so.... this?

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 64429ca4957f..9528304ce24d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8969,8 +8969,9 @@ void zone_pcp_reset(struct zone *zone)
 	struct per_cpu_zonestat *pzstats;
 
 	/*
-	 * No race with drain_pages. drain_zonestat disables preemption
-	 * and drain_pages relies on the pcp local_lock.
+	 * No race with drain_pages. drain_zonestat is only concerned with
+	 * vm_*_stat_diff which is updated with this_cpu_xchg and drain_pages
+	 * only cares about the PCP lists protected by local_lock.
 	 */
 	if (zone->per_cpu_pageset != &boot_pageset) {
 		for_each_online_cpu(cpu) {
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 01b74ff73549..34ff61a145d2 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -887,13 +887,11 @@ void cpu_vm_stats_fold(int cpu)
 
 		pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
 
-		preempt_disable();
 		for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
 			if (pzstats->vm_stat_diff[i]) {
 				int v;
 
-				v = pzstats->vm_stat_diff[i];
-				pzstats->vm_stat_diff[i] = 0;
+				v = this_cpu_xchg(pzstats->vm_stat_diff[i], 0);
 				atomic_long_add(v, &zone->vm_stat[i]);
 				global_zone_diff[i] += v;
 			}
@@ -903,13 +901,11 @@ void cpu_vm_stats_fold(int cpu)
 			if (pzstats->vm_numa_stat_diff[i]) {
 				int v;
 
-				v = pzstats->vm_numa_stat_diff[i];
-				pzstats->vm_numa_stat_diff[i] = 0;
+				v = this_cpu_xchg(pzstats->vm_numa_stat_diff[i], 0);
 				atomic_long_add(v, &zone->vm_numa_stat[i]);
 				global_numa_diff[i] += v;
 			}
 #endif
-		preempt_enable();
 	}
 
 	for_each_online_pgdat(pgdat) {
@@ -943,10 +939,9 @@ void drain_zonestat(struct zone *zone, struct per_cpu_zonestat *pzstats)
 {
 	int i;
 
-	preempt_disable();
 	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
 		if (pzstats->vm_stat_diff[i]) {
-			int v = pzstats->vm_stat_diff[i];
+			int v = this_cpu_xchg(pzstats->vm_stat_diff[i], 0);
 
 			pzstats->vm_stat_diff[i] = 0;
 			atomic_long_add(v, &zone->vm_stat[i]);
 			atomic_long_add(v, &vm_zone_stat[i]);
 		}
@@ -955,14 +950,12 @@ void drain_zonestat(struct zone *zone, struct per_cpu_zonestat *pzstats)
 
 #ifdef CONFIG_NUMA
 	for (i = 0; i < NR_VM_NUMA_STAT_ITEMS; i++)
 		if (pzstats->vm_numa_stat_diff[i]) {
-			int v = pzstats->vm_numa_stat_diff[i];
+			int v = this_cpu_xchg(pzstats->vm_numa_stat_diff[i], 0);
 
-			pzstats->vm_numa_stat_diff[i] = 0;
 			atomic_long_add(v, &zone->vm_numa_stat[i]);
 			atomic_long_add(v, &vm_numa_stat[i]);
 		}
 #endif
-	preempt_enable();
 }
 #endif

-- 
Mel Gorman
SUSE Labs