From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 24 Sep 2019 16:48:46 +0200
From: Michal Hocko
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton,
	Oscar Salvador, Pavel Tatashin, Dan Williams, Thomas Gleixner
Subject: Re: [PATCH v1] mm/memory_hotplug: Don't take the cpu_hotplug_lock
Message-ID: <20190924144846.GA23050@dhcp22.suse.cz>
References: <20190924143615.19628-1-david@redhat.com>
In-Reply-To: <20190924143615.19628-1-david@redhat.com>

On Tue 24-09-19 16:36:15, David Hildenbrand wrote:
> Since commit 3f906ba23689 ("mm/memory-hotplug: switch locking to a percpu
> rwsem") we do a cpus_read_lock() in mem_hotplug_begin(). This was
> introduced to fix a potential deadlock between get_online_mems() and
> get_online_cpus() - the memory and cpu hotplug lock. The root issue was
> that build_all_zonelists() -> stop_machine() required the cpu hotplug lock:
> The reason is that memory hotplug takes the memory hotplug lock and
> then calls stop_machine() which calls get_online_cpus(). That's the
> reverse lock order to get_online_cpus(); get_online_mems(); in
> mm/slub_common.c
>
> So memory hotplug never really required any cpu lock itself, only
> stop_machine() and lru_add_drain_all() required it. Back then,
> stop_machine_cpuslocked() and lru_add_drain_all_cpuslocked() were used,
> as the cpu hotplug lock was now obtained in the caller.
>
> Since commit 11cd8638c37f ("mm, page_alloc: remove stop_machine from build
> all_zonelists"), the stop_machine_cpuslocked() call is gone.
> build_all_zonelists() no longer requires the cpu lock and no longer
> makes use of stop_machine().
>
> Since commit 9852a7212324 ("mm: drop hotplug lock from
> lru_add_drain_all()"), lru_add_drain_all() "Doesn't need any cpu hotplug
> locking because we do rely on per-cpu kworkers being shut down before our
> page_alloc_cpu_dead callback is executed on the offlined cpu." The
> lru_add_drain_all_cpuslocked() variant was removed.
>
> So there is nothing left that requires the cpu hotplug lock. The memory
> hotplug lock and the device hotplug lock are sufficient.

I would love to see this happen. The biggest offenders should be gone. I
really hated how those two locks have been conflated, which likely
resulted in some undocumented/unintended dependencies.

So, for now, I cannot really tell you whether the patch is correct. It
would really require a lot of testing, and I am not sure this is
reasonably reviewable. So please add some testing results (ideally with
cpu hotplug racing a lot with memory hotplug). Then I would be willing to
give this a try and see: first by keeping it in linux-next for a release
or two, and then eyes closed, fingers crossed and off into the wild. Do
we have a tag for that? Dared-by, maybe?

> Cc: Andrew Morton
> Cc: Oscar Salvador
> Cc: Michal Hocko
> Cc: Pavel Tatashin
> Cc: Dan Williams
> Cc: Thomas Gleixner
> Signed-off-by: David Hildenbrand
> ---
>
> RFC -> v1:
> - Reword and add more details on why the cpu hotplug lock was needed here
>   in the first place, and why we no longer require it.
>
> ---
>  mm/memory_hotplug.c | 2 --
>  1 file changed, 2 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index c3e9aed6023f..5fa30f3010e1 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -88,14 +88,12 @@ __setup("memhp_default_state=", setup_memhp_default_state);
>  
>  void mem_hotplug_begin(void)
>  {
> -	cpus_read_lock();
>  	percpu_down_write(&mem_hotplug_lock);
>  }
>  
>  void mem_hotplug_done(void)
>  {
>  	percpu_up_write(&mem_hotplug_lock);
> -	cpus_read_unlock();
>  }
>  
>  u64 max_mem_size = U64_MAX;
> -- 
> 2.21.0

-- 
Michal Hocko
SUSE Labs
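
To make the lock inversion described in the quoted commit message concrete,
here is a minimal user-space sketch; pthread rwlocks stand in for the
kernel's percpu rwsems, and memory_hotplug_path()/slab_path() are
illustrative names rather than actual kernel code (build with cc -pthread):

/*
 * Minimal sketch of the AB-BA lock ordering described above.  pthread
 * rwlocks stand in for the kernel's percpu rwsems; the two functions are
 * illustrative names, not actual kernel code.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t cpu_lock = PTHREAD_RWLOCK_INITIALIZER; /* cpu_hotplug_lock */
static pthread_rwlock_t mem_lock = PTHREAD_RWLOCK_INITIALIZER; /* mem_hotplug_lock */

/* Old memory hotplug path: memory lock first, then cpu lock via stop_machine(). */
static void *memory_hotplug_path(void *arg)
{
	(void)arg;
	pthread_rwlock_wrlock(&mem_lock);	/* mem_hotplug_begin() */
	pthread_rwlock_rdlock(&cpu_lock);	/* stop_machine() -> get_online_cpus() */
	/* ... build_all_zonelists() used to run here under stop_machine() ... */
	pthread_rwlock_unlock(&cpu_lock);
	pthread_rwlock_unlock(&mem_lock);
	return NULL;
}

/* Slab path (e.g. kmem_cache_create()): cpu lock first, then memory lock. */
static void *slab_path(void *arg)
{
	(void)arg;
	pthread_rwlock_rdlock(&cpu_lock);	/* get_online_cpus() */
	pthread_rwlock_rdlock(&mem_lock);	/* get_online_mems() */
	/* ... cache creation ... */
	pthread_rwlock_unlock(&mem_lock);
	pthread_rwlock_unlock(&cpu_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	/*
	 * The two paths acquire the locks in opposite order.  With plain
	 * readers this happens to make progress, but once a writer queues
	 * on cpu_lock (a CPU hotplug operation) the two threads can block
	 * each other; lockdep flags exactly this inversion in the kernel.
	 */
	pthread_create(&a, NULL, memory_hotplug_path, NULL);
	pthread_create(&b, NULL, slab_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	printf("both paths completed (the deadlock needs a queued cpu writer)\n");
	return 0;
}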
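
As a rough illustration of the kind of testing asked for above (cpu hotplug
racing with memory hotplug), a hypothetical sysfs-based stress sketch might
look like the following; cpu1 and memory32 are placeholder targets that
would have to be replaced with a CPU and a removable memory block actually
present on the test machine, and this is only an assumption about how one
might exercise the race, not something taken from the patch or this thread:

/*
 * Hypothetical stress sketch (an assumption, not from the patch): race CPU
 * hotplug against memory hotplug by toggling sysfs "online" attributes from
 * two processes.  Run as root; cpu1 and memory32 are placeholder targets.
 */
#include <fcntl.h>
#include <sys/wait.h>
#include <unistd.h>

/* Write "0" or "1" to a sysfs "online" file; returns 0 on success. */
static int set_online(const char *path, int online)
{
	int fd = open(path, O_WRONLY);
	ssize_t ret;

	if (fd < 0)
		return -1;
	ret = write(fd, online ? "1" : "0", 1);
	close(fd);
	return ret == 1 ? 0 : -1;
}

int main(void)
{
	const char *cpu = "/sys/devices/system/cpu/cpu1/online";
	const char *mem = "/sys/devices/system/memory/memory32/online";
	int i;

	if (fork() == 0) {
		/* Child: hammer CPU hotplug. */
		for (i = 0; i < 10000; i++) {
			set_online(cpu, 0);
			set_online(cpu, 1);
		}
		_exit(0);
	}

	/* Parent: hammer memory hotplug on one removable memory block. */
	for (i = 0; i < 10000; i++) {
		set_online(mem, 0);
		set_online(mem, 1);
	}
	wait(NULL);
	return 0;
}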