Date: Thu, 26 Sep 2019 09:26:45 +0200
From: Michal Hocko
To: Qian Cai
Cc: David Hildenbrand, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Andrew Morton, Oscar Salvador, Pavel Tatashin, Dan Williams,
 Thomas Gleixner
Subject: Re: [PATCH v1] mm/memory_hotplug: Don't take the cpu_hotplug_lock
Message-ID: <20190926072645.GA20255@dhcp22.suse.cz>
In-Reply-To: <1569435659.5576.227.camel@lca.pw>
References: <20190924143615.19628-1-david@redhat.com>
 <1569337401.5576.217.camel@lca.pw>
 <20190924151147.GB23050@dhcp22.suse.cz>
 <1569351244.5576.219.camel@lca.pw>
 <2f8c8099-8de0-eccc-2056-a79d2f97fbf7@redhat.com>
 <1569427262.5576.225.camel@lca.pw>
 <20190925174809.GM23050@dhcp22.suse.cz>
 <1569435659.5576.227.camel@lca.pw>

On Wed 25-09-19 14:20:59, Qian Cai wrote:
> On Wed, 2019-09-25 at 19:48 +0200, Michal Hocko wrote:
> > On Wed 25-09-19 12:01:02, Qian Cai wrote:
> > > On Wed, 2019-09-25 at 09:02 +0200, David Hildenbrand wrote:
> > > > On 24.09.19 20:54, Qian Cai wrote:
> > > > > On Tue, 2019-09-24 at 17:11 +0200, Michal Hocko wrote:
> > > > > > On Tue 24-09-19 11:03:21, Qian Cai wrote:
> > > > > > [...]
> > > > > > > While at it, it might be a good time to rethink the whole locking over there,
> > > > > > > as right now reading files under /sys/kernel/slab/ could trigger a possible
> > > > > > > deadlock anyway.
> > > > > > >
> > > > > >
> > > > > > [...]
> > > > > > > [  442.452090][ T5224] -> #0 (mem_hotplug_lock.rw_sem){++++}:
> > > > > > > [  442.459748][ T5224]        validate_chain+0xd10/0x2bcc
> > > > > > > [  442.464883][ T5224]        __lock_acquire+0x7f4/0xb8c
> > > > > > > [  442.469930][ T5224]        lock_acquire+0x31c/0x360
> > > > > > > [  442.474803][ T5224]        get_online_mems+0x54/0x150
> > > > > > > [  442.479850][ T5224]        show_slab_objects+0x94/0x3a8
> > > > > > > [  442.485072][ T5224]        total_objects_show+0x28/0x34
> > > > > > > [  442.490292][ T5224]        slab_attr_show+0x38/0x54
> > > > > > > [  442.495166][ T5224]        sysfs_kf_seq_show+0x198/0x2d4
> > > > > > > [  442.500473][ T5224]        kernfs_seq_show+0xa4/0xcc
> > > > > > > [  442.505433][ T5224]        seq_read+0x30c/0x8a8
> > > > > > > [  442.509958][ T5224]        kernfs_fop_read+0xa8/0x314
> > > > > > > [  442.515007][ T5224]        __vfs_read+0x88/0x20c
> > > > > > > [  442.519620][ T5224]        vfs_read+0xd8/0x10c
> > > > > > > [  442.524060][ T5224]        ksys_read+0xb0/0x120
> > > > > > > [  442.528586][ T5224]        __arm64_sys_read+0x54/0x88
> > > > > > > [  442.533634][ T5224]        el0_svc_handler+0x170/0x240
> > > > > > > [  442.538768][ T5224]        el0_svc+0x8/0xc
> > > > > >
> > > > > > I believe the lock is not really needed here. We do not deallocate
> > > > > > the pgdat of a hotremoved node nor destroy the slab state because
> > > > > > existing slabs would prevent hotremove from continuing in the first place.
> > > > > >
> > > > > > There are likely details to be checked of course but the lock just seems
> > > > > > bogus.
> > > > >
> > > > > Check 03afc0e25f7f ("slab: get_online_mems for
> > > > > kmem_cache_{create,destroy,shrink}"). It actually talks about the races during
> > > > > memory as well as cpu hotplug, so it might even be that the cpu_hotplug_lock
> > > > > removal is problematic?
> > > > >
> > > >
> > > > Which removal are you referring to? get_online_mems() does not mess with
> > > > the cpu hotplug lock (and therefore this patch).
> > >
> > > The one in your patch. I suspect there might be races among the whole NUMA node
> > > hotplug, kmem_cache_create, and show_slab_objects(). See bfc8c90139eb
> > > ("mem-hotplug: implement get/put_online_mems"):
> > >
> > > "kmem_cache_{create,destroy,shrink} need to get a stable value of cpu/node
> > > online mask, because they init/destroy/access per-cpu/node kmem_cache parts,
> > > which can be allocated or destroyed on cpu/mem hotplug."
> >
> > I still have to grasp that code, but if the slub allocator really needs
> > a stable cpu mask then it should be using the explicit cpu hotplug
> > locking rather than rely on a side effect of the memory hotplug locking.
> >
> > > Both online_pages() and show_slab_objects() need to get a stable value of
> > > cpu/node online mask.
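
For reference, the call chain in the lockdep report above boils down to
roughly the following pattern (a simplified sketch, not the literal
mm/slub.c code; the function body and field accesses are approximate):

static ssize_t total_objects_show(struct kmem_cache *s, char *buf)
{
	unsigned long total = 0;
	int node;

	get_online_mems();		/* read side of mem_hotplug_lock */
	for_each_online_node(node) {
		struct kmem_cache_node *n = get_node(s, node);

		if (n)
			total += atomic_long_read(&n->total_objects);
	}
	put_online_mems();

	return sprintf(buf, "%lu\n", total);
}

So any sysfs read of /sys/kernel/slab/<cache>/total_objects takes
mem_hotplug_lock for read under the kernfs machinery, which is what the
report above complains about.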
> >
> > Could you be more specific why online_pages needs a stable cpu online
> > mask? I do not think that show_slab_objects is a real problem because a
> > potential race shouldn't be critical.
>
> build_all_zonelists()
>   __build_all_zonelists()
>     for_each_online_cpu(cpu)

OK, this is using for_each_online_cpu, but why is that a problem? Have
you checked what the code actually does? Let's say that online_pages is
racing with cpu hotplug: a CPU appears in, or disappears from, the
online mask while we are iterating it, right?

Let's start with the cpu offlining case. There are two possibilities:
either the cpu is still visible, and we update its local node
configuration even though it will disappear shortly, which is ok
because we are not touching any data that disappears (it's all
per-cpu); or the cpu is no longer there, which is not really
interesting.

For the onlining case we might miss a cpu, but that should be tolerable
because it is no different from the cpu being onlined independently of
the memory hotplug. So there has to be a hook from the cpu onlining
path as well, and if there is none then this is buggy irrespective of
the locking.

Makes sense?
-- 
Michal Hocko
SUSE Labs
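
For context, the loop Qian points at sits roughly in the following
place (a sketch of the mm/page_alloc.c code of that era written from
memory; the zonelist rebuilding itself is elided and details may
differ):

static void __build_all_zonelists(void *data)
{
	int cpu;

	/* ... per-node zonelist (re)construction elided ... */

	/*
	 * Refresh each online CPU's notion of its local memory node.
	 * A CPU racing to go offline only has per-cpu data updated that
	 * is about to become unused; a CPU racing to come online may be
	 * missed here, but the cpu onlining path has to perform the same
	 * setup for the new CPU anyway, which is the point made above.
	 */
	for_each_online_cpu(cpu)
		set_cpu_numa_mem(cpu, local_memory_node(cpu_to_node(cpu)));
}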