From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 2 Nov 2021 13:27:11 +0100
From: Michal Hocko
To: David Hildenbrand
Cc: Alexey Makhalov, linux-mm@kvack.org, Andrew Morton,
 linux-kernel@vger.kernel.org, stable@vger.kernel.org, Oscar Salvador
Subject: Re: [PATCH] mm: fix panic in __alloc_pages
References: <20211101201312.11589-1-amakhalov@vmware.com>
 <7136c959-63ff-b866-b8e4-f311e0454492@redhat.com>
 <42abfba6-b27e-ca8b-8cdf-883a9398b506@redhat.com>
In-Reply-To: <42abfba6-b27e-ca8b-8cdf-883a9398b506@redhat.com>

On Tue 02-11-21 13:06:06, David Hildenbrand wrote:
> On 02.11.21 12:44, Michal Hocko wrote:
> > On Tue 02-11-21 12:00:57, David Hildenbrand wrote:
> >> On 02.11.21 11:34, Alexey Makhalov wrote:
> > [...]
> >>>> The node onlining logic when onlining a CPU sounds bogus as well:
> >>>> Let's take a look at try_offline_node(). It checks:
> >>>> 1) That no memory is *present*
> >>>> 2) That no CPU is *present*
> >>>>
> >>>> We should online the node when adding the CPU ("present"), not when
> >>>> onlining the CPU.
> >>>
> >>> Possible.
> >>> Assuming try_online_node was moved under add_cpu(), let's take a
> >>> look at this call stack:
> >>> add_cpu()
> >>>   try_online_node()
> >>>     __try_online_node()
> >>>       hotadd_new_pgdat()
> >>> At line 1190 we'll have a problem:
> >>> 1183 	pgdat = NODE_DATA(nid);
> >>> 1184 	if (!pgdat) {
> >>> 1185 		pgdat = arch_alloc_nodedata(nid);
> >>> 1186 		if (!pgdat)
> >>> 1187 			return NULL;
> >>> 1188
> >>> 1189 		pgdat->per_cpu_nodestats =
> >>> 1190 			alloc_percpu(struct per_cpu_nodestat);
> >>> 1191 		arch_refresh_nodedata(nid, pgdat);
> >>>
> >>> alloc_percpu() will go over all possible CPUs and will eventually
> >>> end up calling alloc_pages_node(), trying to use the subject nid for
> >>> the corresponding CPU and hitting the same state #2 problem, as
> >>> NODE_DATA(nid) is still NULL and nid is not yet online.
> >>
> >> Right, we will end up calling pcpu_alloc_pages()->alloc_pages_node()
> >> for each possible CPU. We use cpu_to_node() to come up with the NID.
> >
> > Shouldn't this be numa_mem_id instead? Memory less nodes are odd little
>
> Hm, good question. Most probably yes for offline nodes.
>
> diff --git a/mm/percpu-vm.c b/mm/percpu-vm.c
> index 2054c9213c43..c21ff5bb91dc 100644
> --- a/mm/percpu-vm.c
> +++ b/mm/percpu-vm.c
> @@ -84,15 +84,19 @@ static int pcpu_alloc_pages(struct pcpu_chunk *chunk,
>  			    gfp_t gfp)
>  {
>  	unsigned int cpu, tcpu;
> -	int i;
> +	int i, nid;
>  
>  	gfp |= __GFP_HIGHMEM;
>  
>  	for_each_possible_cpu(cpu) {
> +		nid = cpu_to_node(cpu);
> +
> +		if (nid == NUMA_NO_NODE || !node_online(nid))
> +			nid = numa_mem_id();

or simply nid = cpu_to_mem(cpu)

>  		for (i = page_start; i < page_end; i++) {
>  			struct page **pagep = &pages[pcpu_page_idx(cpu, i)];
>  
> -			*pagep = alloc_pages_node(cpu_to_node(cpu), gfp, 0);
> +			*pagep = alloc_pages_node(nid, gfp, 0);
>  			if (!*pagep)
>  				goto err;
>  		}
>
>
> > critters crafted into the MM code without wider considerations. From
> > time to time we are struggling with some fallouts, but the primary
> > thing is that zonelists should be valid for all memory less nodes.
>
> Yes, but a zonelist cannot be correct for an offline node, where we
> might not even have an allocated pgdat yet. No pgdat, no zonelist. So as
> soon as we allocate the pgdat and set the node online
> (->hotadd_new_pgdat()), the zone lists have to be correct. And I can
> spot a build_all_zonelists() in hotadd_new_pgdat().

Yes, that is what I had in mind. We are talking about two things here:
memoryless nodes and offline nodes. The latter sounds like a bug to me.

> I agree that someone passing an offline NID into an allocator function
> should be fixed.

Right.

> Maybe __alloc_pages_bulk() and alloc_pages_node() should bail out
> directly (VM_BUG()) in case we're providing an offline node with
> eventually no/stale pgdat as preferred nid.

Historically, those allocation interfaces have not tried to be robust
against wrong inputs, because that adds cpu cycles for everybody for
"what if buggy" code. This has worked (surprisingly) well.
Memory less nodes have brought in some confusion, but this is still
something that we can address at a higher level. Nobody gives arbitrary
nodes as an input. cpu_to_node might be tricky because it can point to a
memoryless node, which along with __GFP_THISNODE is very likely not
something anybody wants; hence cpu_to_mem should be used for
allocations. I hate that we have two very similar APIs...

But something seems wrong in this case. cpu_to_node shouldn't return
offline nodes; that is just a land mine. It is not clear to me how the
cpu has been brought up so that the numa node allocation was left
behind. As pointed out in another email, add_cpu resp. cpu_up is not it.
Is it possible that the cpu bringup was only halfway done?
-- 
Michal Hocko
SUSE Labs