From: David Hildenbrand <david@redhat.com> (Red Hat)
Date: Tue, 2 Nov 2021 14:41:25 +0100
To: Michal Hocko
Cc: Alexey Makhalov, linux-mm@kvack.org, Andrew Morton, linux-kernel@vger.kernel.org, stable@vger.kernel.org, Oscar Salvador
Subject: Re: [PATCH] mm: fix panic in __alloc_pages

On 02.11.21 14:25, Michal Hocko wrote:
> On Tue 02-11-21 13:39:06, David Hildenbrand wrote:
>>>> Yes, but a zonelist cannot be correct for an offline node, where we
>>>> might not even have an allocated pgdat yet. No pgdat, no zonelist. So
>>>> as soon as we allocate the pgdat and set the node online
>>>> (->hotadd_new_pgdat()), the zonelists have to be correct. And I can
>>>> spot a build_all_zonelists() in hotadd_new_pgdat().
>>>
>>> Yes, that is what I had in mind. We are talking about two things here.
>>> Memoryless nodes and offline nodes. The latter sounds like a bug to
>>> me.
>>
>> Agreed. Memoryless nodes should just have proper zonelists -- which
>> seems to be the case.
>>
>>>> Maybe __alloc_pages_bulk() and alloc_pages_node() should bail out
>>>> directly (VM_BUG()) in case we're providing an offline node with
>>>> eventually no/stale pgdat as preferred nid.
>>>
>>> Historically, those allocation interfaces were not trying to be robust
>>> against wrong inputs because that adds cpu cycles for everybody for
>>> "what if buggy" code. This has worked (surprisingly) well. Memoryless
>>> nodes have brought in some confusion but this is still something that
>>> we can address on a higher level. Nobody gives arbitrary nodes as an
>>> input. cpu_to_node might be tricky because it can point to a
>>> memoryless node, which along with __GFP_THISNODE is very likely not
>>> something anybody wants. Hence cpu_to_mem should be used for
>>> allocations. I hate that we have two very similar APIs...
>>
>> To be precise, I'm wondering if we should do:
>>
>> diff --git a/include/linux/gfp.h b/include/linux/gfp.h
>> index 55b2ec1f965a..8c49b88336ee 100644
>> --- a/include/linux/gfp.h
>> +++ b/include/linux/gfp.h
>> @@ -565,7 +565,7 @@ static inline struct page *
>>  __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
>>  {
>>  	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
>> -	VM_WARN_ON((gfp_mask & __GFP_THISNODE) && !node_online(nid));
>> +	VM_WARN_ON(!node_online(nid));
>>
>>  	return __alloc_pages(gfp_mask, order, nid, NULL);
>>  }
>>
>> (Or maybe VM_BUG_ON)
>>
>> Because it cannot possibly work and we'll dereference NULL later.
>
> VM_BUG_ON would be silent for most configurations and the crash would
> happen even without it, so I am not sure about the additional value.
> VM_WARN_ON doesn't really add much on top -- except that it would crash
> in some configurations.
> If we really care to catch this case then we would have to do a
> reasonable fallback with a printk note and a dumped stack.

As I learned, VM_BUG_ON and friends are active for e.g. Fedora, which
can catch quite some issues early, before they end up in enterprise
distro kernels. I think it has value.

> Something like
>
> 	if (unlikely(!node_online(nid))) {
> 		pr_err("%d is an offline numa node and using it is a bug in a caller. Please report...\n", nid);
> 		dump_stack();
> 		nid = numa_mem_id();
> 	}
>
> But again this is adding quite some cycles to a hotpath of the page
> allocator. Is this worth it?

I don't think a fallback makes sense.

>>> But something seems wrong in this case. cpu_to_node shouldn't return
>>> offline nodes. That is just a land mine. It is not clear to me how the
>>> cpu has been brought up so that the numa node allocation was left
>>> behind. As pointed out in another email, add_cpu resp. cpu_up is not
>>> it. Is it possible that the cpu bring-up was only half way done?
>>
>> I tried to follow the code (what sets a CPU present, what sets a CPU
>> online, when do we update the cpu_to_node() mapping) and IMHO it's all
>> a big mess. Maybe it's clearer to people familiar with that code, but
>> CPU hotplug in general seems to be a confusing piece of (arch-specific)
>> code.
>
> Yes, there are different arch-specific parts that make this quite hard
> to follow.
>
> I think we want to learn how exactly Alexey brought that cpu up,
> because his initial thought on add_cpu resp. cpu_up doesn't seem to be
> correct. Or I am just not following the code properly. Once we know all
> those details we can get in touch with the cpu hotplug maintainers and
> see what we can do.

Yes.

> Btw. do you plan to send a patch for the pcp allocator to use
> cpu_to_mem?

You mean s/cpu_to_node/cpu_to_mem/ or also handling offline nids?
cpu_to_mem() corresponds to cpu_to_node() unless on ia64+ppc IIUC, so it
won't help for this very report.
> One last thing, there were some mentions of __GFP_THISNODE but I fail
> to see the connection with the pcp allocator...

Me too. If pcpu were using __GFP_THISNODE, we'd be hitting the
VM_WARN_ON but still crash.

-- 
Thanks,

David / dhildenb