Date: Tue, 2 Nov 2021 13:39:06 +0100
Subject: Re: [PATCH] mm: fix panic in __alloc_pages
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Michal Hocko
Cc: Alexey Makhalov <amakhalov@vmware.com>, linux-mm@kvack.org,
 Andrew Morton, linux-kernel@vger.kernel.org, stable@vger.kernel.org,
 Oscar Salvador
References: <20211101201312.11589-1-amakhalov@vmware.com>
 <7136c959-63ff-b866-b8e4-f311e0454492@redhat.com>
 <42abfba6-b27e-ca8b-8cdf-883a9398b506@redhat.com>

>> Yes, but a zonelist cannot be correct for an offline node, where we
>> might not even have an allocated pgdat yet. No pgdat, no zonelist. So
>> as soon as we allocate the pgdat and set the node online
>> (->hotadd_new_pgdat()), the zone lists have to be correct. And I can
>> spot a build_all_zonelists() in hotadd_new_pgdat().
>
> Yes, that is what I had in mind. We are talking about two things here:
> memoryless nodes and offline nodes. The latter sounds like a bug to me.
Agreed. Memoryless nodes should just have proper zonelists -- which
seems to be the case.

>> Maybe __alloc_pages_bulk() and alloc_pages_node() should bail out
>> directly (VM_BUG()) in case we're providing an offline node with
>> eventually no/stale pgdat as preferred nid.
>
> Historically, those allocation interfaces were not trying to be robust
> against wrong inputs because that adds cpu cycles for everybody for
> "what if buggy" code. This has worked (surprisingly) well. Memoryless
> nodes have brought in some confusion, but this is still something that
> we can address on a higher level. Nobody gives arbitrary nodes as an
> input. cpu_to_node might be tricky because it can point to a memoryless
> node, which along with __GFP_THISNODE is very likely not something
> anybody wants. Hence cpu_to_mem should be used for allocations. I hate
> that we have two very similar APIs...

To be precise, I'm wondering if we should do:

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 55b2ec1f965a..8c49b88336ee 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -565,7 +565,7 @@ static inline struct page *
 __alloc_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
 {
 	VM_BUG_ON(nid < 0 || nid >= MAX_NUMNODES);
-	VM_WARN_ON((gfp_mask & __GFP_THISNODE) && !node_online(nid));
+	VM_WARN_ON(!node_online(nid));
 	return __alloc_pages(gfp_mask, order, nid, NULL);
 }

(Or maybe VM_BUG_ON.) Because it cannot possibly work, and we'll
dereference NULL later.

> But something seems wrong in this case. cpu_to_node shouldn't return
> offline nodes. That is just a land mine. It is not clear to me how the
> cpu has been brought up so that the numa node allocation was left
> behind. As pointed out in the other email, add_cpu resp. cpu_up is not
> it.
> Is it possible that the cpu bring-up was only half way?

I tried to follow the code (what sets a CPU present, what sets a CPU
online, when do we update the cpu_to_node() mapping) and IMHO it's all
a big mess.
Maybe it's clearer to people familiar with that code, but CPU hotplug in
general seems to be a confusing piece of (arch-specific) code.

Also, I have no clue if the cpu_to_node() mapping will get invalidated
after unplugging that CPU, or if the mapping will simply stay around for
all eternity ...

-- 
Thanks,

David / dhildenb