Message-ID: <4386151c-0328-d207-9a71-933ef61817f9@redhat.com>
Date: Tue, 14 Feb 2023 12:29:42 +0100
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
Subject: Re: [PATCH] mm: page_alloc: don't allocate page from memoryless nodes
To: Qi Zheng, Qi Zheng, Mike Rapoport
Cc: Vlastimil Babka, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Teng Hu, Matthew Wilcox, Mel Gorman, Oscar Salvador, Muchun Song
References: <20230212110305.93670-1-zhengqi.arch@bytedance.com> <2484666e-e78e-549d-e075-b2c39d460d71@suse.cz> <85af4ada-96c8-1f99-90fa-9b6d63d0016e@bytedance.com> <67240e55-af49-f20a-2b4b-b7d574cd910d@gmail.com> <22f0e262-982e-ea80-e52a-a3c924b31d58@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.6.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
On 14.02.23 12:26, Qi Zheng wrote:
> 
> 
> On 2023/2/14 19:22, David Hildenbrand wrote:
>> On 14.02.23 11:26, Qi Zheng wrote:
>>>
>>>
>>> On 2023/2/14 17:43, Mike Rapoport wrote:
>>>> On Tue, Feb 14, 2023 at 10:17:03AM +0100, David Hildenbrand wrote:
>>>>> On 14.02.23 09:42, Vlastimil Babka wrote:
>>>>>> On 2/13/23 12:00, Qi Zheng wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 2023/2/13 16:47, Vlastimil Babka wrote:
>>>>>>>> On 2/12/23 12:03, Qi Zheng wrote:
>>>>>>>>> In x86, numa_register_memblks() is only interested in
>>>>>>>>> those nodes which have enough memory, so it skips over
>>>>>>>>> all nodes with memory below NODE_MIN_SIZE (treated as
>>>>>>>>> a memoryless node). Later on, we will initialize these
>>>>>>>>> memoryless nodes (allocate pgdat in free_area_init()
>>>>>>>>> and build zonelist etc), and will online these nodes
>>>>>>>>> in init_cpu_to_node() and init_gi_nodes().
>>>>>>>>>
>>>>>>>>> After boot, these memoryless nodes are in N_ONLINE
>>>>>>>>> state but not in N_MEMORY state. But we can still allocate
>>>>>>>>> pages from these memoryless nodes.
>>>>>>>>>
>>>>>>>>> In SLUB, we only process nodes in the N_MEMORY state,
>>>>>>>>> such as allocating their struct kmem_cache_node. So if
>>>>>>>>> we allocate a page from the memoryless node above to
>>>>>>>>> SLUB, the struct kmem_cache_node of the node corresponding
>>>>>>>>> to this page is NULL, which will cause panic.
>>>>>>>>>
>>>>>>>>> For example, if we use qemu to start a two numa node kernel,
>>>>>>>>> one of the nodes has 2M memory (less than NODE_MIN_SIZE),
>>>>>>>>> and the other node has 2G, then we will encounter the
>>>>>>>>> following panic:
>>>>>>>>>
>>>>>>>>> [    0.149844] BUG: kernel NULL pointer dereference, address: 0000000000000000
>>>>>>>>> [    0.150783] #PF: supervisor write access in kernel mode
>>>>>>>>> [    0.151488] #PF: error_code(0x0002) - not-present page
>>>>>>>>> <...>
>>>>>>>>> [    0.156056] RIP: 0010:_raw_spin_lock_irqsave+0x22/0x40
>>>>>>>>> <...>
>>>>>>>>> [    0.169781] Call Trace:
>>>>>>>>> [    0.170159]  <TASK>
>>>>>>>>> [    0.170448]  deactivate_slab+0x187/0x3c0
>>>>>>>>> [    0.171031]  ? bootstrap+0x1b/0x10e
>>>>>>>>> [    0.171559]  ? preempt_count_sub+0x9/0xa0
>>>>>>>>> [    0.172145]  ? kmem_cache_alloc+0x12c/0x440
>>>>>>>>> [    0.172735]  ? bootstrap+0x1b/0x10e
>>>>>>>>> [    0.173236]  bootstrap+0x6b/0x10e
>>>>>>>>> [    0.173720]  kmem_cache_init+0x10a/0x188
>>>>>>>>> [    0.174240]  start_kernel+0x415/0x6ac
>>>>>>>>> [    0.174738]  secondary_startup_64_no_verify+0xe0/0xeb
>>>>>>>>> [    0.175417]  </TASK>
>>>>>>>>> [    0.175713] Modules linked in:
>>>>>>>>> [    0.176117] CR2: 0000000000000000
>>>>>>>>>
>>>>>>>>> In addition, we can also encounter this panic in an actual
>>>>>>>>> production environment. We set up a 2c2g container with two
>>>>>>>>> numa nodes, reserved 128M for kdump, and then encountered
>>>>>>>>> the above panic in the kdump kernel.
>>>>>>>>>
>>>>>>>>> To fix it, we can filter memoryless nodes when allocating
>>>>>>>>> pages.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Qi Zheng
>>>>>>>>> Reported-by: Teng Hu
>>>>>>>>
>>>>>>>> Well AFAIK the key mechanism to only allocate from "good" nodes
>>>>>>>> is the zonelist, so we shouldn't need to start putting extra checks
>>>>>>>> like this. It seems to me that the code building the zonelists
>>>>>>>> should take the NODE_MIN_SIZE constraint into account.
>>>>>>>
>>>>>>> Indeed. How about the following patch:
>>>>>>
>>>>>> +Cc also David, forgot earlier.
>>>>>>
>>>>>> Looks good to me, at least.
>>>>>>
>>>>>>> @@ -6382,8 +6378,11 @@ int find_next_best_node(int node, nodemask_t *used_node_mask)
>>>>>>>          int min_val = INT_MAX;
>>>>>>>          int best_node = NUMA_NO_NODE;
>>>>>>>
>>>>>>> -       /* Use the local node if we haven't already */
>>>>>>> -       if (!node_isset(node, *used_node_mask)) {
>>>>>>> +       /*
>>>>>>> +        * Use the local node if we haven't already. But for memoryless local
>>>>>>> +        * node, we should skip it and fallback to other nodes.
>>>>>>> +        */
>>>>>>> +       if (!node_isset(node, *used_node_mask) && node_state(node, N_MEMORY)) {
>>>>>>>                  node_set(node, *used_node_mask);
>>>>>>>                  return node;
>>>>>>>          }
>>>>>>>
>>>>>>> For a memoryless node, we skip it and fall back to other nodes
>>>>>>> when building its zonelists.
>>>>>>>
>>>>>>> Say we have node0 and node1, and node0 is memoryless, then:
>>>>>>>
>>>>>>> [    0.102400] Fallback order for Node 0: 1
>>>>>>> [    0.102931] Fallback order for Node 1: 1
>>>>>>>
>>>>>>> In this way, we will not allocate pages from memoryless node0.
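
For context, here is a minimal sketch of how that hunk would sit in find_next_best_node() once applied. This is a simplified excerpt, not the exact upstream mm/page_alloc.c code: the CPU and node-load penalties of the real scoring loop are omitted, and only the distance comparison is kept. Note that the fallback loop already iterates N_MEMORY nodes only; the change adds the same restriction for the fast-path "use the local node" case.

/* Simplified sketch of find_next_best_node() with the proposed check. */
static int find_next_best_node(int node, nodemask_t *used_node_mask)
{
	int n, val;
	int min_val = INT_MAX;
	int best_node = NUMA_NO_NODE;

	/*
	 * Use the local node if we haven't already. But for a memoryless
	 * local node, skip it and fall back to other nodes.
	 */
	if (!node_isset(node, *used_node_mask) && node_state(node, N_MEMORY)) {
		node_set(node, *used_node_mask);
		return node;
	}

	/* Candidates are already restricted to nodes that have memory. */
	for_each_node_state(n, N_MEMORY) {
		/* Don't let a node appear in the zonelist more than once. */
		if (node_isset(n, *used_node_mask))
			continue;

		/* Prefer the closest remaining node (penalties elided here). */
		val = node_distance(node, n);
		if (val < min_val) {
			min_val = val;
			best_node = n;
		}
	}

	if (best_node >= 0)
		node_set(best_node, *used_node_mask);

	return best_node;
}
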
>>>>>>>
>>>>> In offline_pages(), we'll first build_all_zonelists() and only then
>>>>> node_states_clear_node()->node_clear_state(node, N_MEMORY);
>>>>>
>>>>> So at least on the offlining path, we wouldn't detect it properly yet I
>>>>> assume, and build a zonelist that contains a now-memory-less node?
>>>>
>>>> Another question is what happens if new memory is plugged into a node
>>>> that had < NODE_MIN_SIZE of memory and after hotplug it stops being
>>>> "memoryless".
>>>
>>> Onlining or offlining memory will re-call build_all_zonelists() to
>>> re-establish the zonelists (the zonelist of the node itself and of the
>>> other nodes). So the node can stop being "memoryless" automatically.
>>>
>>> But in online_pages(), I did not see a check against < NODE_MIN_SIZE.
>>
>> TBH, this is the first time I hear of NODE_MIN_SIZE and it seems to be a
>> pretty x86-specific thing.
>>
>> Are we sure we want to get NODE_MIN_SIZE involved?
> 
> Maybe add an arch_xxx() to handle it?

I still haven't figured out what we want to achieve with NODE_MIN_SIZE at
all. It smells like an arch-specific hack, looking at

	"Don't confuse VM with a node that doesn't have the minimum amount of memory"

Why shouldn't mm-core deal with that?

I'd appreciate an explanation of the bigger picture: what the issue is and
what the approach to solving it is (including memory onlining/offlining).

-- 
Thanks,

David / dhildenb