From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Marcelo Tosatti, Christoph Lameter
Cc: Aaron Tomlin, Frederic Weisbecker, Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Mel Gorman
Subject: Re: [PATCH v2 01/11] mm/vmstat: remove remote node draining
Date: Tue, 28 Feb 2023 16:53:47 +0100
Message-ID: <6b6cd2fe-2309-b471-8950-3c4334462e69@redhat.com>
In-Reply-To: <20230209153204.656996515@redhat.com>
References: <20230209150150.380060673@redhat.com> <20230209153204.656996515@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
On 09.02.23 16:01, Marcelo Tosatti wrote:
> Draining of pages from the local pcp for a remote zone was necessary
> since:
> 
> "Note that remote node draining is a somewhat esoteric feature that is
> required on large NUMA systems because otherwise significant portions
> of system memory can become trapped in pcp queues. The number of pcp is
> determined by the number of processors and nodes in a system.
> A system
> with 4 processors and 2 nodes has 8 pcps which is okay. But a system
> with 1024 processors and 512 nodes has 512k pcps with a high potential
> for large amount of memory being caught in them."
> 
> Since commit 443c2accd1b6679a1320167f8f56eed6536b806e
> ("mm/page_alloc: remotely drain per-cpu lists"), drain_all_pages() is
> able to remotely free those pages when necessary.
> 

I'm a bit new to that piece of code, so sorry for the dummy questions.
I'm staring at linux master.

(1) I think you're removing the single user of drain_zone_pages(), so
we should remove drain_zone_pages() as well.

(2) drain_zone_pages() documents that we're draining the PCP
(bulk-freeing them) of the current CPU on remote nodes. That
bulk-freeing will properly adjust free memory counters. What exactly is
the impact when no longer doing that? Won't the "snapshot" of some
counters eventually be wrong? Do we care?

Describing the difference between instructed refresh of vmstat and
"remotely drain per-cpu lists" in order to move free memory from the
pcp to the buddy would be great.

Because removing this code here looks nice, but I am not 100% sure
about the implications. CCing Mel as well.
> Signed-off-by: Marcelo Tosatti
> 
> Index: linux-vmstat-remote/include/linux/mmzone.h
> ===================================================================
> --- linux-vmstat-remote.orig/include/linux/mmzone.h
> +++ linux-vmstat-remote/include/linux/mmzone.h
> @@ -577,9 +577,6 @@ struct per_cpu_pages {
>  	int high;		/* high watermark, emptying needed */
>  	int batch;		/* chunk size for buddy add/remove */
>  	short free_factor;	/* batch scaling factor during free */
> -#ifdef CONFIG_NUMA
> -	short expire;		/* When 0, remote pagesets are drained */
> -#endif
>  
>  	/* Lists of pages, one per migrate type stored on the pcp-lists */
>  	struct list_head lists[NR_PCP_LISTS];
> Index: linux-vmstat-remote/mm/vmstat.c
> ===================================================================
> --- linux-vmstat-remote.orig/mm/vmstat.c
> +++ linux-vmstat-remote/mm/vmstat.c
> @@ -803,7 +803,7 @@ static int fold_diff(int *zone_diff, int
>   *
>   * The function returns the number of global counters updated.
>   */
> -static int refresh_cpu_vm_stats(bool do_pagesets)
> +static int refresh_cpu_vm_stats(void)
>  {
>  	struct pglist_data *pgdat;
>  	struct zone *zone;
> @@ -814,9 +814,6 @@ static int refresh_cpu_vm_stats(bool do_
>  
>  	for_each_populated_zone(zone) {
>  		struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
> -#ifdef CONFIG_NUMA
> -		struct per_cpu_pages __percpu *pcp = zone->per_cpu_pageset;
> -#endif
>  
>  		for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
>  			int v;
> @@ -826,44 +823,8 @@ static int refresh_cpu_vm_stats(bool do_
>  
>  			atomic_long_add(v, &zone->vm_stat[i]);
>  			global_zone_diff[i] += v;
> -#ifdef CONFIG_NUMA
> -			/* 3 seconds idle till flush */
> -			__this_cpu_write(pcp->expire, 3);
> -#endif
>  			}
>  		}
> -#ifdef CONFIG_NUMA
> -
> -		if (do_pagesets) {
> -			cond_resched();
> -			/*
> -			 * Deal with draining the remote pageset of this
> -			 * processor
> -			 *
> -			 * Check if there are pages remaining in this pageset
> -			 * if not then there is nothing to expire.
> -			 */
> -			if (!__this_cpu_read(pcp->expire) ||
> -			    !__this_cpu_read(pcp->count))
> -				continue;
> -
> -			/*
> -			 * We never drain zones local to this processor.
> -			 */
> -			if (zone_to_nid(zone) == numa_node_id()) {
> -				__this_cpu_write(pcp->expire, 0);
> -				continue;
> -			}
> -
> -			if (__this_cpu_dec_return(pcp->expire))
> -				continue;
> -
> -			if (__this_cpu_read(pcp->count)) {
> -				drain_zone_pages(zone, this_cpu_ptr(pcp));
> -				changes++;
> -			}
> -		}
> -#endif
>  	}

I think you can then also get rid of the "changes" local variable and
do a "return fold_diff(global_zone_diff, global_node_diff);" directly.

-- 
Thanks,

David / dhildenb