Date: Tue, 28 Feb 2023 16:36:00 -0300
From: Marcelo Tosatti <mtosatti@redhat.com>
To: David Hildenbrand
Cc: Christoph Lameter, Aaron Tomlin, Frederic Weisbecker, Andrew Morton,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, Mel Gorman
Subject: Re: [PATCH v2 01/11] mm/vmstat: remove remote node draining
References: <20230209150150.380060673@redhat.com>
 <20230209153204.656996515@redhat.com>
 <6b6cd2fe-2309-b471-8950-3c4334462e69@redhat.com>
In-Reply-To: <6b6cd2fe-2309-b471-8950-3c4334462e69@redhat.com>

On Tue, Feb 28, 2023 at 04:53:47PM +0100, David Hildenbrand wrote:
> On 09.02.23 16:01, Marcelo Tosatti wrote:
> > Draining of pages from the local pcp for a remote zone was necessary
> > since:
> > 
> > "Note that remote node draining is a somewhat esoteric feature that is
> > required on large NUMA systems because otherwise significant portions
> > of system memory can become trapped in pcp queues. The number of pcp is
> > determined by the number of processors and nodes in a system. A system
> > with 4 processors and 2 nodes has 8 pcps which is okay. But a system
> > with 1024 processors and 512 nodes has 512k pcps with a high potential
> > for large amount of memory being caught in them."
> > 
> > Since commit 443c2accd1b6679a1320167f8f56eed6536b806e
> > ("mm/page_alloc: remotely drain per-cpu lists"), drain_all_pages() is able
> > to remotely free those pages when necessary.
> 
> I'm a bit new to that piece of code, so sorry for the dummy questions. I'm
> staring at linux master,
> 
> (1) I think you're removing the single user of drain_zone_pages(). So we
>     should remove drain_zone_pages() as well.

Done.

> (2) drain_zone_pages() documents that we're draining the PCP
>     (bulk-freeing them) of the current CPU on remote nodes. That bulk-
>     freeing will properly adjust free memory counters. What exactly is
>     the impact when no longer doing that? Won't the "snapshot" of some
>     counters eventually be wrong? Do we care?

I don't see why the snapshot of counters would be wrong.

Instead of freeing pages on the pcp lists of remote nodes after they are
considered idle ("3 seconds idle till flush"), drain_all_pages() will now
free those pcps on demand, for example after an allocation fails on
direct reclaim:

	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);

	/*
	 * If an allocation failed after direct reclaim, it could be because
	 * pages are pinned on the per-cpu lists or in high alloc reserves.
	 * Shrink them and try again
	 */
	if (!page && !drained) {
		unreserve_highatomic_pageblock(ac, false);
		drain_all_pages(NULL);
		drained = true;
		goto retry;
	}

In both cases the pages are freed (and counters maintained) here:

static inline void __free_one_page(struct page *page,
		unsigned long pfn,
		struct zone *zone, unsigned int order,
		int migratetype, fpi_t fpi_flags)
{
	struct capture_control *capc = task_capc(zone);
	unsigned long buddy_pfn = 0;
	unsigned long combined_pfn;
	struct page *buddy;
	bool to_tail;

	VM_BUG_ON(!zone_is_initialized(zone));
	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);

	VM_BUG_ON(migratetype == -1);
	if (likely(!is_migrate_isolate(migratetype)))
		__mod_zone_freepage_state(zone, 1 << order, migratetype);

	VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
	VM_BUG_ON_PAGE(bad_range(zone, page), page);

	while (order < MAX_ORDER - 1) {
		if (compaction_capture(capc, page, order, migratetype)) {
			__mod_zone_freepage_state(zone, -(1 << order),
								migratetype);
			return;
		}

> Describing the difference between instructed refresh of vmstat and "remotely
> drain per-cpu lists" in order to move free memory from the pcp to the buddy
> would be great.

The difference is that remote PCPs will now be drained on demand, either
via kcompactd or direct reclaim (through drain_all_pages), when memory
is low. For example, with the following test:

	dd if=/dev/zero of=file bs=1M count=32000

on a tmpfs filesystem:

 kcompactd0-116     [005] ...1 228232.042873: drain_all_pages <-kcompactd_do_work
 kcompactd0-116     [005] ...1 228232.042873: __drain_all_pages <-kcompactd_do_work
         dd-479485  [003] ...1 228232.455130: __drain_all_pages <-__alloc_pages_slowpath.constprop.0
         dd-479485  [011] ...1 228232.721994: __drain_all_pages <-__alloc_pages_slowpath.constprop.0
gnome-shell-3750    [015] ...1 228232.723729: __drain_all_pages <-__alloc_pages_slowpath.constprop.0

The commit message was indeed incorrect. Here is the updated version:

"mm/vmstat: remove remote node draining

Draining of pages from the local pcp for a remote zone should not be
necessary, since once the system is low on memory (or compaction on a
zone is in effect), drain_all_pages should be called, freeing any unused
pcps."

Thanks!

> Because removing this code here looks nice, but I am not 100% sure about
> the implications. CCing Mel as well.
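
To make the on-demand path concrete, this is roughly what the remote
drain does after commit 443c2accd1b6 (a simplified sketch modeled on
drain_pages_zone() in mm/page_alloc.c; the cpumask selection and zone
iteration done by __drain_all_pages() are elided, so details may differ
from the exact source):

/*
 * Sketch: since the pcp lists are protected by a spinlock, any CPU can
 * free another CPU's pcp pages straight back to the buddy allocator;
 * no per-cpu worker and no periodic "expire" countdown is needed.
 */
static void drain_remote_pcps(struct zone *zone, unsigned int cpu)
{
	struct per_cpu_pages *pcp;

	pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
	if (pcp->count) {
		/*
		 * free_pcppages_bulk() ends up in __free_one_page() for
		 * each page, so NR_FREE_PAGES is adjusted as pages move
		 * from the pcp list to the buddy free lists.
		 */
		spin_lock(&pcp->lock);
		free_pcppages_bulk(zone, pcp->count, pcp, 0);
		spin_unlock(&pcp->lock);
	}
}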
> > 
> > Signed-off-by: Marcelo Tosatti
> > 
> > Index: linux-vmstat-remote/include/linux/mmzone.h
> > ===================================================================
> > --- linux-vmstat-remote.orig/include/linux/mmzone.h
> > +++ linux-vmstat-remote/include/linux/mmzone.h
> > @@ -577,9 +577,6 @@ struct per_cpu_pages {
> >  	int high;		/* high watermark, emptying needed */
> >  	int batch;		/* chunk size for buddy add/remove */
> >  	short free_factor;	/* batch scaling factor during free */
> > -#ifdef CONFIG_NUMA
> > -	short expire;		/* When 0, remote pagesets are drained */
> > -#endif
> >  	/* Lists of pages, one per migrate type stored on the pcp-lists */
> >  	struct list_head lists[NR_PCP_LISTS];
> > Index: linux-vmstat-remote/mm/vmstat.c
> > ===================================================================
> > --- linux-vmstat-remote.orig/mm/vmstat.c
> > +++ linux-vmstat-remote/mm/vmstat.c
> > @@ -803,7 +803,7 @@ static int fold_diff(int *zone_diff, int
> >   *
> >   * The function returns the number of global counters updated.
> >   */
> > -static int refresh_cpu_vm_stats(bool do_pagesets)
> > +static int refresh_cpu_vm_stats(void)
> >  {
> >  	struct pglist_data *pgdat;
> >  	struct zone *zone;
> > @@ -814,9 +814,6 @@ static int refresh_cpu_vm_stats(bool do_
> >  	for_each_populated_zone(zone) {
> >  		struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
> > -#ifdef CONFIG_NUMA
> > -		struct per_cpu_pages __percpu *pcp = zone->per_cpu_pageset;
> > -#endif
> >  		for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
> >  			int v;
> > @@ -826,44 +823,8 @@ static int refresh_cpu_vm_stats(bool do_
> >  				atomic_long_add(v, &zone->vm_stat[i]);
> >  				global_zone_diff[i] += v;
> > -#ifdef CONFIG_NUMA
> > -				/* 3 seconds idle till flush */
> > -				__this_cpu_write(pcp->expire, 3);
> > -#endif
> >  			}
> >  		}
> > -#ifdef CONFIG_NUMA
> > -
> > -		if (do_pagesets) {
> > -			cond_resched();
> > -			/*
> > -			 * Deal with draining the remote pageset of this
> > -			 * processor
> > -			 *
> > -			 * Check if there are pages remaining in this pageset
> > -			 * if not then there is nothing to expire.
> > -			 */
> > -			if (!__this_cpu_read(pcp->expire) ||
> > -			       !__this_cpu_read(pcp->count))
> > -				continue;
> > -
> > -			/*
> > -			 * We never drain zones local to this processor.
> > -			 */
> > -			if (zone_to_nid(zone) == numa_node_id()) {
> > -				__this_cpu_write(pcp->expire, 0);
> > -				continue;
> > -			}
> > -
> > -			if (__this_cpu_dec_return(pcp->expire))
> > -				continue;
> > -
> > -			if (__this_cpu_read(pcp->count)) {
> > -				drain_zone_pages(zone, this_cpu_ptr(pcp));
> > -				changes++;
> > -			}
> > -		}
> > -#endif
> >  	}
> 
> I think you can then also get rid of the "changes" local variable and do a
> "return fold_diff(global_zone_diff, global_node_diff);" directly.

Done.
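
With the expire logic gone, nothing else updates "changes", so the tail
of the function would reduce to something like this (assumed shape of
the next version, not a verified excerpt):

	/* Fold the per-node stat diffs, then report updated counters. */
	for_each_online_pgdat(pgdat) {
		struct per_cpu_nodestat __percpu *p = pgdat->per_cpu_nodestats;

		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
			int v;

			v = this_cpu_xchg(p->vm_node_stat_diff[i], 0);
			if (v) {
				atomic_long_add(v, &pgdat->vm_stat[i]);
				global_node_diff[i] += v;
			}
		}
	}

	return fold_diff(global_zone_diff, global_node_diff);
}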