Message-ID: <8137952d-b310-4c42-aec3-8906e7301921@linux.alibaba.com>
Date: Fri, 6 Dec 2024 09:22:46 +0800
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH hotfix] mm: shmem: fix ShmemHugePages at swapout
To: Hugh Dickins, Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <5ba477c8-a569-70b5-923e-09ab221af45b@google.com>
From: Baolin Wang <baolin.wang@linux.alibaba.com>
In-Reply-To: <5ba477c8-a569-70b5-923e-09ab221af45b@google.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 2024/12/5 14:50, Hugh Dickins wrote:
> /proc/meminfo ShmemHugePages has been showing overlarge amounts (more
> than Shmem) after swapping out THPs: we forgot to update NR_SHMEM_THPS.
>
> Add shmem_update_stats(), to avoid repetition, and risk of making that
> mistake again: the call from shmem_delete_from_page_cache() is the bugfix;
> the call from shmem_replace_folio() is reassuring, but not really a bugfix
> (replace corrects misplaced swapin readahead, but huge swapin readahead
> would be a mistake).
>
> Fixes: 809bc86517cc ("mm: shmem: support large folio swap out")
> Signed-off-by: Hugh Dickins
> Cc: stable@vger.kernel.org

Indeed. Thanks for fixing.

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>

> ---
>  mm/shmem.c | 22 ++++++++++++----------
>  1 file changed, 12 insertions(+), 10 deletions(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index ccb9629a0f70..f6fb053ac50d 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -787,6 +787,14 @@ static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
>  }
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> +static void shmem_update_stats(struct folio *folio, int nr_pages)
> +{
> +        if (folio_test_pmd_mappable(folio))
> +                __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr_pages);
> +        __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr_pages);
> +        __lruvec_stat_mod_folio(folio, NR_SHMEM, nr_pages);
> +}
> +
>  /*
>   * Somewhat like filemap_add_folio, but error if expected item has gone.
>   */
> @@ -821,10 +829,7 @@ static int shmem_add_to_page_cache(struct folio *folio,
>                  xas_store(&xas, folio);
>                  if (xas_error(&xas))
>                          goto unlock;
> -                if (folio_test_pmd_mappable(folio))
> -                        __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr);
> -                __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr);
> -                __lruvec_stat_mod_folio(folio, NR_SHMEM, nr);
> +                shmem_update_stats(folio, nr);
>                  mapping->nrpages += nr;
>  unlock:
>                  xas_unlock_irq(&xas);
> @@ -852,8 +857,7 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
>          error = shmem_replace_entry(mapping, folio->index, folio, radswap);
>          folio->mapping = NULL;
>          mapping->nrpages -= nr;
> -        __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
> -        __lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
> +        shmem_update_stats(folio, -nr);
>          xa_unlock_irq(&mapping->i_pages);
>          folio_put_refs(folio, nr);
>          BUG_ON(error);
> @@ -1969,10 +1973,8 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
>          }
>          if (!error) {
>                  mem_cgroup_replace_folio(old, new);
> -                __lruvec_stat_mod_folio(new, NR_FILE_PAGES, nr_pages);
> -                __lruvec_stat_mod_folio(new, NR_SHMEM, nr_pages);
> -                __lruvec_stat_mod_folio(old, NR_FILE_PAGES, -nr_pages);
> -                __lruvec_stat_mod_folio(old, NR_SHMEM, -nr_pages);
> +                shmem_update_stats(new, nr_pages);
> +                shmem_update_stats(old, -nr_pages);
>          }
>          xa_unlock_irq(&swap_mapping->i_pages);
>
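
For anyone who wants to observe the symptom described above, here is a minimal
userspace sketch (not part of Hugh's patch or this thread; only the standard
/proc/meminfo field names are assumed, everything else is illustrative). It
reads Shmem and ShmemHugePages and warns when the huge-page counter exceeds
the total, which is the inconsistency a missing NR_SHMEM_THPS update produces:

/* Rough check, not from the patch: compare ShmemHugePages with Shmem. */
#include <stdio.h>
#include <string.h>

static long meminfo_kb(const char *key)
{
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];
        long val = -1;
        size_t len = strlen(key);

        if (!f)
                return -1;
        while (fgets(line, sizeof(line), f)) {
                /* Match "Key:" exactly so "Shmem" does not match "ShmemHugePages". */
                if (!strncmp(line, key, len) && line[len] == ':') {
                        sscanf(line + len + 1, "%ld", &val);
                        break;
                }
        }
        fclose(f);
        return val;
}

int main(void)
{
        long shmem = meminfo_kb("Shmem");
        long shmem_thp = meminfo_kb("ShmemHugePages");

        printf("Shmem:          %ld kB\n", shmem);
        printf("ShmemHugePages: %ld kB\n", shmem_thp);
        if (shmem >= 0 && shmem_thp > shmem)
                printf("ShmemHugePages > Shmem: NR_SHMEM_THPS looks miscounted\n");
        return 0;
}

On a correctly accounted kernel ShmemHugePages is a subset of Shmem and should
never exceed it; with the fix applied, both drop together as tmpfs THPs are
swapped out.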