From: "David Hildenbrand (Red Hat)" <david@kernel.org>
Date: Thu, 20 Nov 2025 11:43:26 +0100
Message-ID: <8fb76ed2-92e1-4e33-94ba-c1e4a21ec316@kernel.org>
Subject: Re: [PATCH] fixup: mm/huge_memory.c: introduce folio_split_unmapped
To: Balbir Singh, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 dri-devel@lists.freedesktop.org
Cc: Andrew Morton, Zi Yan, Joshua Hahn, Rakie Kim, Byungchul Park,
 Gregory Price, Ying Huang, Alistair Popple, Oscar Salvador,
 Lorenzo Stoakes, Baolin Wang, "Liam R. Howlett", Nico Pache,
 Ryan Roberts, Dev Jain, Barry Song, Lyude Paul, Danilo Krummrich,
 David Airlie, Simona Vetter, Ralph Campbell, Mika Penttilä,
 Matthew Brost, Francois Dugast
References: <20251120030709.2933665-1-balbirs@nvidia.com>
 <4b87bd52-5582-4599-8a97-38843e27d0a7@kernel.org>
 <875a9449-fcdd-4aef-9d77-1703dd02edf0@nvidia.com>
 <3c332dea-ea60-4939-9843-ac7d3068c7c8@kernel.org>

On 11/20/25 11:35, Balbir Singh wrote:
> On 11/20/25 20:32, David Hildenbrand (Red Hat) wrote:
>> On 11/20/25 10:25, Balbir Singh wrote:
>>> On 11/20/25 20:09, David Hildenbrand (Red Hat) wrote:
>>>> On 11/20/25 04:07, Balbir Singh wrote:
>>>>> Code refactoring of __folio_split() via the helper
>>>>> __folio_freeze_and_split_unmapped() caused a regression with
>>>>> clang-20 and CONFIG_SHMEM=n: the compiler was no longer able to
>>>>> optimize away the call to shmem_uncharge() after the changes to
>>>>> nr_shmem_dropped. Fix this by checking shmem_mapping() prior to
>>>>> calling shmem_uncharge(); shmem_mapping() returns false when
>>>>> CONFIG_SHMEM=n.
>>>>>
>>>>> smatch also complained about the parameter "end" being used
>>>>> without initialization. That is a false positive, but keep the
>>>>> tool happy by passing in initialized parameters: "end" is now
>>>>> initialized to 0.
>>>>>
>>>>> Add detailed documentation comments for folio_split_unmapped().
>>>>>
>>>>> Cc: Andrew Morton
>>>>> Cc: David Hildenbrand
>>>>> Cc: Zi Yan
>>>>> Cc: Joshua Hahn
>>>>> Cc: Rakie Kim
>>>>> Cc: Byungchul Park
>>>>> Cc: Gregory Price
>>>>> Cc: Ying Huang
>>>>> Cc: Alistair Popple
>>>>> Cc: Oscar Salvador
>>>>> Cc: Lorenzo Stoakes
>>>>> Cc: Baolin Wang
>>>>> Cc: "Liam R. Howlett"
>>>>> Cc: Nico Pache
>>>>> Cc: Ryan Roberts
>>>>> Cc: Dev Jain
>>>>> Cc: Barry Song
>>>>> Cc: Lyude Paul
>>>>> Cc: Danilo Krummrich
>>>>> Cc: David Airlie
>>>>> Cc: Simona Vetter
>>>>> Cc: Ralph Campbell
>>>>> Cc: Mika Penttilä
>>>>> Cc: Matthew Brost
>>>>> Cc: Francois Dugast
>>>>>
>>>>> Signed-off-by: Balbir Singh
>>>>> ---
>>>>>   mm/huge_memory.c | 32 ++++++++++++++++++++++----------
>>>>>   1 file changed, 22 insertions(+), 10 deletions(-)
>>>>>
>>>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>>>> index 78a31a476ad3..c4267a0f74df 100644
>>>>> --- a/mm/huge_memory.c
>>>>> +++ b/mm/huge_memory.c
>>>>> @@ -3751,6 +3751,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
>>>>>       int ret = 0;
>>>>>       struct deferred_split *ds_queue;
>>>>>
>>>>> +    VM_WARN_ON_ONCE(!mapping && end != 0);
>>>>
>>>> You could drop the "!= 0".
>>>
>>> Ack:
>>>
>>> VM_WARN_ON_ONCE(!mapping && end);
>>>
>>>>
>>>>>       /* Prevent deferred_split_scan() touching ->_refcount */
>>>>>       ds_queue = folio_split_queue_lock(folio);
>>>>>       if (folio_ref_freeze(folio, 1 + extra_pins)) {
>>>>> @@ -3919,7 +3920,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>>       int nr_shmem_dropped = 0;
>>>>>       int remap_flags = 0;
>>>>>       int extra_pins, ret;
>>>>> -    pgoff_t end;
>>>>> +    pgoff_t end = 0;
>>>>>       bool is_hzp;
>>>>>
>>>>>       VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
>>>>> @@ -4049,7 +4050,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>>>>
>>>>>       local_irq_enable();
>>>>>
>>>>> -    if (nr_shmem_dropped)
>>>>> +    if (mapping && shmem_mapping(mapping) && nr_shmem_dropped)
>>>>>           shmem_uncharge(mapping->host, nr_shmem_dropped);
>>>>
>>>> That looks questionable. We shouldn't add a runtime check to handle
>>>> a build-time thing. Likely what you want instead is:
>>>>
>>>> if (IS_ENABLED(CONFIG_SHMEM) && nr_shmem_dropped)
>>>>     shmem_uncharge()
>>>>
>>>
>>> shmem_mapping() returns false for CONFIG_SHMEM=n, and shmem_mapping()
>>> checks that the mapping is indeed for shmem ops before uncharging.
>>> Happy to change it if you like; your version is more readable.
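
Side note, mostly for the archives: both variants compile out the same
way. With CONFIG_SHMEM=n, shmem_mapping() is a constant-false inline
stub and IS_ENABLED(CONFIG_SHMEM) is plain 0, so the compiler can prove
the shmem_uncharge() call dead and discard it before link time. A
minimal stand-alone sketch of the pattern -- simplified, with made-up
helper names, not the verbatim kernel headers:

#include <stdbool.h>

struct inode;
struct address_space { struct inode *host; };

/* shmem_uncharge() is only defined in builds with CONFIG_SHMEM=y. */
void shmem_uncharge(struct inode *inode, long pages);

#ifdef CONFIG_SHMEM
bool shmem_mapping(struct address_space *mapping);
#define IS_ENABLED_CONFIG_SHMEM 1	/* stand-in for IS_ENABLED() */
#else
static inline bool shmem_mapping(struct address_space *mapping)
{
	return false;	/* constant-folds: guarded calls become dead code */
}
#define IS_ENABLED_CONFIG_SHMEM 0
#endif

static void uncharge_dropped(struct address_space *mapping, long nr_dropped)
{
	/* Variant 1: reads like a runtime check but folds to "if (0)". */
	if (mapping && shmem_mapping(mapping) && nr_dropped)
		shmem_uncharge(mapping->host, nr_dropped);

	/* Variant 2: explicit build-time check; same eliminated call. */
	if (IS_ENABLED_CONFIG_SHMEM && nr_dropped)
		shmem_uncharge(mapping->host, nr_dropped);
}

The clang-20 regression was the compiler no longer being able to prove
that nr_shmem_dropped stays 0 in that configuration, so the call to the
then-unavailable shmem_uncharge() was left in place.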
>> Good point, but the questionable thing is that it looks like
>> nr_shmem_dropped could be set for non-shmem mappings, when it's
>> really just a compiler thing.
>>
>> What about handling it through a proper stub, so we can keep this
>> calling code simple?
>>
>> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
>> index 5b368f9549d67..e38cb01031200 100644
>> --- a/include/linux/shmem_fs.h
>> +++ b/include/linux/shmem_fs.h
>> @@ -136,11 +136,15 @@ static inline bool shmem_hpage_pmd_enabled(void)
>>
>>  #ifdef CONFIG_SHMEM
>>  extern unsigned long shmem_swap_usage(struct vm_area_struct *vma);
>> +extern void shmem_uncharge(struct inode *inode, long pages);
>>  #else
>>  static inline unsigned long shmem_swap_usage(struct vm_area_struct *vma)
>>  {
>>         return 0;
>>  }
>> +static inline void shmem_uncharge(struct inode *inode, long pages)
>> +{
>> +}
>>  #endif
>>  extern unsigned long shmem_partial_swap_usage(struct address_space *mapping,
>>                                                pgoff_t start, pgoff_t end);
>> @@ -194,7 +198,6 @@ static inline pgoff_t shmem_fallocend(struct inode *inode, pgoff_t eof)
>>  }
>>
>>  extern bool shmem_charge(struct inode *inode, long pages);
>> -extern void shmem_uncharge(struct inode *inode, long pages);
>>
>>  #ifdef CONFIG_USERFAULTFD
>>  #ifdef CONFIG_SHMEM
>>
>
> Agreed. I would like to let this patch proceed and then immediately
> follow up with a patch along the lines of the CONFIG_SHMEM stub, as a
> separate patch (independent of this series). What do you think?

Let's do it properly right away, no need to hurry that much.
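
Just to spell out what "keep the calling code simple" buys us: with the
stub in place, the call site in __folio_split() can go back to guarding
on the counter alone. A rough sketch of the resulting hunk, untested:

	/*
	 * nr_shmem_dropped is only ever set for shmem mappings, and
	 * with CONFIG_SHMEM=n the empty stub makes this call disappear
	 * entirely, so no mapping-type check is needed here.
	 */
	if (nr_shmem_dropped)
		shmem_uncharge(mapping->host, nr_shmem_dropped);

-- 
Cheers

David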