From: Fan Ni <nifan.cxl@gmail.com>
Date: Wed, 9 Apr 2025 15:07:50 -0700
To: Matthew Wilcox
Cc: nifan.cxl@gmail.com, muchun.song@linux.dev, mcgrof@kernel.org,
	a.manzanares@samsung.com, dave@stgolabs.net, akpm@linux-foundation.org,
	david@redhat.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/hugetlb: Convert &folio->page to folio_page(folio, 0)
References: <20250409004937.634713-1-nifan.cxl@gmail.com>
On Wed, Apr 09, 2025 at 04:15:30AM +0100, Matthew Wilcox wrote:
> On Tue, Apr 08, 2025 at 05:49:10PM -0700, nifan.cxl@gmail.com wrote:
> > From: Fan Ni
> >
> > Convert the use of &folio->page to folio_page(folio, 0) where struct
> > folio fits in. This is part of the effort to move some fields out of
> > struct page to reduce its size.
>
> Thanks for sending the patch.  You've mixed together quite a few things;
> I'd suggest focusing on one API at a time.

...
> >  	 */
> > -	free_page_and_swap_cache(&new_folio->page);
> > +	free_page_and_swap_cache(folio_page(new_folio, 0));
> >  }
>
> free_page_and_swap_cache() should be converted to be
> free_folio_and_swap_cache().

While looking into this function, I see it is defined as a macro in
swap.h as below:

#define free_page_and_swap_cache(page) \
	put_page(page)

While checking put_page() in include/linux/mm.h, I see it always
converts the page to a folio; is there a reason why we do not take a
"folio" directly?

static inline void put_page(struct page *page)
{
	struct folio *folio = page_folio(page);

	if (folio_test_slab(folio))
		return;

	folio_put(folio);
}

Fan

> >
> > -	return __folio_split(folio, new_order, &folio->page, page, list, true);
> > +	return __folio_split(folio, new_order, folio_page(folio, 0), page,
> > +			     list, true);
> >  }
>
> Probably right.
>
> >  {
> > -	return __folio_split(folio, new_order, split_at, &folio->page, list,
> > -			     false);
> > +	return __folio_split(folio, new_order, split_at, folio_page(folio, 0),
> > +			     list, false);
> >  }
>
> Ditto.
>
> >
> > -	return split_huge_page_to_list_to_order(&folio->page, list, ret);
> > +	return split_huge_page_to_list_to_order(folio_page(folio, 0), list,
> > +						ret);
> >  }
>
> Ditto.
>
> >
> > -		if (is_migrate_isolate_page(&folio->page))
> > +		if (is_migrate_isolate_page(folio_page(folio, 0)))
> >  			continue;
>
> I think we need an is_migrate_isolate_folio() instead of this.
>
> >  	if (folio_test_anon(folio))
> > -		__ClearPageAnonExclusive(&folio->page);
> > +		__ClearPageAnonExclusive(folio_page(folio, 0));
> >  	folio->mapping = NULL;
>
> ... David.
>
> >
> > -	split_page_owner(&folio->page, huge_page_order(src), huge_page_order(dst));
> > +	split_page_owner(folio_page(folio, 0), huge_page_order(src),
> > +			 huge_page_order(dst));
>
> See earlier.
> >  	if (folio_mapcount(old_folio) == 1 && folio_test_anon(old_folio)) {
> > -		if (!PageAnonExclusive(&old_folio->page)) {
> > +		if (!PageAnonExclusive(folio_page(old_folio, 0))) {
> >  			folio_move_anon_rmap(old_folio, vma);
> > -			SetPageAnonExclusive(&old_folio->page);
> > +			SetPageAnonExclusive(folio_page(old_folio, 0));
> >  		}
>
> David.
>
> >  	}
> >  	VM_BUG_ON_PAGE(folio_test_anon(old_folio) &&
> > -		       PageAnonExclusive(&old_folio->page), &old_folio->page);
> > +		       PageAnonExclusive(folio_page(old_folio, 0)),
> > +		       folio_page(old_folio, 0));
>
> The PageAnonExclusive() part of this change is for David to comment on,
> but this should be a VM_BUG_ON_FOLIO() instead of calling folio_page()
> to keep this a VM_BUG_ON_PAGE().
>
> >
> > -			unmap_ref_private(mm, vma, &old_folio->page,
> > -					  vmf->address);
> > +			unmap_ref_private(mm, vma, folio_page(old_folio, 0),
> > +					  vmf->address);
>
> unmap_ref_private() only has one caller (this one), so make that take a
> folio.  This is a whole series, all by itself.
>
> >  		hugetlb_cgroup_migrate(old_folio, new_folio);
> > -	set_page_owner_migrate_reason(&new_folio->page, reason);
> > +	set_page_owner_migrate_reason(folio_page(new_folio, 0), reason);
>
> See earlier about page owner being folio or page based.
>
> >  	int ret;
> > -	unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
> > +	unsigned long vmemmap_start = (unsigned long)folio_page(folio, 0), vmemmap_end;
> >  	unsigned long vmemmap_reuse;
>
> Probably right.
>
> >  	int ret = 0;
> > -	unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
> > +	unsigned long vmemmap_start = (unsigned long)folio_page(folio, 0), vmemmap_end;
> >  	unsigned long vmemmap_reuse;
>
> Ditto.
>
> > -	unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
> > +	unsigned long vmemmap_start = (unsigned long)folio_page(folio, 0), vmemmap_end;
> >  	unsigned long vmemmap_reuse;
>
> Ditto.
> >  	 */
> > -	spfn = (unsigned long)&folio->page;
> > +	spfn = (unsigned long)folio_page(folio, 0);
>
> Ditto.
>
> >  	register_page_bootmem_memmap(pfn_to_section_nr(spfn),
> > -				     &folio->page,
> > -				     HUGETLB_VMEMMAP_RESERVE_SIZE);
> > +				     folio_page(folio, 0),
> > +				     HUGETLB_VMEMMAP_RESERVE_SIZE);
>
> Don't change the indentation, but looks right.
>
> >  	result = SCAN_SUCCEED;
> > -	trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
> > -					    referenced, writable, result);
> > +	trace_mm_collapse_huge_page_isolate(folio_page(folio, 0),
> > +					    none_or_zero, referenced,
> > +					    writable, result);
> >  	return result;
>
> trace_mm_collapse_huge_page_isolate() should take a folio.
>
> >  	release_pte_pages(pte, _pte, compound_pagelist);
> > -	trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
> > -					    referenced, writable, result);
> > +	trace_mm_collapse_huge_page_isolate(folio_page(folio, 0),
> > +					    none_or_zero, referenced,
> > +					    writable, result);
> >  	return result;
>
> Ditto.
>
> > out:
> > -	trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
> > -				     none_or_zero, result, unmapped);
> > +	trace_mm_khugepaged_scan_pmd(mm, folio_page(folio, 0), writable,
> > +				     referenced, none_or_zero, result,
> > +				     unmapped);
> >  	return result;
>
> Ditto.
>
> >  	result = install_pmd
> > -		 ? set_huge_pmd(vma, haddr, pmd, &folio->page)
> > +		 ? set_huge_pmd(vma, haddr, pmd, folio_page(folio, 0))
> >  		 : SCAN_SUCCEED;
>
> I feel that set_huge_pmd() should take a folio.