From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 9 Apr 2025 04:15:30 +0100
From: Matthew Wilcox <willy@infradead.org>
To: nifan.cxl@gmail.com
Cc: muchun.song@linux.dev, mcgrof@kernel.org, a.manzanares@samsung.com,
	dave@stgolabs.net, akpm@linux-foundation.org, david@redhat.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Fan Ni
Subject: Re: [PATCH] mm/hugetlb: Convert &folio->page to folio_page(folio, 0)
Message-ID:
References: <20250409004937.634713-1-nifan.cxl@gmail.com>
MIME-Version: 1.0
Content-Type:
text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20250409004937.634713-1-nifan.cxl@gmail.com>

On Tue, Apr 08, 2025 at 05:49:10PM -0700, nifan.cxl@gmail.com wrote:
> From: Fan Ni
>
> Convert the use of &folio->page to
folio_page(folio, 0) where struct
> filio fits in. This is part of the efforts to move some fields out of
> struct page to reduce its size.

Thanks for sending the patch.  You've mixed together quite a few things;
I'd suggest focusing on one API at a time.

> 	folio_get(folio);
> -	folio_add_file_rmap_pmd(folio, &folio->page, vma);
> +	folio_add_file_rmap_pmd(folio, folio_page(folio, 0), vma);
> 	add_mm_counter(mm, mm_counter_file(folio), HPAGE_PMD_NR);

I think this is fine, but would defer to David Hildenbrand.

> 	folio_get(folio);
> -	folio_add_file_rmap_pud(folio, &folio->page, vma);
> +	folio_add_file_rmap_pud(folio, folio_page(folio, 0), vma);
> 	add_mm_counter(mm, mm_counter_file(folio), HPAGE_PUD_NR);

If that is fine, then so is this (put them in the same patchset).

> 	spin_unlock(ptl);
> -	if (flush_needed)
> -		tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
> +	if (flush_needed) {
> +		tlb_remove_page_size(tlb, folio_page(folio, 0),
> +				     HPAGE_PMD_SIZE);
> +	}

You don't need to add the extra braces here.  I haven't looked into
this family of APIs; not sure if we should be passing the folio here
or if it should be taking a folio argument.

> 	if (folio_maybe_dma_pinned(src_folio) ||
> -	    !PageAnonExclusive(&src_folio->page)) {
> +	    !PageAnonExclusive(folio_page(src_folio, 0))) {
> 		err = -EBUSY;

mmm.  Another David question.

> 	for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
> -		struct page *new_head = &folio->page + i;
> +		struct page *new_head = folio_page(folio, i);

This is definitely the right thing to do.

> @@ -3403,7 +3405,7 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> 	if (new_order)
> 		folio_set_order(folio, new_order);
> 	else
> -		ClearPageCompound(&folio->page);
> +		ClearPageCompound(folio_page(folio, 0));
> }

I might be inclined to leave this one alone; this whole function needs
to be rewritten as part of the folio split.
> 	folio_split_memcg_refs(folio, old_order, split_order);
> -	split_page_owner(&folio->page, old_order, split_order);
> +	split_page_owner(folio_page(folio, 0), old_order, split_order);
> 	pgalloc_tag_split(folio, old_order, split_order);

Not sure if split_folio_owner is something that should exist.  Haven't
looked into it.

> 	 */
> -	free_page_and_swap_cache(&new_folio->page);
> +	free_page_and_swap_cache(folio_page(new_folio, 0));
> }

free_page_and_swap_cache() should be converted to be
free_folio_and_swap_cache().

>
> -	return __folio_split(folio, new_order, &folio->page, page, list, true);
> +	return __folio_split(folio, new_order, folio_page(folio, 0), page,
> +			     list, true);
> }

Probably right.

> {
> -	return __folio_split(folio, new_order, split_at, &folio->page, list,
> -			     false);
> +	return __folio_split(folio, new_order, split_at, folio_page(folio, 0),
> +			     list, false);
> }

Ditto.

>
> -	return split_huge_page_to_list_to_order(&folio->page, list, ret);
> +	return split_huge_page_to_list_to_order(folio_page(folio, 0), list,
> +						ret);
> }

Ditto.

>
> -		if (is_migrate_isolate_page(&folio->page))
> +		if (is_migrate_isolate_page(folio_page(folio, 0)))
> 			continue;

I think we need an is_migrate_isolate_folio() instead of this.

> 	if (folio_test_anon(folio))
> -		__ClearPageAnonExclusive(&folio->page);
> +		__ClearPageAnonExclusive(folio_page(folio, 0));
> 	folio->mapping = NULL;

... David.

>
> -	split_page_owner(&folio->page, huge_page_order(src), huge_page_order(dst));
> +	split_page_owner(folio_page(folio, 0), huge_page_order(src),
> +			 huge_page_order(dst));

See earlier.

> 	if (folio_mapcount(old_folio) == 1 && folio_test_anon(old_folio)) {
> -		if (!PageAnonExclusive(&old_folio->page)) {
> +		if (!PageAnonExclusive(folio_page(old_folio, 0))) {
> 			folio_move_anon_rmap(old_folio, vma);
> -			SetPageAnonExclusive(&old_folio->page);
> +			SetPageAnonExclusive(folio_page(old_folio, 0));
> 		}

David.
> 	}
> 	VM_BUG_ON_PAGE(folio_test_anon(old_folio) &&
> -		       PageAnonExclusive(&old_folio->page), &old_folio->page);
> +		       PageAnonExclusive(folio_page(old_folio, 0)),
> +		       folio_page(old_folio, 0));

The PageAnonExclusive() part of this change is for David to comment on,
but this should be a VM_BUG_ON_FOLIO() instead of calling folio_page()
to keep this a VM_BUG_ON_PAGE().

>
> -			unmap_ref_private(mm, vma, &old_folio->page,
> -					  vmf->address);
> +			unmap_ref_private(mm, vma, folio_page(old_folio, 0),
> +					  vmf->address);

unmap_ref_private() only has one caller (this one), so make that take
a folio.  This is a whole series, all by itself.

> 	hugetlb_cgroup_migrate(old_folio, new_folio);
> -	set_page_owner_migrate_reason(&new_folio->page, reason);
> +	set_page_owner_migrate_reason(folio_page(new_folio, 0), reason);
>

See earlier about page owner being folio or page based.

> 	int ret;
> -	unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
> +	unsigned long vmemmap_start = (unsigned long)folio_page(folio, 0), vmemmap_end;
> 	unsigned long vmemmap_reuse;

Probably right.

> 	int ret = 0;
> -	unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
> +	unsigned long vmemmap_start = (unsigned long)folio_page(folio, 0), vmemmap_end;
> 	unsigned long vmemmap_reuse;

Ditto.

> -	unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
> +	unsigned long vmemmap_start = (unsigned long)folio_page(folio, 0), vmemmap_end;
> 	unsigned long vmemmap_reuse;

Ditto.

> 	 */
> -	spfn = (unsigned long)&folio->page;
> +	spfn = (unsigned long)folio_page(folio, 0);

Ditto.

> 	register_page_bootmem_memmap(pfn_to_section_nr(spfn),
> -				     &folio->page,
> -				     HUGETLB_VMEMMAP_RESERVE_SIZE);
> +				     folio_page(folio, 0),
> +				     HUGETLB_VMEMMAP_RESERVE_SIZE);

Don't change the indentation, but looks right.
> 	result = SCAN_SUCCEED;
> -	trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
> -					    referenced, writable, result);
> +	trace_mm_collapse_huge_page_isolate(folio_page(folio, 0),
> +					    none_or_zero, referenced,
> +					    writable, result);
> 	return result;

trace_mm_collapse_huge_page_isolate() should take a folio.

> 	release_pte_pages(pte, _pte, compound_pagelist);
> -	trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
> -					    referenced, writable, result);
> +	trace_mm_collapse_huge_page_isolate(folio_page(folio, 0),
> +					    none_or_zero, referenced,
> +					    writable, result);
> 	return result;

Ditto.

> out:
> -	trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
> -				     none_or_zero, result, unmapped);
> +	trace_mm_khugepaged_scan_pmd(mm, folio_page(folio, 0), writable,
> +				     referenced, none_or_zero, result,
> +				     unmapped);
> 	return result;

Ditto.

> 	result = install_pmd
> -		? set_huge_pmd(vma, haddr, pmd, &folio->page)
> +		? set_huge_pmd(vma, haddr, pmd, folio_page(folio, 0))
> 		: SCAN_SUCCEED;

I feel that set_huge_pmd() should take a folio.