From mboxrd@z Thu Jan  1 00:00:00 1970
From: nifan.cxl@gmail.com
To: muchun.song@linux.dev, willy@infradead.org
Cc: mcgrof@kernel.org, a.manzanares@samsung.com, dave@stgolabs.net,
	akpm@linux-foundation.org, david@redhat.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Fan Ni
Subject: [PATCH v3 3/4] mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of page
Date: Mon, 28 Apr 2025 10:11:46 -0700
Message-ID: <20250428171608.21111-6-nifan.cxl@gmail.com>
X-Mailer: git-send-email 2.47.2
In-Reply-To: <20250428171608.21111-3-nifan.cxl@gmail.com>
References: <20250428171608.21111-3-nifan.cxl@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Fan Ni

The function
__unmap_hugepage_range() has two kinds of users:

1) unmap_hugepage_range(), which passes in the head page of a folio.
   Since unmap_hugepage_range() already takes folio and there are no
   other uses of the folio struct in the function, it is natural for
   __unmap_hugepage_range() to take folio also.
2) All other uses, which pass in NULL pointer.

In both cases, we can pass in folio. Refactor __unmap_hugepage_range()
to take folio.

Signed-off-by: Fan Ni
---
 include/linux/hugetlb.h |  4 ++--
 mm/hugetlb.c            | 10 +++++-----
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 83d85cbb4284..3a07a60c8cd9 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -133,7 +133,7 @@ void unmap_hugepage_range(struct vm_area_struct *,
 void __unmap_hugepage_range(struct mmu_gather *tlb,
			  struct vm_area_struct *vma,
			  unsigned long start, unsigned long end,
-			  struct page *ref_page, zap_flags_t zap_flags);
+			  struct folio *, zap_flags_t zap_flags);
 void hugetlb_report_meminfo(struct seq_file *);
 int hugetlb_report_node_meminfo(char *buf, int len, int nid);
 void hugetlb_show_meminfo_node(int nid);
@@ -452,7 +452,7 @@ static inline long hugetlb_change_protection(
 static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
			struct vm_area_struct *vma, unsigned long start,
-			unsigned long end, struct page *ref_page,
+			unsigned long end, struct folio *folio,
			zap_flags_t zap_flags)
 {
	BUG();
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7601e3d344bc..6696206d556e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5808,7 +5808,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
 void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
			    unsigned long start, unsigned long end,
-			    struct page *ref_page, zap_flags_t zap_flags)
+			    struct folio *folio, zap_flags_t zap_flags)
 {
	struct mm_struct *mm = vma->vm_mm;
	unsigned long address;
@@ -5885,8 +5885,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
		 * page is being unmapped, not a range. Ensure the page we
		 * are about to unmap is the actual page of interest.
		 */
-		if (ref_page) {
-			if (page != ref_page) {
+		if (folio) {
+			if (page_folio(page) != folio) {
				spin_unlock(ptl);
				continue;
			}
@@ -5952,7 +5952,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
		/*
		 * Bail out after unmapping reference page if supplied
		 */
-		if (ref_page)
+		if (folio)
			break;
	}
	tlb_end_vma(tlb, vma);
@@ -6027,7 +6027,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
	tlb_gather_mmu(&tlb, vma->vm_mm);
	__unmap_hugepage_range(&tlb, vma, start, end,
-			       &folio->page, zap_flags);
+			       folio, zap_flags);
	mmu_notifier_invalidate_range_end(&range);
	tlb_finish_mmu(&tlb);
-- 
2.47.2