Message-ID: <6a8418ba-dbd1-489f-929b-e31831bea0cf@linux.alibaba.com>
Date: Wed, 7 May 2025 17:48:40 +0800
Subject: Re: [PATCH 2/2] mm: mincore: use folio_pte_batch() to batch process large folios
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Dev Jain <dev.jain@arm.com>, akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, 21cnbao@gmail.com, ryan.roberts@arm.com, ziy@nvidia.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <7ad05bc9299de5d954fb21a2da57f46dd6ec59d0.1742960003.git.baolin.wang@linux.alibaba.com> <17289428-894a-4397-9d61-c8500d032b28@arm.com>
In-Reply-To: <17289428-894a-4397-9d61-c8500d032b28@arm.com>

On 2025/5/7 13:12, Dev Jain wrote:
>
>
> On 26/03/25 9:08 am, Baolin Wang wrote:
>> When I tested the mincore() syscall, I observed that it takes longer
>> with 64K mTHP enabled on my Arm64 server. The reason is that
>> mincore_pte_range() still checks each PTE individually, even when the
>> PTEs are contiguous, which is not efficient.
>>
>> Thus we can use folio_pte_batch() to get the number of contiguous
>> present PTEs in one batch, which can improve the performance. I tested
>> the mincore() syscall with 1G of anonymous memory populated with 64K
>> mTHP, and observed an obvious performance improvement:
>>
>> w/o patch	w/ patch	changes
>> 6022us		1115us		+81%
>>
>> Moreover, I also tested mincore() with mTHP/THP disabled, and did not
>> see any obvious regression.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>   mm/mincore.c | 27 ++++++++++++++++++++++-----
>>   1 file changed, 22 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/mincore.c b/mm/mincore.c
>> index 832f29f46767..88be180b5550 100644
>> --- a/mm/mincore.c
>> +++ b/mm/mincore.c
>> @@ -21,6 +21,7 @@
>>   #include
>>   #include "swap.h"
>> +#include "internal.h"
>>
>>   static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
>>               unsigned long end, struct mm_walk *walk)
>> @@ -105,6 +106,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>>       pte_t *ptep;
>>       unsigned char *vec = walk->private;
>>       int nr = (end - addr) >> PAGE_SHIFT;
>> +    int step, i;
>>
>>       ptl = pmd_trans_huge_lock(pmd, vma);
>>       if (ptl) {
>> @@ -118,16 +120,31 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>>           walk->action = ACTION_AGAIN;
>>           return 0;
>>       }
>> -    for (; addr != end; ptep++, addr += PAGE_SIZE) {
>> +    for (; addr != end; ptep += step, addr += step * PAGE_SIZE) {
>>           pte_t pte = ptep_get(ptep);
>>
>> +        step = 1;
>>           /* We need to do cache lookup too for pte markers */
>>           if (pte_none_mostly(pte))
>>               __mincore_unmapped_range(addr, addr + PAGE_SIZE,
>>                            vma, vec);
>> -        else if (pte_present(pte))
>> -            *vec = 1;
>> -        else { /* pte is a swap entry */
>> +        else if (pte_present(pte)) {
>> +            if (pte_batch_hint(ptep, pte) > 1) {
>> +                struct folio *folio = vm_normal_folio(vma, addr, pte);
>> +
>> +                if (folio && folio_test_large(folio)) {
>> +                    const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
>> +                                FPB_IGNORE_SOFT_DIRTY;
>> +                    int max_nr = (end - addr) / PAGE_SIZE;
>> +
>> +                    step = folio_pte_batch(folio, addr, ptep, pte,
>> +                            max_nr, fpb_flags, NULL, NULL, NULL);
>> +                }
>> +            }
>
> Can we go ahead with this along with [1]? That will help us generalize
> it for all arches.
>
> [1] https://lore.kernel.org/all/20250506050056.59250-3-dev.jain@arm.com/
> (Please replace PAGE_SIZE with 1)

As discussed with Ryan, we don't need to call folio_pte_batch() at all
here (something like the code below), so your patch seems unnecessarily
complicated. However, David is unhappy with open-coding pte_batch_hint().
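
For context (quoting the generic fallback from memory, so please
double-check against the tree): pte_batch_hint() returns 1 unless the
arch overrides it, so on arches without arm64's contpte hint the loop
below simply degrades to the old one-PTE-at-a-time behaviour:

/* include/linux/pgtable.h -- generic fallback, quoted from memory */
#ifndef pte_batch_hint
static inline unsigned int pte_batch_hint(pte_t *ptep, pte_t pte)
{
	/* No arch hint available: callers step a single PTE at a time. */
	return 1;
}
#endif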
 static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
             unsigned long end, struct mm_walk *walk)
@@ -105,6 +106,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
     pte_t *ptep;
     unsigned char *vec = walk->private;
     int nr = (end - addr) >> PAGE_SHIFT;
+    int step, i;

     ptl = pmd_trans_huge_lock(pmd, vma);
     if (ptl) {
@@ -118,16 +120,21 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
         walk->action = ACTION_AGAIN;
         return 0;
     }
-    for (; addr != end; ptep++, addr += PAGE_SIZE) {
+    for (; addr != end; ptep += step, addr += step * PAGE_SIZE) {
         pte_t pte = ptep_get(ptep);

+        step = 1;
         /* We need to do cache lookup too for pte markers */
         if (pte_none_mostly(pte))
             __mincore_unmapped_range(addr, addr + PAGE_SIZE,
                          vma, vec);
-        else if (pte_present(pte))
-            *vec = 1;
-        else { /* pte is a swap entry */
+        else if (pte_present(pte)) {
+            unsigned int max_nr = (end - addr) / PAGE_SIZE;
+
+            step = min(pte_batch_hint(ptep, pte), max_nr);
+            for (i = 0; i < step; i++)
+                vec[i] = 1;
+        } else { /* pte is a swap entry */
             swp_entry_t entry = pte_to_swp_entry(pte);

             if (non_swap_entry(entry)) {
@@ -146,7 +153,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 #endif
             }
         }
-        vec++;
+        vec += step;
     }
     pte_unmap_unlock(ptep - 1, ptl);
 out:
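
FWIW, the kind of measurement quoted above can be reproduced with a
small userspace program along these lines (a rough sketch, error
handling trimmed; it assumes 64K mTHP has already been enabled via
/sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define SIZE	(1UL << 30)	/* 1G of anonymous memory */

int main(void)
{
	size_t nr_pages = SIZE / sysconf(_SC_PAGESIZE);
	unsigned char *vec = malloc(nr_pages);
	struct timespec t0, t1;
	char *buf;

	buf = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED || !vec)
		return 1;

	/* Fault everything in; with the 64K sysfs knob set to "always",
	 * this should be populated with 64K mTHP. */
	memset(buf, 1, SIZE);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	if (mincore(buf, SIZE, vec))
		return 1;
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("mincore() took %ldus\n",
	       (t1.tv_sec - t0.tv_sec) * 1000000L +
	       (t1.tv_nsec - t0.tv_nsec) / 1000);
	return 0;
}

Comparing the printed time with and without the change should show the
same trend as the numbers quoted above.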