From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, david@redhat.com
Cc: 21cnbao@gmail.com, ryan.roberts@arm.com, dev.jain@arm.com,
	ziy@nvidia.com, baolin.wang@linux.alibaba.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v2] mm: mincore: use pte_batch_hint() to batch process large folios
Date: Thu, 8 May 2025 12:09:27 +0800

When testing the mincore() syscall, I observed that it takes longer with
64K mTHP enabled on my Arm64 server. The reason is that mincore_pte_range()
still checks each PTE individually, even when the PTEs are contiguous,
which is not efficient. Thus we can use pte_batch_hint() to get the number
of contiguous present PTEs and handle them as a batch, which improves the
performance.

I tested the mincore() syscall with 1G anonymous memory populated with
64K mTHP, and observed an obvious performance improvement:

		w/o patch	w/ patch	changes
		6022us		549us		+91%

Moreover, I also tested mincore() with mTHP/THP disabled, and did not see
any obvious regression for base pages.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
Changes from v1:
 - Change to use pte_batch_hint() to get the batch number, per Ryan.
   Note: I observed that min_t() can introduce a slight performance
   regression for base pages, so I added a batch size check (only take
   the min_t() path when the batch is larger than 1), which resolves
   the regression for base pages.
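For reference, below is a minimal userspace sketch of how this kind of
measurement can be reproduced. It is not the exact test harness behind
the numbers above; the MADV_HUGEPAGE hint and the sysfs path in the
comment are assumptions about how the 64K mTHP population is set up.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 1UL << 30;		/* 1G anonymous mapping */
	long page = sysconf(_SC_PAGESIZE);
	struct timespec t0, t1;
	unsigned char *vec;
	char *buf;
	long us;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * Hint (m)THP allocation and fault the whole range in. 64K mTHP
	 * is assumed to be enabled beforehand, e.g. via
	 * /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled.
	 */
	madvise(buf, len, MADV_HUGEPAGE);
	memset(buf, 1, len);

	/* One status byte per base page, as mincore() expects. */
	vec = malloc(len / page);
	clock_gettime(CLOCK_MONOTONIC, &t0);
	if (mincore(buf, len, vec)) {
		perror("mincore");
		return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	us = (long)(t1.tv_sec - t0.tv_sec) * 1000000L +
	     (t1.tv_nsec - t0.tv_nsec) / 1000;
	printf("mincore() on %zu bytes took %ld us\n", len, us);

	free(vec);
	munmap(buf, len);
	return 0;
}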
---
 mm/mincore.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/mm/mincore.c b/mm/mincore.c
index 832f29f46767..2e6a9123305e 100644
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -21,6 +21,7 @@
 
 #include <linux/uaccess.h>
 #include "swap.h"
+#include "internal.h"
 
 static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
 			unsigned long end, struct mm_walk *walk)
@@ -105,6 +106,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 	pte_t *ptep;
 	unsigned char *vec = walk->private;
 	int nr = (end - addr) >> PAGE_SHIFT;
+	int step, i;
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (ptl) {
@@ -118,16 +120,23 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 		walk->action = ACTION_AGAIN;
 		return 0;
 	}
-	for (; addr != end; ptep++, addr += PAGE_SIZE) {
+	for (; addr != end; ptep += step, addr += step * PAGE_SIZE) {
 		pte_t pte = ptep_get(ptep);
 
+		step = 1;
 		/* We need to do cache lookup too for pte markers */
 		if (pte_none_mostly(pte))
 			__mincore_unmapped_range(addr, addr + PAGE_SIZE,
 						 vma, vec);
-		else if (pte_present(pte))
-			*vec = 1;
-		else { /* pte is a swap entry */
+		else if (pte_present(pte)) {
+			unsigned int batch = pte_batch_hint(ptep, pte);
+
+			if (batch > 1)
+				step = min_t(unsigned int, batch, nr);
+
+			for (i = 0; i < step; i++)
+				vec[i] = 1;
+		} else { /* pte is a swap entry */
 			swp_entry_t entry = pte_to_swp_entry(pte);
 
 			if (non_swap_entry(entry)) {
@@ -146,7 +155,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
 #endif
 			}
 		}
-		vec++;
+		vec += step;
 	}
 	pte_unmap_unlock(ptep - 1, ptl);
 out:
-- 
2.43.5