Message-ID: <4089150b-0cb8-45f1-bdae-035b047bd9a6@arm.com>
Date: Wed, 16 Apr 2025 14:25:25 +0530
Subject: Re: [PATCH] mempolicy: Optimize queue_folios_pte_range by PTE batching
To: Baolin Wang, David Hildenbrand, akpm@linux-foundation.org
Cc: ryan.roberts@arm.com, willy@infradead.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, hughd@google.com, vishal.moola@gmail.com,
 yang@os.amperecomputing.com, ziy@nvidia.com
References: <20250411081301.8533-1-dev.jain@arm.com>
 <09c77ab5-65fc-4bca-a7e5-2b11bba9330d@redhat.com>
 <9ed4c113-37eb-4e3d-98a1-f46f786aaea9@arm.com>
From: Dev Jain <dev.jain@arm.com>

On 16/04/25 1:03 pm, Baolin Wang wrote:
>
>
> On 2025/4/15 19:47, Dev Jain wrote:
>>
>>
>> On 15/04/25 3:47 pm, David Hildenbrand wrote:
>>> On 11.04.25 10:13, Dev Jain wrote:
>>>> After the check for queue_folio_required(), the code only cares about
>>>> the folio in the for loop, i.e. the PTEs are redundant. Therefore,
>>>> optimize this loop by skipping over a PTE batch mapping the same
>>>> folio.
>>>>
>>>> Signed-off-by: Dev Jain
>>>> ---
>>>> Unfortunately I have only build tested this since my test
>>>> environment is broken.
>>>>
>>>>   mm/mempolicy.c | 12 +++++++++++-
>>>>   1 file changed, 11 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
>>>> index b28a1e6ae096..b019524da8a2 100644
>>>> --- a/mm/mempolicy.c
>>>> +++ b/mm/mempolicy.c
>>>> @@ -573,6 +573,9 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
>>>>       pte_t *pte, *mapped_pte;
>>>>       pte_t ptent;
>>>>       spinlock_t *ptl;
>>>> +    int max_nr;
>>>> +    const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
>>>> +    int nr = 1;
>>>
>>> Try sticking to reverse xmas tree, please. (Not completely the case
>>> here, but fpb_flags can easily be moved all the way to the top.)
>>
>> I thought that the initializations were to be kept at the bottom.
>> Asking for future patches: should I put all declarations in reverse-
>> xmas fashion (even those which I don't intend to touch w.r.t. the
>> patch logic), or do I do that only for my additions?
>>
>>>
>>> Also, why are you initializing nr to 1 here if you reinitialize it
>>> below?
>>
>> Yup, no need. I thought pte += nr would blow up due to nr not being
>> initialized, but it won't, because it gets executed just before the
>> start of the second iteration.
>>
>>>
>>>>       ptl = pmd_trans_huge_lock(pmd, vma);
>>>>       if (ptl) {
>>>> @@ -586,7 +589,8 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
>>>>           walk->action = ACTION_AGAIN;
>>>>           return 0;
>>>>       }
>>>> -    for (; addr != end; pte++, addr += PAGE_SIZE) {
>>>> +    for (; addr != end; pte += nr, addr += nr * PAGE_SIZE) {
>>>> +        nr = 1;
>>>>           ptent = ptep_get(pte);
>>>>           if (pte_none(ptent))
>>>>               continue;
>>>> @@ -607,6 +611,11 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
>>>>           if (!queue_folio_required(folio, qp))
>>>>               continue;
>>>>           if (folio_test_large(folio)) {
>>>> +            max_nr = (end - addr) >> PAGE_SHIFT;
>>>> +            if (max_nr != 1)
>>>> +                nr = folio_pte_batch(folio, addr, pte, ptent,
>>>> +                             max_nr, fpb_flags,
>>>> +                             NULL, NULL, NULL);
>>>
>>> We should probably do that immediately after we verified that
>>> vm_normal_folio() gave us something reasonable.
>>
>> But shouldn't we keep the small folio case separate to avoid the
>> overhead of folio_pte_batch()?
>>
>>>
>>>>               /*
>>>>                * A large folio can only be isolated from LRU once,
>>>>                * but may be mapped by many PTEs (and Copy-On-Write may
>>>> @@ -633,6 +642,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
>>>>               qp->nr_failed++;
>>>>               if (strictly_unmovable(flags))
>>>>                   break;
>>>> +            qp->nr_failed += nr - 1;
>>>
>>> Can't we do qp->nr_failed += nr; above?
>>
>> I did not dive deep into the significance of nr_failed, but I did that
>> to keep the code, before and after the change, equivalent.
>>
>> Claim: if we reach qp->nr_failed++ for a single pte, we will reach it
>> for all ptes belonging to the same batch.
>
> Sorry, I missed the previous discussion (I replied to your new version).
> I think this claim is incorrect; we will skip the remaining ptes
> belonging to the same batch via the 'qp->large' check:
>
>         if (folio_test_large(folio)) {
>             if (folio == qp->large)
>                 continue;
>             qp->large = folio;
>         }

Oops, you are right, I missed that.
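
For completeness, a condensed view of the loop body stitched together
from the hunks quoted above (illustrative only, not the exact
mm/mempolicy.c source; the failure condition is abbreviated behind a
made-up migration_failed placeholder):

	for (; addr != end; pte += nr, addr += nr * PAGE_SIZE) {
		nr = 1;
		ptent = ptep_get(pte);
		if (pte_none(ptent))
			continue;
		/* ... vm_normal_folio() and queue_folio_required() checks ... */
		if (folio_test_large(folio)) {
			max_nr = (end - addr) >> PAGE_SHIFT;
			if (max_nr != 1)
				nr = folio_pte_batch(folio, addr, pte, ptent,
						     max_nr, fpb_flags,
						     NULL, NULL, NULL);
			/*
			 * Every later PTE of the same large folio bails out
			 * here, so it never reaches the nr_failed accounting
			 * below -- which is what breaks the claim above.
			 */
			if (folio == qp->large)
				continue;
			qp->large = folio;
		}
		if (migration_failed) {	/* placeholder for the real condition */
			qp->nr_failed++;
			if (strictly_unmovable(flags))
				break;
			qp->nr_failed += nr - 1;
		}
	}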