From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton,
Howlett" , Lorenzo Stoakes , Vlastimil Babka , Jann Horn , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Zi Yan , Matthew Brost , Joshua Hahn , Rakie Kim , Byungchul Park , Gregory Price , Ying Huang , Alistair Popple , Pedro Falcato , Rik van Riel , Harry Yoo Subject: [PATCH v1 1/4] mm: convert FPB_IGNORE_* into FPB_HONOR_* Date: Fri, 27 Jun 2025 13:55:07 +0200 Message-ID: <20250627115510.3273675-2-david@redhat.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250627115510.3273675-1-david@redhat.com> References: <20250627115510.3273675-1-david@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: m7AUwbufbxWYEdkSp5OVVsp3HOxbn8L-8YX8QET4-uU_1751025315 X-Mimecast-Originator: redhat.com Content-Transfer-Encoding: 8bit content-type: text/plain; charset="US-ASCII"; x-default=true X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 194C04000D X-Stat-Signature: 4wquox1451hcmeggzp813z3x395nhg4e X-Rspam-User: X-HE-Tag: 1751025317-800213 X-HE-Meta: U2FsdGVkX19OEpaEBMfv8HI1omVeZoh63/NQ0Ws4S4ubKU4Ke/l6itJHLdTEi30ujqAM1Bb+Yks6F/9uxBBQ5o95HkGTocr44gHD7ap2At/6xJPox5U9+oLlzfkOCQc1rc7IV7Abnh1rp6JemXbAHiRPvGoKBGGXfdMa0nA3+73WOD2wMHGlEEzEw529NlcjAK8dXkDjS2IQhnY9O/jSOezT3qsEjnh2SzLmZEDrACo9WcY48UDBL/cOiwvrcKZJHaMcWK7HPU7RJxGHYKsp2FgpabwybL+Vf/TVdyDQXzVQZM+3Jm83kBdL0OZij9x/E4Lwz/e+LwUNLLiG2gqgYm//EQvZlwRA1za6sqC+Uuzv1wWojm/HioUvFTrMjtDweoOdNd1aE2znSly/i5GzRWpif5fP71aAJ3GXlYKsN/cmVwmq8CR4opWnFHpFGjp0SGbVwth2XEutbrMQV/S74pHXoR8gFfNncvamcnQBCpg5OPVW1YxO7+lH89Od5xpHB7twatMV55TxeUjbyjZ3pcSGSUQBEjqzLFWVr8diTU4BFvl0YiAKYA24/T01uDhiH4TCbURTBGD3G65iMAsRcoxnLfzB1NKoZ2dqoOdaJI7m5B3GeZmwI8HT1CFW2KU0lqpeeimsRQiow0a/RiES2ZIXoxx+H4GgnCjNIfkwtnObeJfVRhXf6AXEVAhjU3cX+jFVYBjOUVtB3DDElQWvh0d3X2eLubpNuQOU3LSQPouKB3t24+5ujwO6E8eJ1b0qjKcPoqCzSncxiSUR87qg8UDxiIu89zrF6lNDorRf018KWFWZ4GL2q8P5+qn7kruDQUGQ3FpVSnuhQyuwzeS2nqipWB7hvihXz4m9BVraiw36zyBP9s1/SE63skTxOQIlvXjXUfLRxbqj3vbmmhTgu82cNhy4Uir/I0SAFT0XB/Y6GV2hcxv740xaUHCKXFOqCLntKreHMHIcTpaJv4Q e92xwm9C gkjnutZq5G4WKRW0mpR+zXkA7Xkm7BN1zfA2SnQPriNcwPTkM7K1yrg9Q3yIPp6CjIN3BxAOL4FZOf1AOLLUMOtt8XbcyZMVYVuztzNWzC266cORpXE4efWwN0Ia5t2oo2F+ALuZgYNXP+I35dwSm5bdIDLr2gi0KdFkIlZRi7Y3oSIZtFoYt/x3NeRm4Zm3ex9ONZ+ioi8CuX6RTBxVRwEqFJifnVgHYlO3bVsPpbNl1nOGHsXGz8ORYg7qgMMhlCue3UeWMi5z+x4J7j2E2gqeSE2xWvcn06lf7y7FAIdhOU/5qVMy7qYoghbKjiK5Yr679/KCqFSjYO+7NxFLR7aa4N+VE22h1wd6LQMFMl727eyMIdbBUTzcG1Krjc4MK3rxgP5HC3+ZQK6NDR/VUcewONq/jOSSeElq0+3H438NJU8BZJV/J6lZTMoqpGOeetqE5ubFxx+QcVkstCbafD8QNFD2wEGdQemCwGRzceyXLdQCVJ8fSF89NH7O/LmxRsOVxy1WdH6DWCyech1zOBmMByOzzGFYMROuZGfgOcuyx1Wnwz3vZyNKfFrbcujv9+GQafFTBtm7xEAodDVUX6VVaTTUxrpC9vqgpYrI1mvlng1w171dsA94JXfQYqClr4D7t+JKfJZvaaJ43HPyxDaMv1c3AsohNknUe X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Honoring these PTE bits is the exception, so let's invert the meaning. With this change, most callers don't have to pass any flags. No functional change intended. Signed-off-by: David Hildenbrand --- mm/internal.h | 16 ++++++++-------- mm/madvise.c | 3 +-- mm/memory.c | 11 +++++------ mm/mempolicy.c | 4 +--- mm/mlock.c | 3 +-- mm/mremap.c | 3 +-- mm/rmap.c | 3 +-- 7 files changed, 18 insertions(+), 25 deletions(-) diff --git a/mm/internal.h b/mm/internal.h index e84217e27778d..9690c75063881 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -202,17 +202,17 @@ static inline void vma_close(struct vm_area_struct *vma) /* Flags for folio_pte_batch(). 
 mm/internal.h  | 16 ++++++++--------
 mm/madvise.c   |  3 +--
 mm/memory.c    | 11 +++++------
 mm/mempolicy.c |  4 +---
 mm/mlock.c     |  3 +--
 mm/mremap.c    |  3 +--
 mm/rmap.c      |  3 +--
 7 files changed, 18 insertions(+), 25 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index e84217e27778d..9690c75063881 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -202,17 +202,17 @@ static inline void vma_close(struct vm_area_struct *vma)
 /* Flags for folio_pte_batch(). */
 typedef int __bitwise fpb_t;
 
-/* Compare PTEs after pte_mkclean(), ignoring the dirty bit. */
-#define FPB_IGNORE_DIRTY		((__force fpb_t)BIT(0))
+/* Compare PTEs honoring the dirty bit. */
+#define FPB_HONOR_DIRTY			((__force fpb_t)BIT(0))
 
-/* Compare PTEs after pte_clear_soft_dirty(), ignoring the soft-dirty bit. */
-#define FPB_IGNORE_SOFT_DIRTY		((__force fpb_t)BIT(1))
+/* Compare PTEs honoring the soft-dirty bit. */
+#define FPB_HONOR_SOFT_DIRTY		((__force fpb_t)BIT(1))
 
 static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
 {
-	if (flags & FPB_IGNORE_DIRTY)
+	if (!(flags & FPB_HONOR_DIRTY))
 		pte = pte_mkclean(pte);
-	if (likely(flags & FPB_IGNORE_SOFT_DIRTY))
+	if (likely(!(flags & FPB_HONOR_SOFT_DIRTY)))
 		pte = pte_clear_soft_dirty(pte);
 	return pte_wrprotect(pte_mkold(pte));
 }
@@ -236,8 +236,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
  * pages of the same large folio.
  *
  * All PTEs inside a PTE batch have the same PTE bits set, excluding the PFN,
- * the accessed bit, writable bit, dirty bit (with FPB_IGNORE_DIRTY) and
- * soft-dirty bit (with FPB_IGNORE_SOFT_DIRTY).
+ * the accessed bit, writable bit, dirty bit (unless FPB_HONOR_DIRTY is set) and
+ * soft-dirty bit (unless FPB_HONOR_SOFT_DIRTY is set).
  *
  * start_ptep must map any page of the folio. max_nr must be at least one and
  * must be limited by the caller so scanning cannot exceed a single page table.
diff --git a/mm/madvise.c b/mm/madvise.c
index e61e32b2cd91f..661bb743d2216 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -347,10 +347,9 @@ static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
 		pte_t pte, bool *any_young, bool *any_dirty)
 {
-	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
 	int max_nr = (end - addr) / PAGE_SIZE;
 
-	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
+	return folio_pte_batch(folio, addr, ptep, pte, max_nr, 0, NULL,
 			       any_young, any_dirty);
 }
 
diff --git a/mm/memory.c b/mm/memory.c
index 0f9b32a20e5b7..ab2d6c1425691 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -990,10 +990,10 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 	 * by keeping the batching logic separate.
 	 */
 	if (unlikely(!*prealloc && folio_test_large(folio) && max_nr != 1)) {
-		if (src_vma->vm_flags & VM_SHARED)
-			flags |= FPB_IGNORE_DIRTY;
-		if (!vma_soft_dirty_enabled(src_vma))
-			flags |= FPB_IGNORE_SOFT_DIRTY;
+		if (!(src_vma->vm_flags & VM_SHARED))
+			flags |= FPB_HONOR_DIRTY;
+		if (vma_soft_dirty_enabled(src_vma))
+			flags |= FPB_HONOR_SOFT_DIRTY;
 
 		nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
 				     &any_writable, NULL, NULL);
@@ -1535,7 +1535,6 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
 		struct zap_details *details, int *rss, bool *force_flush,
 		bool *force_break, bool *any_skipped)
 {
-	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
 	struct mm_struct *mm = tlb->mm;
 	struct folio *folio;
 	struct page *page;
@@ -1565,7 +1564,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
 		 * by keeping the batching logic separate.
 		 */
 		if (unlikely(folio_test_large(folio) && max_nr != 1)) {
-			nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
+			nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, 0,
 					     NULL, NULL, NULL);
 
 			zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 1ff7b2174eb77..2a25eedc3b1c0 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -675,7 +675,6 @@ static void queue_folios_pmd(pmd_t *pmd, struct mm_walk *walk)
 static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 			unsigned long end, struct mm_walk *walk)
 {
-	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
 	struct vm_area_struct *vma = walk->vma;
 	struct folio *folio;
 	struct queue_pages *qp = walk->private;
@@ -713,8 +712,7 @@ static int queue_folios_pte_range(pmd_t *pmd, unsigned long addr,
 			continue;
 		if (folio_test_large(folio) && max_nr != 1)
 			nr = folio_pte_batch(folio, addr, pte, ptent,
-					     max_nr, fpb_flags,
-					     NULL, NULL, NULL);
+					     max_nr, 0, NULL, NULL, NULL);
 		/*
 		 * vm_normal_folio() filters out zero pages, but there might
 		 * still be reserved folios to skip, perhaps in a VDSO.
diff --git a/mm/mlock.c b/mm/mlock.c
index 3cb72b579ffd3..2238cdc5eb1c1 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -307,14 +307,13 @@ void munlock_folio(struct folio *folio)
 static inline unsigned int folio_mlock_step(struct folio *folio,
 		pte_t *pte, unsigned long addr, unsigned long end)
 {
-	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
 	unsigned int count = (end - addr) >> PAGE_SHIFT;
 	pte_t ptent = ptep_get(pte);
 
 	if (!folio_test_large(folio))
 		return 1;
 
-	return folio_pte_batch(folio, addr, pte, ptent, count, fpb_flags, NULL,
+	return folio_pte_batch(folio, addr, pte, ptent, count, 0, NULL,
 			       NULL, NULL);
 }
 
diff --git a/mm/mremap.c b/mm/mremap.c
index 36585041c760d..d4d3ffc931502 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -173,7 +173,6 @@ static pte_t move_soft_dirty_pte(pte_t pte)
 static int mremap_folio_pte_batch(struct vm_area_struct *vma, unsigned long addr,
 		pte_t *ptep, pte_t pte, int max_nr)
 {
-	const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
 	struct folio *folio;
 
 	if (max_nr == 1)
@@ -183,7 +182,7 @@ static int mremap_folio_pte_batch(struct vm_area_struct *vma, unsigned long addr
 	if (!folio || !folio_test_large(folio))
 		return 1;
 
-	return folio_pte_batch(folio, addr, ptep, pte, max_nr, flags, NULL,
+	return folio_pte_batch(folio, addr, ptep, pte, max_nr, 0, NULL,
 			       NULL, NULL);
 }
 
diff --git a/mm/rmap.c b/mm/rmap.c
index 3b74bb19c11dd..a29d7d29c7283 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1849,7 +1849,6 @@ void folio_remove_rmap_pud(struct folio *folio, struct page *page,
 static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
 			struct folio *folio, pte_t *ptep)
 {
-	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
 	int max_nr = folio_nr_pages(folio);
 	pte_t pte = ptep_get(ptep);
 
@@ -1860,7 +1859,7 @@ static inline bool can_batch_unmap_folio_ptes(unsigned long addr,
 	if (pte_pfn(pte) != folio_pfn(folio))
 		return false;
 
-	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
+	return folio_pte_batch(folio, addr, ptep, pte, max_nr, 0, NULL,
 			       NULL, NULL) == max_nr;
 }
-- 
2.49.0