linux-mm.kvack.org archive mirror
From: Alexander Gordeev <agordeev@linux.ibm.com>
To: Kevin Brodsky <kevin.brodsky@arm.com>,
	David Hildenbrand <david@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Gerald Schaefer <gerald.schaefer@linux.ibm.com>,
	Heiko Carstens <hca@linux.ibm.com>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>
Cc: linux-s390@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/6] s390/mm: Make PTC and UV call order consistent
Date: Wed, 15 Apr 2026 17:01:22 +0200	[thread overview]
Message-ID: <7b0e73e3c6f4000f9bf7cb161d8ca9a9f2312d70.1776264097.git.agordeev@linux.ibm.com> (raw)
In-Reply-To: <cover.1776264097.git.agordeev@linux.ibm.com>

In various code paths, page_table_check_pte_clear() is called
before converting a secure page, while in others it is called
after. Make this consistent and always perform the conversion
after the PTC hook has been called. Also make all conversion-
eligibility condition checks look the same, and rework the one
in ptep_get_and_clear_full() slightly.

Acked-by: Claudio Imbrenda <imbrenda@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
---
 arch/s390/include/asm/pgtable.h | 39 +++++++++++++++------------------
 1 file changed, 18 insertions(+), 21 deletions(-)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 42688ea4337f..010a33fec867 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1219,10 +1219,10 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
 	pte_t res;
 
 	res = ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
+	page_table_check_pte_clear(mm, addr, res);
 	/* At this point the reference through the mapping is still present */
 	if (mm_is_protected(mm) && pte_present(res))
 		WARN_ON_ONCE(uv_convert_from_secure_pte(res));
-	page_table_check_pte_clear(mm, addr, res);
 	return res;
 }
 
@@ -1238,10 +1238,10 @@ static inline pte_t ptep_clear_flush(struct vm_area_struct *vma,
 	pte_t res;
 
 	res = ptep_xchg_direct(vma->vm_mm, addr, ptep, __pte(_PAGE_INVALID));
+	page_table_check_pte_clear(vma->vm_mm, addr, res);
 	/* At this point the reference through the mapping is still present */
 	if (mm_is_protected(vma->vm_mm) && pte_present(res))
 		WARN_ON_ONCE(uv_convert_from_secure_pte(res));
-	page_table_check_pte_clear(vma->vm_mm, addr, res);
 	return res;
 }
 
@@ -1265,26 +1265,23 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 	} else {
 		res = ptep_xchg_lazy(mm, addr, ptep, __pte(_PAGE_INVALID));
 	}
-
 	page_table_check_pte_clear(mm, addr, res);
-
-	/* Nothing to do */
-	if (!mm_is_protected(mm) || !pte_present(res))
-		return res;
-	/*
-	 * At this point the reference through the mapping is still present.
-	 * The notifier should have destroyed all protected vCPUs at this
-	 * point, so the destroy should be successful.
-	 */
-	if (full && !uv_destroy_pte(res))
-		return res;
-	/*
-	 * If something went wrong and the page could not be destroyed, or
-	 * if this is not a mm teardown, the slower export is used as
-	 * fallback instead. If even that fails, print a warning and leak
-	 * the page, to avoid crashing the whole system.
-	 */
-	WARN_ON_ONCE(uv_convert_from_secure_pte(res));
+	/* At this point the reference through the mapping is still present */
+	if (mm_is_protected(mm) && pte_present(res)) {
+		/*
+		 * The notifier should have destroyed all protected vCPUs at
+		 * this point, so the destroy should be successful.
+		 */
+		if (full && !uv_destroy_pte(res))
+			return res;
+		/*
+		 * If something went wrong and the page could not be destroyed,
+		 * or if this is not a mm teardown, the slower export is used
+		 * as fallback instead. If even that fails, print a warning and
+		 * leak the page, to avoid crashing the whole system.
+		 */
+		WARN_ON_ONCE(uv_convert_from_secure_pte(res));
+	}
 	return res;
 }
 
-- 
2.51.0



Thread overview: 7+ messages
2026-04-15 15:01 [PATCH v2 0/6] s390/mm: Batch PTE updates in lazy MMU mode Alexander Gordeev
2026-04-15 15:01 ` [PATCH v2 1/6] mm: Make lazy MMU mode context-aware Alexander Gordeev
2026-04-15 15:01 ` [PATCH v2 2/6] mm/pgtable: Fix bogus comment to clear_not_present_full_ptes() Alexander Gordeev
2026-04-15 15:01 ` [PATCH v2 3/6] s390/mm: Complete ptep_get() conversion Alexander Gordeev
2026-04-15 15:01 ` Alexander Gordeev [this message]
2026-04-15 15:01 ` [PATCH v2 5/6] s390/mm: Batch PTE updates in lazy MMU mode Alexander Gordeev
2026-04-15 15:01 ` [PATCH v2 6/6] s390/mm: Allow lazy MMU mode disabling Alexander Gordeev
