linux-mm.kvack.org archive mirror
* [PATCH v2] secretmem: Convert page_is_secretmem() to folio_is_secretmem()
@ 2023-08-22 20:23 Matthew Wilcox (Oracle)
  2023-08-23  9:38 ` David Hildenbrand
  0 siblings, 1 reply; 2+ messages in thread
From: Matthew Wilcox (Oracle) @ 2023-08-22 20:23 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Mike Rapoport

The only caller already has a folio, so use it to save calling
compound_head() in PageLRU() and remove a use of page->mapping.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>
---
 include/linux/secretmem.h | 15 +++++++--------
 mm/gup.c                  |  2 +-
 2 files changed, 8 insertions(+), 9 deletions(-)
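
[Editorial aside, not part of the patch: a minimal, self-contained userspace
sketch of why dropping the implicit compound_head() in the page-based
predicate saves work. The struct layouts, flag bits and helper names below
are simplified stand-ins, not the kernel's real definitions from
include/linux/page-flags.h.]

#include <stdbool.h>
#include <stdio.h>

#define PG_lru  (1UL << 0)

/* Toy stand-ins for struct page / struct folio. */
struct page  { unsigned long flags; struct page *head; };
struct folio { struct page page; };

/* A page-based test must first resolve a possible tail page. */
static struct page *compound_head(struct page *page)
{
	return page->head ? page->head : page;
}

static bool PageLRU(struct page *page)
{
	return compound_head(page)->flags & PG_lru;	/* extra hop */
}

/* A folio is the head page by construction, so no lookup is needed. */
static bool folio_test_lru(struct folio *folio)
{
	return folio->page.flags & PG_lru;		/* no compound_head() */
}

int main(void)
{
	struct folio f = { .page = { .flags = PG_lru } };

	printf("PageLRU: %d, folio_test_lru: %d\n",
	       PageLRU(&f.page), folio_test_lru(&f));
	return 0;
}

[Since gup_pte_range() has already converted its page to a folio, the folio
variant of the check avoids repeating that resolution step.]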

diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index 988528b5da43..35f3a4a8ceb1 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -6,24 +6,23 @@
 
 extern const struct address_space_operations secretmem_aops;
 
-static inline bool page_is_secretmem(struct page *page)
+static inline bool folio_is_secretmem(struct folio *folio)
 {
 	struct address_space *mapping;
 
 	/*
-	 * Using page_mapping() is quite slow because of the actual call
-	 * instruction and repeated compound_head(page) inside the
-	 * page_mapping() function.
+	 * Using folio_mapping() is quite slow because of the actual call
+	 * instruction.
 	 * We know that secretmem pages are not compound and LRU so we can
 	 * save a couple of cycles here.
 	 */
-	if (PageCompound(page) || !PageLRU(page))
+	if (folio_test_large(folio) || !folio_test_lru(folio))
 		return false;
 
 	mapping = (struct address_space *)
-		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
+		((unsigned long)folio->mapping & ~PAGE_MAPPING_FLAGS);
 
-	if (!mapping || mapping != page->mapping)
+	if (!mapping || mapping != folio->mapping)
 		return false;
 
 	return mapping->a_ops == &secretmem_aops;
@@ -39,7 +38,7 @@ static inline bool vma_is_secretmem(struct vm_area_struct *vma)
 	return false;
 }
 
-static inline bool page_is_secretmem(struct page *page)
+static inline bool folio_is_secretmem(struct folio *folio)
 {
 	return false;
 }
diff --git a/mm/gup.c b/mm/gup.c
index 6d0c24e93425..2f8a2d89fde1 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2600,7 +2600,7 @@ static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
 		if (!folio)
 			goto pte_unmap;
 
-		if (unlikely(page_is_secretmem(page))) {
+		if (unlikely(folio_is_secretmem(folio))) {
 			gup_put_folio(folio, 1, flags);
 			goto pte_unmap;
 		}
-- 
2.40.1




* Re: [PATCH v2] secretmem: Convert page_is_secretmem() to folio_is_secretmem()
  2023-08-22 20:23 [PATCH v2] secretmem: Convert page_is_secretmem() to folio_is_secretmem() Matthew Wilcox (Oracle)
@ 2023-08-23  9:38 ` David Hildenbrand
  0 siblings, 0 replies; 2+ messages in thread
From: David Hildenbrand @ 2023-08-23  9:38 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle), Andrew Morton; +Cc: linux-mm, Mike Rapoport

On 22.08.23 22:23, Matthew Wilcox (Oracle) wrote:
> The only caller already has a folio, so use it to save calling
> compound_head() in PageLRU() and remove a use of page->mapping.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Reviewed-by: Mike Rapoport (IBM) <rppt@kernel.org>


Reviewed-by: David Hildenbrand <david@redhat.com>


-- 
Cheers,

David / dhildenb




