From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
David Hildenbrand <david@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Peter Xu <peterx@redhat.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>, Hugh Dickins <hughd@google.com>,
Seth Jennings <sjenning@redhat.com>,
Dan Streetman <ddstreet@ieee.org>,
Vitaly Wool <vitaly.wool@konsulko.com>
Subject: [PATCH mm-unstable v1 4/4] mm/huge_memory: work on folio->swap instead of page->private when splitting folio
Date: Mon, 21 Aug 2023 18:08:49 +0200
Message-ID: <20230821160849.531668-5-david@redhat.com>
In-Reply-To: <20230821160849.531668-1-david@redhat.com>

Let's work on folio->swap instead. While at it, use folio_test_anon() and
folio_test_swapcache() -- the original folio remains valid even after
splitting (but is then an order-0 folio).

We can probably convert a lot more of that code to folios; let's focus on
folio->swap handling only for now.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
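For readers following along, here is a minimal before/after sketch of the
access pattern this patch converts -- purely illustrative, not part of the
patch, and assuming the dedicated folio->swap field introduced in patch
2/4 of this series (the helper names are hypothetical):

	/*
	 * Illustrative sketch only. "head" is the head page of a large
	 * folio sitting in the swap cache.
	 */
	static swp_entry_t swap_entry_old(struct page *head)
	{
		/* Old: reinterpret the head page's ->private as a swap entry. */
		swp_entry_t entry = { .val = page_private(head) };

		return entry;
	}

	static swp_entry_t swap_entry_new(struct folio *folio)
	{
		/* New: read the typed swap entry straight from the folio. */
		return folio->swap;
	}

The typed field avoids casting an unsigned long back and forth and makes
it explicit that the swap entry lives on the folio, not on tail pages.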
mm/huge_memory.c | 29 +++++++++++++++--------------
1 file changed, 15 insertions(+), 14 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c04702ae71d2..4465915711c3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2401,10 +2401,16 @@ static void lru_add_page_tail(struct page *head, struct page *tail,
}
}
-static void __split_huge_page_tail(struct page *head, int tail,
+static void __split_huge_page_tail(struct folio *folio, int tail,
struct lruvec *lruvec, struct list_head *list)
{
+ struct page *head = &folio->page;
struct page *page_tail = head + tail;
+ /*
+ * Careful: new_folio is not a "real" folio before we cleared PageTail.
+ * Don't pass it around before clear_compound_head().
+ */
+ struct folio *new_folio = (struct folio *)page_tail;
VM_BUG_ON_PAGE(atomic_read(&page_tail->_mapcount) != -1, page_tail);
@@ -2453,8 +2459,8 @@ static void __split_huge_page_tail(struct page *head, int tail,
VM_WARN_ON_ONCE_PAGE(true, page_tail);
page_tail->private = 0;
}
- if (PageSwapCache(head))
- set_page_private(page_tail, (unsigned long)head->private + tail);
+ if (folio_test_swapcache(folio))
+ new_folio->swap.val = folio->swap.val + tail;
/* Page flags must be visible before we make the page non-compound. */
smp_wmb();
@@ -2500,11 +2506,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
/* complete memcg works before add pages to LRU */
split_page_memcg(head, nr);
- if (PageAnon(head) && PageSwapCache(head)) {
- swp_entry_t entry = { .val = page_private(head) };
-
- offset = swp_offset(entry);
- swap_cache = swap_address_space(entry);
+ if (folio_test_anon(folio) && folio_test_swapcache(folio)) {
+ offset = swp_offset(folio->swap);
+ swap_cache = swap_address_space(folio->swap);
xa_lock(&swap_cache->i_pages);
}
@@ -2514,7 +2518,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
ClearPageHasHWPoisoned(head);
for (i = nr - 1; i >= 1; i--) {
- __split_huge_page_tail(head, i, lruvec, list);
+ __split_huge_page_tail(folio, i, lruvec, list);
/* Some pages can be beyond EOF: drop them from page cache */
if (head[i].index >= end) {
struct folio *tail = page_folio(head + i);
@@ -2559,11 +2563,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
remap_page(folio, nr);
- if (PageSwapCache(head)) {
- swp_entry_t entry = { .val = page_private(head) };
-
- split_swap_cluster(entry);
- }
+ if (folio_test_swapcache(folio))
+ split_swap_cluster(folio->swap);
for (i = 0; i < nr; i++) {
struct page *subpage = head + i;
--
2.41.0