From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: <akpm@linux-foundation.org>, <linux-mm@kvack.org>
Cc: Tony Luck <tony.luck@intel.com>,
Miaohe Lin <linmiaohe@huawei.com>, <nao.horiguchi@gmail.com>,
Matthew Wilcox <willy@infradead.org>,
David Hildenbrand <david@redhat.com>,
Muchun Song <muchun.song@linux.dev>,
Benjamin LaHaise <bcrl@kvack.org>, <jglisse@redhat.com>,
Jiaqi Yan <jiaqiyan@google.com>, Hugh Dickins <hughd@google.com>,
Vishal Moola <vishal.moola@gmail.com>,
Alistair Popple <apopple@nvidia.com>,
Jane Chu <jane.chu@oracle.com>,
Oscar Salvador <osalvador@suse.de>,
Lance Yang <ioworker0@gmail.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH v4 3/6] mm: migrate: split folio_migrate_mapping()
Date: Mon, 3 Jun 2024 17:24:36 +0800
Message-ID: <20240603092439.3360652-4-wangkefeng.wang@huawei.com>
In-Reply-To: <20240603092439.3360652-1-wangkefeng.wang@huawei.com>
The refcount check for the !mapping case and the folio_ref_freeze() for
the mapping case are moved out of the original folio_migrate_mapping()
into its caller; once the new __folio_migrate_mapping() is entered there
is no turning back. Also update the comments from page to folio.
Note that folio_ref_freeze() is now called outside xas_lock_irq(). Since
the folio is already isolated and locked during migration, this should
cause no functional change.
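For context, a minimal sketch of a caller, loosely modeled on
migrate_folio_extra() in mm/migrate.c; the name example_migrate_folio()
and the simplified body are illustrative only, not the exact upstream
code:
	/*
	 * Illustrative sketch, assuming the usual mm/migrate.c context
	 * (<linux/migrate.h>). The -EAGAIN now comes from the refcount
	 * check / folio_ref_freeze() done in folio_migrate_mapping()
	 * itself, before the no-fail __folio_migrate_mapping() runs.
	 */
	static int example_migrate_folio(struct address_space *mapping,
			struct folio *dst, struct folio *src)
	{
		int rc;
		rc = folio_migrate_mapping(mapping, dst, src, 0);
		if (rc != MIGRATEPAGE_SUCCESS)
			return rc;	/* unexpected references, caller retries */
		folio_migrate_copy(dst, src);	/* copy contents and flags */
		return MIGRATEPAGE_SUCCESS;
	}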
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/migrate.c | 63 ++++++++++++++++++++++++++--------------------------
1 file changed, 32 insertions(+), 31 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index e04b451c4289..e930376c261a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -393,50 +393,36 @@ static int folio_expected_refs(struct address_space *mapping,
}
/*
- * Replace the page in the mapping.
+ * Replace the folio in the mapping.
*
* The number of remaining references must be:
- * 1 for anonymous pages without a mapping
- * 2 for pages with a mapping
- * 3 for pages with a mapping and PagePrivate/PagePrivate2 set.
+ * 1 for anonymous folios without a mapping
+ * 2 for folios with a mapping
+ * 3 for folios with a mapping and PagePrivate/PagePrivate2 set.
*/
-int folio_migrate_mapping(struct address_space *mapping,
- struct folio *newfolio, struct folio *folio, int extra_count)
+static void __folio_migrate_mapping(struct address_space *mapping,
+ struct folio *newfolio, struct folio *folio, int expected_cnt)
{
XA_STATE(xas, &mapping->i_pages, folio_index(folio));
struct zone *oldzone, *newzone;
- int dirty;
- int expected_count = folio_expected_refs(mapping, folio) + extra_count;
long nr = folio_nr_pages(folio);
long entries, i;
+ int dirty;
if (!mapping) {
- /* Anonymous page without mapping */
- if (folio_ref_count(folio) != expected_count)
- return -EAGAIN;
-
- /* No turning back from here */
+ /* Anonymous folio without mapping */
newfolio->index = folio->index;
newfolio->mapping = folio->mapping;
if (folio_test_swapbacked(folio))
__folio_set_swapbacked(newfolio);
-
- return MIGRATEPAGE_SUCCESS;
+ return;
}
oldzone = folio_zone(folio);
newzone = folio_zone(newfolio);
xas_lock_irq(&xas);
- if (!folio_ref_freeze(folio, expected_count)) {
- xas_unlock_irq(&xas);
- return -EAGAIN;
- }
-
- /*
- * Now we know that no one else is looking at the folio:
- * no turning back from here.
- */
+ /* Now we know that no one else is looking at the folio */
newfolio->index = folio->index;
newfolio->mapping = folio->mapping;
folio_ref_add(newfolio, nr); /* add cache reference */
@@ -452,7 +438,7 @@ int folio_migrate_mapping(struct address_space *mapping,
entries = 1;
}
- /* Move dirty while page refs frozen and newpage not yet exposed */
+ /* Move dirty while folio refs frozen and newfolio not yet exposed */
dirty = folio_test_dirty(folio);
if (dirty) {
folio_clear_dirty(folio);
@@ -466,22 +452,22 @@ int folio_migrate_mapping(struct address_space *mapping,
}
/*
- * Drop cache reference from old page by unfreezing
- * to one less reference.
+ * Since the old folio's refcount is frozen, drop the cache reference
+ * from the old folio by unfreezing to one less reference.
* We know this isn't the last reference.
*/
- folio_ref_unfreeze(folio, expected_count - nr);
+ folio_ref_unfreeze(folio, expected_cnt - nr);
xas_unlock(&xas);
/* Leave irq disabled to prevent preemption while updating stats */
/*
* If moved to a different zone then also account
- * the page for that zone. Other VM counters will be
+ * the folio for that zone. Other VM counters will be
* taken care of when we establish references to the
- * new page and drop references to the old page.
+ * new folio and drop references to the old folio.
*
- * Note that anonymous pages are accounted for
+ * Note that anonymous folios are accounted for
* via NR_FILE_PAGES and NR_ANON_MAPPED if they
* are mapped to swap space.
*/
@@ -518,7 +504,22 @@ int folio_migrate_mapping(struct address_space *mapping,
}
}
local_irq_enable();
+}
+
+int folio_migrate_mapping(struct address_space *mapping, struct folio *newfolio,
+ struct folio *folio, int extra_count)
+{
+ int expected_cnt = folio_expected_refs(mapping, folio) + extra_count;
+
+ if (!mapping) {
+ if (folio_ref_count(folio) != expected_cnt)
+ return -EAGAIN;
+ } else {
+ if (!folio_ref_freeze(folio, expected_cnt))
+ return -EAGAIN;
+ }
+ __folio_migrate_mapping(mapping, newfolio, folio, expected_cnt);
return MIGRATEPAGE_SUCCESS;
}
EXPORT_SYMBOL(folio_migrate_mapping);
--
2.27.0