From: Shiyang Ruan <ruansy.fnst@fujitsu.com>
To: <linux-kernel@vger.kernel.org>, <linux-xfs@vger.kernel.org>,
<nvdimm@lists.linux.dev>, <linux-mm@kvack.org>,
<linux-fsdevel@vger.kernel.org>
Cc: <djwong@kernel.org>, <dan.j.williams@intel.com>,
<david@fromorbit.com>, <hch@infradead.org>, <jane.chu@oracle.com>
Subject: [PATCH v8 5/9] fsdax: Introduce dax_lock_mapping_entry()
Date: Thu, 2 Dec 2021 16:48:52 +0800
Message-ID: <20211202084856.1285285-6-ruansy.fnst@fujitsu.com>
In-Reply-To: <20211202084856.1285285-1-ruansy.fnst@fujitsu.com>
The current dax_lock_page() locks the dax entry by obtaining the mapping
and index from the page. To support 1-to-N RMAP in NVDIMM, we need a new
function that locks a specific dax entry corresponding to a given file's
mapping and index. In addition, output the page corresponding to that
dax entry for the caller's use.
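
As a rough illustration only (not part of this patch), a caller that
already knows the file's mapping and page offset might use the new
helpers as sketched below; the wrapper function and its name are
hypothetical:

	#include <linux/dax.h>
	#include <linux/errno.h>
	#include <linux/pagemap.h>

	/* Hypothetical example, not part of this patch. */
	static int example_handle_dax_entry(struct address_space *mapping,
					    pgoff_t index)
	{
		struct page *page = NULL;
		dax_entry_t cookie;

		/* Lock the entry at (mapping, index); sets @page on success. */
		cookie = dax_lock_mapping_entry(mapping, index, &page);
		if (!cookie)
			return -EBUSY;	/* not a dax mapping, could not lock */

		/*
		 * ~0UL is the special cookie: no pfn-backed entry exists
		 * yet (absent, zero or empty entry) and @page was not set.
		 */
		if (cookie != ~0UL) {
			/* ... operate on the locked dax page here ... */
		}

		/* A ~0UL cookie is ignored by dax_unlock_mapping_entry(). */
		dax_unlock_mapping_entry(mapping, index, cookie);
		return 0;
	}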
Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
---
fs/dax.c | 65 ++++++++++++++++++++++++++++++++++++++++++++-
include/linux/dax.h | 15 +++++++++++
2 files changed, 79 insertions(+), 1 deletion(-)
diff --git a/fs/dax.c b/fs/dax.c
index 1f46810d4b68..b3c737aff9de 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -390,7 +390,7 @@ static struct page *dax_busy_page(void *entry)
}
/*
- * dax_lock_mapping_entry - Lock the DAX entry corresponding to a page
+ * dax_lock_page - Lock the DAX entry corresponding to a page
* @page: The page whose entry we want to lock
*
* Context: Process context.
@@ -455,6 +455,69 @@ void dax_unlock_page(struct page *page, dax_entry_t cookie)
dax_unlock_entry(&xas, (void *)cookie);
}
+/*
+ * dax_lock_mapping_entry - Lock the DAX entry corresponding to a mapping
+ * @mapping: the file's mapping whose entry we want to lock
+ * @index: the page offset within this file
+ * @page: output pointer set to the dax page corresponding to this entry
+ *
+ * Return: A cookie to pass to dax_unlock_mapping_entry() or 0 if the entry
+ * could not be locked.
+ */
+dax_entry_t dax_lock_mapping_entry(struct address_space *mapping, pgoff_t index,
+ struct page **page)
+{
+ XA_STATE(xas, NULL, 0);
+ void *entry;
+
+ rcu_read_lock();
+ for (;;) {
+ entry = NULL;
+ if (!dax_mapping(mapping))
+ break;
+
+ xas.xa = &mapping->i_pages;
+ xas_lock_irq(&xas);
+ xas_set(&xas, index);
+ entry = xas_load(&xas);
+ if (dax_is_locked(entry)) {
+ rcu_read_unlock();
+ wait_entry_unlocked(&xas, entry);
+ rcu_read_lock();
+ continue;
+ }
+ if (!entry ||
+ dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
+ /*
+ * Because we look up the entry by the file's mapping
+ * and index, the entry may not have been inserted yet,
+ * or it may be a zero/empty entry. This is not an
+ * error case, so return a special cookie and do not
+ * output @page.
+ */
+ entry = (void *)~0UL;
+ } else {
+ *page = pfn_to_page(dax_to_pfn(entry));
+ dax_lock_entry(&xas, entry);
+ }
+ xas_unlock_irq(&xas);
+ break;
+ }
+ rcu_read_unlock();
+ return (dax_entry_t)entry;
+}
+
+void dax_unlock_mapping_entry(struct address_space *mapping, pgoff_t index,
+ dax_entry_t cookie)
+{
+ XA_STATE(xas, &mapping->i_pages, index);
+
+ if (cookie == ~0UL)
+ return;
+
+ dax_unlock_entry(&xas, (void *)cookie);
+}
+
/*
* Find page cache entry at given index. If it is a DAX entry, return it
* with the entry locked. If the page cache doesn't contain an entry at
diff --git a/include/linux/dax.h b/include/linux/dax.h
index f01684a63447..7e75d2c45f78 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -166,6 +166,10 @@ struct page *dax_layout_busy_page(struct address_space *mapping);
struct page *dax_layout_busy_page_range(struct address_space *mapping, loff_t start, loff_t end);
dax_entry_t dax_lock_page(struct page *page);
void dax_unlock_page(struct page *page, dax_entry_t cookie);
+dax_entry_t dax_lock_mapping_entry(struct address_space *mapping,
+ unsigned long index, struct page **page);
+void dax_unlock_mapping_entry(struct address_space *mapping,
+ unsigned long index, dax_entry_t cookie);
#else
static inline struct page *dax_layout_busy_page(struct address_space *mapping)
{
@@ -193,6 +197,17 @@ static inline dax_entry_t dax_lock_page(struct page *page)
static inline void dax_unlock_page(struct page *page, dax_entry_t cookie)
{
}
+
+static inline dax_entry_t dax_lock_mapping_entry(struct address_space *mapping,
+ unsigned long index, struct page **page)
+{
+ return 0;
+}
+
+static inline void dax_unlock_mapping_entry(struct address_space *mapping,
+ unsigned long index, dax_entry_t cookie)
+{
+}
#endif
int dax_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
--
2.34.0