From: "NeilBrown" <neilb@suse.de>
To: "Matthew Wilcox" <willy@infradead.org>
Cc: linux-doc@vger.kernel.org, linux-mm@kvack.org,
linux-fsdevel@vger.kernel.org,
"Miaohe Lin" <linmiaohe@huawei.com>
Subject: Re: [RFC] Documentation for folio_lock() and friends
Date: Tue, 05 Apr 2022 15:48:19 +1000 [thread overview]
Message-ID: <164913769939.10985.13675614818955421206@noble.neil.brown.name> (raw)
In-Reply-To: <YkspW4HDL54xEg69@casper.infradead.org>
On Tue, 05 Apr 2022, Matthew Wilcox wrote:
> It's a shame to not have these functions documented. I'm sure I've
> missed a few things that would be useful to document here.
>
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index ab47579af434..47b7851f1b64 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -888,6 +888,18 @@ bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
> void unlock_page(struct page *page);
> void folio_unlock(struct folio *folio);
>
> +/**
> + * folio_trylock() - Attempt to lock a folio.
> + * @folio: The folio to attempt to lock.
> + *
> + * Sometimes it is undesirable to wait for a folio to be unlocked (eg
> + * when the locks are being taken in the wrong order, or if making
> + * progress through a batch of folios is more important than processing
> + * them in order). Usually folio_lock() is the correct function to call.
Usually?
I think a "see also" type reference to folio_lock() is useful, but I
don't think "usually" is helpful.
> + *
> + * Context: Any context.
> + * Return: Whether the lock was successfully acquired.
> + */
> static inline bool folio_trylock(struct folio *folio)
> {
> return likely(!test_and_set_bit_lock(PG_locked, folio_flags(folio, 0)));
> @@ -901,6 +913,26 @@ static inline int trylock_page(struct page *page)
> return folio_trylock(page_folio(page));
> }
>
> +/**
> + * folio_lock() - Lock this folio.
> + * @folio: The folio to lock.
> + *
> + * The folio lock protects against many things, probably more than it
> + * should. It is primarily held while a folio is being read from storage,
> + * either from its backing file or from swap. It is also held while a
> + * folio is being truncated from its address_space.
> + *
> + * Holding the lock usually prevents the contents of the folio from being
> + * modified by other kernel users, although it does not prevent userspace
> + * from modifying data if it's mapped. The lock is not consistently held
> + * while a folio is being modified by DMA.
I don't think this paragraph is helpful... maybe if it listed which
changes *are* prevented by the lock, rather than a few which aren't?
I think it is significant that the lock prevents removal from the page
cache, and so ->mapping is only stable while the lock is held. It might
be worth adding something about that.
> + *
> + * Context: May sleep. If you need to acquire the locks of two or
> + * more folios, they must be in order of ascending index, if they are
> + * in the same address_space. If they are in different address_spaces,
> + * acquire the lock of the folio which belongs to the address_space which
> + * has the lowest address in memory first.
> + */
> static inline void folio_lock(struct folio *folio)
> {
> might_sleep();
> @@ -908,6 +940,17 @@ static inline void folio_lock(struct folio *folio)
> __folio_lock(folio);
> }
>
> +/**
> + * lock_page() - Lock the folio containing this page.
> + * @page: The page to lock.
> + *
> + * See folio_lock() for a description of what the lock protects.
> + * This is a legacy function and new code should probably use folio_lock()
> + * instead.
> + *
> + * Context: May sleep. Pages in the same folio share a lock, so do not
> + * attempt to lock two pages which share a folio.
> + */
> static inline void lock_page(struct page *page)
> {
> struct folio *folio;
> @@ -918,6 +961,16 @@ static inline void lock_page(struct page *page)
> __folio_lock(folio);
> }
>
> +/**
> + * folio_lock_killable() - Lock this folio, interruptible by a fatal signal.
> + * @folio: The folio to lock.
> + *
> + * Attempts to lock the folio, like folio_lock(), except that the sleep
> + * to acquire the lock is interruptible by a fatal signal.
> + *
> + * Context: May sleep; see folio_lock().
> + * Return: 0 if the lock was acquired; -EINTR if a fatal signal was received.
> + */
> static inline int folio_lock_killable(struct folio *folio)
> {
> might_sleep();
> @@ -964,8 +1017,8 @@ int folio_wait_bit_killable(struct folio *folio, int bit_nr);
> * Wait for a folio to be unlocked.
> *
> * This must be called with the caller "holding" the folio,
> - * ie with increased "page->count" so that the folio won't
> - * go away during the wait..
> + * ie with increased folio reference count so that the folio won't
> + * go away during the wait.
> */
> static inline void folio_wait_locked(struct folio *folio)
> {
>
>
Thanks,
NeilBrown
Thread overview (3 messages):
2022-04-04 17:22 Matthew Wilcox
2022-04-05  5:48 ` NeilBrown [this message]
2022-04-28 17:28 ` Matthew Wilcox