From: David Hildenbrand <david@redhat.com>
To: Qi Zheng <zhengqi.arch@bytedance.com>,
	hughd@google.com, willy@infradead.org, muchun.song@linux.dev,
	vbabka@kernel.org, akpm@linux-foundation.org, rppt@kernel.org,
	vishal.moola@gmail.com, peterx@redhat.com, ryan.roberts@arm.com,
	christophe.leroy2@cs-soprasteria.com
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-arm-kernel@lists.infradead.org,
	linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH v2 01/14] mm: pgtable: introduce pte_offset_map_{ro|rw}_nolock()
Date: Mon, 26 Aug 2024 17:21:32 +0200	[thread overview]
Message-ID: <f318f65d-4198-481c-98a0-00415664614c@redhat.com> (raw)
In-Reply-To: <e866151ccd257ca14a9361ba59f8c3086aa76e4f.1724310149.git.zhengqi.arch@bytedance.com>

On 22.08.24 09:13, Qi Zheng wrote:
> Currently, the usage of pte_offset_map_nolock() can be divided into the
> following two cases:
> 
> 1) After acquiring PTL, only read-only operations are performed on the PTE
>     page. In this case, the RCU lock in pte_offset_map_nolock() will ensure
>     that the PTE page will not be freed, and there is no need to worry
>     about whether the pmd entry is modified.

There is also the usage where we don't grab the PTL at all, and only do 
a racy (read-only) lookup.

> 
> 2) After acquiring PTL, the pte or pmd entries may be modified. At this
>     time, we need to ensure that the pmd entry has not been modified
>     concurrently.
> 
> To more clearly distinguish between these two cases, this commit
> introduces two new helper functions to replace pte_offset_map_nolock().
> For 1), just rename it to pte_offset_map_ro_nolock(). For 2), in addition
> to changing the name to pte_offset_map_rw_nolock(), it also outputs the
> pmdval when successful. This helps the caller recheck *pmd once the PTL
> is taken. In some cases, either holding the mmap_lock for write or a
> pte_same() check on the contents is enough to ensure that the pmd entry
> is stable. But in order to prevent the interface from being abused, we
> choose to pass in a dummy local variable instead of NULL.
> 
> Subsequent commits will convert pte_offset_map_nolock() into the above
> two functions one by one, and finally completely delete it.
> 
> Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
> ---
>   Documentation/mm/split_page_table_lock.rst |  7 ++++
>   include/linux/mm.h                         |  5 +++
>   mm/pgtable-generic.c                       | 43 ++++++++++++++++++++++
>   3 files changed, 55 insertions(+)
> 
> diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst
> index e4f6972eb6c04..08d0e706a32db 100644
> --- a/Documentation/mm/split_page_table_lock.rst
> +++ b/Documentation/mm/split_page_table_lock.rst
> @@ -19,6 +19,13 @@ There are helpers to lock/unlock a table and other accessor functions:
>    - pte_offset_map_nolock()
>   	maps PTE, returns pointer to PTE with pointer to its PTE table
>   	lock (not taken), or returns NULL if no PTE table;

What will happen to pte_offset_map_nolock() after this series? Does it 
still exist or will it become an internal helper?

> + - pte_offset_map_ro_nolock()
> +	maps PTE, returns pointer to PTE with pointer to its PTE table
> +	lock (not taken), or returns NULL if no PTE table;
> + - pte_offset_map_rw_nolock()
> +	maps PTE, returns pointer to PTE with pointer to its PTE table
> +	lock (not taken) and the value of its pmd entry, or returns NULL
> +	if no PTE table;

[...]

> +pte_t *pte_offset_map_rw_nolock(struct mm_struct *mm, pmd_t *pmd,
> +				unsigned long addr, pmd_t *pmdvalp,
> +				spinlock_t **ptlp)
> +{
> +	pmd_t pmdval;
> +	pte_t *pte;
> +
> +	BUG_ON(!pmdvalp);

As raised, no BUG_ON please. VM_WARN_ON_ONCE() is helpful during early 
testing and should catch these kind of things.

If someone thinks not requiring a non-NULL pointer is better, please 
speak up, I'm not married to that idea :)

> +	pte = __pte_offset_map(pmd, addr, &pmdval);
> +	if (likely(pte))
> +		*ptlp = pte_lockptr(mm, &pmdval);
> +	*pmdvalp = pmdval;
> +	return pte;
> +}
> +
>   /*
>    * pte_offset_map_lock(mm, pmd, addr, ptlp), and its internal implementation
>    * __pte_offset_map_lock() below, is usually called with the pmd pointer for
> @@ -356,6 +383,22 @@ pte_t *pte_offset_map_nolock(struct mm_struct *mm, pmd_t *pmd,
>    * recheck *pmd once the lock is taken; in practice, no callsite needs that -
>    * either the mmap_lock for write, or pte_same() check on contents, is enough.
>    *
> + * pte_offset_map_ro_nolock(mm, pmd, addr, ptlp), above, is like
> + * pte_offset_map(); but when successful, it also outputs a pointer to the
> + * spinlock in ptlp - as pte_offset_map_lock() does, but in this case without
> + * locking it.  This helps the caller to avoid a later pte_lockptr(mm, *pmd),
> + * which might by that time act on a changed *pmd: pte_offset_map_ro_nolock()
> + * provides the correct spinlock pointer for the page table that it returns.
> + * For the read-only case, the caller does not need to recheck *pmd after the
> + * lock is taken, because the RCU lock will ensure that the PTE page will not
> + * be freed.
> + *
> + * pte_offset_map_rw_nolock(mm, pmd, addr, pmdvalp, ptlp), above, is like
> + * pte_offset_map_ro_nolock(); but when successful, it also outputs the
> + * pmdval. For cases where pte or pmd entries may be modified (the maywrite
> + * case), this can help the caller recheck *pmd once the lock is taken. In some
> + * cases, either holding the mmap_lock for write or a pte_same() check on the
> + * contents is enough to ensure that the pmd entry is stable.
> + *
>    * Note that free_pgtables(), used after unmapping detached vmas, or when
>    * exiting the whole mm, does not take page table lock before freeing a page
>    * table, and may not use RCU at all: "outsiders" like khugepaged should avoid

In general, to me this is a step in the right direction. The
documentation could likely be further clarified in some aspects:

For example, the use of pte_offset_map_ro_nolock() does not make it
easy to identify whether the page table was replaced in the meantime.
Even after grabbing the PTL, we might be looking either at a page table
that is still mapped or at one that was unmapped and is about to get
freed. But for R/O access this is usually sufficient AFAIK.

Also, "RO" / "RW" express the intended semantics, not that the
*kmap* itself will be RO/RW protected.

-- 
Cheers,

David / dhildenb



