linux-mm.kvack.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: Steve Sistare <steven.sistare@oracle.com>, linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Jason Gunthorpe <jgg@nvidia.com>,
	Matthew Wilcox <willy@infradead.org>
Subject: Re: [PATCH V2] mm/gup: folio_split_user_page_pin
Date: Fri, 27 Sep 2024 17:44:52 +0200	[thread overview]
Message-ID: <982f3e26-c998-4e72-b374-3f31bf0ca9f5@redhat.com> (raw)
In-Reply-To: <1727190332-385657-1-git-send-email-steven.sistare@oracle.com>

On 24.09.24 17:05, Steve Sistare wrote:
> Export a function that repins a high-order folio at small-page granularity.
> This allows any range of small pages within the folio to be unpinned later.
> For example, pages pinned via memfd_pin_folios and modified by
> folio_split_user_page_pin could be unpinned via unpin_user_page(s).
> 
> Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
> 
> ---
> In V2 this has been renamed from repin_folio_unhugely, but is
> otherwise unchanged from V1.
> ---
> ---
>   include/linux/mm.h |  1 +
>   mm/gup.c           | 20 ++++++++++++++++++++
>   2 files changed, 21 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 13bff7c..b0b572d 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2521,6 +2521,7 @@ long pin_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
>   long memfd_pin_folios(struct file *memfd, loff_t start, loff_t end,
>   		      struct folio **folios, unsigned int max_folios,
>   		      pgoff_t *offset);
> +void folio_split_user_page_pin(struct folio *folio, unsigned long npages);
>   
>   int get_user_pages_fast(unsigned long start, int nr_pages,
>   			unsigned int gup_flags, struct page **pages);
> diff --git a/mm/gup.c b/mm/gup.c
> index fcd602b..94ee79dd 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -3733,3 +3733,23 @@ long memfd_pin_folios(struct file *memfd, loff_t start, loff_t end,
>   	return ret;
>   }
>   EXPORT_SYMBOL_GPL(memfd_pin_folios);
> +
> +/**
> + * folio_split_user_page_pin() - split the pin on a high order folio

There really is no such concept of splitting pins :/

> + * @folio: the folio to split

"folio to split": Highly misleading :)

> + * @npages: The new number of pages the folio pin reference should hold
> + *
> + * Given a high order folio that is already pinned, adjust the reference
> + * count to allow unpin_user_page_range() and related to be called on

unpin_user_page_range() does not exist, at least upstream. Did you mean
unpin_user_page_range_dirty_lock()?

> + * the folio. npages is the number of pages that will be passed to a
> + * future unpin_user_page_range().
> + */
> +void folio_split_user_page_pin(struct folio *folio, unsigned long npages)
> +{
> +	if (!folio_test_large(folio) || is_huge_zero_folio(folio) ||

is_huge_zero_folio() is still likely wrong.

Just follow the flow in unpin_user_page_range_dirty_lock() -> 
gup_put_folio().

Please point me to where in unpin_user_page_range_dirty_lock() ->
gup_put_folio() there is any huge_zero_folio() special-casing that would
skip adjusting the refcount and the pincount, so that the accounting
would be balanced.


> +	    npages == 1)
> +		return;
> +	atomic_add(npages - 1, &folio->_refcount);
> +	atomic_add(npages - 1, &folio->_pincount);
> +}
> +EXPORT_SYMBOL_GPL(folio_split_user_page_pin);

I can understand why we want to add more pins to a folio. I don't like 
this interface.


I would suggest a more generic interface:


/**
  * folio_try_add_pins() - add pins to an already-pinned folio
  * @folio: the folio to add more pins to
  * @pins: the number of pins to add
  *
  * Try to add more pins to an already-pinned folio. The semantics
  * of the pin (e.g., FOLL_WRITE) follow any existing pin and cannot
  * be changed.
  *
  * This function is helpful when having obtained a pin on a large folio
  * using memfd_pin_folios(), but wanting to logically unpin parts
  * (e.g., individual pages) of the folio later, for example, using
  * unpin_user_page_range_dirty_lock().
  *
  * This is not the right interface to initially pin a folio.
  */
int folio_try_add_pins(struct folio *folio, unsigned int pins)
{
	VM_WARN_ON_ONCE(!folio_maybe_dma_pinned(folio));

	return try_grab_folio(folio, pins, FOLL_PIN);
}

We might want to consider adding even better overflow checks in 
try_grab_folio(), but that's a different discussion.


The shared zeropage will be taken care of automatically, and the huge 
zero folio does currently not need any special care ...

-- 
Cheers,

David / dhildenb




Thread overview: 8+ messages
2024-09-24 15:05 Steve Sistare
2024-09-24 16:55 ` Jason Gunthorpe
2024-09-27 15:44 ` David Hildenbrand [this message]
2024-09-27 15:58   ` Jason Gunthorpe
2024-10-01 17:17     ` Steven Sistare
2024-10-04 10:04       ` David Hildenbrand
2024-10-04 17:20         ` Steven Sistare
2024-10-04 20:19           ` David Hildenbrand
