From: Vlastimil Babka <vbabka@suse.cz>
To: Lorenzo Stoakes <lstoakes@gmail.com>,
	linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org
Cc: Matthew Wilcox <willy@infradead.org>,
	Hugh Dickins <hughd@google.com>,
	Liam Howlett <liam.howlett@oracle.com>,
	William Kucharski <william.kucharski@oracle.com>,
	Christian Brauner <brauner@kernel.org>,
	Jonathan Corbet <corbet@lwn.net>, Mike Rapoport <rppt@kernel.org>,
	Joel Fernandes <joel@joelfernandes.org>,
	Geert Uytterhoeven <geert@linux-m68k.org>
Subject: Re: [PATCH v3 2/5] mm: mlock: use folios and a folio batch internally
Date: Thu, 12 Jan 2023 11:31:49 +0100
Message-ID: <e8b737fd-6204-419d-0592-5441ea344704@suse.cz>
In-Reply-To: <03ac78b416be5a361b79464acc3da7f93b9c37e8.1672043615.git.lstoakes@gmail.com>

On 12/26/22 09:44, Lorenzo Stoakes wrote:
> This brings mlock in line with the folio batches declared in mm/swap.c and
> makes the code more consistent across the two.
> 
> The existing mechanism for identifying which operation each folio in the
> batch is undergoing is maintained, i.e. using the lower 2 bits of the
> struct folio address (previously struct page address). This should continue
> to function correctly as folios remain at least system word-aligned.
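(Aside, for readers less familiar with the trick: because struct folio is at
least word-aligned, the two low bits of its address are always zero and can
carry a small tag. A minimal userspace sketch of the idea, not the actual
mlock helpers:)

#include <assert.h>
#include <stdint.h>

#define TAG_LRU  0x1UL			/* stand-ins for LRU_PAGE / NEW_PAGE */
#define TAG_NEW  0x2UL
#define TAG_MASK (TAG_LRU | TAG_NEW)

struct object { long payload; };	/* >= word-aligned, so low bits are free */

static struct object *tag_ptr(struct object *p, uintptr_t tag)
{
	/* stash the tag in the otherwise-zero low bits */
	return (struct object *)((uintptr_t)p | tag);
}

static struct object *untag_ptr(struct object *p, uintptr_t *tag)
{
	/* recover the tag and restore the clean pointer */
	*tag = (uintptr_t)p & TAG_MASK;
	return (struct object *)((uintptr_t)p & ~TAG_MASK);
}

int main(void)
{
	struct object o = { 42 };
	uintptr_t tag;
	struct object *clean = untag_ptr(tag_ptr(&o, TAG_LRU), &tag);

	assert(clean == &o && tag == TAG_LRU && clean->payload == 42);
	return 0;
}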
> 
> All invocations of mlock() pass either a non-compound page or the head of a
> THP-compound page, and no tail pages need updating, so this functionality
> works with struct folios being used internally rather than struct pages.
> 
> In this patch the external interface is kept identical to before in order
> to maintain separation between patches in the series, using a rather
> awkward conversion from struct page to struct folio in relevant functions.
> 
> However, this maintenance of the existing interface is intended to be
> temporary - the next patch in the series will update the interfaces to
> accept folios directly.
> 
> Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

with some nits:

> -static struct lruvec *__munlock_page(struct page *page, struct lruvec *lruvec)
> +static struct lruvec *__munlock_folio(struct folio *folio, struct lruvec *lruvec)
>  {
> -	int nr_pages = thp_nr_pages(page);
> +	int nr_pages = folio_nr_pages(folio);
>  	bool isolated = false;
>  
> -	if (!TestClearPageLRU(page))
> +	if (!folio_test_clear_lru(folio))
>  		goto munlock;
>  
>  	isolated = true;
> -	lruvec = folio_lruvec_relock_irq(page_folio(page), lruvec);
> +	lruvec = folio_lruvec_relock_irq(folio, lruvec);
>  
> -	if (PageUnevictable(page)) {
> +	if (folio_test_unevictable(folio)) {
>  		/* Then mlock_count is maintained, but might undercount */
> -		if (page->mlock_count)
> -			page->mlock_count--;
> -		if (page->mlock_count)
> +		if (folio->mlock_count)
> +			folio->mlock_count--;
> +		if (folio->mlock_count)
>  			goto out;
>  	}
>  	/* else assume that was the last mlock: reclaim will fix it if not */
>  
>  munlock:
> -	if (TestClearPageMlocked(page)) {
> -		__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
> -		if (isolated || !PageUnevictable(page))
> +	if (folio_test_clear_mlocked(folio)) {
> +		zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);

AFAIK the 1:1 replacement would be __zone_stat_mod_folio(); this one is
stronger, so it doesn't cause a bug, but it seems unnecessary?
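
I.e. something like (untested):

		__zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);

AFAICS __zone_stat_mod_folio() wraps the old __mod_zone_page_state(), while
zone_stat_mod_folio() maps onto the irq-safe mod_zone_page_state(), hence the
"stronger" above.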

> +		if (isolated || !folio_test_unevictable(folio))
>  			__count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
>  		else
>  			__count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
>  	}
>  
> -	/* page_evictable() has to be checked *after* clearing Mlocked */
> -	if (isolated && PageUnevictable(page) && page_evictable(page)) {
> -		del_page_from_lru_list(page, lruvec);
> -		ClearPageUnevictable(page);
> -		add_page_to_lru_list(page, lruvec);
> +	/* folio_evictable() has to be checked *after* clearing Mlocked */
> +	if (isolated && folio_test_unevictable(folio) && folio_evictable(folio)) {
> +		lruvec_del_folio(lruvec, folio);
> +		folio_clear_unevictable(folio);
> +		lruvec_add_folio(lruvec, folio);
>  		__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
>  	}
>  out:
>  	if (isolated)
> -		SetPageLRU(page);
> +		folio_set_lru(folio);
>  	return lruvec;
>  }
>  
>  /*
> - * Flags held in the low bits of a struct page pointer on the mlock_pvec.
> + * Flags held in the low bits of a struct folio pointer on the mlock_fbatch.
>   */
>  #define LRU_PAGE 0x1
>  #define NEW_PAGE 0x2

Should these be *_FOLIO now?
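
E.g. (just a sketch of the rename, untested; the names are only a suggestion):

/*
 * Flags held in the low bits of a struct folio pointer on the mlock_fbatch.
 */
#define LRU_FOLIO 0x1
#define NEW_FOLIO 0x2

with mlock_lru()/mlock_new() updated to match.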

> -static inline struct page *mlock_lru(struct page *page)
> +static inline struct folio *mlock_lru(struct folio *folio)
>  {
> -	return (struct page *)((unsigned long)page + LRU_PAGE);
> +	return (struct folio *)((unsigned long)folio + LRU_PAGE);
>  }
>  
> -static inline struct page *mlock_new(struct page *page)
> +static inline struct folio *mlock_new(struct folio *folio)
>  {
> -	return (struct page *)((unsigned long)page + NEW_PAGE);
> +	return (struct folio *)((unsigned long)folio + NEW_PAGE);
>  }
>  
>  /*
> - * mlock_pagevec() is derived from pagevec_lru_move_fn():
> - * perhaps that can make use of such page pointer flags in future,
> - * but for now just keep it for mlock.  We could use three separate
> - * pagevecs instead, but one feels better (munlocking a full pagevec
> - * does not need to drain mlocking pagevecs first).
> + * mlock_folio_batch() is derived from folio_batch_move_lru(): perhaps that can
> + * make use of such page pointer flags in future, but for now just keep it for

		       ^ folio?	

> + * mlock.  We could use three separate folio batches instead, but one feels
> + * better (munlocking a full folio batch does not need to drain mlocking folio
> + * batches first).
>   */
> -static void mlock_pagevec(struct pagevec *pvec)
> +static void mlock_folio_batch(struct folio_batch *fbatch)




Thread overview: 14+ messages
2022-12-26  8:44 [PATCH v3 0/5] update mlock to use folios Lorenzo Stoakes
2022-12-26  8:44 ` [PATCH v3 1/5] mm: pagevec: add folio_batch_reinit() Lorenzo Stoakes
2023-01-12  9:57   ` Vlastimil Babka
2022-12-26  8:44 ` [PATCH v3 2/5] mm: mlock: use folios and a folio batch internally Lorenzo Stoakes
2023-01-12 10:31   ` Vlastimil Babka [this message]
2023-01-12 11:27     ` Lorenzo Stoakes
2022-12-26  8:44 ` [PATCH v3 3/5] m68k/mm/motorola: specify pmd_page() type Lorenzo Stoakes
2022-12-27  9:38   ` Geert Uytterhoeven
2023-01-12 10:38   ` Vlastimil Babka
2022-12-26  8:44 ` [PATCH v3 4/5] mm: mlock: update the interface to use folios Lorenzo Stoakes
2023-01-12 10:55   ` Vlastimil Babka
2023-01-12 12:01     ` Lorenzo Stoakes
2022-12-26  8:44 ` [PATCH v3 5/5] Documentation/mm: Update references to __m[un]lock_page() to *_folio() Lorenzo Stoakes
2023-01-12 11:04   ` Vlastimil Babka
