linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@suse.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org,
	Johannes Weiner <hannes@cmpxchg.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>
Subject: Re: [PATCH v3 13/18] mm/memcg: Add folio_memcg_lock() and folio_memcg_unlock()
Date: Wed, 30 Jun 2021 10:32:02 +0200	[thread overview]
Message-ID: <YNwsAh5u2h34tGDb@dhcp22.suse.cz> (raw)
In-Reply-To: <20210630040034.1155892-14-willy@infradead.org>

On Wed 30-06-21 05:00:29, Matthew Wilcox wrote:
> These are the folio equivalents of lock_page_memcg() and
> unlock_page_memcg().  Reimplement them as wrappers.

Is there any reason why you haven't followed the same approach as in the
previous patches? I mean, callers can call page_folio() first, and then
the lock_page_memcg() wrapper shouldn't really be needed.

I do not really want to be annoying here, but I have to say that I like
the conversion done by the previous patches much better than this
wrapper approach, as I already mentioned during the previous review. If
you have a reason to stick with this approach for this particular case,
then please make it explicit in the changelog.

> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  include/linux/memcontrol.h | 10 +++++++++
>  mm/memcontrol.c            | 45 ++++++++++++++++++++++++--------------
>  2 files changed, 39 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index ef79f9c0b296..279ea2640365 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -951,6 +951,8 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg);
>  extern bool cgroup_memory_noswap;
>  #endif
>  
> +void folio_memcg_lock(struct folio *folio);
> +void folio_memcg_unlock(struct folio *folio);
>  void lock_page_memcg(struct page *page);
>  void unlock_page_memcg(struct page *page);
>  
> @@ -1363,6 +1365,14 @@ static inline void unlock_page_memcg(struct page *page)
>  {
>  }
>  
> +static inline void folio_memcg_lock(struct folio *folio)
> +{
> +}
> +
> +static inline void folio_memcg_unlock(struct folio *folio)
> +{
> +}
> +
>  static inline void mem_cgroup_handle_over_high(void)
>  {
>  }
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index b925bdce0c6e..b94a6122f27d 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1960,18 +1960,17 @@ void mem_cgroup_print_oom_group(struct mem_cgroup *memcg)
>  }
>  
>  /**
> - * lock_page_memcg - lock a page and memcg binding
> - * @page: the page
> + * folio_memcg_lock - Bind a folio to its memcg.
> + * @folio: The folio.
>   *
> - * This function protects unlocked LRU pages from being moved to
> + * This function prevents unlocked LRU folios from being moved to
>   * another cgroup.
>   *
> - * It ensures lifetime of the locked memcg. Caller is responsible
> - * for the lifetime of the page.
> + * It ensures lifetime of the bound memcg.  The caller is responsible
> + * for the lifetime of the folio.
>   */
> -void lock_page_memcg(struct page *page)
> +void folio_memcg_lock(struct folio *folio)
>  {
> -	struct page *head = compound_head(page); /* rmap on tail pages */
>  	struct mem_cgroup *memcg;
>  	unsigned long flags;
>  
> @@ -1985,7 +1984,7 @@ void lock_page_memcg(struct page *page)
>  	if (mem_cgroup_disabled())
>  		return;
>  again:
> -	memcg = page_memcg(head);
> +	memcg = folio_memcg(folio);
>  	if (unlikely(!memcg))
>  		return;
>  
> @@ -1999,7 +1998,7 @@ void lock_page_memcg(struct page *page)
>  		return;
>  
>  	spin_lock_irqsave(&memcg->move_lock, flags);
> -	if (memcg != page_memcg(head)) {
> +	if (memcg != folio_memcg(folio)) {
>  		spin_unlock_irqrestore(&memcg->move_lock, flags);
>  		goto again;
>  	}
> @@ -2013,9 +2012,15 @@ void lock_page_memcg(struct page *page)
>  	memcg->move_lock_task = current;
>  	memcg->move_lock_flags = flags;
>  }
> +EXPORT_SYMBOL(folio_memcg_lock);
> +
> +void lock_page_memcg(struct page *page)
> +{
> +	folio_memcg_lock(page_folio(page));
> +}
>  EXPORT_SYMBOL(lock_page_memcg);
>  
> -static void __unlock_page_memcg(struct mem_cgroup *memcg)
> +static void __memcg_unlock(struct mem_cgroup *memcg)
>  {
>  	if (memcg && memcg->move_lock_task == current) {
>  		unsigned long flags = memcg->move_lock_flags;
> @@ -2030,14 +2035,22 @@ static void __unlock_page_memcg(struct mem_cgroup *memcg)
>  }
>  
>  /**
> - * unlock_page_memcg - unlock a page and memcg binding
> - * @page: the page
> + * folio_memcg_unlock - Release the binding between a folio and its memcg.
> + * @folio: The folio.
> + *
> + * This releases the binding created by folio_memcg_lock().  This does
> + * not change the accounting of this folio to its memcg, but it does
> + * permit others to change it.
>   */
> -void unlock_page_memcg(struct page *page)
> +void folio_memcg_unlock(struct folio *folio)
>  {
> -	struct page *head = compound_head(page);
> +	__memcg_unlock(folio_memcg(folio));
> +}
> +EXPORT_SYMBOL(folio_memcg_unlock);
>  
> -	__unlock_page_memcg(page_memcg(head));
> +void unlock_page_memcg(struct page *page)
> +{
> +	folio_memcg_unlock(page_folio(page));
>  }
>  EXPORT_SYMBOL(unlock_page_memcg);
>  
> @@ -5661,7 +5674,7 @@ static int mem_cgroup_move_account(struct page *page,
>  
>  	page->memcg_data = (unsigned long)to;
>  
> -	__unlock_page_memcg(from);
> +	__memcg_unlock(from);
>  
>  	ret = 0;
>  	nid = page_to_nid(page);
> -- 
> 2.30.2

-- 
Michal Hocko
SUSE Labs


