linux-mm.kvack.org archive mirror
From: Matthew Wilcox <willy@infradead.org>
To: Alex Shi <alex.shi@linux.alibaba.com>
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, akpm@linux-foundation.org,
	mgorman@techsingularity.net, tj@kernel.org, hughd@google.com,
	khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
	yang.shi@linux.alibaba.com, Johannes Weiner <hannes@cmpxchg.org>,
	Roman Gushchin <guro@fb.com>, Shakeel Butt <shakeelb@google.com>,
	Chris Down <chris@chrisdown.name>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [PATCH v2 6/8] mm/lru: remove rcu_read_lock to fix performance regression
Date: Tue, 12 Nov 2019 06:38:44 -0800
Message-ID: <20191112143844.GB7934@bombadil.infradead.org>
In-Reply-To: <1573567588-47048-7-git-send-email-alex.shi@linux.alibaba.com>

On Tue, Nov 12, 2019 at 10:06:26PM +0800, Alex Shi wrote:
> Intel's 0day robot reported a performance regression with this patchset.
> The details point to rcu_read_lock combined with PROVE_LOCKING, which
> causes queued_spin_lock_slowpath to wait too long for the lock.
> Removing the rcu_read_lock is safe here since we already hold a spinlock.

Argh.  You have not sent these patches in a properly reviewable form!
I wasted all that time reviewing the earlier patch in this series only to
find out that you changed it here.  FIX THE PATCH, don't send a fix-patch
on top of it!

> Reported-by: kbuild test robot <lkp@intel.com>
> Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Roman Gushchin <guro@fb.com>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: Chris Down <chris@chrisdown.name>
> Cc: Tejun Heo <tj@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  include/linux/memcontrol.h | 29 ++++++++++++-----------------
>  1 file changed, 12 insertions(+), 17 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 2421b720d272..f869897a68f0 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1307,20 +1307,18 @@ static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
>  	struct pglist_data *pgdat = page_pgdat(page);
>  	struct lruvec *lruvec;
>  
> -	rcu_read_lock();
> +	if (!locked_lruvec)
> +		goto lock;
> +
>  	lruvec = mem_cgroup_page_lruvec(page, pgdat);
>  
> -	if (locked_lruvec == lruvec) {
> -		rcu_read_unlock();
> +	if (locked_lruvec == lruvec)
>  		return lruvec;
> -	}
> -	rcu_read_unlock();
>  
> -	if (locked_lruvec)
> -		spin_unlock_irq(&locked_lruvec->lru_lock);
> +	spin_unlock_irq(&locked_lruvec->lru_lock);
>  
> +lock:
>  	lruvec = lock_page_lruvec_irq(page, pgdat);
> -
>  	return lruvec;
>  }
>  
> @@ -1331,21 +1329,18 @@ static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
>  	struct pglist_data *pgdat = page_pgdat(page);
>  	struct lruvec *lruvec;
>  
> -	rcu_read_lock();
> +	if (!locked_lruvec)
> +		goto lock;
> +
>  	lruvec = mem_cgroup_page_lruvec(page, pgdat);
>  
> -	if (locked_lruvec == lruvec) {
> -		rcu_read_unlock();
> +	if (locked_lruvec == lruvec)
>  		return lruvec;
> -	}
> -	rcu_read_unlock();
>  
> -	if (locked_lruvec)
> -		spin_unlock_irqrestore(&locked_lruvec->lru_lock,
> -							locked_lruvec->flags);
> +	spin_unlock_irqrestore(&locked_lruvec->lru_lock, locked_lruvec->flags);
>  
> +lock:
>  	lruvec = lock_page_lruvec_irqsave(page, pgdat);
> -
>  	return lruvec;
>  }
>  
> -- 
> 1.8.3.1
> 
> 
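
For reference, here is roughly what relock_page_lruvec_irq() ends up
looking like with this change applied (a sketch reconstructed from the
quoted diff, not necessarily the final committed code; the irqsave
variant differs only in using spin_unlock_irqrestore() with the saved
lruvec->flags and lock_page_lruvec_irqsave()):

	static inline struct lruvec *relock_page_lruvec_irq(struct page *page,
			struct lruvec *locked_lruvec)
	{
		struct pglist_data *pgdat = page_pgdat(page);
		struct lruvec *lruvec;

		/* Nothing locked yet: take the page's lruvec lock directly. */
		if (!locked_lruvec)
			goto lock;

		lruvec = mem_cgroup_page_lruvec(page, pgdat);

		/* Already holding the lock for this page's lruvec: done. */
		if (locked_lruvec == lruvec)
			return lruvec;

		/* Page belongs to a different lruvec: drop the old lock ... */
		spin_unlock_irq(&locked_lruvec->lru_lock);

	lock:
		/* ... and take the lock that matches the page. */
		lruvec = lock_page_lruvec_irq(page, pgdat);
		return lruvec;
	}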


Thread overview: 17+ messages
2019-11-12 14:06 [PATCH v2 0/8] per lruvec lru_lock for memcg Alex Shi
2019-11-12 14:06 ` [PATCH v2 1/8] mm/lru: add per lruvec lock " Alex Shi
2019-11-12 14:06 ` [PATCH v2 2/8] mm/lruvec: add irqsave flags into lruvec struct Alex Shi
2019-11-12 14:06 ` [PATCH v2 3/8] mm/lru: replace pgdat lru_lock with lruvec lock Alex Shi
2019-11-12 14:06 ` [PATCH v2 4/8] mm/lru: only change the lru_lock iff page's lruvec is different Alex Shi
2019-11-12 14:36   ` Matthew Wilcox
2019-11-13  2:26     ` Alex Shi
2019-11-13 13:45       ` Matthew Wilcox
2019-11-14  6:01         ` Alex Shi
2019-11-12 14:06 ` [PATCH v2 5/8] mm/pgdat: remove pgdat lru_lock Alex Shi
2019-11-12 14:06 ` [PATCH v2 6/8] mm/lru: remove rcu_read_lock to fix performance regression Alex Shi
2019-11-12 14:38   ` Matthew Wilcox [this message]
2019-11-13  2:40     ` Alex Shi
2019-11-13 11:40       ` Mel Gorman
2019-11-14  6:02         ` Alex Shi
2019-11-12 14:06 ` [PATCH v2 7/8] mm/lru: likely enhancement Alex Shi
2019-11-12 14:06 ` [PATCH v2 8/8] mm/lru: revise the comments of lru_lock Alex Shi
