From: zhong jiang <zhongjiang@huawei.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: <gregkh@linuxfoundation.org>, <stable@vger.kernel.org>,
	<vbabka@suse.cz>, <mhocko@suse.com>, <linux-mm@kvack.org>
Subject: Re: [RFC STABLE PATCH] mm/memfd: lock the radix_tree when iterating its slots
Date: Sat, 26 Oct 2019 09:54:55 +0800
Message-ID: <5DB3A76F.3020508@huawei.com>
In-Reply-To: <20191025151738.GP2963@bombadil.infradead.org>

On 2019/10/25 23:17, Matthew Wilcox wrote:
> On Thu, Oct 24, 2019 at 11:03:20PM +0800, zhong jiang wrote:
>> +	xa_lock_irq(&mapping->i_pages);
> ...
>>  		if (need_resched()) {
>>  			slot = radix_tree_iter_resume(slot, &iter);
>> -			cond_resched_rcu();
>> +			cond_resched_lock(&mapping->i_pages.xa_lock);
> Ooh, this isn't right.  We're taking the lock, disabling interrupts,
> then dropping the lock and rescheduling without reenabling interrupts.
> If this ever triggers then we'll get a scheduling-while-atomic error.
>
> Fortunately (?) need_resched() can almost never be set while we're holding
> a spinlock with interrupts disabled (thanks to peterz for telling me that
> when I asked for a cond_resched_lock_irq() a few years ago).  So we need
> to take this patch further towards the current code.
I missed that.  Thank you for pointing it out.
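
To spell the hazard out for the archive: xa_lock_irq() expands to
spin_lock_irq() on i_pages.xa_lock, so it disables local interrupts as
well as taking the lock, while cond_resched_lock() only drops and
retakes the spinlock around the reschedule.  A minimal sketch of the
broken sequence (illustrative only, not from either patch):

	xa_lock_irq(&mapping->i_pages);		/* lock + local IRQs off */
	/* ... iterate over the tree ... */
	/* drops xa_lock and calls schedule(), but IRQs stay disabled,
	 * so this would trip scheduling-while-atomic: */
	cond_resched_lock(&mapping->i_pages.xa_lock);
	xa_unlock_irq(&mapping->i_pages);	/* IRQs enabled again */

The 4.14 version below avoids this by re-enabling interrupts with
spin_unlock_irq() before it calls cond_resched().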

Thanks,
zhong jiang
> Here's a version for 4.14.y.  Compile tested only.
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 6c10f1d92251..deaea74ec1b3 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2657,11 +2657,12 @@ static void shmem_tag_pins(struct address_space *mapping)
>  	void **slot;
>  	pgoff_t start;
>  	struct page *page;
> +	unsigned int tagged = 0;
>  
>  	lru_add_drain();
>  	start = 0;
> -	rcu_read_lock();
>  
> +	spin_lock_irq(&mapping->tree_lock);
>  	radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, start) {
>  		page = radix_tree_deref_slot(slot);
>  		if (!page || radix_tree_exception(page)) {
> @@ -2670,18 +2671,19 @@ static void shmem_tag_pins(struct address_space *mapping)
>  				continue;
>  			}
>  		} else if (page_count(page) - page_mapcount(page) > 1) {
> -			spin_lock_irq(&mapping->tree_lock);
>  			radix_tree_tag_set(&mapping->page_tree, iter.index,
>  					   SHMEM_TAG_PINNED);
> -			spin_unlock_irq(&mapping->tree_lock);
>  		}
>  
> -		if (need_resched()) {
> -			slot = radix_tree_iter_resume(slot, &iter);
> -			cond_resched_rcu();
> -		}
> +		if (++tagged % 1024)
> +			continue;
> +
> +		slot = radix_tree_iter_resume(slot, &iter);
> +		spin_unlock_irq(&mapping->tree_lock);
> +		cond_resched();
> +		spin_lock_irq(&mapping->tree_lock);
>  	}
> -	rcu_read_unlock();
> +	spin_unlock_irq(&mapping->tree_lock);
>  }
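
The pattern in the last hunk generalizes cleanly: hold the
irq-disabling lock for a batch of entries, then drop it, reschedule,
and retake it.  Pulled out of shmem_tag_pins() into a stand-alone
sketch (illustrative only; process_slot() is a hypothetical placeholder
for the per-entry work, and 1024 matches the batch size above):

	static void walk_tree_with_resched(struct address_space *mapping)
	{
		struct radix_tree_iter iter;
		void **slot;
		unsigned int done = 0;

		spin_lock_irq(&mapping->tree_lock);
		radix_tree_for_each_slot(slot, &mapping->page_tree, &iter, 0) {
			process_slot(mapping, slot, &iter);	/* hypothetical */

			if (++done % 1024)	/* stay locked for a batch */
				continue;

			/* park the iterator, then let interrupts and the
			 * scheduler run while the lock is dropped */
			slot = radix_tree_iter_resume(slot, &iter);
			spin_unlock_irq(&mapping->tree_lock);	/* IRQs on */
			cond_resched();
			spin_lock_irq(&mapping->tree_lock);
		}
		spin_unlock_irq(&mapping->tree_lock);
	}

radix_tree_iter_resume() is what makes the drop safe: it parks the
iterator so the next loop pass restarts the lookup from the following
index instead of trusting slot pointers that may have gone stale while
the lock was released.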

Thread overview: 5+ messages
2019-10-24 15:03 zhong jiang
2019-10-24 17:41 ` Matthew Wilcox
2019-10-24 23:54   ` Sasha Levin
2019-10-25 15:17 ` Matthew Wilcox
2019-10-26  1:54   ` zhong jiang [this message]
