From: Steven Rostedt <rostedt@goodmis.org>
To: Christoph Hellwig <hch@lst.de>
Cc: akpm@linux-foundation.org, joelaf@google.com,
	jszhang@marvell.com, chris@chris-wilson.co.uk,
	joaodias@google.com, linux-mm@kvack.org,
	linux-rt-users@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 6/6] mm: add preempt points into __purge_vmap_area_lazy
Date: Tue, 18 Oct 2016 16:56:48 -0400
Message-ID: <20161018205648.GB7021@home.goodmis.org>
In-Reply-To: <1476773771-11470-7-git-send-email-hch@lst.de>

On Tue, Oct 18, 2016 at 08:56:11AM +0200, Christoph Hellwig wrote:
> From: Joel Fernandes <joelaf@google.com>
> 
> Use cond_resched_lock to avoid holding the vmap_area_lock for a
> potentially long time.
> 
> Signed-off-by: Joel Fernandes <joelaf@google.com>
> [hch: split from a larger patch by Joel, wrote the crappy changelog]
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  mm/vmalloc.c | 14 +++++++++-----
>  1 file changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 6c7eb8d..98b19ea 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -628,7 +628,7 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
>  	struct llist_node *valist;
>  	struct vmap_area *va;
>  	struct vmap_area *n_va;
> -	int nr = 0;
> +	bool do_free = false;
>  
>  	lockdep_assert_held(&vmap_purge_lock);
>  
> @@ -638,18 +638,22 @@ static bool __purge_vmap_area_lazy(unsigned long start, unsigned long end)
>  			start = va->va_start;
>  		if (va->va_end > end)
>  			end = va->va_end;
> -		nr += (va->va_end - va->va_start) >> PAGE_SHIFT;
> +		do_free = true;
>  	}
>  
> -	if (!nr)
> +	if (!do_free)
>  		return false;
>  
> -	atomic_sub(nr, &vmap_lazy_nr);
>  	flush_tlb_kernel_range(start, end);
>  
>  	spin_lock(&vmap_area_lock);
> -	llist_for_each_entry_safe(va, n_va, valist, purge_list)
> +	llist_for_each_entry_safe(va, n_va, valist, purge_list) {
> +		int nr = (va->va_end - va->va_start) >> PAGE_SHIFT;
> +
>  		__free_vmap_area(va);
> +		atomic_sub(nr, &vmap_lazy_nr);
> +		cond_resched_lock(&vmap_area_lock);

Is releasing the lock within an llist_for_each_entry_safe() loop actually
safe? Is vmap_area_lock what protects the valist?

That is, llist_for_each_entry_safe(va, n_va, valist, purge_list) expands to
roughly:

	for (va = llist_entry(valist, typeof(*va), purge_list);
	     &va->purge_list != NULL &&
		(n_va = llist_entry(va->purge_list.next, typeof(*n_va),
				    purge_list), true);
	     va = n_va)

Thus n_va points to the next element to process when we release the lock.
Is it possible for another task to get into this same path and process the
item that n_va points to? When the preempted task comes back, grabs
vmap_area_lock, and continues the loop with the n_va it saved, could that
cause problems? That is, the next iteration after releasing the lock does
va = n_va. What happens if n_va no longer exists?
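
For reference, here is a simplified sketch of what cond_resched_lock()
ends up doing (roughly following __cond_resched_lock() in
kernel/sched/core.c, details trimmed), just to show that the spinlock
really is dropped and re-acquired around a possible schedule:

	static int cond_resched_lock_sketch(spinlock_t *lock)
	{
		int resched = should_resched(PREEMPT_LOCK_OFFSET);

		if (spin_needbreak(lock) || resched) {
			spin_unlock(lock);
			if (resched)
				preempt_schedule_common(); /* may schedule */
			else
				cpu_relax();
			spin_lock(lock); /* re-taken before returning */
			return 1;
		}
		return 0;
	}

The window between that spin_unlock() and spin_lock() is where another
task could run this same code.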

I don't know this code that well, and perhaps vmap_area_lock is not protecting
the list and this is all fine.
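
If it is not fine, one hypothetical way to sidestep the stale-n_va
question (just a sketch, not something this patch does, and assuming
valist is the list detached earlier with llist_del_all() so nothing
else walks it) would be to advance one node at a time, so no next
pointer is cached across the point where the lock can be dropped:

	spin_lock(&vmap_area_lock);
	while (valist) {
		struct vmap_area *va = llist_entry(valist, struct vmap_area,
						   purge_list);
		int nr = (va->va_end - va->va_start) >> PAGE_SHIFT;

		/* advance before va is freed or the lock is dropped */
		valist = va->purge_list.next;

		__free_vmap_area(va);
		atomic_sub(nr, &vmap_lazy_nr);
		cond_resched_lock(&vmap_area_lock);
	}
	spin_unlock(&vmap_area_lock);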

-- Steve


> +	}
>  	spin_unlock(&vmap_area_lock);
>  	return true;
>  }
> -- 
> 2.1.4

