From: Linus Torvalds <torvalds@linux-foundation.org>
To: David Rientjes <rientjes@google.com>
Cc: Ingo Molnar <mingo@kernel.org>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
linux-mm <linux-mm@kvack.org>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Paul Turner <pjt@google.com>,
Lee Schermerhorn <Lee.Schermerhorn@hp.com>,
Christoph Lameter <cl@linux.com>, Rik van Riel <riel@redhat.com>,
Mel Gorman <mgorman@suse.de>,
Andrew Morton <akpm@linux-foundation.org>,
Andrea Arcangeli <aarcange@redhat.com>,
Thomas Gleixner <tglx@linutronix.de>,
Johannes Weiner <hannes@cmpxchg.org>,
Hugh Dickins <hughd@google.com>,
Sasha Levin <levinsasha928@gmail.com>
Subject: Re: [patch] mm, mempolicy: Introduce spinlock to read shared policy tree
Date: Thu, 20 Dec 2012 10:34:25 -0800
Message-ID: <CA+55aFyrSVzGZ438DGnTFuyFb1BOXaMmvxtkW0Xhnx+BxAg2PA@mail.gmail.com>
In-Reply-To: <alpine.DEB.2.00.1212031644440.32354@chino.kir.corp.google.com>

Going through some old emails before the -rc1 release..

What is the status of this patch? The patch that is reported to cause
the problem hasn't been merged, but that mpol_misplaced() thing did
happen in commit 771fb4d806a9. And it looks like it's called from
numa_migrate_prep() under the pte map lock. Or am I missing something?
See commit 9532fec118d ("mm: numa: Migrate pages handled during a
pmd_numa hinting fault").

Mel, please take another look.

I despise these kinds of dual-locking models, and am wondering if we
can't have *just* the spinlock?
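
Something like the below, maybe (a rough, untested sketch, not a real
patch: it assumes a spinlock-only struct shared_policy and reuses the
existing sp_node/sp_alloc() bits from mm/mempolicy.c, and the
sp_lookup_locked() name is made up purely for illustration). The point
is that the read side never sleeps, and anything that can sleep happens
before the lock is taken:

static struct mempolicy *sp_lookup_locked(struct shared_policy *sp,
					  unsigned long idx)
{
	struct rb_node *n;
	struct mempolicy *pol = NULL;

	spin_lock(&sp->lock);
	for (n = sp->root.rb_node; n; ) {
		struct sp_node *p = rb_entry(n, struct sp_node, nd);

		if (idx >= p->end)
			n = n->rb_right;	/* idx is past this range */
		else if (idx < p->start)
			n = n->rb_left;		/* idx is before this range */
		else {
			/* range contains idx: take a ref under the lock */
			mpol_get(p->policy);
			pol = p->policy;
			break;
		}
	}
	spin_unlock(&sp->lock);
	return pol;
}

The write side would then do its sp_alloc() (which can sleep) before
taking sp->lock, so the actual tree update is a pure pointer exercise
under the spinlock and the mutex goes away entirely.
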
Linus

On Mon, Dec 3, 2012 at 4:56 PM, David Rientjes <rientjes@google.com> wrote:
> From: Peter Zijlstra <a.p.zijlstra@chello.nl>
>
> Sasha was fuzzing with trinity and reported the following problem:
>
> BUG: sleeping function called from invalid context at kernel/mutex.c:269
> in_atomic(): 1, irqs_disabled(): 0, pid: 6361, name: trinity-main
> 2 locks held by trinity-main/6361:
> #0: (&mm->mmap_sem){++++++}, at: [<ffffffff810aa314>] __do_page_fault+0x1e4/0x4f0
> #1: (&(&mm->page_table_lock)->rlock){+.+...}, at: [<ffffffff8122f017>] handle_pte_fault+0x3f7/0x6a0
> Pid: 6361, comm: trinity-main Tainted: G W 3.7.0-rc2-next-20121024-sasha-00001-gd95ef01-dirty #74
> Call Trace:
> [<ffffffff8114e393>] __might_sleep+0x1c3/0x1e0
> [<ffffffff83ae5209>] mutex_lock_nested+0x29/0x50
> [<ffffffff8124fc3e>] mpol_shared_policy_lookup+0x2e/0x90
> [<ffffffff81219ebe>] shmem_get_policy+0x2e/0x30
> [<ffffffff8124e99a>] get_vma_policy+0x5a/0xa0
> [<ffffffff8124fce1>] mpol_misplaced+0x41/0x1d0
> [<ffffffff8122f085>] handle_pte_fault+0x465/0x6a0
>
> do_numa_page() calls the new mpol_misplaced() function introduced by
> "sched, numa, mm: Add the scanning page fault machinery" in the page
> fault path while holding mm->page_table_lock, and then
> mpol_shared_policy_lookup() ends up trying to take the shared policy
> mutex.
>
> The fix is to protect the shared policy tree with both a spinlock and
> mutex; both must be held to modify the tree, but only one is required to
> read the tree. This allows sp_lookup() to grab the spinlock for read.
>
> [rientjes@google.com: wrote changelog]
> Reported-by: Sasha Levin <levinsasha928@gmail.com>
> Tested-by: Sasha Levin <levinsasha928@gmail.com>
> Signed-off-by: David Rientjes <rientjes@google.com>
> ---
> include/linux/mempolicy.h | 1 +
> mm/mempolicy.c | 23 ++++++++++++++++++-----
> 2 files changed, 19 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
> --- a/include/linux/mempolicy.h
> +++ b/include/linux/mempolicy.h
> @@ -133,6 +133,7 @@ struct sp_node {
>
> struct shared_policy {
> struct rb_root root;
> + spinlock_t lock;
> struct mutex mutex;
> };
>
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -2090,12 +2090,20 @@ bool __mpol_equal(struct mempolicy *a, struct mempolicy *b)
> *
> * Remember policies even when nobody has shared memory mapped.
> * The policies are kept in Red-Black tree linked from the inode.
> - * They are protected by the sp->lock spinlock, which should be held
> - * for any accesses to the tree.
> + *
> + * The rb-tree is locked using both a mutex and a spinlock. Every modification
> + * to the tree must hold both the mutex and the spinlock; lookups can hold
> + * either one to observe a stable tree.
> + *
> + * In particular, sp_insert() and sp_delete() take the spinlock, whereas
> + * sp_lookup() does not, so callers have a choice.
> + *
> + * shared_policy_replace() and mpol_free_shared_policy() take the mutex
> + * and call sp_insert(), sp_delete().
> */
>
> /* lookup first element intersecting start-end */
> -/* Caller holds sp->mutex */
> +/* Caller holds sp->lock and/or sp->mutex */
> static struct sp_node *
> sp_lookup(struct shared_policy *sp, unsigned long start, unsigned long end)
> {
> @@ -2134,6 +2142,7 @@ static void sp_insert(struct shared_policy *sp, struct sp_node *new)
> struct rb_node *parent = NULL;
> struct sp_node *nd;
>
> + spin_lock(&sp->lock);
> while (*p) {
> parent = *p;
> nd = rb_entry(parent, struct sp_node, nd);
> @@ -2146,6 +2155,7 @@ static void sp_insert(struct shared_policy *sp, struct sp_node *new)
> }
> rb_link_node(&new->nd, parent, p);
> rb_insert_color(&new->nd, &sp->root);
> + spin_unlock(&sp->lock);
> pr_debug("inserting %lx-%lx: %d\n", new->start, new->end,
> new->policy ? new->policy->mode : 0);
> }
> @@ -2159,13 +2169,13 @@ mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
>
> if (!sp->root.rb_node)
> return NULL;
> - mutex_lock(&sp->mutex);
> + spin_lock(&sp->lock);
> sn = sp_lookup(sp, idx, idx+1);
> if (sn) {
> mpol_get(sn->policy);
> pol = sn->policy;
> }
> - mutex_unlock(&sp->mutex);
> + spin_unlock(&sp->lock);
> return pol;
> }
>
> @@ -2178,8 +2188,10 @@ static void sp_free(struct sp_node *n)
> static void sp_delete(struct shared_policy *sp, struct sp_node *n)
> {
> pr_debug("deleting %lx-l%lx\n", n->start, n->end);
> + spin_lock(&sp->lock);
> rb_erase(&n->nd, &sp->root);
> sp_free(n);
> + spin_unlock(&sp->lock);
> }
>
> static struct sp_node *sp_alloc(unsigned long start, unsigned long end,
> @@ -2264,6 +2276,7 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
> int ret;
>
> sp->root = RB_ROOT; /* empty tree == default mempolicy */
> + spin_lock_init(&sp->lock);
> mutex_init(&sp->mutex);
>
> if (mpol) {