From: Hugh Dickins <hugh@veritas.com>
To: Andrea Arcangeli <andrea@qumranet.com>
Cc: Christoph Lameter <clameter@sgi.com>, Robin Holt <holt@sgi.com>,
	Jack Steiner <steiner@sgi.com>, Nick Piggin <npiggin@suse.de>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	kvm-devel@lists.sourceforge.net,
	Kanoj Sarcar <kanojsarcar@yahoo.com>,
	Roland Dreier <rdreier@cisco.com>,
	Steve Wise <swise@opengridcomputing.com>,
	linux-kernel@vger.kernel.org, Avi Kivity <avi@qumranet.com>,
	linux-mm@kvack.org, general@lists.openfabrics.org,
	akpm@linux-foundation.org, Rusty Russell <rusty@rustcorp.com.au>
Subject: Re: [PATCH 01 of 12] Core of mmu notifiers
Date: Tue, 29 Apr 2008 11:49:11 +0100 (BST)	[thread overview]
Message-ID: <Pine.LNX.4.64.0804291128330.12702@blonde.site> (raw)
In-Reply-To: <20080429001052.GA8315@duo.random>

On Tue, 29 Apr 2008, Andrea Arcangeli wrote:
> 
> My point of view is that there was no rcu when I wrote that code, yet
> there was no reference count and yet all locking looks still exactly
> the same as I wrote it. There's even still the page_table_lock to
> serialize threads taking the mmap_sem in read mode against the first
> vma->anon_vma = anon_vma during the page fault.
> 
> Frankly I've absolutely no idea why rcu is needed in all rmap code
> when walking the page->mapping. Definitely the PG_locked is taken so
> there's no way page->mapping could possibly go away under the rmap
> code, hence the anon_vma can't go away as it's queued in the vma, and
> the vma has to go away before the page is zapped out of the pte.

[I'm scarcely following the mmu notifiers to-and-fro, which seems
to be in good hands, amongst faster thinkers than me: who actually
need and can test this stuff.  Don't let me slow you down; but I
can quickly clarify on this history.]

No, the locking was different as you had it, Andrea: there was an extra
bitspin lock, carried over from the pte_chains days (maybe we changed
the name, maybe we disagreed over the name, I forget), which mainly
guarded the page->mapcount.  I thought that was one lock more than we
needed, and eliminated it in favour of atomic page->mapcount in 2.6.9.

Here are the relevant extracts from ChangeLog-2.6.9:

[PATCH] rmaplock: PageAnon in mapping

First of a batch of five patches to eliminate rmap's page_map_lock, replace
its trylocking by spinlocking, and use anon_vma to speed up swapoff.

Patches updated from the originals against 2.6.7-mm7: nothing new so I won't
spam the list, but including Manfred's SLAB_DESTROY_BY_RCU fixes, and omitting
the unuse_process mmap_sem fix already in 2.6.8-rc3.

This patch:

Replace the PG_anon page->flags bit by setting the lower bit of the pointer in
page->mapping when it's anon_vma: PAGE_MAPPING_ANON bit.

We're about to eliminate the locking which kept the flags and mapping in
synch: it's much easier to work on a local copy of page->mapping, than worry
about whether flags and mapping are in synch (though I imagine it could be
done, at greater cost, with some barriers).

[PATCH] rmaplock: kill page_map_lock

The pte_chains rmap used pte_chain_lock (bit_spin_lock on PG_chainlock) to
lock its pte_chains.  We kept this (as page_map_lock: bit_spin_lock on
PG_maplock) when we moved to objrmap.  But the file objrmap locks its vma tree
with mapping->i_mmap_lock, and the anon objrmap locks its vma list with
anon_vma->lock: so isn't the page_map_lock superfluous?

Pretty much, yes.  The mapcount was protected by it, and needs to become an
atomic: starting at -1 like page _count, so nr_mapped can be tracked precisely
up and down.  The last page_remove_rmap can't clear anon page mapping any
more, because of races with page_add_rmap; from which some BUG_ONs must go for
the same reason, but they've served their purpose.
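
In other words, the counting works roughly like this (a sketch of the idea,
not the exact 2.6.9 functions):

/*
 * _mapcount starts at -1, so the transition to 0 marks "page has just
 * become mapped" and the transition back to -1 marks "last mapping has
 * gone", letting nr_mapped be tracked without any page_map_lock.
 */
void page_add_rmap_sketch(struct page *page)
{
	/* -1 -> 0: first mapping of this page */
	if (atomic_inc_and_test(&page->_mapcount))
		inc_page_state(nr_mapped);
}

void page_remove_rmap_sketch(struct page *page)
{
	/* 0 -> -1: last mapping just went away */
	if (atomic_add_negative(-1, &page->_mapcount))
		dec_page_state(nr_mapped);
}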

vmscan decisions are naturally racy, little change there beyond removing
page_map_lock/unlock.  But to stabilize the file-backed page->mapping against
truncation while acquiring i_mmap_lock, page_referenced_file now needs page
lock to be held even for refill_inactive_zone.  There's a similar issue in
acquiring anon_vma->lock, where page lock doesn't help: which this patch
pretends to handle, but actually it needs the next.

Roughly 10% cut off lmbench fork numbers on my 2*HT*P4.  Must confess my
testing failed to show the races even while they were knowingly exposed: would
benefit from testing on racier equipment.

[PATCH] rmaplock: SLAB_DESTROY_BY_RCU

With page_map_lock gone, how to stabilize page->mapping's anon_vma while
acquiring anon_vma->lock in page_referenced_anon and try_to_unmap_anon?

The page cannot actually be freed (vmscan holds reference), but however much
we check page_mapped (which guarantees that anon_vma is in use - or would
guarantee that if we added suitable barriers), there's no locking against page
becoming unmapped the instant after, then anon_vma freed.

It's okay to take anon_vma->lock after it's freed, so long as it remains a
struct anon_vma (its list would become empty, or perhaps reused for an
unrelated anon_vma: but no problem since we always check that the page located
is the right one); but corruption if that memory gets reused for some other
purpose.

This is not unique: it's liable to be a problem whenever the kernel tries to
approach a structure obliquely.  It's generally solved with an atomic
reference count; but one advantage of anon_vma over anonmm is that it does not
have such a count, and it would be a backward step to add one.

Therefore...  implement SLAB_DESTROY_BY_RCU flag, to guarantee that such a
kmem_cache_alloc'ed structure cannot get freed to other use while the
rcu_read_lock is held i.e.  preempt disabled; and use that for anon_vma.
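
The reader side this makes safe looks roughly like the following (a sketch of
the page_lock_anon_vma pattern, details from memory):

/*
 * Look up and lock the anon_vma of an anon page.  rcu_read_lock() plus
 * SLAB_DESTROY_BY_RCU guarantees the memory stays a struct anon_vma
 * while we peek at it; page_mapped() and the caller's per-vma checks
 * cover the case where it has been reused for an unrelated anon_vma.
 */
static struct anon_vma *page_lock_anon_vma(struct page *page)
{
	struct anon_vma *anon_vma = NULL;
	unsigned long anon_mapping;

	rcu_read_lock();
	anon_mapping = (unsigned long)page->mapping;
	if (!(anon_mapping & PAGE_MAPPING_ANON))
		goto out;
	if (!page_mapped(page))		/* anon_vma may be long disused */
		goto out;

	anon_vma = (struct anon_vma *)(anon_mapping - PAGE_MAPPING_ANON);
	spin_lock(&anon_vma->lock);
out:
	rcu_read_unlock();
	return anon_vma;	/* caller walks anon_vma->head, then unlocks */
}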

Fix concerns raised by Manfred: this flag is incompatible with poisoning and
destructor, and kmem_cache_destroy needs to synchronize_kernel.

I hope SLAB_DESTROY_BY_RCU may be useful elsewhere; but though it's safe for
little anon_vma, I'd be reluctant to use it on any caches whose immediate
shrinkage under pressure is important to the system.
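
Using it for anon_vma then amounts to passing the flag when the cache is
created, something like this (a sketch; constructor details from memory):

/*
 * Create the anon_vma cache with SLAB_DESTROY_BY_RCU: freed anon_vmas
 * may immediately be reused as other anon_vmas, but their slab pages
 * are not returned to the page allocator until an RCU grace period has
 * passed.  No poisoning, no destructor, per the constraints above.
 */
void __init anon_vma_init(void)
{
	anon_vma_cachep = kmem_cache_create("anon_vma",
				sizeof(struct anon_vma), 0,
				SLAB_DESTROY_BY_RCU,
				anon_vma_ctor, NULL);
}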

[PATCH] rmaplock: mm lock ordering

With page_map_lock out of the way, there's no need for page_referenced and
try_to_unmap to use trylocks - provided we switch anon_vma->lock and
mm->page_table_lock around in anon_vma_prepare.  Though I suppose it's
possible that we'll find that vmscan makes better progress with trylocks than
spinning - we're free to choose trylocks again if so.

Try to update the mm lock ordering documentation in filemap.c.  But I still
find it confusing, and I've no idea of where to stop.  So add an mm lock
ordering list I can understand to rmap.c.
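
For orientation, the core of that ordering looks roughly like this (outermost
first; a partial sketch from memory, not the full comment):

/*
 * Relevant part of the mm lock ordering, outermost first (the full
 * comment in rmap.c lists more locks nested below these):
 *
 *   mm->mmap_sem
 *     page lock (PG_locked)
 *       mapping->i_mmap_lock
 *         anon_vma->lock
 *           mm->page_table_lock
 *
 * Taking anon_vma->lock before page_table_lock in anon_vma_prepare is
 * what lets page_referenced/try_to_unmap spin on anon_vma->lock rather
 * than trylock it.
 */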

[The fifth patch was about using anon_vma in swapoff, not relevant here.]

So, going back to what you wrote: holding the page lock there is
not enough to prevent the struct anon_vma going away beneath us.

Hugh

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

Thread overview: 86+ messages
2008-04-22 13:51 [PATCH 00 of 12] mmu notifier #v13 Andrea Arcangeli
2008-04-22 13:51 ` [PATCH 01 of 12] Core of mmu notifiers Andrea Arcangeli
2008-04-22 14:56   ` Eric Dumazet
2008-04-22 15:15     ` Andrea Arcangeli
2008-04-22 15:24       ` Avi Kivity
2008-04-22 15:37       ` Eric Dumazet
2008-04-22 16:46         ` Andrea Arcangeli
2008-04-22 20:19   ` Christoph Lameter
2008-04-22 20:31     ` Robin Holt
2008-04-22 22:35     ` Andrea Arcangeli
2008-04-22 23:07       ` Robin Holt
2008-04-23  0:28         ` Jack Steiner
2008-04-23 16:37           ` Andrea Arcangeli
2008-04-23 18:19             ` Christoph Lameter
2008-04-23 18:25               ` Andrea Arcangeli
2008-04-23 22:19             ` Andrea Arcangeli
2008-04-24  6:49               ` Andrea Arcangeli
2008-04-24  9:51                 ` Robin Holt
2008-04-24 15:39                   ` Andrea Arcangeli
2008-04-24 17:41                     ` Andrea Arcangeli
2008-04-26 13:17                       ` Robin Holt
2008-04-26 14:04                         ` Andrea Arcangeli
2008-04-27 12:27                         ` Andrea Arcangeli
2008-04-28 20:34                           ` Christoph Lameter
2008-04-29  0:10                             ` Andrea Arcangeli
2008-04-29  1:28                               ` Christoph Lameter
2008-04-29 15:30                                 ` Andrea Arcangeli
2008-04-29 15:50                                   ` Robin Holt
2008-04-29 16:03                                     ` Andrea Arcangeli
2008-05-07 15:00                                       ` Andrea Arcangeli
2008-04-29 10:49                               ` Hugh Dickins [this message]
2008-04-29 13:32                                 ` Andrea Arcangeli
2008-04-23 13:36         ` Andrea Arcangeli
2008-04-23 14:47           ` Robin Holt
2008-04-23 15:59             ` Andrea Arcangeli
2008-04-23 18:09               ` Christoph Lameter
2008-04-23 18:19                 ` Andrea Arcangeli
2008-04-23 18:27                   ` Christoph Lameter
2008-04-23 18:37                     ` Andrea Arcangeli
2008-04-23 18:46                       ` Christoph Lameter
2008-04-22 23:20       ` Christoph Lameter
2008-04-23 16:26         ` Andrea Arcangeli
2008-04-23 17:24           ` Andrea Arcangeli
2008-04-23 18:21             ` Christoph Lameter
2008-04-23 18:34               ` Andrea Arcangeli
2008-04-23 18:15           ` Christoph Lameter
2008-04-23 17:09   ` Jack Steiner
2008-04-23 17:45     ` Andrea Arcangeli
2008-04-22 13:51 ` [PATCH 02 of 12] Fix ia64 compilation failure because of common code include bug Andrea Arcangeli
2008-04-22 20:22   ` Christoph Lameter
2008-04-22 22:43     ` Andrea Arcangeli
2008-04-22 23:07       ` Robin Holt
2008-04-22 13:51 ` [PATCH 03 of 12] get_task_mm should not succeed if mmput() is running and has reduced Andrea Arcangeli
2008-04-22 20:23   ` Christoph Lameter
2008-04-22 22:37     ` Andrea Arcangeli
2008-04-22 23:13       ` Christoph Lameter
2008-04-22 13:51 ` [PATCH 04 of 12] Moves all mmu notifier methods outside the PT lock (first and not last Andrea Arcangeli
2008-04-22 20:24   ` Christoph Lameter
2008-04-22 22:40     ` Andrea Arcangeli
2008-04-22 23:14       ` Christoph Lameter
2008-04-23 13:44         ` Andrea Arcangeli
2008-04-23 15:45           ` Robin Holt
2008-04-23 16:15             ` Andrea Arcangeli
2008-04-23 19:55               ` Robin Holt
2008-04-23 21:05             ` Avi Kivity
2008-04-23 18:02           ` Christoph Lameter
2008-04-23 18:16             ` Andrea Arcangeli
2008-04-22 13:51 ` [PATCH 05 of 12] Move the tlb flushing into free_pgtables. The conversion of the locks Andrea Arcangeli
2008-04-22 20:25   ` Christoph Lameter
2008-04-22 13:51 ` [PATCH 06 of 12] Move the tlb flushing inside of unmap vmas. This saves us from passing Andrea Arcangeli
2008-04-22 13:51 ` [PATCH 07 of 12] Add a function to rw_semaphores to check if there are any processes Andrea Arcangeli
2008-04-22 13:51 ` [PATCH 08 of 12] The conversion to a rwsem allows notifier callbacks during rmap traversal Andrea Arcangeli
2008-04-22 13:51 ` [PATCH 09 of 12] Convert the anon_vma spinlock to a rw semaphore. This allows concurrent Andrea Arcangeli
2008-04-22 13:51 ` [PATCH 10 of 12] Convert mm_lock to use semaphores after i_mmap_lock and anon_vma_lock Andrea Arcangeli
2008-04-22 20:26   ` Christoph Lameter
2008-04-22 22:54     ` Andrea Arcangeli
2008-04-22 23:19       ` Christoph Lameter
2008-04-22 13:51 ` [PATCH 11 of 12] XPMEM would have used sys_madvise() except that madvise_dontneed() Andrea Arcangeli
2008-04-22 13:51 ` [PATCH 12 of 12] This patch adds a lock ordering rule to avoid a potential deadlock when Andrea Arcangeli
2008-04-22 18:22 ` [PATCH 00 of 12] mmu notifier #v13 Robin Holt
2008-04-22 18:43   ` Andrea Arcangeli
2008-04-22 19:42     ` Robin Holt
2008-04-22 20:30       ` Christoph Lameter
2008-04-23 13:33         ` Andrea Arcangeli
2008-04-22 20:28     ` Christoph Lameter
2008-04-23  0:31 ` Jack Steiner
