From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
To: "Kirill A . Shutemov" <kirill@shutemov.name>,
Peter Zijlstra <peterz@infradead.org>
Cc: Linux MM <linux-mm@kvack.org>, Michal Hocko <mhocko@suse.cz>
Subject: [RFC PATCH v2 4/7] mm: VMA sequence count
Date: Fri, 18 Nov 2016 12:08:48 +0100
Message-ID: <95790e53bfcfb536eb8f1dcdf4750e7e8050d8f4.1479465699.git.ldufour@linux.vnet.ibm.com>
In-Reply-To: <cover.1479465699.git.ldufour@linux.vnet.ibm.com>
From: Peter Zijlstra <peterz@infradead.org>
Wrap the VMA modifications (vma_adjust/unmap_page_range) with sequence
counts so that we can easily test whether a VMA has changed.
The seqcount around unmap_page_range() allows us to make assumptions
about the page tables: when we find the count unchanged, we can assume
the page tables are still valid.
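
For illustration, the read side (introduced later in this series by the
speculative fault infrastructure patch) pairs with these write sections
roughly as follows; this is a sketch only, not part of this patch:

	unsigned int seq;

	seq = read_seqcount_begin(&vma->vm_sequence);
	/* ... speculatively walk the page-tables, mmap_sem not held ... */
	if (read_seqcount_retry(&vma->vm_sequence, seq))
		goto retry;	/* the VMA was modified under us, start over */
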
The flip side is that a reader cannot tell a vma_adjust() apart from an
unmap_page_range(); with the former alone, we could simply have
re-checked the VMA bounds against the faulting address.
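
In other words, if vma_adjust() were the only writer, a reader seeing a
changed count could still salvage the fault by revalidating the bounds,
along the lines of:

	if (address < vma->vm_start || address >= vma->vm_end)
		goto retry;	/* not the right VMA any more */

With unmap_page_range() also bumping the count, a changed count has to
be treated as a full retry, since the page tables may be gone.
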
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/mm_types.h |  1 +
 mm/memory.c              |  2 ++
 mm/mmap.c                | 12 ++++++++++++
 3 files changed, 15 insertions(+)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 903200f4ec41..620719bef808 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -358,6 +358,7 @@ struct vm_area_struct {
 	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+	seqcount_t vm_sequence;
 };
 
 struct core_thread {
diff --git a/mm/memory.c b/mm/memory.c
index d19800904272..ec32cf710403 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1290,6 +1290,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 	unsigned long next;
 
 	BUG_ON(addr >= end);
+	write_seqcount_begin(&vma->vm_sequence);
 	tlb_start_vma(tlb, vma);
 	pgd = pgd_offset(vma->vm_mm, addr);
 	do {
@@ -1299,6 +1300,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 		next = zap_pud_range(tlb, vma, pgd, addr, next, details);
 	} while (pgd++, addr = next, addr != end);
 	tlb_end_vma(tlb, vma);
+	write_seqcount_end(&vma->vm_sequence);
 }
diff --git a/mm/mmap.c b/mm/mmap.c
index ca9d91bca0d6..c2be9bd0ad92 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -514,6 +514,8 @@ void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma,
 	else
 		mm->highest_vm_end = vma->vm_end;
 
+	seqcount_init(&vma->vm_sequence);
+
 	/*
 	 * vma->vm_prev wasn't known when we followed the rbtree to find the
 	 * correct insertion point for that vma. As a result, we could not
@@ -629,6 +631,10 @@ int vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	long adjust_next = 0;
 	int remove_next = 0;
 
+	write_seqcount_begin(&vma->vm_sequence);
+	if (next)
+		write_seqcount_begin_nested(&next->vm_sequence, SINGLE_DEPTH_NESTING);
+
 	if (next && !insert) {
 		struct vm_area_struct *exporter = NULL, *importer = NULL;
 
@@ -802,7 +808,9 @@ again:
 		 * we must remove another next too. It would clutter
 		 * up the code too much to do both in one go.
 		 */
+		write_seqcount_end(&next->vm_sequence);
 		next = vma->vm_next;
+		write_seqcount_begin_nested(&next->vm_sequence, SINGLE_DEPTH_NESTING);
 		if (remove_next == 2) {
 			remove_next = 1;
 			end = next->vm_end;
@@ -816,6 +824,10 @@ again:
 	if (insert && file)
 		uprobe_mmap(insert);
 
+	if (next)
+		write_seqcount_end(&next->vm_sequence);
+	write_seqcount_end(&vma->vm_sequence);
+
 	validate_mm(mm);
 
 	return 0;
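
A note on the _nested annotations above: every vm_sequence is
initialized at the same seqcount_init() call site, so they all share
one lockdep class, and lockdep must be told that taking
next->vm_sequence inside vma->vm_sequence is intentional. The write
side therefore always takes this shape (sketch of the pattern used in
vma_adjust() above):

	write_seqcount_begin(&vma->vm_sequence);
	if (next)
		write_seqcount_begin_nested(&next->vm_sequence,
					    SINGLE_DEPTH_NESTING);
	/* ... modify vma and possibly next ... */
	if (next)
		write_seqcount_end(&next->vm_sequence);
	write_seqcount_end(&vma->vm_sequence);
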
--
2.7.4
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>