From: Liam Howlett <liam.howlett@oracle.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: "maple-tree@lists.infradead.org" <maple-tree@lists.infradead.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Song Liu <songliubraving@fb.com>,
Davidlohr Bueso <dave@stgolabs.net>,
"Paul E . McKenney" <paulmck@kernel.org>,
Matthew Wilcox <willy@infradead.org>,
Laurent Dufour <ldufour@linux.ibm.com>,
David Rientjes <rientjes@google.com>,
Axel Rasmussen <axelrasmussen@google.com>,
Suren Baghdasaryan <surenb@google.com>,
Rik van Riel <riel@surriel.com>,
Peter Zijlstra <peterz@infradead.org>,
Michel Lespinasse <walken.cr@gmail.com>,
Jerome Glisse <jglisse@redhat.com>,
Minchan Kim <minchan@google.com>,
Joel Fernandes <joelaf@google.com>,
Rom Lemarchand <romlem@google.com>
Subject: Re: [PATCH v4 65/66] mm: Remove the vma linked list
Date: Wed, 26 Jan 2022 20:29:10 +0000 [thread overview]
Message-ID: <20220126202857.y53bz24zom2znb5i@revolver> (raw)
In-Reply-To: <5a83c2ad-82d7-9c56-89cf-5a2184386adc@suse.cz>
* Vlastimil Babka <vbabka@suse.cz> [220120 12:41]:
> On 12/1/21 15:30, Liam Howlett wrote:
> > From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>
> >
> > Replace any vm_next use with vma_find().
> >
> > Update free_pgtables(), unmap_vmas(), and zap_page_range() to use the
> > maple tree.
>
> > Use the new free_pgtables() and unmap_vmas() in do_mas_align_munmap().
> > At the same time, alter the loop to be more compact.
> >
> > Now that free_pgtables() and unmap_vmas() take a maple tree as an
> > argument, rearrange do_mas_align_munmap() to use the new table to hold
> > the lock
>
> table or tree?
tree, thanks.
>
> > Remove __vma_link_list() and __vma_unlink_list() as they are exclusively
> > used to update the linked list
> >
> > Rework validation of tree as it was depending on the linked list.
> >
> > Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
>
> git grep shows that some usages of 'vm_next' and 'vm_prev' remain after this
> patch, including some exotic arch code.
I must have missed them being added during the development cycle of
the maple tree... except parisc; parisc has a block of code left in an
#if 0, so it's not lost - good thing it's in CVS now so it's safe :)
Thanks; riscv will require a new patch.
The damon test code will require a new patch as well - I will fold that
into the damon conversion patch.
>
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -398,12 +398,21 @@ void free_pgd_range(struct mmu_gather *tlb,
> > } while (pgd++, addr = next, addr != end);
> > }
> >
> > -void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > - unsigned long floor, unsigned long ceiling)
> > +void free_pgtables(struct mmu_gather *tlb, struct maple_tree *mt,
> > + struct vm_area_struct *vma, unsigned long floor,
> > + unsigned long ceiling)
> > {
> > - while (vma) {
> > - struct vm_area_struct *next = vma->vm_next;
> > + MA_STATE(mas, mt, vma->vm_end, vma->vm_end);
> > +
> > + do {
> > unsigned long addr = vma->vm_start;
> > + struct vm_area_struct *next;
> > +
> > + /*
> > + * Note: USER_PGTABLES_CEILING may be passed as ceiling and may
> > + * be 0. This will underflow and is okay.
> > + */
> > + next = mas_find(&mas, ceiling - 1);
> >
> > /*
> > * Hide vma from rmap and truncate_pagecache before freeing
> > @@ -422,7 +431,7 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > while (next && next->vm_start <= vma->vm_end + PMD_SIZE
> > && !is_vm_hugetlb_page(next)) {
> > vma = next;
> > - next = vma->vm_next;
> > + next = mas_find(&mas, ceiling - 1);
> > unlink_anon_vmas(vma);
> > unlink_file_vma(vma);
> > }
> > @@ -430,7 +439,7 @@ void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > floor, next ? next->vm_start : ceiling);
> > }
> > vma = next;
> > - }
> > + } while (vma);
> > }
> >
> > void pmd_install(struct mm_struct *mm, pmd_t *pmd, pgtable_t *pte)
> > @@ -1602,17 +1611,19 @@ static void unmap_single_vma(struct mmu_gather *tlb,
> > * ensure that any thus-far unmapped pages are flushed before unmap_vmas()
> > * drops the lock and schedules.
> > */
> > -void unmap_vmas(struct mmu_gather *tlb,
> > +void unmap_vmas(struct mmu_gather *tlb, struct maple_tree *mt,
> > struct vm_area_struct *vma, unsigned long start_addr,
> > unsigned long end_addr)
> > {
> > struct mmu_notifier_range range;
> > + MA_STATE(mas, mt, vma->vm_end, vma->vm_end);
> >
> > mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, vma->vm_mm,
> > start_addr, end_addr);
> > mmu_notifier_invalidate_range_start(&range);
> > - for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next)
> > + do {
> > unmap_single_vma(tlb, vma, start_addr, end_addr, NULL);
> > + } while ((vma = mas_find(&mas, end_addr - 1)) != NULL);
> > mmu_notifier_invalidate_range_end(&range);
> > }
> >
> > @@ -1627,8 +1638,11 @@ void unmap_vmas(struct mmu_gather *tlb,
> > void zap_page_range(struct vm_area_struct *vma, unsigned long start,
> > unsigned long size)
> > {
> > + struct maple_tree *mt = &vma->vm_mm->mm_mt;
>
> Well looks like that's also an option to avoid a new parameter :)
>
> > + unsigned long end = start + size;
> > struct mmu_notifier_range range;
> > struct mmu_gather tlb;
> > + MA_STATE(mas, mt, vma->vm_end, vma->vm_end);
> >
> > lru_add_drain();
> > mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
> > @@ -1636,8 +1650,9 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
> > tlb_gather_mmu(&tlb, vma->vm_mm);
> > update_hiwater_rss(vma->vm_mm);
> > mmu_notifier_invalidate_range_start(&range);
> > - for ( ; vma && vma->vm_start < range.end; vma = vma->vm_next)
> > + do {
> > unmap_single_vma(&tlb, vma, start, range.end, NULL);
> > + } while ((vma = mas_find(&mas, end - 1)) != NULL);
> > mmu_notifier_invalidate_range_end(&range);
> > tlb_finish_mmu(&tlb);
> > }
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index dde74e0b195d..e13c6ef76697 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -74,9 +74,10 @@ int mmap_rnd_compat_bits __read_mostly = CONFIG_ARCH_MMAP_RND_COMPAT_BITS;
> > static bool ignore_rlimit_data;
> > core_param(ignore_rlimit_data, ignore_rlimit_data, bool, 0644);
> >
> > -static void unmap_region(struct mm_struct *mm,
> > +static void unmap_region(struct mm_struct *mm, struct maple_tree *mt,
> > struct vm_area_struct *vma, struct vm_area_struct *prev,
> > - unsigned long start, unsigned long end);
> > + struct vm_area_struct *next, unsigned long start,
> > + unsigned long end);
> >
> > /* description of effects of mapping type and prot in current implementation.
> > * this is due to the limited x86 page protection hardware. The expected
> > @@ -173,10 +174,8 @@ void unlink_file_vma(struct vm_area_struct *vma)
> > /*
> > * Close a vm structure and free it, returning the next.
>
> No longer returning the next.
ack
>
> > */
> > -static struct vm_area_struct *remove_vma(struct vm_area_struct *vma)
> > +static void remove_vma(struct vm_area_struct *vma)
> > {
> > - struct vm_area_struct *next = vma->vm_next;
> > -
> > might_sleep();
> > if (vma->vm_ops && vma->vm_ops->close)
> > vma->vm_ops->close(vma);
>
> <snip>
>
> > */
> > struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *vma)
> > {
> > + MA_STATE(mas, &vma->vm_mm->mm_mt, vma->vm_end, vma->vm_end);
> > struct anon_vma *anon_vma = NULL;
> > + struct vm_area_struct *prev, *next;
> >
> > /* Try next first. */
> > - if (vma->vm_next) {
> > - anon_vma = reusable_anon_vma(vma->vm_next, vma, vma->vm_next);
> > + next = mas_walk(&mas);
> > + if (next) {
> > + anon_vma = reusable_anon_vma(next, vma, next);
> > if (anon_vma)
> > return anon_vma;
> > }
> >
> > + prev = mas_prev(&mas, 0);
> > + VM_BUG_ON_VMA(prev != vma, vma);
> > + prev = mas_prev(&mas, 0);
> > /* Try prev next. */
> > - if (vma->vm_prev)
> > - anon_vma = reusable_anon_vma(vma->vm_prev, vma->vm_prev, vma);
> > + if (prev)
> > + anon_vma = reusable_anon_vma(prev, prev, vma);
> >
> > /*
> > * We might reach here with anon_vma == NULL if we can't find
> > @@ -1906,10 +1825,10 @@ struct vm_area_struct *find_vma_intersection(struct mm_struct *mm,
> > unsigned long start_addr,
> > unsigned long end_addr)
> > {
> > - MA_STATE(mas, &mm->mm_mt, start_addr, start_addr);
> > + unsigned long index = start_addr;
> >
> > mmap_assert_locked(mm);
> > - return mas_find(&mas, end_addr - 1);
> > + return mt_find(&mm->mm_mt, &index, end_addr - 1);
>
> Why is this now changed again?
I found this while addressing one of your previous comments; I have a fix.
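For context, a sketch of the two lookup idioms as they appear in the diff
above (both forms are from the patch; the commentary is my own reading):

/* Advanced API: the caller owns the ma_state, so repeated calls
 * continue from where the previous search left off.
 */
MA_STATE(mas, &mm->mm_mt, start_addr, start_addr);
vma = mas_find(&mas, end_addr - 1);

/* Simple API: one-shot lookup that takes the tree and an index
 * pointer, advancing *index past the match on success.
 */
unsigned long index = start_addr;
vma = mt_find(&mm->mm_mt, &index, end_addr - 1);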
>
> > }
> > EXPORT_SYMBOL(find_vma_intersection);
> >
> > @@ -1923,8 +1842,10 @@ EXPORT_SYMBOL(find_vma_intersection);
> > */
> > inline struct vm_area_struct *find_vma(struct mm_struct *mm, unsigned long addr)
> > {
> > - // Note find_vma_intersection will decrease 0 to underflow to ULONG_MAX
> > - return find_vma_intersection(mm, addr, 0);
> > + unsigned long index = addr;
> > +
> > + mmap_assert_locked(mm);
> > + return mt_find(&mm->mm_mt, &index, ULONG_MAX);
>
> And here.
Ditto.
>
> > }
> > EXPORT_SYMBOL(find_vma);
> >
> > @@ -2026,7 +1947,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
> > if (gap_addr < address || gap_addr > TASK_SIZE)
> > gap_addr = TASK_SIZE;
> >
> > - next = vma->vm_next;
> > + next = vma_find(mm, vma->vm_end);
> > if (next && next->vm_start < gap_addr && vma_is_accessible(next)) {
> > if (!(next->vm_flags & VM_GROWSUP))
> > return -ENOMEM;
> > @@ -2072,8 +1993,6 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
> > vma->vm_end = address;
> > vma_store(mm, vma);
> > anon_vma_interval_tree_post_update_vma(vma);
> > - if (!vma->vm_next)
> > - mm->highest_vm_end = vm_end_gap(vma);
> > spin_unlock(&mm->page_table_lock);
> >
> > perf_event_mmap(vma);
> > @@ -2100,7 +2019,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
> > return -EPERM;
> >
> > /* Enforce stack_guard_gap */
> > - prev = vma->vm_prev;
> > + find_vma_prev(mm, vma->vm_start, &prev);
> > /* Check that both stack segments have the same anon_vma? */
> > if (prev && !(prev->vm_flags & VM_GROWSDOWN) &&
> > vma_is_accessible(prev)) {
> > @@ -2235,20 +2154,22 @@ EXPORT_SYMBOL_GPL(find_extend_vma);
> > *
> > * Called with the mm semaphore held.
>
> Above this, the comment talks about vma list, update?
I will update the comment.
>
> > */
> > -static void remove_vma_list(struct mm_struct *mm, struct vm_area_struct *vma)
> > +static inline void remove_mt(struct mm_struct *mm, struct maple_tree *detached)
> > {
> > unsigned long nr_accounted = 0;
> > + unsigned long index = 0;
> > + struct vm_area_struct *vma;
> >
> > /* Update high watermark before we lower total_vm */
> > update_hiwater_vm(mm);
> > - do {
> > + mt_for_each(detached, vma, index, ULONG_MAX) {
> > long nrpages = vma_pages(vma);
> >
> > if (vma->vm_flags & VM_ACCOUNT)
> > nr_accounted += nrpages;
> > vm_stat_account(mm, vma->vm_flags, -nrpages);
> > - vma = remove_vma(vma);
> > - } while (vma);
> > + remove_vma(vma);
> > + }
> > vm_unacct_memory(nr_accounted);
> > validate_mm(mm);
> > }