From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton <akpm@linux-foundation.org>, <airlied@linux.ie>,
<chris@chris-wilson.co.uk>, <daniel@ffwll.ch>,
<jani.nikula@linux.intel.com>, <joonas.lahtinen@linux.intel.com>,
<jrdr.linux@gmail.com>, <linux-mm@kvack.org>,
<matthew.auld@intel.com>, <mm-commits@vger.kernel.org>,
<rodrigo.vivi@intel.com>, <torvalds@linux-foundation.org>,
<tvrtko.ursulin@intel.com>, <willy@infradead.org>
Subject: Re: [patch 003/131] mm/gup: move __get_user_pages_fast() down a few lines in gup.c
Date: Wed, 3 Jun 2020 18:51:44 -0700 [thread overview]
Message-ID: <2f010100-7d0f-e7bd-073e-e7f6baa205b2@nvidia.com> (raw)
In-Reply-To: <20200603225627.KaTOZsw5O%akpm@linux-foundation.org>
On 2020-06-03 15:56, Andrew Morton wrote:
> From: John Hubbard <jhubbard@nvidia.com>
> Subject: mm/gup: move __get_user_pages_fast() down a few lines in gup.c
>
> Patch series "mm/gup, drm/i915: refactor gup_fast, convert to pin_user_pages()", v2.
These patches 003 through 007 (the gup refactoring and the pin_user_pages()
conversion) all look good.
Thanks for fixing up the merge conflicts with commit
17839856fd58 ("gup: document and work around "COW can break either way" issue").
I wasn't aware of that commit until the -next conflict email showed up in
my inbox this morning.
thanks,
--
John Hubbard
NVIDIA
>
> In order to convert the drm/i915 driver from get_user_pages() to
> pin_user_pages(), a FOLL_PIN equivalent of __get_user_pages_fast() was
> required. That led to refactoring __get_user_pages_fast(), with the
> following goals:
>
> 1) As above: provide a pin_user_pages*() routine for drm/i915 to call,
> in place of __get_user_pages_fast(),
>
> 2) Get rid of the gup.c duplicate code for walking page tables with
> interrupts disabled. This duplicate code is a minor maintenance
> problem anyway.
>
> 3) Make it easy for an upcoming patch from Souptick, which aims to
> convert __get_user_pages_fast() to use a gup_flags argument, instead
> of a bool writeable arg. Also, if this series looks good, we can
> ask Souptick to change the name as well, to whatever the consensus
> is. My initial recommendation is: get_user_pages_fast_only(), to
> match the new pin_user_pages_fast_only().
>
>
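Just to make goal (1) above concrete, here is a minimal sketch of the
call-site change the series is aiming at. The helper names are hypothetical,
and the pin_user_pages_fast_only() argument list is assumed to mirror
get_user_pages_fast() (the routine itself only appears in patch 005):

#include <linux/mm.h>

/*
 * Hypothetical driver helpers, for illustration only.
 *
 * Before: an IRQ-safe, no-fallback pin via __get_user_pages_fast().
 * The FOLL_GET-elevated pages are released with put_page().
 */
static int pin_range_old(unsigned long start, int npages, struct page **pages)
{
	return __get_user_pages_fast(start, npages, 1, pages); /* write = 1 */
}

/*
 * After (sketch): the same fast-only behavior via pin_user_pages_fast_only(),
 * but the pages are FOLL_PIN-pinned and must be released with
 * unpin_user_pages() instead of put_page().
 */
static int pin_range_new(unsigned long start, int npages, struct page **pages)
{
	return pin_user_pages_fast_only(start, npages, FOLL_WRITE, pages);
}

Both variants should return the number of pages pinned (0 rather than an
error code), so the surrounding error handling stays the same.
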
> This patch (of 4):
>
> This is in order to avoid a forward declaration of
> internal_get_user_pages_fast(), in the next patch.
>
> This is code movement only--all generated code should be identical.
>
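In other words, without this code movement, patch 004 would need something
like the following forward declaration near the top of mm/gup.c (a sketch;
the parameter list is an assumption, since the hunk header below only shows
the truncated function name):

static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
					unsigned int gup_flags,
					struct page **pages);
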
> Link: http://lkml.kernel.org/r/20200522051931.54191-1-jhubbard@nvidia.com
> Link: http://lkml.kernel.org/r/20200519002124.2025955-1-jhubbard@nvidia.com
> Link: http://lkml.kernel.org/r/20200519002124.2025955-2-jhubbard@nvidia.com
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: David Airlie <airlied@linux.ie>
> Cc: Jani Nikula <jani.nikula@linux.intel.com>
> Cc: "Joonas Lahtinen" <joonas.lahtinen@linux.intel.com>
> Cc: Matthew Auld <matthew.auld@intel.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
> Cc: Souptick Joarder <jrdr.linux@gmail.com>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
>
> mm/gup.c | 132 ++++++++++++++++++++++++++---------------------------
> 1 file changed, 66 insertions(+), 66 deletions(-)
>
> --- a/mm/gup.c~mm-gup-move-__get_user_pages_fast-down-a-few-lines-in-gupc
> +++ a/mm/gup.c
> @@ -2703,72 +2703,6 @@ static bool gup_fast_permitted(unsigned
> }
> #endif
>
> -/*
> - * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
> - * the regular GUP.
> - * Note a difference with get_user_pages_fast: this always returns the
> - * number of pages pinned, 0 if no pages were pinned.
> - *
> - * If the architecture does not support this function, simply return with no
> - * pages pinned.
> - *
> - * Careful, careful! COW breaking can go either way, so a non-write
> - * access can get ambiguous page results. If you call this function without
> - * 'write' set, you'd better be sure that you're ok with that ambiguity.
> - */
> -int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
> -			  struct page **pages)
> -{
> -	unsigned long len, end;
> -	unsigned long flags;
> -	int nr_pinned = 0;
> -	/*
> -	 * Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
> -	 * because gup fast is always a "pin with a +1 page refcount" request.
> -	 */
> -	unsigned int gup_flags = FOLL_GET;
> -
> -	if (write)
> -		gup_flags |= FOLL_WRITE;
> -
> -	start = untagged_addr(start) & PAGE_MASK;
> -	len = (unsigned long) nr_pages << PAGE_SHIFT;
> -	end = start + len;
> -
> -	if (end <= start)
> -		return 0;
> -	if (unlikely(!access_ok((void __user *)start, len)))
> -		return 0;
> -
> -	/*
> -	 * Disable interrupts. We use the nested form as we can already have
> -	 * interrupts disabled by get_futex_key.
> -	 *
> -	 * With interrupts disabled, we block page table pages from being
> -	 * freed from under us. See struct mmu_table_batch comments in
> -	 * include/asm-generic/tlb.h for more details.
> -	 *
> -	 * We do not adopt an rcu_read_lock(.) here as we also want to
> -	 * block IPIs that come from THPs splitting.
> -	 *
> -	 * NOTE! We allow read-only gup_fast() here, but you'd better be
> -	 * careful about possible COW pages. You'll get _a_ COW page, but
> -	 * not necessarily the one you intended to get depending on what
> -	 * COW event happens after this. COW may break the page copy in a
> -	 * random direction.
> -	 */
> -
> -	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
> -	    gup_fast_permitted(start, end)) {
> -		local_irq_save(flags);
> -		gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
> -		local_irq_restore(flags);
> -	}
> -
> -	return nr_pinned;
> -}
> -EXPORT_SYMBOL_GPL(__get_user_pages_fast);
> -
> static int __gup_longterm_unlocked(unsigned long start, int nr_pages,
> unsigned int gup_flags, struct page **pages)
> {
> @@ -2848,6 +2782,72 @@ static int internal_get_user_pages_fast(
> return ret;
> }
>
> +/*
> + * Like get_user_pages_fast() except it's IRQ-safe in that it won't fall back to
> + * the regular GUP.
> + * Note a difference with get_user_pages_fast: this always returns the
> + * number of pages pinned, 0 if no pages were pinned.
> + *
> + * If the architecture does not support this function, simply return with no
> + * pages pinned.
> + *
> + * Careful, careful! COW breaking can go either way, so a non-write
> + * access can get ambiguous page results. If you call this function without
> + * 'write' set, you'd better be sure that you're ok with that ambiguity.
> + */
> +int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
> +			  struct page **pages)
> +{
> +	unsigned long len, end;
> +	unsigned long flags;
> +	int nr_pinned = 0;
> +	/*
> +	 * Internally (within mm/gup.c), gup fast variants must set FOLL_GET,
> +	 * because gup fast is always a "pin with a +1 page refcount" request.
> +	 */
> +	unsigned int gup_flags = FOLL_GET;
> +
> +	if (write)
> +		gup_flags |= FOLL_WRITE;
> +
> +	start = untagged_addr(start) & PAGE_MASK;
> +	len = (unsigned long) nr_pages << PAGE_SHIFT;
> +	end = start + len;
> +
> +	if (end <= start)
> +		return 0;
> +	if (unlikely(!access_ok((void __user *)start, len)))
> +		return 0;
> +
> +	/*
> +	 * Disable interrupts. We use the nested form as we can already have
> +	 * interrupts disabled by get_futex_key.
> +	 *
> +	 * With interrupts disabled, we block page table pages from being
> +	 * freed from under us. See struct mmu_table_batch comments in
> +	 * include/asm-generic/tlb.h for more details.
> +	 *
> +	 * We do not adopt an rcu_read_lock(.) here as we also want to
> +	 * block IPIs that come from THPs splitting.
> +	 *
> +	 * NOTE! We allow read-only gup_fast() here, but you'd better be
> +	 * careful about possible COW pages. You'll get _a_ COW page, but
> +	 * not necessarily the one you intended to get depending on what
> +	 * COW event happens after this. COW may break the page copy in a
> +	 * random direction.
> +	 */
> +
> +	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
> +	    gup_fast_permitted(start, end)) {
> +		local_irq_save(flags);
> +		gup_pgd_range(start, end, gup_flags, pages, &nr_pinned);
> +		local_irq_restore(flags);
> +	}
> +
> +	return nr_pinned;
> +}
> +EXPORT_SYMBOL_GPL(__get_user_pages_fast);
> +
> /**
> * get_user_pages_fast() - pin user pages in memory
> * @start: starting user address
> _
>
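For readers following along: after the de-duplication in patch 004,
__get_user_pages_fast() should end up as roughly the thin wrapper below.
This is a sketch of the direction, not the patch text; FOLL_FAST_ONLY is
assumed to be the internal flag that patch adds to mean "do not fall back
to slow GUP":

/*
 * Sketch of __get_user_pages_fast() after the refactor in patch 004.
 * Assumes internal_get_user_pages_fast() honors a FOLL_FAST_ONLY flag.
 */
int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
			  struct page **pages)
{
	int nr_pinned;
	/* gup fast is always a "pin with a +1 page refcount" request. */
	unsigned int gup_flags = FOLL_GET | FOLL_FAST_ONLY;

	if (write)
		gup_flags |= FOLL_WRITE;

	nr_pinned = internal_get_user_pages_fast(start, nr_pages, gup_flags,
						 pages);
	/*
	 * This entry point keeps the historical contract of returning 0,
	 * never a negative error code.
	 */
	return nr_pinned < 0 ? 0 : nr_pinned;
}
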
Thread overview: 142+ messages
2020-06-03 22:55 incoming Andrew Morton
2020-06-03 22:56 ` [patch 001/131] mm/slub: fix a memory leak in sysfs_slab_add() Andrew Morton
2020-06-03 22:56 ` [patch 002/131] mm/memcg: optimize memory.numa_stat like memory.stat Andrew Morton
2020-06-03 22:56 ` [patch 003/131] mm/gup: move __get_user_pages_fast() down a few lines in gup.c Andrew Morton
2020-06-04 1:51 ` John Hubbard [this message]
2020-06-03 22:56 ` [patch 004/131] mm/gup: refactor and de-duplicate gup_fast() code Andrew Morton
2020-06-04 2:19 ` Linus Torvalds
2020-06-04 3:19 ` Linus Torvalds
2020-06-04 4:31 ` Linus Torvalds
2020-06-04 5:18 ` John Hubbard
2020-06-03 22:56 ` [patch 005/131] mm/gup: introduce pin_user_pages_fast_only() Andrew Morton
2020-06-03 22:56 ` [patch 006/131] drm/i915: convert get_user_pages() --> pin_user_pages() Andrew Morton
2020-06-03 22:56 ` [patch 007/131] mm/gup: might_lock_read(mmap_sem) in get_user_pages_fast() Andrew Morton
2020-06-03 22:56 ` [patch 008/131] kasan: stop tests being eliminated as dead code with FORTIFY_SOURCE Andrew Morton
2020-06-03 22:56 ` [patch 009/131] string.h: fix incompatibility between FORTIFY_SOURCE and KASAN Andrew Morton
2020-06-03 22:56 ` [patch 010/131] mm: clarify __GFP_MEMALLOC usage Andrew Morton
2020-06-03 22:56 ` [patch 011/131] mm: memblock: replace dereferences of memblock_region.nid with API calls Andrew Morton
2020-06-03 22:56 ` [patch 012/131] mm: make early_pfn_to_nid() and related defintions close to each other Andrew Morton
2020-06-03 22:57 ` [patch 013/131] mm: remove CONFIG_HAVE_MEMBLOCK_NODE_MAP option Andrew Morton
2020-06-03 22:57 ` [patch 014/131] mm: free_area_init: use maximal zone PFNs rather than zone sizes Andrew Morton
2020-06-03 22:57 ` [patch 015/131] mm: use free_area_init() instead of free_area_init_nodes() Andrew Morton
2020-06-03 22:57 ` [patch 016/131] alpha: simplify detection of memory zone boundaries Andrew Morton
2020-06-03 22:57 ` [patch 017/131] arm: " Andrew Morton
2020-06-03 22:57 ` [patch 018/131] arm64: simplify detection of memory zone boundaries for UMA configs Andrew Morton
2020-06-03 22:57 ` [patch 019/131] csky: simplify detection of memory zone boundaries Andrew Morton
2020-06-03 22:57 ` [patch 020/131] m68k: mm: " Andrew Morton
2020-06-03 22:57 ` [patch 021/131] parisc: " Andrew Morton
2020-06-03 22:57 ` [patch 022/131] sparc32: " Andrew Morton
2020-06-03 22:57 ` [patch 023/131] unicore32: " Andrew Morton
2020-06-03 22:57 ` [patch 024/131] xtensa: " Andrew Morton
2020-06-03 22:57 ` [patch 025/131] mm: memmap_init: iterate over memblock regions rather that check each PFN Andrew Morton
2020-06-03 22:57 ` [patch 026/131] mm: remove early_pfn_in_nid() and CONFIG_NODES_SPAN_OTHER_NODES Andrew Morton
2020-06-03 22:58 ` [patch 027/131] mm: free_area_init: allow defining max_zone_pfn in descending order Andrew Morton
2020-06-03 22:58 ` [patch 028/131] mm: rename free_area_init_node() to free_area_init_memoryless_node() Andrew Morton
2020-06-03 22:58 ` [patch 029/131] mm: clean up free_area_init_node() and its helpers Andrew Morton
2020-06-03 22:58 ` [patch 030/131] mm: simplify find_min_pfn_with_active_regions() Andrew Morton
2020-06-03 22:58 ` [patch 031/131] docs/vm: update memory-models documentation Andrew Morton
2020-06-03 22:58 ` [patch 032/131] mm/page_alloc.c: bad_[reason|flags] is not necessary when PageHWPoison Andrew Morton
2020-06-03 22:58 ` [patch 033/131] mm/page_alloc.c: bad_flags is not necessary for bad_page() Andrew Morton
2020-06-03 22:58 ` [patch 034/131] mm/page_alloc.c: rename free_pages_check_bad() to check_free_page_bad() Andrew Morton
2020-06-03 22:58 ` [patch 035/131] mm/page_alloc.c: rename free_pages_check() to check_free_page() Andrew Morton
2020-06-03 22:58 ` [patch 036/131] mm/page_alloc.c: extract check_[new|free]_page_bad() common part to page_bad_reason() Andrew Morton
2020-06-03 22:58 ` [patch 037/131] mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations Andrew Morton
2020-06-03 22:58 ` [patch 038/131] mm/page_alloc.c: remove unused free_bootmem_with_active_regions Andrew Morton
2020-06-03 22:58 ` [patch 039/131] mm/page_alloc.c: only tune sysctl_lowmem_reserve_ratio value once when changing it Andrew Morton
2020-06-03 22:58 ` [patch 040/131] mm/page_alloc.c: clear out zone->lowmem_reserve[] if the zone is empty Andrew Morton
2020-06-03 22:58 ` [patch 041/131] mm/vmstat.c: do not show lowmem reserve protection information of empty zone Andrew Morton
2020-06-03 22:58 ` [patch 042/131] mm/page_alloc: use ac->high_zoneidx for classzone_idx Andrew Morton
2020-06-03 22:59 ` [patch 043/131] mm/page_alloc: integrate classzone_idx and high_zoneidx Andrew Morton
2020-06-03 22:59 ` [patch 044/131] mm/page_alloc.c: use NODE_MASK_NONE in build_zonelists() Andrew Morton
2020-06-03 22:59 ` [patch 045/131] mm: rename gfpflags_to_migratetype to gfp_migratetype for same convention Andrew Morton
2020-06-03 22:59 ` [patch 046/131] mm/page_alloc.c: reset numa stats for boot pagesets Andrew Morton
2020-06-03 22:59 ` [patch 047/131] mm, page_alloc: reset the zone->watermark_boost early Andrew Morton
2020-06-03 22:59 ` [patch 048/131] mm/page_alloc: restrict and formalize compound_page_dtors[] Andrew Morton
2020-06-03 22:59 ` [patch 049/131] mm/pagealloc.c: call touch_nmi_watchdog() on max order boundaries in deferred init Andrew Morton
2020-06-03 22:59 ` [patch 050/131] mm: initialize deferred pages with interrupts enabled Andrew Morton
2020-06-03 22:59 ` [patch 051/131] mm: call cond_resched() from deferred_init_memmap() Andrew Morton
2020-06-03 22:59 ` [patch 052/131] padata: remove exit routine Andrew Morton
2020-06-03 22:59 ` [patch 053/131] padata: initialize earlier Andrew Morton
2020-06-03 22:59 ` [patch 054/131] padata: allocate work structures for parallel jobs from a pool Andrew Morton
2020-06-03 22:59 ` [patch 055/131] padata: add basic support for multithreaded jobs Andrew Morton
2020-06-03 22:59 ` [patch 056/131] mm: don't track number of pages during deferred initialization Andrew Morton
2020-06-03 22:59 ` [patch 057/131] mm: parallelize deferred_init_memmap() Andrew Morton
2020-06-03 22:59 ` [patch 058/131] mm: make deferred init's max threads arch-specific Andrew Morton
2020-06-03 22:59 ` [patch 059/131] padata: document multithreaded jobs Andrew Morton
2020-06-03 23:00 ` [patch 060/131] mm/page_alloc.c: add missing newline Andrew Morton
2020-06-03 23:00 ` [patch 061/131] khugepaged: add self test Andrew Morton
2020-06-03 23:00 ` [patch 062/131] khugepaged: do not stop collapse if less than half PTEs are referenced Andrew Morton
2020-06-03 23:00 ` [patch 063/131] khugepaged: drain all LRU caches before scanning pages Andrew Morton
2020-06-03 23:00 ` [patch 064/131] khugepaged: drain LRU add pagevec after swapin Andrew Morton
2020-06-03 23:00 ` [patch 065/131] khugepaged: allow to collapse a page shared across fork Andrew Morton
2020-06-03 23:00 ` [patch 066/131] khugepaged: allow to collapse PTE-mapped compound pages Andrew Morton
2020-06-03 23:00 ` [patch 067/131] thp: change CoW semantics for anon-THP Andrew Morton
2020-06-03 23:00 ` [patch 068/131] khugepaged: introduce 'max_ptes_shared' tunable Andrew Morton
2020-06-03 23:00 ` [patch 069/131] hugetlbfs: add arch_hugetlb_valid_size Andrew Morton
2020-06-03 23:00 ` [patch 070/131] hugetlbfs: move hugepagesz= parsing to arch independent code Andrew Morton
2020-06-03 23:00 ` [patch 071/131] hugetlbfs: remove hugetlb_add_hstate() warning for existing hstate Andrew Morton
2020-06-03 23:00 ` [patch 072/131] hugetlbfs: clean up command line processing Andrew Morton
2020-06-03 23:00 ` [patch 073/131] hugetlbfs: fix changes to " Andrew Morton
2020-06-03 23:00 ` [patch 074/131] mm/hugetlb: avoid unnecessary check on pud and pmd entry in huge_pte_offset Andrew Morton
2020-06-03 23:00 ` [patch 075/131] arm64/mm: drop __HAVE_ARCH_HUGE_PTEP_GET Andrew Morton
2020-06-03 23:01 ` [patch 076/131] mm/hugetlb: define a generic fallback for is_hugepage_only_range() Andrew Morton
2020-06-03 23:01 ` [patch 077/131] mm/hugetlb: define a generic fallback for arch_clear_hugepage_flags() Andrew Morton
2020-06-03 23:01 ` [patch 078/131] mm: simplify calling a compound page destructor Andrew Morton
2020-06-03 23:01 ` [patch 079/131] mm/vmscan.c: use update_lru_size() in update_lru_sizes() Andrew Morton
2020-06-03 23:01 ` [patch 080/131] mm/vmscan: count layzfree pages and fix nr_isolated_* mismatch Andrew Morton
2020-06-03 23:01 ` [patch 081/131] mm/vmscan.c: change prototype for shrink_page_list Andrew Morton
2020-06-03 23:01 ` [patch 082/131] mm/vmscan: update the comment of should_continue_reclaim() Andrew Morton
2020-06-03 23:01 ` [patch 083/131] mm: fix NUMA node file count error in replace_page_cache() Andrew Morton
2020-06-03 23:01 ` [patch 084/131] mm: memcontrol: fix stat-corrupting race in charge moving Andrew Morton
2020-06-03 23:01 ` [patch 085/131] mm: memcontrol: drop @compound parameter from memcg charging API Andrew Morton
2020-06-03 23:01 ` [patch 086/131] mm: shmem: remove rare optimization when swapin races with hole punching Andrew Morton
2020-06-03 23:01 ` [patch 087/131] mm: memcontrol: move out cgroup swaprate throttling Andrew Morton
2020-06-03 23:01 ` [patch 088/131] mm: memcontrol: convert page cache to a new mem_cgroup_charge() API Andrew Morton
2020-06-03 23:01 ` [patch 089/131] mm: memcontrol: prepare uncharging for removal of private page type counters Andrew Morton
2020-06-03 23:01 ` [patch 090/131] mm: memcontrol: prepare move_account " Andrew Morton
2020-06-03 23:01 ` [patch 091/131] mm: memcontrol: prepare cgroup vmstat infrastructure for native anon counters Andrew Morton
2020-06-03 23:01 ` [patch 092/131] mm: memcontrol: switch to native NR_FILE_PAGES and NR_SHMEM counters Andrew Morton
2020-06-03 23:01 ` [patch 093/131] mm: memcontrol: switch to native NR_ANON_MAPPED counter Andrew Morton
2020-06-03 23:02 ` [patch 094/131] mm: memcontrol: switch to native NR_ANON_THPS counter Andrew Morton
2020-06-03 23:02 ` [patch 095/131] mm: memcontrol: convert anon and file-thp to new mem_cgroup_charge() API Andrew Morton
2020-06-03 23:02 ` [patch 096/131] mm: memcontrol: drop unused try/commit/cancel charge API Andrew Morton
2020-06-03 23:02 ` [patch 097/131] mm: memcontrol: prepare swap controller setup for integration Andrew Morton
2020-06-03 23:02 ` [patch 098/131] mm: memcontrol: make swap tracking an integral part of memory control Andrew Morton
2020-06-03 23:02 ` [patch 099/131] mm: memcontrol: charge swapin pages on instantiation Andrew Morton
2020-06-03 23:02 ` [patch 100/131] mm: memcontrol: document the new swap control behavior Andrew Morton
2020-06-03 23:02 ` [patch 101/131] mm: memcontrol: delete unused lrucare handling Andrew Morton
2020-06-03 23:02 ` [patch 102/131] mm: memcontrol: update page->mem_cgroup stability rules Andrew Morton
2020-06-03 23:02 ` [patch 103/131] mm: fix LRU balancing effect of new transparent huge pages Andrew Morton
2020-06-03 23:02 ` [patch 104/131] mm: keep separate anon and file statistics on page reclaim activity Andrew Morton
2020-06-03 23:02 ` [patch 105/131] mm: allow swappiness that prefers reclaiming anon over the file workingset Andrew Morton
2020-06-03 23:02 ` [patch 106/131] mm: fold and remove lru_cache_add_anon() and lru_cache_add_file() Andrew Morton
2020-06-03 23:02 ` [patch 107/131] mm: workingset: let cache workingset challenge anon Andrew Morton
2020-06-03 23:02 ` [patch 108/131] mm: remove use-once cache bias from LRU balancing Andrew Morton
2020-06-03 23:02 ` [patch 109/131] mm: vmscan: drop unnecessary div0 avoidance rounding in get_scan_count() Andrew Morton
2020-06-03 23:02 ` [patch 110/131] mm: base LRU balancing on an explicit cost model Andrew Morton
2020-06-03 23:02 ` [patch 111/131] mm: deactivations shouldn't bias the LRU balance Andrew Morton
2020-06-03 23:03 ` [patch 112/131] mm: only count actual rotations as LRU reclaim cost Andrew Morton
2020-06-03 23:03 ` [patch 113/131] mm: balance LRU lists based on relative thrashing Andrew Morton
2020-06-09 9:15 ` Alex Shi
2020-06-09 14:45 ` Johannes Weiner
2020-06-10 5:23 ` Joonsoo Kim
2020-06-11 3:28 ` Alex Shi
2020-06-03 23:03 ` [patch 114/131] mm: vmscan: determine anon/file pressure balance at the reclaim root Andrew Morton
2020-06-03 23:03 ` [patch 115/131] mm: vmscan: reclaim writepage is IO cost Andrew Morton
2020-06-03 23:03 ` [patch 116/131] mm: vmscan: limit the range of LRU type balancing Andrew Morton
2020-06-03 23:03 ` [patch 117/131] mm: swap: fix vmstats for huge pages Andrew Morton
2020-06-03 23:03 ` [patch 118/131] mm: swap: memcg: fix memcg stats " Andrew Morton
2020-06-03 23:03 ` [patch 119/131] tools/vm/page_owner_sort.c: filter out unneeded line Andrew Morton
2020-06-03 23:03 ` [patch 120/131] mm, mempolicy: fix up gup usage in lookup_node Andrew Morton
2020-06-03 23:03 ` [patch 121/131] include/linux/memblock.h: fix minor typo and unclear comment Andrew Morton
2020-06-03 23:03 ` [patch 122/131] sparc32: register memory occupied by kernel as memblock.memory Andrew Morton
2020-06-03 23:03 ` [patch 123/131] hugetlbfs: get unmapped area below TASK_UNMAPPED_BASE for hugetlbfs Andrew Morton
2020-06-03 23:03 ` [patch 124/131] mm: thp: don't need to drain lru cache when splitting and mlocking THP Andrew Morton
2020-06-03 23:03 ` [patch 125/131] powerpc/mm: drop platform defined pmd_mknotpresent() Andrew Morton
2020-06-03 23:03 ` [patch 126/131] mm/thp: rename pmd_mknotpresent() as pmd_mkinvalid() Andrew Morton
2020-06-03 23:03 ` [patch 127/131] drivers/base/memory.c: cache memory blocks in xarray to accelerate lookup Andrew Morton
2020-06-03 23:03 ` [patch 128/131] mm: add DEBUG_WX support Andrew Morton
2020-06-03 23:03 ` [patch 129/131] riscv: support DEBUG_WX Andrew Morton
2020-06-03 23:03 ` [patch 130/131] x86: mm: use ARCH_HAS_DEBUG_WX instead of arch defined Andrew Morton
2020-06-03 23:04 ` [patch 131/131] arm64: " Andrew Morton
2020-06-04 0:54 ` mmotm 2020-06-03-17-54 uploaded Andrew Morton