* [PATCH v4 0/2] Do not shatter hugezeropage on wp-fault
@ 2024-09-16 9:43 Dev Jain
2024-09-16 9:43 ` [PATCH v4 1/2] mm: Abstract THP allocation Dev Jain
2024-09-16 9:43 ` [PATCH v4 2/2] mm: Allocate THP on hugezeropage wp-fault Dev Jain
From: Dev Jain @ 2024-09-16 9:43 UTC
To: akpm, david, willy, kirill.shutemov
Cc: ryan.roberts, anshuman.khandual, catalin.marinas, cl, vbabka,
mhocko, apopple, dave.hansen, will, baohua, jack, mark.rutland,
hughd, aneesh.kumar, yang, peterx, ioworker0, jglisse,
wangkefeng.wang, ziy, linux-kernel, linux-mm, Dev Jain
It was observed at [1] and [2] that the current kernel behaviour of
shattering a hugezeropage is inconsistent and suboptimal. For a VMA
with a THP-allowable order, a write fault makes the kernel install a
PMD-mapped THP. If, however, the first fault is a read fault, we get a
PMD pointing to the hugezeropage; a subsequent write then triggers a
write-protection fault that shatters the hugezeropage, leaving one
writable page with all the other PTEs write-protected. The upshot is
that, compared to the single-write-fault case, an application writing
to the whole region this way suffers 512 extra page faults, plus the
overhead of khugepaged later trying to replace the area with a THP
anyway. Instead, replace the hugezeropage with a THP on wp-fault.
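The problematic pattern can be reproduced from userspace with a sketch
along the following lines (illustrative only: the 2M PMD size and the
manual alignment are assumptions for x86_64/arm64 with 4K pages, and
khugepaged may eventually collapse the range either way):

#include <stdint.h>
#include <string.h>
#include <sys/mman.h>

#define PMD_SZ (2UL << 20)

int main(void)
{
	char *raw, *p;
	volatile char c;

	/* Over-allocate so a PMD-aligned 2M window can be carved out. */
	raw = mmap(NULL, 2 * PMD_SZ, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED)
		return 1;
	p = (char *)(((uintptr_t)raw + PMD_SZ - 1) & ~(PMD_SZ - 1));
	madvise(p, PMD_SZ, MADV_HUGEPAGE);

	c = *p;			/* read fault: the PMD maps the hugezeropage */
	memset(p, 1, PMD_SZ);	/* wp-fault: shatters the PMD today */
	return 0;
}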
v3->v4:
- Renames: pmd_thp_fault_alloc -> vma_alloc_anon_folio_pmd,
map_pmd_thp -> map_anon_folio_pmd
- Instead of passing them around, compute haddr and the gfp flags at
  the places where they are needed
- Pass haddr to update_mmu_cache_pmd() instead of unaligned address
- Do not pass vmf to map_anon_folio_pmd
- Do declarations in reverse xmas tree order
- Drop a new line which was introduced accidentally
- Call __pmd_thp_fault_success_stats from map_anon_folio_pmd
- Correctly return NULL from vma_alloc_anon_folio_pmd
- Initialize pgtable to NULL in __do_huge_pmd_anonymous_page() to
  avoid freeing a pgtable that was never allocated
- Drop if conditions from map_anon_folio_pmd, let the caller handle that
v2->v3:
- Drop foliop and order parameters, prefix the thp functions with pmd_
- First allocate THP, then pgtable, not vice-versa
- Move pgtable_trans_huge_deposit() from map_pmd_thp() to caller
- Drop exposing functions in include/linux/huge_mm.h
- Open code do_huge_zero_wp_pmd_locked()
- Release folio in case of pmd change after taking the lock, or
check_stable_address_space() returning VM_FAULT_SIGBUS
- Drop uffd-wp preservation. Looking at page_table_check_pmd_flags(),
  preserving uffd-wp on a writable entry is invalid. Looking at
  mfill_atomic(), uffd_copy() is a no-op when the pmd is marked
  uffd-wp.
v1->v2:
- Wrap do_huge_zero_wp_pmd_locked() around lock and unlock
- Call thp_fault_alloc() before do_huge_zero_wp_pmd_locked() to avoid
  calling a sleeping function from spinlock context
[1]: https://lore.kernel.org/all/3743d7e1-0b79-4eaf-82d5-d1ca29fe347d@arm.com/
[2]: https://lore.kernel.org/all/1cfae0c0-96a2-4308-9c62-f7a640520242@arm.com/
The patchset has been rebased on the mm-unstable branch.
Dev Jain (2):
mm: Abstract THP allocation
mm: Allocate THP on hugezeropage wp-fault
mm/huge_memory.c | 152 +++++++++++++++++++++++++++++++++--------------
1 file changed, 109 insertions(+), 43 deletions(-)
--
2.30.2
* [PATCH v4 1/2] mm: Abstract THP allocation
2024-09-16 9:43 [PATCH v4 0/2] Do not shatter hugezeropage on wp-fault Dev Jain
@ 2024-09-16 9:43 ` Dev Jain
2024-09-17 11:47 ` David Hildenbrand
2024-09-19 6:49 ` kernel test robot
2024-09-16 9:43 ` [PATCH v4 2/2] mm: Allocate THP on hugezeropage wp-fault Dev Jain
From: Dev Jain @ 2024-09-16 9:43 UTC
To: akpm, david, willy, kirill.shutemov
Cc: ryan.roberts, anshuman.khandual, catalin.marinas, cl, vbabka,
mhocko, apopple, dave.hansen, will, baohua, jack, mark.rutland,
hughd, aneesh.kumar, yang, peterx, ioworker0, jglisse,
wangkefeng.wang, ziy, linux-kernel, linux-mm, Dev Jain
In preparation for the second patch, abstract away the THP allocation
logic present in the create_huge_pmd() path, which corresponds to the
faulting case when no page is present.
There should be no functional change as a result of applying this patch,
except that, as David notes at [1], a PMD-aligned address should
be passed to update_mmu_cache_pmd().
[1]: https://lore.kernel.org/all/ddd3fcd2-48b3-4170-bcaa-2fe66e093f43@redhat.com/
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
mm/huge_memory.c | 108 +++++++++++++++++++++++++++++------------------
1 file changed, 66 insertions(+), 42 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2a73efea02d7..cdc632b8dc9c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1146,47 +1146,88 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
}
EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
-static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
- struct page *page, gfp_t gfp)
+static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
+ unsigned long addr)
{
- struct vm_area_struct *vma = vmf->vma;
- struct folio *folio = page_folio(page);
- pgtable_t pgtable;
- unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
- vm_fault_t ret = 0;
+ unsigned long haddr = addr & HPAGE_PMD_MASK;
+ gfp_t gfp = vma_thp_gfp_mask(vma);
+ const int order = HPAGE_PMD_ORDER;
+ struct folio *folio = vma_alloc_folio(gfp, order, vma, haddr, true);
- VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
+ if (unlikely(!folio)) {
+ count_vm_event(THP_FAULT_FALLBACK);
+ count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
+ goto out;
+ }
+ VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
folio_put(folio);
count_vm_event(THP_FAULT_FALLBACK);
count_vm_event(THP_FAULT_FALLBACK_CHARGE);
- count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK);
- count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
- return VM_FAULT_FALLBACK;
+ count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
+ count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
+ return NULL;
}
folio_throttle_swaprate(folio, gfp);
- pgtable = pte_alloc_one(vma->vm_mm);
- if (unlikely(!pgtable)) {
- ret = VM_FAULT_OOM;
- goto release;
- }
-
- folio_zero_user(folio, vmf->address);
+ folio_zero_user(folio, addr);
/*
* The memory barrier inside __folio_mark_uptodate makes sure that
* folio_zero_user writes become visible before the set_pmd_at()
* write.
*/
__folio_mark_uptodate(folio);
+out:
+ return folio;
+}
+
+static void __pmd_thp_fault_success_stats(struct vm_area_struct *vma)
+{
+ count_vm_event(THP_FAULT_ALLOC);
+ count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
+ count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
+}
+
+static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
+ struct vm_area_struct *vma, unsigned long haddr)
+{
+ pmd_t entry;
+
+ entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
+ entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+ folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
+ folio_add_lru_vma(folio, vma);
+ set_pmd_at(vma->vm_mm, haddr, pmd, entry);
+ update_mmu_cache_pmd(vma, haddr, pmd);
+ add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ __pmd_thp_fault_success_stats(vma);
+}
+
+static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
+{
+ unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
+ struct vm_area_struct *vma = vmf->vma;
+ pgtable_t pgtable = NULL;
+ struct folio *folio;
+ vm_fault_t ret = 0;
+
+ folio = vma_alloc_anon_folio_pmd(vma, vmf->address);
+ if (unlikely(!folio)) {
+ ret = VM_FAULT_FALLBACK;
+ goto release;
+ }
+
+ pgtable = pte_alloc_one(vma->vm_mm);
+ if (unlikely(!pgtable)) {
+ ret = VM_FAULT_OOM;
+ goto release;
+ }
vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
if (unlikely(!pmd_none(*vmf->pmd))) {
goto unlock_release;
} else {
- pmd_t entry;
-
ret = check_stable_address_space(vma->vm_mm);
if (ret)
goto unlock_release;
@@ -1200,21 +1241,11 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
VM_BUG_ON(ret & VM_FAULT_FALLBACK);
return ret;
}
-
- entry = mk_huge_pmd(page, vma->vm_page_prot);
- entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
- folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
- folio_add_lru_vma(folio, vma);
pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
- set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
- update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
- add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ map_anon_folio_pmd(folio, vmf->pmd, vma, haddr);
mm_inc_nr_ptes(vma->vm_mm);
deferred_split_folio(folio, false);
spin_unlock(vmf->ptl);
- count_vm_event(THP_FAULT_ALLOC);
- count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
- count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
}
return 0;
@@ -1223,7 +1254,8 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
release:
if (pgtable)
pte_free(vma->vm_mm, pgtable);
- folio_put(folio);
+ if (folio)
+ folio_put(folio);
return ret;
}
@@ -1281,8 +1313,6 @@ static void set_huge_zero_folio(pgtable_t pgtable, struct mm_struct *mm,
vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
{
struct vm_area_struct *vma = vmf->vma;
- gfp_t gfp;
- struct folio *folio;
unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
vm_fault_t ret;
@@ -1333,14 +1363,8 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
}
return ret;
}
- gfp = vma_thp_gfp_mask(vma);
- folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, vma, haddr, true);
- if (unlikely(!folio)) {
- count_vm_event(THP_FAULT_FALLBACK);
- count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK);
- return VM_FAULT_FALLBACK;
- }
- return __do_huge_pmd_anonymous_page(vmf, &folio->page, gfp);
+
+ return __do_huge_pmd_anonymous_page(vmf);
}
static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
--
2.30.2
* [PATCH v4 2/2] mm: Allocate THP on hugezeropage wp-fault
2024-09-16 9:43 [PATCH v4 0/2] Do not shatter hugezeropage on wp-fault Dev Jain
2024-09-16 9:43 ` [PATCH v4 1/2] mm: Abstract THP allocation Dev Jain
@ 2024-09-16 9:43 ` Dev Jain
From: Dev Jain @ 2024-09-16 9:43 UTC
To: akpm, david, willy, kirill.shutemov
Cc: ryan.roberts, anshuman.khandual, catalin.marinas, cl, vbabka,
mhocko, apopple, dave.hansen, will, baohua, jack, mark.rutland,
hughd, aneesh.kumar, yang, peterx, ioworker0, jglisse,
wangkefeng.wang, ziy, linux-kernel, linux-mm, Dev Jain
Introduce do_huge_zero_wp_pmd() to handle a wp-fault on the
hugezeropage by replacing it with a PMD-mapped THP. Remember to flush
the TLB entry corresponding to the hugezeropage, via
pmdp_huge_clear_flush(), so that no stale read-only translation
survives. In case of allocation failure, fall back to splitting the
PMD.
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
mm/huge_memory.c | 44 +++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 43 insertions(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cdc632b8dc9c..eac7f58729b3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1796,6 +1796,41 @@ void huge_pmd_set_accessed(struct vm_fault *vmf)
spin_unlock(vmf->ptl);
}
+static vm_fault_t do_huge_zero_wp_pmd(struct vm_fault *vmf)
+{
+ unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
+ struct vm_area_struct *vma = vmf->vma;
+ struct mmu_notifier_range range;
+ struct folio *folio;
+ vm_fault_t ret = 0;
+
+ folio = vma_alloc_anon_folio_pmd(vma, vmf->address);
+ if (unlikely(!folio)) {
+ ret = VM_FAULT_FALLBACK;
+ goto out;
+ }
+
+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm, haddr,
+ haddr + HPAGE_PMD_SIZE);
+ mmu_notifier_invalidate_range_start(&range);
+ vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
+ if (unlikely(!pmd_same(pmdp_get(vmf->pmd), vmf->orig_pmd)))
+ goto release;
+ ret = check_stable_address_space(vma->vm_mm);
+ if (ret)
+ goto release;
+ (void)pmdp_huge_clear_flush(vma, haddr, vmf->pmd);
+ map_anon_folio_pmd(folio, vmf->pmd, vma, haddr);
+ goto unlock;
+release:
+ folio_put(folio);
+unlock:
+ spin_unlock(vmf->ptl);
+ mmu_notifier_invalidate_range_end(&range);
+out:
+ return ret;
+}
+
vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
{
const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
@@ -1808,8 +1843,15 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd);
VM_BUG_ON_VMA(!vma->anon_vma, vma);
- if (is_huge_zero_pmd(orig_pmd))
+ if (is_huge_zero_pmd(orig_pmd)) {
+ vm_fault_t ret = do_huge_zero_wp_pmd(vmf);
+
+ if (!(ret & VM_FAULT_FALLBACK))
+ return ret;
+
+ /* Fallback to splitting PMD if THP cannot be allocated */
goto fallback;
+ }
spin_lock(vmf->ptl);
--
2.30.2
* Re: [PATCH v4 1/2] mm: Abstract THP allocation
2024-09-16 9:43 ` [PATCH v4 1/2] mm: Abstract THP allocation Dev Jain
@ 2024-09-17 11:47 ` David Hildenbrand
2024-09-24 4:25 ` Dev Jain
2024-09-19 6:49 ` kernel test robot
From: David Hildenbrand @ 2024-09-17 11:47 UTC
To: Dev Jain, akpm, willy, kirill.shutemov
Cc: ryan.roberts, anshuman.khandual, catalin.marinas, cl, vbabka,
mhocko, apopple, dave.hansen, will, baohua, jack, mark.rutland,
hughd, aneesh.kumar, yang, peterx, ioworker0, jglisse,
wangkefeng.wang, ziy, linux-kernel, linux-mm
On 16.09.24 11:43, Dev Jain wrote:
> In preparation for the second patch, abstract away the THP allocation
> logic present in the create_huge_pmd() path, which corresponds to the
> faulting case when no page is present.
>
> There should be no functional change as a result of applying this patch,
> except that, as David notes at [1], a PMD-aligned address should
> be passed to update_mmu_cache_pmd().
>
> [1]: https://lore.kernel.org/all/ddd3fcd2-48b3-4170-bcaa-2fe66e093f43@redhat.com/
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
> mm/huge_memory.c | 108 +++++++++++++++++++++++++++++------------------
> 1 file changed, 66 insertions(+), 42 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 2a73efea02d7..cdc632b8dc9c 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1146,47 +1146,88 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
> }
> EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
>
> -static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
> - struct page *page, gfp_t gfp)
> +static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
> + unsigned long addr)
> {
> - struct vm_area_struct *vma = vmf->vma;
> - struct folio *folio = page_folio(page);
> - pgtable_t pgtable;
> - unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
> - vm_fault_t ret = 0;
> + unsigned long haddr = addr & HPAGE_PMD_MASK;
> + gfp_t gfp = vma_thp_gfp_mask(vma);
> + const int order = HPAGE_PMD_ORDER;
> + struct folio *folio = vma_alloc_folio(gfp, order, vma, haddr, true);
>
> - VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
> + if (unlikely(!folio)) {
> + count_vm_event(THP_FAULT_FALLBACK);
> + count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
> + goto out;
> + }
>
> + VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
> if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
> folio_put(folio);
> count_vm_event(THP_FAULT_FALLBACK);
> count_vm_event(THP_FAULT_FALLBACK_CHARGE);
> - count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK);
> - count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
> - return VM_FAULT_FALLBACK;
> + count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
> + count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
> + return NULL;
> }
> folio_throttle_swaprate(folio, gfp);
>
> - pgtable = pte_alloc_one(vma->vm_mm);
> - if (unlikely(!pgtable)) {
> - ret = VM_FAULT_OOM;
> - goto release;
> - }
> -
> - folio_zero_user(folio, vmf->address);
> + folio_zero_user(folio, addr);
> /*
> * The memory barrier inside __folio_mark_uptodate makes sure that
> * folio_zero_user writes become visible before the set_pmd_at()
> * write.
> */
> __folio_mark_uptodate(folio);
> +out:
> + return folio;
> +}
> +
> +static void __pmd_thp_fault_success_stats(struct vm_area_struct *vma)
> +{
> + count_vm_event(THP_FAULT_ALLOC);
> + count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
> + count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
> +}
just inline that into map_anon_folio_pmd(), please. map_anon_folio_pmd
is perfectly readable ;)
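That is, roughly (sketch, untested, simply folding the helper into its
only caller):

static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
		struct vm_area_struct *vma, unsigned long haddr)
{
	pmd_t entry;

	entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
	folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
	folio_add_lru_vma(folio, vma);
	set_pmd_at(vma->vm_mm, haddr, pmd, entry);
	update_mmu_cache_pmd(vma, haddr, pmd);
	add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
	/* Success statistics, previously in __pmd_thp_fault_success_stats(). */
	count_vm_event(THP_FAULT_ALLOC);
	count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
	count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
}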
> +
> +static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
> + struct vm_area_struct *vma, unsigned long haddr)
> +{
> + pmd_t entry;
> +
> + entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
> + entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
> + folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
> + folio_add_lru_vma(folio, vma);
> + set_pmd_at(vma->vm_mm, haddr, pmd, entry);
> + update_mmu_cache_pmd(vma, haddr, pmd);
> + add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
> + __pmd_thp_fault_success_stats(vma);
> +}
> +
> +static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
> +{
> + unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
> + struct vm_area_struct *vma = vmf->vma;
> + pgtable_t pgtable = NULL;
> + struct folio *folio;
> + vm_fault_t ret = 0;
> +
> + folio = vma_alloc_anon_folio_pmd(vma, vmf->address);
> + if (unlikely(!folio)) {
> + ret = VM_FAULT_FALLBACK;
> + goto release;
Why not simply "return VM_FAULT_FALLBACK;"? There is nothing to
release. Then you can avoid the "if (folio)" below and even stop
initializing pgtable to NULL.
With these things taken care of
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH v4 1/2] mm: Abstract THP allocation
2024-09-16 9:43 ` [PATCH v4 1/2] mm: Abstract THP allocation Dev Jain
2024-09-17 11:47 ` David Hildenbrand
@ 2024-09-19 6:49 ` kernel test robot
From: kernel test robot @ 2024-09-19 6:49 UTC
To: Dev Jain, akpm, david, willy, kirill.shutemov
Cc: oe-kbuild-all, ryan.roberts, anshuman.khandual, catalin.marinas,
cl, vbabka, mhocko, apopple, dave.hansen, will, baohua, jack,
mark.rutland, hughd, aneesh.kumar, yang, peterx, ioworker0,
jglisse, wangkefeng.wang, ziy, linux-kernel, linux-mm, Dev Jain
Hi Dev,
kernel test robot noticed the following build warnings:
[auto build test WARNING on akpm-mm/mm-everything]
[also build test WARNING on linus/master v6.11 next-20240918]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Dev-Jain/mm-Abstract-THP-allocation/20240916-174543
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20240916094309.1226908-2-dev.jain%40arm.com
patch subject: [PATCH v4 1/2] mm: Abstract THP allocation
config: i386-allmodconfig (https://download.01.org/0day-ci/archive/20240919/202409191416.9etlfugV-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240919/202409191416.9etlfugV-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202409191416.9etlfugV-lkp@intel.com/
All warnings (new ones prefixed by >>):
mm/huge_memory.c: In function 'vma_alloc_anon_folio_pmd':
>> mm/huge_memory.c:1152:23: warning: unused variable 'haddr' [-Wunused-variable]
1152 | unsigned long haddr = addr & HPAGE_PMD_MASK;
| ^~~~~
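[ Note: the likely cause is that with CONFIG_NUMA=n, vma_alloc_folio()
  is a macro that ignores its address argument, so haddr is genuinely
  unused on such configurations. ]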
Kconfig warnings: (for reference only)
WARNING: unmet direct dependencies detected for GET_FREE_REGION
Depends on [n]: SPARSEMEM [=n]
Selected by [m]:
- RESOURCE_KUNIT_TEST [=m] && RUNTIME_TESTING_MENU [=y] && KUNIT [=m]
vim +/haddr +1152 mm/huge_memory.c
1148
1149 static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
1150 unsigned long addr)
1151 {
> 1152 unsigned long haddr = addr & HPAGE_PMD_MASK;
1153 gfp_t gfp = vma_thp_gfp_mask(vma);
1154 const int order = HPAGE_PMD_ORDER;
1155 struct folio *folio = vma_alloc_folio(gfp, order, vma, haddr, true);
1156
1157 if (unlikely(!folio)) {
1158 count_vm_event(THP_FAULT_FALLBACK);
1159 count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
1160 goto out;
1161 }
1162
1163 VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
1164 if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
1165 folio_put(folio);
1166 count_vm_event(THP_FAULT_FALLBACK);
1167 count_vm_event(THP_FAULT_FALLBACK_CHARGE);
1168 count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
1169 count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
1170 return NULL;
1171 }
1172 folio_throttle_swaprate(folio, gfp);
1173
1174 folio_zero_user(folio, addr);
1175 /*
1176 * The memory barrier inside __folio_mark_uptodate makes sure that
1177 * folio_zero_user writes become visible before the set_pmd_at()
1178 * write.
1179 */
1180 __folio_mark_uptodate(folio);
1181 out:
1182 return folio;
1183 }
1184
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v4 1/2] mm: Abstract THP allocation
2024-09-17 11:47 ` David Hildenbrand
@ 2024-09-24 4:25 ` Dev Jain
2024-09-24 7:40 ` David Hildenbrand
From: Dev Jain @ 2024-09-24 4:25 UTC
To: David Hildenbrand, akpm, willy, kirill.shutemov
Cc: ryan.roberts, anshuman.khandual, catalin.marinas, cl, vbabka,
mhocko, apopple, dave.hansen, will, baohua, jack, mark.rutland,
hughd, aneesh.kumar, yang, peterx, ioworker0, jglisse,
wangkefeng.wang, ziy, linux-kernel, linux-mm
On 9/17/24 17:17, David Hildenbrand wrote:
> On 16.09.24 11:43, Dev Jain wrote:
>> In preparation for the second patch, abstract away the THP allocation
>> logic present in the create_huge_pmd() path, which corresponds to the
>> faulting case when no page is present.
>>
>> There should be no functional change as a result of applying this patch,
>> except that, as David notes at [1], a PMD-aligned address should
>> be passed to update_mmu_cache_pmd().
>>
>> [1]:
>> https://lore.kernel.org/all/ddd3fcd2-48b3-4170-bcaa-2fe66e093f43@redhat.com/
>>
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>> ---
>> mm/huge_memory.c | 108 +++++++++++++++++++++++++++++------------------
>> 1 file changed, 66 insertions(+), 42 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 2a73efea02d7..cdc632b8dc9c 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -1146,47 +1146,88 @@ unsigned long thp_get_unmapped_area(struct
>> file *filp, unsigned long addr,
>> }
>> EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
>> -static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>> - struct page *page, gfp_t gfp)
>> +static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct
>> *vma,
>> + unsigned long addr)
>> {
>> - struct vm_area_struct *vma = vmf->vma;
>> - struct folio *folio = page_folio(page);
>> - pgtable_t pgtable;
>> - unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>> - vm_fault_t ret = 0;
>> + unsigned long haddr = addr & HPAGE_PMD_MASK;
>> + gfp_t gfp = vma_thp_gfp_mask(vma);
>> + const int order = HPAGE_PMD_ORDER;
>> + struct folio *folio = vma_alloc_folio(gfp, order, vma, haddr,
>> true);
>> - VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>> + if (unlikely(!folio)) {
>> + count_vm_event(THP_FAULT_FALLBACK);
>> + count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
>> + goto out;
>> + }
>> + VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>> if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
>> folio_put(folio);
>> count_vm_event(THP_FAULT_FALLBACK);
>> count_vm_event(THP_FAULT_FALLBACK_CHARGE);
>> - count_mthp_stat(HPAGE_PMD_ORDER,
>> MTHP_STAT_ANON_FAULT_FALLBACK);
>> - count_mthp_stat(HPAGE_PMD_ORDER,
>> MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>> - return VM_FAULT_FALLBACK;
>> + count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
>> + count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>> + return NULL;
>> }
>> folio_throttle_swaprate(folio, gfp);
>> - pgtable = pte_alloc_one(vma->vm_mm);
>> - if (unlikely(!pgtable)) {
>> - ret = VM_FAULT_OOM;
>> - goto release;
>> - }
>> -
>> - folio_zero_user(folio, vmf->address);
>> + folio_zero_user(folio, addr);
>> /*
>> * The memory barrier inside __folio_mark_uptodate makes sure that
>> * folio_zero_user writes become visible before the set_pmd_at()
>> * write.
>> */
>> __folio_mark_uptodate(folio);
>> +out:
>> + return folio;
>> +}
>> +
>> +static void __pmd_thp_fault_success_stats(struct vm_area_struct *vma)
>> +{
>> + count_vm_event(THP_FAULT_ALLOC);
>> + count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
>> + count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
>> +}
>
> just inline that into map_anon_folio_pmd(), please. map_anon_folio_pmd
> is perfectly readable ;)
If you are asking me to open code it in map_anon_folio_pmd(), I'll do that.
>
>> +
>> +static void map_anon_folio_pmd(struct folio *folio, pmd_t *pmd,
>> + struct vm_area_struct *vma, unsigned long haddr)
>
>
>> +{
>> + pmd_t entry;
>> +
>> + entry = mk_huge_pmd(&folio->page, vma->vm_page_prot);
>> + entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
>> + folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
>> + folio_add_lru_vma(folio, vma);
>> + set_pmd_at(vma->vm_mm, haddr, pmd, entry);
>> + update_mmu_cache_pmd(vma, haddr, pmd);
>> + add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
>> + __pmd_thp_fault_success_stats(vma);
>> +}
>> +
>> +static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
>> +{
>> + unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>> + struct vm_area_struct *vma = vmf->vma;
>> + pgtable_t pgtable = NULL;
>> + struct folio *folio;
>> + vm_fault_t ret = 0;
>> +
>> + folio = vma_alloc_anon_folio_pmd(vma, vmf->address);
>> + if (unlikely(!folio)) {
>> + ret = VM_FAULT_FALLBACK;
>> + goto release;
>
> Why not simply "return VM_FAULT_FALLBACK;" ? There is nothing to
> release. Then you can avoid the
>
> "if (folio)" below and even stop initializing pgtable to NULL.
Makes sense.
>
>
> With these things take care of
>
> Acked-by: David Hildenbrand <david@redhat.com>
Thanks!
* Re: [PATCH v4 1/2] mm: Abstract THP allocation
2024-09-24 4:25 ` Dev Jain
@ 2024-09-24 7:40 ` David Hildenbrand
From: David Hildenbrand @ 2024-09-24 7:40 UTC
To: Dev Jain, akpm, willy, kirill.shutemov
Cc: ryan.roberts, anshuman.khandual, catalin.marinas, cl, vbabka,
mhocko, apopple, dave.hansen, will, baohua, jack, mark.rutland,
hughd, aneesh.kumar, yang, peterx, ioworker0, jglisse,
wangkefeng.wang, ziy, linux-kernel, linux-mm
On 24.09.24 06:25, Dev Jain wrote:
>
> On 9/17/24 17:17, David Hildenbrand wrote:
>> On 16.09.24 11:43, Dev Jain wrote:
>>> In preparation for the second patch, abstract away the THP allocation
>>> logic present in the create_huge_pmd() path, which corresponds to the
>>> faulting case when no page is present.
>>>
>>> There should be no functional change as a result of applying this patch,
>>> except that, as David notes at [1], a PMD-aligned address should
>>> be passed to update_mmu_cache_pmd().
>>>
>>> [1]:
>>> https://lore.kernel.org/all/ddd3fcd2-48b3-4170-bcaa-2fe66e093f43@redhat.com/
>>>
>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>> ---
>>> mm/huge_memory.c | 108 +++++++++++++++++++++++++++++------------------
>>> 1 file changed, 66 insertions(+), 42 deletions(-)
>>>
>>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>>> index 2a73efea02d7..cdc632b8dc9c 100644
>>> --- a/mm/huge_memory.c
>>> +++ b/mm/huge_memory.c
>>> @@ -1146,47 +1146,88 @@ unsigned long thp_get_unmapped_area(struct
>>> file *filp, unsigned long addr,
>>> }
>>> EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
>>> -static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>>> - struct page *page, gfp_t gfp)
>>> +static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct
>>> *vma,
>>> + unsigned long addr)
>>> {
>>> - struct vm_area_struct *vma = vmf->vma;
>>> - struct folio *folio = page_folio(page);
>>> - pgtable_t pgtable;
>>> - unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
>>> - vm_fault_t ret = 0;
>>> + unsigned long haddr = addr & HPAGE_PMD_MASK;
>>> + gfp_t gfp = vma_thp_gfp_mask(vma);
>>> + const int order = HPAGE_PMD_ORDER;
>>> + struct folio *folio = vma_alloc_folio(gfp, order, vma, haddr,
>>> true);
>>> - VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>>> + if (unlikely(!folio)) {
>>> + count_vm_event(THP_FAULT_FALLBACK);
>>> + count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
>>> + goto out;
>>> + }
>>> + VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>>> if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
>>> folio_put(folio);
>>> count_vm_event(THP_FAULT_FALLBACK);
>>> count_vm_event(THP_FAULT_FALLBACK_CHARGE);
>>> - count_mthp_stat(HPAGE_PMD_ORDER,
>>> MTHP_STAT_ANON_FAULT_FALLBACK);
>>> - count_mthp_stat(HPAGE_PMD_ORDER,
>>> MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>>> - return VM_FAULT_FALLBACK;
>>> + count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK);
>>> + count_mthp_stat(order, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
>>> + return NULL;
>>> }
>>> folio_throttle_swaprate(folio, gfp);
>>> - pgtable = pte_alloc_one(vma->vm_mm);
>>> - if (unlikely(!pgtable)) {
>>> - ret = VM_FAULT_OOM;
>>> - goto release;
>>> - }
>>> -
>>> - folio_zero_user(folio, vmf->address);
>>> + folio_zero_user(folio, addr);
>>> /*
>>> * The memory barrier inside __folio_mark_uptodate makes sure that
>>> * folio_zero_user writes become visible before the set_pmd_at()
>>> * write.
>>> */
>>> __folio_mark_uptodate(folio);
>>> +out:
>>> + return folio;
>>> +}
>>> +
>>> +static void __pmd_thp_fault_success_stats(struct vm_area_struct *vma)
>>> +{
>>> + count_vm_event(THP_FAULT_ALLOC);
>>> + count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
>>> + count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
>>> +}
>>
>> just inline that into map_anon_folio_pmd(), please. map_anon_folio_pmd
>> is perfectly readable ;)
>
> If you are asking me to open code it in map_anon_folio_pmd(), I'll do that.
Yes, there will be a single user, so just keep it in the caller.
--
Cheers,
David / dhildenb