linux-mm.kvack.org archive mirror
* [RFC PATCH] mm/rmap.c: finer hwpoison granularity for PTE-mapped THP
@ 2020-01-02  3:04 Wei Yang
  2020-01-09 12:32 ` Kirill A. Shutemov
  0 siblings, 1 reply; 3+ messages in thread
From: Wei Yang @ 2020-01-02  3:04 UTC (permalink / raw)
  To: akpm, kirill.shutemov; +Cc: linux-mm, linux-kernel, richard.weiyang, Wei Yang

Currently we behave differently for PMD-mapped THP and PTE-mapped
THP on memory_failure().

User-visible difference:

    For a PTE-mapped THP, the whole 2M range will trigger MCEs after
    memory_failure(), while only the 4K range will for a PMD-mapped THP.

Direct reason:

    All 512 PTE entries are marked as hwpoison entries for a PTE-mapped
    THP, while only one PTE is marked for a PMD-mapped THP.

Root reason:

    A PTE-mapped page doesn't need to split the pmd, which skips
    the SPLIT_FREEZE step. This lets try_to_unmap_one() do its real job
    before the THP is split, and since the page is HWPOISON, every entry
    in the THP is marked as a hwpoison entry.

    For a PMD-mapped THP, SPLIT_FREEZE saves migration entries to the
    PTEs instead, which makes try_to_unmap_one() a no-op before the THP
    is split. Only the affected 4K page is then marked as a hwpoison
    entry.

This patch provides finer granularity for PTE-mapped THP by marking
only the affected subpage as a hwpoison entry when the THP is not yet
split.

Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>

---
This complicates the picture a little, but I haven't found a better way
to improve it.

I may also have missed some cases or not handled them properly.

Looking forward to your comments.
---
 mm/rmap.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index b3e381919835..90229917dd64 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1554,10 +1554,11 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				set_huge_swap_pte_at(mm, address,
 						     pvmw.pte, pteval,
 						     vma_mmu_pagesize(vma));
-			} else {
+			} else if (!PageAnon(page) || page == subpage) {
 				dec_mm_counter(mm, mm_counter(page));
 				set_pte_at(mm, address, pvmw.pte, pteval);
-			}
+			} else
+				goto freeze;
 
 		} else if (pte_unused(pteval) && !userfaultfd_armed(vma)) {
 			/*
@@ -1579,6 +1580,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 			swp_entry_t entry;
 			pte_t swp_pte;
 
+freeze:
 			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
 				set_pte_at(mm, address, pvmw.pte, pteval);
 				ret = false;
-- 
2.17.1




* Re: [RFC PATCH] mm/rmap.c: finer hwpoison granularity for PTE-mapped THP
  2020-01-02  3:04 [RFC PATCH] mm/rmap.c: finer hwpoison granularity for PTE-mapped THP Wei Yang
@ 2020-01-09 12:32 ` Kirill A. Shutemov
  2020-01-10  1:55   ` Wei Yang
  0 siblings, 1 reply; 3+ messages in thread
From: Kirill A. Shutemov @ 2020-01-09 12:32 UTC (permalink / raw)
  To: Wei Yang; +Cc: akpm, kirill.shutemov, linux-mm, linux-kernel, richard.weiyang

On Thu, Jan 02, 2020 at 11:04:21AM +0800, Wei Yang wrote:
> Currently we behave differently for PMD-mapped THP and PTE-mapped
> THP on memory_failure().
> 
> User-visible difference:
> 
>     For a PTE-mapped THP, the whole 2M range will trigger MCEs after
>     memory_failure(), while only the 4K range will for a PMD-mapped THP.
> 
> Direct reason:
> 
>     All 512 PTE entries are marked as hwpoison entries for a PTE-mapped
>     THP, while only one PTE is marked for a PMD-mapped THP.
> 
> Root reason:
> 
>     A PTE-mapped page doesn't need to split the pmd, which skips
>     the SPLIT_FREEZE step.

I don't follow how SPLIT_FREEZE is related to poisoning. Could you
elaborate?

>     This lets try_to_unmap_one() do its real job
>     before the THP is split, and since the page is HWPOISON, every entry
>     in the THP is marked as a hwpoison entry.
> 
>     For a PMD-mapped THP, SPLIT_FREEZE saves migration entries to the
>     PTEs instead, which makes try_to_unmap_one() a no-op before the THP
>     is split. Only the affected 4K page is then marked as a hwpoison
>     entry.
> 
> This patch provides finer granularity for PTE-mapped THP by marking
> only the affected subpage as a hwpoison entry when the THP is not yet
> split.
> 
> Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
> 
> ---
> This complicates the picture a little, but I haven't found a better way
> to improve it.
> 
> I may also have missed some cases or not handled them properly.
> 
> Looking forward to your comments.
> ---
>  mm/rmap.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> index b3e381919835..90229917dd64 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1554,10 +1554,11 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  				set_huge_swap_pte_at(mm, address,
>  						     pvmw.pte, pteval,
>  						     vma_mmu_pagesize(vma));
> -			} else {
> +			} else if (!PageAnon(page) || page == subpage) {
>  				dec_mm_counter(mm, mm_counter(page));
>  				set_pte_at(mm, address, pvmw.pte, pteval);
> -			}
> +			} else
> +				goto freeze;
>  
>  		} else if (pte_unused(pteval) && !userfaultfd_armed(vma)) {
>  			/*
> @@ -1579,6 +1580,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  			swp_entry_t entry;
>  			pte_t swp_pte;
>  
> +freeze:
>  			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
>  				set_pte_at(mm, address, pvmw.pte, pteval);
>  				ret = false;
> -- 
> 2.17.1
> 
> 

-- 
 Kirill A. Shutemov



* Re: [RFC PATCH] mm/rmap.c: finer hwpoison granularity for PTE-mapped THP
  2020-01-09 12:32 ` Kirill A. Shutemov
@ 2020-01-10  1:55   ` Wei Yang
  0 siblings, 0 replies; 3+ messages in thread
From: Wei Yang @ 2020-01-10  1:55 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Wei Yang, akpm, kirill.shutemov, linux-mm, linux-kernel, richard.weiyang

On Thu, Jan 09, 2020 at 03:32:33PM +0300, Kirill A. Shutemov wrote:
>On Thu, Jan 02, 2020 at 11:04:21AM +0800, Wei Yang wrote:
>> Currently we behave differently for PMD-mapped THP and PTE-mapped
>> THP on memory_failure().
>> 
>> User-visible difference:
>> 
>>     For a PTE-mapped THP, the whole 2M range will trigger MCEs after
>>     memory_failure(), while only the 4K range will for a PMD-mapped THP.
>> 
>> Direct reason:
>> 
>>     All 512 PTE entries are marked as hwpoison entries for a PTE-mapped
>>     THP, while only one PTE is marked for a PMD-mapped THP.
>> 
>> Root reason:
>> 
>>     A PTE-mapped page doesn't need to split the pmd, which skips
>>     the SPLIT_FREEZE step.
>
>I don't follow how SPLIT_FREEZE is related to poisoning. Could you
>elaborate?
>

Sure, let me try to explain this a little.

    split_huge_page_to_list
        unmap_page
            try_to_unmap_one
                ...
                __split_huge_pmd_locked
        __split_huge_page
            remap_page

There are two dimensions:

   * PMD-mapped THP vs PTE-mapped THP
   * HWPOISON-ed page vs non-HWPOISON-ed page

So there are four cases in total.

1. First let's look at the normal case, when HWPOISON is not set.

If the page is PMD-mapped, SPLIT_FREEZE is passed down in flags and
eventually reaches __split_huge_pmd_locked(). There, when freeze is true,
the PTEs are set to migration entries. Because __split_huge_pmd_locked()
has already saved migration entries in the PTEs, try_to_unmap_one() does
no real unmap. remap_page() then restores those migration entries.

If the page is PTE-mapped, __split_huge_pmd_locked() is skipped since the
pmd split is already done. This means try_to_unmap_one() does the real
unmap. Because SPLIT_FREEZE is passed, the PTEs are set to migration
entries, the same behavior as for a PMD-mapped page. remap_page() then
restores those migration entries.

This shows that PMD-mapped and PTE-mapped pages end up with the same
result on split.

The difference is who sets the PTEs to migration entries:

  * __split_huge_pmd_locked() does this job for a PMD-mapped page
  * try_to_unmap_one() does this job for a PTE-mapped page

2. Now let's look at the HWPOISON case.

There are two critical differences:

  * __split_huge_pmd_locked() is skipped for a PTE-mapped page
  * HWPOISON affects the behavior of try_to_unmap_one()

For a PMD-mapped page, HWPOISON has no effect on the split. But for a
PTE-mapped page, all PTEs will be set to hwpoison entries.

So in memory_failure(), splitting the page produces two different PTE
results depending on how the THP was mapped.

Not sure I've explained this clearly.

-- 
Wei Yang
Help you, Help me



