From: kirill.shutemov@linux.intel.com
To: Jiaqi Yan <jiaqiyan@google.com>
Cc: kirill@shutemov.name, shy828301@gmail.com,
	tongtiangen@huawei.com, tony.luck@intel.com,
	akpm@linux-foundation.org, wangkefeng.wang@huawei.com,
	naoya.horiguchi@nec.com, linmiaohe@huawei.com,
	linux-mm@kvack.org, osalvador@suse.de
Subject: Re: [PATCH v9 1/2] mm/khugepaged: recover from poisoned anonymous memory
Date: Thu, 19 Jan 2023 18:02:58 +0300
Message-ID: <20230119150258.npfadnefkpny5fd3@box.shutemov.name>
In-Reply-To: <20221205234059.42971-2-jiaqiyan@google.com>

On Mon, Dec 05, 2022 at 03:40:58PM -0800, Jiaqi Yan wrote:
> Make __collapse_huge_page_copy return whether copying anonymous pages
> succeeded, and make collapse_huge_page handle the return status.
> 
> Break existing PTE scan loop into two for-loops. The first loop copies
> source pages into target huge page, and can fail gracefully when running
> into memory errors in source pages. If copying all pages succeeds, the
> second loop releases and clears up these normal pages. Otherwise, the
> second loop rolls back the page table and page states by:
> - re-establishing the original PTEs-to-PMD connection.
> - releasing source pages back to their LRU list.
> 
> Tested manually:
> 0. Enable khugepaged on system under test.
> 1. Start a two-thread application. Each thread allocates a non-huge
>    anonymous memory buffer.
> 2. Pick 4 random buffer locations (2 in each thread) and inject
>    uncorrectable memory errors at corresponding physical addresses.
> 3. Signal both threads to make their memory buffers collapsible, i.e.
>    by calling madvise(MADV_HUGEPAGE).
> 4. Wait and check kernel log: khugepaged is able to recover from poisoned
>    pages and skips collapsing them.
> 5. Signal both threads to inspect their buffer contents and verify that
>    no data corruption occurred.
> 
> Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
> ---
>  include/trace/events/huge_memory.h |   3 +-
>  mm/khugepaged.c                    | 179 ++++++++++++++++++++++-------
>  2 files changed, 139 insertions(+), 43 deletions(-)
> 
> diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
> index 35d759d3b0104..5743ae970af31 100644
> --- a/include/trace/events/huge_memory.h
> +++ b/include/trace/events/huge_memory.h
> @@ -36,7 +36,8 @@
>  	EM( SCAN_ALLOC_HUGE_PAGE_FAIL,	"alloc_huge_page_failed")	\
>  	EM( SCAN_CGROUP_CHARGE_FAIL,	"ccgroup_charge_failed")	\
>  	EM( SCAN_TRUNCATED,		"truncated")			\
> -	EMe(SCAN_PAGE_HAS_PRIVATE,	"page_has_private")		\
> +	EM( SCAN_PAGE_HAS_PRIVATE,	"page_has_private")		\
> +	EMe(SCAN_COPY_MC,		"copy_poisoned_page")		\
>  
>  #undef EM
>  #undef EMe
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 5a7d2d5093f9c..0f1b9e05e17ec 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -19,6 +19,7 @@
>  #include <linux/page_table_check.h>
>  #include <linux/swapops.h>
>  #include <linux/shmem_fs.h>
> +#include <linux/kmsan.h>
>  
>  #include <asm/tlb.h>
>  #include <asm/pgalloc.h>
> @@ -55,6 +56,7 @@ enum scan_result {
>  	SCAN_CGROUP_CHARGE_FAIL,
>  	SCAN_TRUNCATED,
>  	SCAN_PAGE_HAS_PRIVATE,
> +	SCAN_COPY_MC,
>  };
>  
>  #define CREATE_TRACE_POINTS
> @@ -530,6 +532,27 @@ static bool is_refcount_suitable(struct page *page)
>  	return page_count(page) == expected_refcount;
>  }
>  
> +/*
> + * Copies memory with #MC in source page (@from) handled. Returns number
> + * of bytes not copied if there was an exception; otherwise 0 for success.
> + * Note handling #MC requires arch opt-in.
> + */
> +static int copy_mc_page(struct page *to, struct page *from)
> +{
> +	char *vfrom, *vto;
> +	unsigned long ret;
> +
> +	vfrom = kmap_local_page(from);
> +	vto = kmap_local_page(to);
> +	ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
> +	if (ret == 0)
> +		kmsan_copy_page_meta(to, from);
> +	kunmap_local(vto);
> +	kunmap_local(vfrom);
> +
> +	return ret;
> +}

It is very similar to copy_mc_user_highpage(), but uses
kmsan_copy_page_meta() instead of kmsan_unpoison_memory().

Could you explain the difference? I don't quite get it.
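
For reference, copy_mc_user_highpage() in include/linux/highmem.h (only
defined when the arch opts in with copy_mc_to_kernel()) reads roughly
like this:

static inline int copy_mc_user_highpage(struct page *to, struct page *from,
					unsigned long vaddr, struct vm_area_struct *vma)
{
	unsigned long ret;
	char *vfrom, *vto;

	vfrom = kmap_local_page(from);
	vto = kmap_local_page(to);
	ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
	if (!ret)
		/* Mark the destination as initialized for KMSAN. */
		kmsan_unpoison_memory(page_address(to), PAGE_SIZE);
	kunmap_local(vto);
	kunmap_local(vfrom);

	return ret;
}

The only delta I can see is the KMSAN call, hence the question.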

> +
>  static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>  					unsigned long address,
>  					pte_t *pte,
> @@ -670,56 +693,124 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>  	return result;
>  }
>  
> -static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
> -				      struct vm_area_struct *vma,
> -				      unsigned long address,
> -				      spinlock_t *ptl,
> -				      struct list_head *compound_pagelist)
> +/*
> + * __collapse_huge_page_copy - attempts to copy memory contents from normal
> + * pages to a hugepage. Cleans up the normal pages if copying succeeds;
> + * otherwise restores the original page table and releases isolated normal pages.
> + * Returns SCAN_SUCCEED if copying succeeds, otherwise returns SCAN_COPY_MC.
> + *
> + * @pte: starting of the PTEs to copy from
> + * @page: the new hugepage to copy contents to
> + * @pmd: pointer to the new hugepage's PMD
> + * @rollback: the original normal pages' PMD
> + * @vma: the original normal pages' virtual memory area
> + * @address: starting address to copy
> + * @pte_ptl: lock on normal pages' PTEs
> + * @compound_pagelist: list that stores compound pages
> + */
> +static int __collapse_huge_page_copy(pte_t *pte,
> +				     struct page *page,
> +				     pmd_t *pmd,
> +				     pmd_t rollback,

I think 'orig_pmd' is a better name.
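
That is, keeping everything else as is:

static int __collapse_huge_page_copy(pte_t *pte,
				     struct page *page,
				     pmd_t *pmd,
				     pmd_t orig_pmd,
				     struct vm_area_struct *vma,
				     unsigned long address,
				     spinlock_t *pte_ptl,
				     struct list_head *compound_pagelist);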

> +				     struct vm_area_struct *vma,
> +				     unsigned long address,
> +				     spinlock_t *pte_ptl,
> +				     struct list_head *compound_pagelist)
>  {
>  	struct page *src_page, *tmp;
>  	pte_t *_pte;
> -	for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> -				_pte++, page++, address += PAGE_SIZE) {
> -		pte_t pteval = *_pte;
> +	pte_t pteval;
> +	unsigned long _address;
> +	spinlock_t *pmd_ptl;
> +	int result = SCAN_SUCCEED;
>  
> -		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> -			clear_user_highpage(page, address);
> -			add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
> -			if (is_zero_pfn(pte_pfn(pteval))) {
> +	/*
> +	 * Copying pages' contents is subject to memory poison at any iteration.
> +	 */
> +	for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
> +	     _pte++, page++, _address += PAGE_SIZE) {
> +		pteval = *_pte;
> +
> +		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval)))
> +			clear_user_highpage(page, _address);
> +		else {
> +			src_page = pte_page(pteval);
> +			if (copy_mc_page(page, src_page) > 0) {
> +				result = SCAN_COPY_MC;
> +				break;
> +			}
> +		}
> +	}
> +
> +	if (likely(result == SCAN_SUCCEED)) {
> +		for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
> +		     _pte++, _address += PAGE_SIZE) {
> +			pteval = *_pte;
> +			if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> +				add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
> +				if (is_zero_pfn(pte_pfn(pteval))) {
> +					/*
> +					 * pte_ptl mostly unnecessary.
> +					 */
> +					spin_lock(pte_ptl);
> +					pte_clear(vma->vm_mm, _address, _pte);
> +					spin_unlock(pte_ptl);
> +				}
> +			} else {
> +				src_page = pte_page(pteval);
> +				if (!PageCompound(src_page))
> +					release_pte_page(src_page);
>  				/*
> -				 * ptl mostly unnecessary.
> +				 * pte_ptl mostly unnecessary, but preempt has
> +				 * to be disabled to update the per-cpu stats
> +				 * inside page_remove_rmap().
>  				 */
> -				spin_lock(ptl);
> -				ptep_clear(vma->vm_mm, address, _pte);
> -				spin_unlock(ptl);
> +				spin_lock(pte_ptl);
> +				ptep_clear(vma->vm_mm, _address, _pte);
> +				page_remove_rmap(src_page, vma, false);
> +				spin_unlock(pte_ptl);
> +				free_page_and_swap_cache(src_page);
> +			}
> +		}
> +		list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
> +			list_del(&src_page->lru);
> +			mod_node_page_state(page_pgdat(src_page),
> +					NR_ISOLATED_ANON + page_is_file_lru(src_page),
> +					-compound_nr(src_page));
> +			unlock_page(src_page);
> +			free_swap_cache(src_page);
> +			putback_lru_page(src_page);
> +		}
> +	} else {
> +		/*
> +		 * Re-establish the regular PMD that points to the regular
> +		 * page table. Restoring PMD needs to be done prior to
> +		 * releasing pages. Since pages are still isolated and
> +		 * locked here, acquiring anon_vma_lock_write is unnecessary.
> +		 */
> +		pmd_ptl = pmd_lock(vma->vm_mm, pmd);
> +		pmd_populate(vma->vm_mm, pmd, pmd_pgtable(rollback));
> +		spin_unlock(pmd_ptl);
> +		/*
> +		 * Release both raw and compound pages isolated
> +		 * in __collapse_huge_page_isolate.
> +		 */
> +		for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
> +		     _pte++, _address += PAGE_SIZE) {
> +			pteval = *_pte;
> +			if (!pte_none(pteval) && !is_zero_pfn(pte_pfn(pteval))) {
> +				src_page = pte_page(pteval);
> +				if (!PageCompound(src_page))
> +					release_pte_page(src_page);

Indentation levels are getting out of control here. Maybe some code
restructuring is required?
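
One option (completely untested sketch, helper name made up by me) is to
pull the whole rollback path into its own helper, so the else-branch
above becomes a single call and the nesting stays shallow:

static void __collapse_huge_page_copy_failed(pte_t *pte, pmd_t *pmd,
					     pmd_t orig_pmd,
					     struct vm_area_struct *vma,
					     struct list_head *compound_pagelist)
{
	struct page *src_page, *tmp;
	spinlock_t *pmd_ptl;
	pte_t *_pte;

	/*
	 * Re-establish the original PMD first; the pages are still
	 * isolated and locked, so anon_vma_lock_write is unnecessary.
	 */
	pmd_ptl = pmd_lock(vma->vm_mm, pmd);
	pmd_populate(vma->vm_mm, pmd, pmd_pgtable(orig_pmd));
	spin_unlock(pmd_ptl);

	/* Release the raw pages isolated in __collapse_huge_page_isolate. */
	for (_pte = pte; _pte < pte + HPAGE_PMD_NR; _pte++) {
		pte_t pteval = *_pte;

		if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval)))
			continue;
		src_page = pte_page(pteval);
		if (!PageCompound(src_page))
			release_pte_page(src_page);
	}

	/* ... and the compound ones. */
	list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
		list_del(&src_page->lru);
		release_pte_page(src_page);
	}
}

Doing the same for the success path would get rid of the huge if/else
in __collapse_huge_page_copy() entirely.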

>  			}
> -		} else {
> -			src_page = pte_page(pteval);
> -			copy_user_highpage(page, src_page, address, vma);
> -			if (!PageCompound(src_page))
> -				release_pte_page(src_page);
> -			/*
> -			 * ptl mostly unnecessary, but preempt has to
> -			 * be disabled to update the per-cpu stats
> -			 * inside page_remove_rmap().
> -			 */
> -			spin_lock(ptl);
> -			ptep_clear(vma->vm_mm, address, _pte);
> -			page_remove_rmap(src_page, vma, false);
> -			spin_unlock(ptl);
> -			free_page_and_swap_cache(src_page);
> +		}
> +		list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
> +			list_del(&src_page->lru);
> +			release_pte_page(src_page);
>  		}
>  	}
>  
> -	list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
> -		list_del(&src_page->lru);
> -		mod_node_page_state(page_pgdat(src_page),
> -				    NR_ISOLATED_ANON + page_is_file_lru(src_page),
> -				    -compound_nr(src_page));
> -		unlock_page(src_page);
> -		free_swap_cache(src_page);
> -		putback_lru_page(src_page);
> -	}
> +	return result;
>  }
>  
>  static void khugepaged_alloc_sleep(void)
> @@ -1079,9 +1170,13 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>  	 */
>  	anon_vma_unlock_write(vma->anon_vma);
>  
> -	__collapse_huge_page_copy(pte, hpage, vma, address, pte_ptl,
> -				  &compound_pagelist);
> +	result = __collapse_huge_page_copy(pte, hpage, pmd, _pmd,
> +					   vma, address, pte_ptl,
> +					   &compound_pagelist);
>  	pte_unmap(pte);
> +	if (unlikely(result != SCAN_SUCCEED))
> +		goto out_up_write;
> +
>  	/*
>  	 * spin_lock() below is not the equivalent of smp_wmb(), but
>  	 * the smp_wmb() inside __SetPageUptodate() can be reused to
> -- 
> 2.39.0.rc0.267.gcb52ba06e7-goog
> 

-- 
  Kiryl Shutsemau / Kirill A. Shutemov

