From: Yang Shi <shy828301@gmail.com>
To: Jiaqi Yan <jiaqiyan@google.com>
Cc: kirill.shutemov@linux.intel.com, kirill@shutemov.name,
tongtiangen@huawei.com, tony.luck@intel.com,
naoya.horiguchi@nec.com, linmiaohe@huawei.com,
linux-mm@kvack.org, akpm@linux-foundation.org,
osalvador@suse.de, wangkefeng.wang@huawei.com,
stevensd@chromium.org, hughd@google.com
Subject: Re: [PATCH v11 1/3] mm/khugepaged: recover from poisoned anonymous memory
Date: Tue, 28 Mar 2023 08:58:45 -0700
Message-ID: <CAHbLzkrYZ0Hcxw23gOAm1fYiw0r7EXRgNnrSUc6Q0x18FPRuDw@mail.gmail.com>
In-Reply-To: <20230327211548.462509-2-jiaqiyan@google.com>
On Mon, Mar 27, 2023 at 2:15 PM Jiaqi Yan <jiaqiyan@google.com> wrote:
>
> Make __collapse_huge_page_copy return whether copying anonymous pages
> succeeded, and make collapse_huge_page handle the return status.
>
> Break the existing PTE scan loop into two for-loops. The first loop copies
> source pages into the target huge page, and can fail gracefully when it runs
> into memory errors in the source pages. If copying all pages succeeds, the
> second loop releases and cleans up these normal pages. Otherwise, the
> second loop rolls back the page table and page states by:
> - re-establishing the original PTEs-to-PMD connection.
> - releasing source pages back to their LRU list.
>
> Tested manually:
> 0. Enable khugepaged on system under test.
> 1. Start a two-thread application. Each thread allocates a chunk of
> non-huge anonymous memory as its buffer.
> 2. Pick 4 random buffer locations (2 in each thread) and inject
> uncorrectable memory errors at corresponding physical addresses.
> 3. Signal both threads to make their memory buffers collapsible, i.e.
> by calling madvise(MADV_HUGEPAGE).
> 4. Wait and check kernel log: khugepaged is able to recover from poisoned
> pages and skips collapsing them.
> 5. Signal both threads to inspect their buffer contents and verify that
> there is no data corruption.
>
> Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Just a nit below:
> ---
> include/trace/events/huge_memory.h | 3 +-
> mm/khugepaged.c | 114 +++++++++++++++++++++++++----
> 2 files changed, 103 insertions(+), 14 deletions(-)
>
> diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
> index 3e6fb05852f9a..46cce509957ba 100644
> --- a/include/trace/events/huge_memory.h
> +++ b/include/trace/events/huge_memory.h
> @@ -36,7 +36,8 @@
> EM( SCAN_ALLOC_HUGE_PAGE_FAIL, "alloc_huge_page_failed") \
> EM( SCAN_CGROUP_CHARGE_FAIL, "ccgroup_charge_failed") \
> EM( SCAN_TRUNCATED, "truncated") \
> - EMe(SCAN_PAGE_HAS_PRIVATE, "page_has_private") \
> + EM( SCAN_PAGE_HAS_PRIVATE, "page_has_private") \
> + EMe(SCAN_COPY_MC, "copy_poisoned_page") \
>
> #undef EM
> #undef EMe
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index bee7fd7db380a..bef68286345c8 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -55,6 +55,7 @@ enum scan_result {
> SCAN_CGROUP_CHARGE_FAIL,
> SCAN_TRUNCATED,
> SCAN_PAGE_HAS_PRIVATE,
> + SCAN_COPY_MC,
> };
>
> #define CREATE_TRACE_POINTS
> @@ -681,20 +682,22 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> return result;
> }
>
> -static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
> - struct vm_area_struct *vma,
> - unsigned long address,
> - spinlock_t *ptl,
> - struct list_head *compound_pagelist)
> +static void __collapse_huge_page_copy_succeeded(pte_t *pte,
> + pmd_t *pmd,
> + struct vm_area_struct *vma,
> + unsigned long address,
> + spinlock_t *ptl,
> + struct list_head *compound_pagelist)
> {
> - struct page *src_page, *tmp;
> + struct page *src_page;
> + struct page *tmp;
> pte_t *_pte;
> - for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> - _pte++, page++, address += PAGE_SIZE) {
> - pte_t pteval = *_pte;
> + pte_t pteval;
>
> + for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> + _pte++, address += PAGE_SIZE) {
> + pteval = *_pte;
> if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> - clear_user_highpage(page, address);
> add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
> if (is_zero_pfn(pte_pfn(pteval))) {
> /*
> @@ -706,7 +709,6 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
> }
> } else {
> src_page = pte_page(pteval);
> - copy_user_highpage(page, src_page, address, vma);
> if (!PageCompound(src_page))
> release_pte_page(src_page);
> /*
> @@ -733,6 +735,88 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
> }
> }
>
> +static void __collapse_huge_page_copy_failed(pte_t *pte,
> + pmd_t *pmd,
> + pmd_t orig_pmd,
> + struct vm_area_struct *vma,
> + unsigned long address,
It looks like "address" is not used at all. It could be removed.
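i.e. something like this (untested, just to sketch what I mean):

    static void __collapse_huge_page_copy_failed(pte_t *pte,
                                                 pmd_t *pmd,
                                                 pmd_t orig_pmd,
                                                 struct vm_area_struct *vma,
                                                 struct list_head *compound_pagelist)

and the call site in __collapse_huge_page_copy() would then drop the
"address" argument as well.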
> + struct list_head *compound_pagelist)
> +{
> + spinlock_t *pmd_ptl;
> +
> + /*
> + * Re-establish the PMD to point to the original page table
> + * entry. Restoring PMD needs to be done prior to releasing
> + * pages. Since pages are still isolated and locked here,
> + * acquiring anon_vma_lock_write is unnecessary.
> + */
> + pmd_ptl = pmd_lock(vma->vm_mm, pmd);
> + pmd_populate(vma->vm_mm, pmd, pmd_pgtable(orig_pmd));
> + spin_unlock(pmd_ptl);
> + /*
> + * Release both raw and compound pages isolated
> + * in __collapse_huge_page_isolate.
> + */
> + release_pte_pages(pte, pte + HPAGE_PMD_NR, compound_pagelist);
> +}
> +
> +/*
> + * __collapse_huge_page_copy - attempts to copy memory contents from raw
> + * pages to a hugepage. Cleans up the raw pages if copying succeeds;
> + * otherwise restores the original page table and releases isolated raw pages.
> + * Returns SCAN_SUCCEED if copying succeeds, otherwise returns SCAN_COPY_MC.
> + *
> + * @pte: starting of the PTEs to copy from
> + * @page: the new hugepage to copy contents to
> + * @pmd: pointer to the new hugepage's PMD
> + * @orig_pmd: the original raw pages' PMD
> + * @vma: the original raw pages' virtual memory area
> + * @address: starting address to copy
> + * @pte_ptl: lock on raw pages' PTEs
> + * @compound_pagelist: list that stores compound pages
> + */
> +static int __collapse_huge_page_copy(pte_t *pte,
> + struct page *page,
> + pmd_t *pmd,
> + pmd_t orig_pmd,
> + struct vm_area_struct *vma,
> + unsigned long address,
> + spinlock_t *pte_ptl,
> + struct list_head *compound_pagelist)
> +{
> + struct page *src_page;
> + pte_t *_pte;
> + pte_t pteval;
> + unsigned long _address;
> + int result = SCAN_SUCCEED;
> +
> + /*
> + * Copying pages' contents is subject to memory poison at any iteration.
> + */
> + for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
> + _pte++, page++, _address += PAGE_SIZE) {
> + pteval = *_pte;
> + if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> + clear_user_highpage(page, _address);
> + continue;
> + }
> + src_page = pte_page(pteval);
> + if (copy_mc_user_highpage(page, src_page, _address, vma) > 0) {
> + result = SCAN_COPY_MC;
> + break;
> + }
> + }
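(For other readers: copy_mc_user_highpage() returns the number of bytes
left uncopied when a machine check fires during the copy, and 0 on
success, so a positive return value here means the copy ran into
poison.)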
> +
> + if (likely(result == SCAN_SUCCEED))
> + __collapse_huge_page_copy_succeeded(pte, pmd, vma, address,
> + pte_ptl, compound_pagelist);
> + else
> + __collapse_huge_page_copy_failed(pte, pmd, orig_pmd, vma,
> + address, compound_pagelist);
> +
> + return result;
> +}
> +
> static void khugepaged_alloc_sleep(void)
> {
> DEFINE_WAIT(wait);
> @@ -1106,9 +1190,13 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> */
> anon_vma_unlock_write(vma->anon_vma);
>
> - __collapse_huge_page_copy(pte, hpage, vma, address, pte_ptl,
> - &compound_pagelist);
> + result = __collapse_huge_page_copy(pte, hpage, pmd, _pmd,
> + vma, address, pte_ptl,
> + &compound_pagelist);
> pte_unmap(pte);
> + if (unlikely(result != SCAN_SUCCEED))
> + goto out_up_write;
> +
> /*
> * spin_lock() below is not the equivalent of smp_wmb(), but
> * the smp_wmb() inside __SetPageUptodate() can be reused to
> --
> 2.40.0.348.gf938b09366-goog
>
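BTW, for anyone trying to reproduce the test in the changelog: the
injection in step 2 has to leave a *latent* uncorrectable error in
memory (e.g. via APEI EINJ at the physical address), so that khugepaged
is the first consumer of the poison. madvise(MADV_HWPOISON) is not a
substitute here, since it handles the error synchronously and unmaps
the page before khugepaged ever copies it. A rough single-buffer sketch
of the userspace side (the buffer size, poison offset, and pagemap
helper below are my own choices, not what the changelog used):

/* Userspace side of the changelog's repro, single buffer for brevity.
 * Run as root (pagemap PFNs are hidden from unprivileged users).
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define HPAGE_SZ (2UL << 20)	/* PMD-sized huge page */

/* Translate a virtual address to a physical one via /proc/self/pagemap. */
static uint64_t vaddr_to_paddr(void *vaddr)
{
	uint64_t entry;
	int fd = open("/proc/self/pagemap", O_RDONLY);

	pread(fd, &entry, sizeof(entry),
	      ((uintptr_t)vaddr / 4096) * sizeof(entry));
	close(fd);
	/* Bits 0-54 hold the PFN for a present page. */
	return (entry & ((1ULL << 55) - 1)) * 4096;
}

int main(void)
{
	/* Step 1: non-huge anonymous buffer, PMD-aligned so it can collapse. */
	char *buf = aligned_alloc(HPAGE_SZ, HPAGE_SZ);

	madvise(buf, HPAGE_SZ, MADV_NOHUGEPAGE);
	memset(buf, 0xab, HPAGE_SZ);		/* fault all pages in */

	/* Step 2: inject the UE at this physical address, e.g. via EINJ. */
	printf("inject at phys 0x%llx, then press enter\n",
	       (unsigned long long)vaddr_to_paddr(buf + 17 * 4096));
	getchar();

	/* Step 3: make the range collapsible and give khugepaged time. */
	madvise(buf, HPAGE_SZ, MADV_HUGEPAGE);
	sleep(60);

	/* Steps 4-5: dmesg should show the copy_poisoned_page skip; data
	 * in the non-poisoned pages must be intact.
	 */
	printf("byte 0 = 0x%x (expect 0xab)\n", (unsigned char)buf[0]);
	return 0;
}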