From: Jiaqi Yan <jiaqiyan@google.com>
Date: Fri, 20 Jan 2023 07:56:15 -0800
Subject: Re: [PATCH v9 1/2] mm/khugepaged: recover from poisoned anonymous memory
To: kirill.shutemov@linux.intel.com
Cc: kirill@shutemov.name, shy828301@gmail.com, tongtiangen@huawei.com,
 tony.luck@intel.com, akpm@linux-foundation.org, wangkefeng.wang@huawei.com,
 naoya.horiguchi@nec.com, linmiaohe@huawei.com, linux-mm@kvack.org,
 osalvador@suse.de
In-Reply-To: <20230119150258.npfadnefkpny5fd3@box.shutemov.name>
References: <20221205234059.42971-1-jiaqiyan@google.com>
 <20221205234059.42971-2-jiaqiyan@google.com>
 <20230119150258.npfadnefkpny5fd3@box.shutemov.name>
Content-Type: text/plain; charset="UTF-8"

On Thu, Jan 19, 2023 at 7:03 AM wrote:
>
> On Mon, Dec 05, 2022 at 03:40:58PM -0800, Jiaqi Yan wrote:
> > Make __collapse_huge_page_copy return whether copying anonymous pages
> > succeeded, and make collapse_huge_page handle the return status.
> >
> > Break existing PTE scan loop into two for-loops. The first loop copies
> > source pages into target huge page, and can fail gracefully when running
> > into memory errors in source pages. If copying all pages succeeds, the
> > second loop releases and clears up these normal pages. Otherwise, the
> > second loop rolls back the page table and page states by:
> > - re-establishing the original PTEs-to-PMD connection.
> > - releasing source pages back to their LRU list.
> >
> > Tested manually:
> > 0. Enable khugepaged on system under test.
> > 1. Start a two-thread application. Each thread allocates a chunk of
> >    non-huge anonymous memory buffer.
> > 2. Pick 4 random buffer locations (2 in each thread) and inject
> >    uncorrectable memory errors at corresponding physical addresses.
> > 3. Signal both threads to make their memory buffer collapsible, i.e.
> >    calling madvise(MADV_HUGEPAGE).
> > 4. Wait and check kernel log: khugepaged is able to recover from poisoned
> >    pages and skips collapsing them.
> > 5. Signal both threads to inspect their buffer contents and make sure no
> >    data corruption.
> >
> > Signed-off-by: Jiaqi Yan
> > ---
> >  include/trace/events/huge_memory.h |   3 +-
> >  mm/khugepaged.c                    | 179 ++++++++++++++++++++++-------
> >  2 files changed, 139 insertions(+), 43 deletions(-)
> >
> > diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
> > index 35d759d3b0104..5743ae970af31 100644
> > --- a/include/trace/events/huge_memory.h
> > +++ b/include/trace/events/huge_memory.h
> > @@ -36,7 +36,8 @@
> >     EM( SCAN_ALLOC_HUGE_PAGE_FAIL,  "alloc_huge_page_failed")   \
> >     EM( SCAN_CGROUP_CHARGE_FAIL,    "ccgroup_charge_failed")    \
> >     EM( SCAN_TRUNCATED,             "truncated")                \
> > -   EMe(SCAN_PAGE_HAS_PRIVATE,      "page_has_private")         \
> > +   EM( SCAN_PAGE_HAS_PRIVATE,      "page_has_private")         \
> > +   EMe(SCAN_COPY_MC,               "copy_poisoned_page")       \
> >
> >  #undef EM
> >  #undef EMe
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 5a7d2d5093f9c..0f1b9e05e17ec 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -19,6 +19,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >
> >  #include
> >  #include
> > @@ -55,6 +56,7 @@ enum scan_result {
> >     SCAN_CGROUP_CHARGE_FAIL,
> >     SCAN_TRUNCATED,
> >     SCAN_PAGE_HAS_PRIVATE,
> > +   SCAN_COPY_MC,
> >  };
> >
> >  #define CREATE_TRACE_POINTS
> > @@ -530,6 +532,27 @@ static bool is_refcount_suitable(struct page *page)
> >     return page_count(page) == expected_refcount;
> >  }
> >
> > +/*
> > + * Copies memory with #MC in source page (@from) handled. Returns number
> > + * of bytes not copied if there was an exception; otherwise 0 for success.
> > + * Note handling #MC requires arch opt-in.
> > + */
> > +static int copy_mc_page(struct page *to, struct page *from)
> > +{
> > +   char *vfrom, *vto;
> > +   unsigned long ret;
> > +
> > +   vfrom = kmap_local_page(from);
> > +   vto = kmap_local_page(to);
> > +   ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
> > +   if (ret == 0)
> > +           kmsan_copy_page_meta(to, from);
> > +   kunmap_local(vto);
> > +   kunmap_local(vfrom);
> > +
> > +   return ret;
> > +}
>
> It is very similar to copy_mc_user_highpage(), but uses
> kmsan_copy_page_meta() instead of kmsan_unpoison_memory().
>
> Could you explain the difference? I don't quite get it.

copy_mc_page is actually the MC version of copy_highpage, which uses
kmsan_copy_page_meta instead of kmsan_unpoison_memory. My understanding
is that kmsan_copy_page_meta covers kmsan_unpoison_memory. When there is
no metadata (kmsan_shadow or kmsan_origin), both kmsan_copy_page_meta
and kmsan_unpoison_memory just do kmsan_internal_unpoison_memory to mark
the memory range as initialized; when there is metadata in the src page,
kmsan_copy_page_meta will copy whatever metadata is in src to dst. So I
think kmsan_copy_page_meta is the right thing to do.
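For easy comparison, copy_mc_user_highpage() looks roughly like the
sketch below (paraphrased from include/linux/highmem.h, so details may
differ by tree). The only functional difference from copy_mc_page()
above is the KMSAN call on the success path:

    static inline int copy_mc_user_highpage(struct page *to, struct page *from,
                                            unsigned long vaddr, struct vm_area_struct *vma)
    {
        unsigned long ret;
        char *vfrom, *vto;

        vfrom = kmap_local_page(from);
        vto = kmap_local_page(to);
        /* returns the number of bytes not copied if a #MC is taken */
        ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
        if (!ret)
            /* only marks the destination range as initialized */
            kmsan_unpoison_memory(page_address(to), PAGE_SIZE);
        kunmap_local(vto);
        kunmap_local(vfrom);

        return ret;
    }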
>
> > +
> >  static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> >                                          unsigned long address,
> >                                          pte_t *pte,
> > @@ -670,56 +693,124 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> >     return result;
> >  }
> >
> > -static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
> > -                                     struct vm_area_struct *vma,
> > -                                     unsigned long address,
> > -                                     spinlock_t *ptl,
> > -                                     struct list_head *compound_pagelist)
> > +/*
> > + * __collapse_huge_page_copy - attempts to copy memory contents from normal
> > + * pages to a hugepage. Cleans up the normal pages if copying succeeds;
> > + * otherwise restores the original page table and releases isolated normal pages.
> > + * Returns SCAN_SUCCEED if copying succeeds, otherwise returns SCAN_COPY_MC.
> > + *
> > + * @pte: starting of the PTEs to copy from
> > + * @page: the new hugepage to copy contents to
> > + * @pmd: pointer to the new hugepage's PMD
> > + * @rollback: the original normal pages' PMD
> > + * @vma: the original normal pages' virtual memory area
> > + * @address: starting address to copy
> > + * @pte_ptl: lock on normal pages' PTEs
> > + * @compound_pagelist: list that stores compound pages
> > + */
> > +static int __collapse_huge_page_copy(pte_t *pte,
> > +                                    struct page *page,
> > +                                    pmd_t *pmd,
> > +                                    pmd_t rollback,
>
> I think 'orig_pmd' is a better name.

Will be renamed to orig_pmd in the next version v10.
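Just to make the rename concrete (sketch only, not the final v10 code):
the failure path of this patch re-installs the PMD value the caller
saves via pmdp_collapse_flush() before the copy, so with the new name
the restore would read roughly:

    /* illustrative: re-install the original page table on copy failure */
    pmd_ptl = pmd_lock(vma->vm_mm, pmd);
    pmd_populate(vma->vm_mm, pmd, pmd_pgtable(orig_pmd));
    spin_unlock(pmd_ptl);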
>
> > +                                    struct vm_area_struct *vma,
> > +                                    unsigned long address,
> > +                                    spinlock_t *pte_ptl,
> > +                                    struct list_head *compound_pagelist)
> >  {
> >     struct page *src_page, *tmp;
> >     pte_t *_pte;
> > -   for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> > -        _pte++, page++, address += PAGE_SIZE) {
> > -       pte_t pteval = *_pte;
> > +   pte_t pteval;
> > +   unsigned long _address;
> > +   spinlock_t *pmd_ptl;
> > +   int result = SCAN_SUCCEED;
> >
> > -       if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> > -           clear_user_highpage(page, address);
> > -           add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
> > -           if (is_zero_pfn(pte_pfn(pteval))) {
> > +   /*
> > +    * Copying pages' contents is subject to memory poison at any iteration.
> > +    */
> > +   for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
> > +        _pte++, page++, _address += PAGE_SIZE) {
> > +       pteval = *_pte;
> > +
> > +       if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval)))
> > +           clear_user_highpage(page, _address);
> > +       else {
> > +           src_page = pte_page(pteval);
> > +           if (copy_mc_page(page, src_page) > 0) {
> > +               result = SCAN_COPY_MC;
> > +               break;
> > +           }
> > +       }
> > +   }
> > +
> > +   if (likely(result == SCAN_SUCCEED)) {
> > +       for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
> > +            _pte++, _address += PAGE_SIZE) {
> > +           pteval = *_pte;
> > +           if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> > +               add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
> > +               if (is_zero_pfn(pte_pfn(pteval))) {
> > +                   /*
> > +                    * pte_ptl mostly unnecessary.
> > +                    */
> > +                   spin_lock(pte_ptl);
> > +                   pte_clear(vma->vm_mm, _address, _pte);
> > +                   spin_unlock(pte_ptl);
> > +               }
> > +           } else {
> > +               src_page = pte_page(pteval);
> > +               if (!PageCompound(src_page))
> > +                   release_pte_page(src_page);
> >                 /*
> > -                * ptl mostly unnecessary.
> > +                * pte_ptl mostly unnecessary, but preempt has
> > +                * to be disabled to update the per-cpu stats
> > +                * inside page_remove_rmap().
> >                  */
> > -               spin_lock(ptl);
> > -               ptep_clear(vma->vm_mm, address, _pte);
> > -               spin_unlock(ptl);
> > +               spin_lock(pte_ptl);
> > +               ptep_clear(vma->vm_mm, _address, _pte);
> > +               page_remove_rmap(src_page, vma, false);
> > +               spin_unlock(pte_ptl);
> > +               free_page_and_swap_cache(src_page);
> > +           }
> > +       }
> > +       list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
> > +           list_del(&src_page->lru);
> > +           mod_node_page_state(page_pgdat(src_page),
> > +                               NR_ISOLATED_ANON + page_is_file_lru(src_page),
> > +                               -compound_nr(src_page));
> > +           unlock_page(src_page);
> > +           free_swap_cache(src_page);
> > +           putback_lru_page(src_page);
> > +       }
> > +   } else {
> > +       /*
> > +        * Re-establish the regular PMD that points to the regular
> > +        * page table. Restoring PMD needs to be done prior to
> > +        * releasing pages. Since pages are still isolated and
> > +        * locked here, acquiring anon_vma_lock_write is unnecessary.
> > +        */
> > +       pmd_ptl = pmd_lock(vma->vm_mm, pmd);
> > +       pmd_populate(vma->vm_mm, pmd, pmd_pgtable(rollback));
> > +       spin_unlock(pmd_ptl);
> > +       /*
> > +        * Release both raw and compound pages isolated
> > +        * in __collapse_huge_page_isolate.
> > +        */
> > +       for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
> > +            _pte++, _address += PAGE_SIZE) {
> > +           pteval = *_pte;
> > +           if (!pte_none(pteval) && !is_zero_pfn(pte_pfn(pteval))) {
> > +               src_page = pte_page(pteval);
> > +               if (!PageCompound(src_page))
> > +                   release_pte_page(src_page);
>
> Indentation levels get out of control. Maybe some code restructuring is
> required?

v10 will change to something like this to reduce 1 level of indentation:

    if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval)))
        continue;

    src_page = pte_page(pteval);
    if (!PageCompound(src_page))
        release_pte_page(src_page);
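For concreteness, the rollback loop with that restructuring applied
would read roughly as below (illustrative sketch, not the actual v10
patch; names are taken from the diff above):

    for (_pte = pte, _address = address; _pte < pte + HPAGE_PMD_NR;
         _pte++, _address += PAGE_SIZE) {
        pteval = *_pte;
        if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval)))
            continue;
        src_page = pte_page(pteval);
        if (!PageCompound(src_page))
            release_pte_page(src_page);
    }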
> >
> >             }
> > -       } else {
> > -           src_page = pte_page(pteval);
> > -           copy_user_highpage(page, src_page, address, vma);
> > -           if (!PageCompound(src_page))
> > -               release_pte_page(src_page);
> > -           /*
> > -            * ptl mostly unnecessary, but preempt has to
> > -            * be disabled to update the per-cpu stats
> > -            * inside page_remove_rmap().
> > -            */
> > -           spin_lock(ptl);
> > -           ptep_clear(vma->vm_mm, address, _pte);
> > -           page_remove_rmap(src_page, vma, false);
> > -           spin_unlock(ptl);
> > -           free_page_and_swap_cache(src_page);
> > +       }
> > +       list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
> > +           list_del(&src_page->lru);
> > +           release_pte_page(src_page);
> >         }
> >     }
> >
> > -   list_for_each_entry_safe(src_page, tmp, compound_pagelist, lru) {
> > -       list_del(&src_page->lru);
> > -       mod_node_page_state(page_pgdat(src_page),
> > -                           NR_ISOLATED_ANON + page_is_file_lru(src_page),
> > -                           -compound_nr(src_page));
> > -       unlock_page(src_page);
> > -       free_swap_cache(src_page);
> > -       putback_lru_page(src_page);
> > -   }
> > +   return result;
> >  }
> >
> >  static void khugepaged_alloc_sleep(void)
> > @@ -1079,9 +1170,13 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >      */
> >     anon_vma_unlock_write(vma->anon_vma);
> >
> > -   __collapse_huge_page_copy(pte, hpage, vma, address, pte_ptl,
> > -                             &compound_pagelist);
> > +   result = __collapse_huge_page_copy(pte, hpage, pmd, _pmd,
> > +                                      vma, address, pte_ptl,
> > +                                      &compound_pagelist);
> >     pte_unmap(pte);
> > +   if (unlikely(result != SCAN_SUCCEED))
> > +       goto out_up_write;
> > +
> >     /*
> >      * spin_lock() below is not the equivalent of smp_wmb(), but
> >      * the smp_wmb() inside __SetPageUptodate() can be reused to
> > --
> > 2.39.0.rc0.267.gcb52ba06e7-goog
> >
> --
> Kiryl Shutsemau / Kirill A. Shutemov