From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jiaqi Yan <jiaqiyan@google.com>
Date: Wed, 6 Apr 2022 10:47:32 -0700
Subject: Re: [RFC v2 1/2] mm: khugepaged: recover from poisoned anonymous memory
To: Yang Shi, Tong Tiangen
Cc: "Luck, Tony", HORIGUCHI NAOYA (堀口 直也), "Kirill A. Shutemov",
 Miaohe Lin, Jue Wang, Linux MM
In-Reply-To: <20220405205146.411595-2-jiaqiyan@google.com>
References: <20220405205146.411595-1-jiaqiyan@google.com>
 <20220405205146.411595-2-jiaqiyan@google.com>
Content-Type: text/plain; charset="UTF-8"

Replacement patch to fix the build error on architectures that define
__HAVE_ARCH_COPY_HIGHPAGE, i.e. move copy_highpage_mc out of the
"#ifndef __HAVE_ARCH_COPY_HIGHPAGE" block, as sketched below.
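For context, the resulting layout in include/linux/highmem.h is roughly
the following (a simplified sketch, not the verbatim header; the generic
copy_highpage() body shown is the usual kmap_local-based one):

	#ifndef __HAVE_ARCH_COPY_HIGHPAGE

	static inline void copy_highpage(struct page *to, struct page *from)
	{
		char *vfrom, *vto;

		vfrom = kmap_local_page(from);
		vto = kmap_local_page(to);
		copy_page(vto, vfrom);
		kunmap_local(vto);
		kunmap_local(vfrom);
	}

	#endif

	/*
	 * Now outside the #ifndef: architectures that define
	 * __HAVE_ARCH_COPY_HIGHPAGE and provide their own copy_highpage()
	 * still get copy_highpage_mc().
	 */
	static inline bool copy_highpage_mc(struct page *to, struct page *from);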
>From b1225b762c075652675c61088dfa0b4d4b59ab90 Mon Sep 17 00:00:00 2001
From: Jiaqi Yan <jiaqiyan@google.com>
Date: Wed, 16 Feb 2022 01:04:25 +0000
Subject: [RFC v2 1/2] mm: khugepaged: recover from poisoned anonymous memory

Make __collapse_huge_page_copy return whether collapsing/copying
anonymous pages succeeded, and make collapse_huge_page handle the
return status.

Break the existing PTE scan loop into two for-loops. The first loop
copies source pages into the target huge page, and can fail gracefully
when it runs into a memory error in a source page. When copying fails,
the second loop rolls back the page table and page states:
1) re-establish the PTEs-to-PMD connection;
2) release source pages back to their LRU list.

Signed-off-by: Jiaqi Yan <jiaqiyan@google.com>
---
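[Reviewer note, ignored by git-am: copy_mc_to_kernel() returns the
number of bytes it failed to copy, i.e. 0 on success, which is why
copy_highpage_mc() below treats "ret > 0" as a failed (poisoned) copy.
On architectures without CONFIG_ARCH_HAS_COPY_MC the generic fallback
is a plain memcpy() that always returns 0, so the copy path can never
be seen to fail there. Schematically:

	unsigned long rem;

	rem = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
	if (rem > 0) {
		/* a machine check was consumed while reading vfrom */
	}
]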
 include/linux/highmem.h |  19 ++++++
 mm/khugepaged.c         | 138 ++++++++++++++++++++++++++++++----
 2 files changed, 124 insertions(+), 33 deletions(-)
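[Also for reviewers: the reworked __collapse_huge_page_copy() below is
a copy pass followed by a fixup pass. A condensed sketch of the control
flow; present(), clear_old_pte_and_free() and release_isolated_page()
are shorthand for the operations in the actual hunk, not real kernel
helpers:

	/* pass 1: copying may trip over poison in a source subpage */
	for (i = 0; i < HPAGE_PMD_NR; i++) {
		if (present(src[i]) && copy_highpage_mc(dst[i], src[i])) {
			copy_succeeded = false;
			break;
		}
	}

	/* on failure, point the PMD back at the original page table */
	if (!copy_succeeded)
		pmd_populate(mm, pmd, pmd_pgtable(rollback));

	/* pass 2: finish the collapse, or undo the isolation */
	for (i = 0; i < HPAGE_PMD_NR; i++) {
		if (copy_succeeded)
			clear_old_pte_and_free(src[i]);
		else
			release_isolated_page(src[i]);
	}
]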
+ */ + spin_lock(pte_ptl); + pte_clear(vma->vm_mm, _address, _pte); + page_remove_rmap(src_page, false); + spin_unlock(pte_ptl); + free_page_and_swap_cache(src_page); + } } } @@ -784,6 +839,8 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page, list_del(&src_page->lru); release_pte_page(src_page); } + + return copy_succeeded; } static void khugepaged_alloc_sleep(void) @@ -1066,6 +1123,7 @@ static void collapse_huge_page(struct mm_struct *mm, struct vm_area_struct *vma; struct mmu_notifier_range range; gfp_t gfp; + bool copied = false; VM_BUG_ON(address & ~HPAGE_PMD_MASK); @@ -1177,9 +1235,13 @@ static void collapse_huge_page(struct mm_struct *mm, */ anon_vma_unlock_write(vma->anon_vma); - __collapse_huge_page_copy(pte, new_page, vma, address, pte_ptl, - &compound_pagelist); + copied = __collapse_huge_page_copy(pte, new_page, pmd, _pmd, + vma, address, pte_ptl, &compound_pagelist); pte_unmap(pte); + if (!copied) { + result = SCAN_COPY_MC; + goto out_up_write; + } /* * spin_lock() below is not the equivalent of smp_wmb(), but * the smp_wmb() inside __SetPageUptodate() can be reused to @@ -1364,9 +1426,14 @@ static int khugepaged_scan_pmd(struct mm_struct *mm, pte_unmap_unlock(pte, ptl); if (ret) { node = khugepaged_find_target_node(); - /* collapse_huge_page will return with the mmap_lock released */ - collapse_huge_page(mm, address, hpage, node, - referenced, unmapped); + /* + * collapse_huge_page will return with the mmap_r+w_lock released. + * It is uncertain if *hpage is NULL or not when collapse_huge_page + * returns, so keep ret=1 to jump to breakouterloop_mmap_lock + * in khugepaged_scan_mm_slot, then *hpage will be freed + * if collapse failed. + */ + collapse_huge_page(mm, address, hpage, node, referenced, unmapped); } out: trace_mm_khugepaged_scan_pmd(mm, page, writable, referenced, @@ -2168,6 +2235,11 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, khugepaged_scan_file(mm, file, pgoff, hpage); fput(file); } else { + /* + * mmap_read_lock is + * 1) released if both scan and collapse succeeded; + * 2) still held if either scan or collapse failed. + */ ret = khugepaged_scan_pmd(mm, vma, khugepaged_scan.address, hpage); -- 2.35.1.1178.g4f1659d476-goog