From mboxrd@z Thu Jan  1 00:00:00 1970
From: Shiyang Ruan <ruansy.fnst@cn.fujitsu.com>
Subject: [RFC PATCH 2/8] mm: add dax-rmap for memory-failure and rmap
Date: Mon, 27 Apr 2020 16:47:44 +0800
Message-ID: <20200427084750.136031-3-ruansy.fnst@cn.fujitsu.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200427084750.136031-1-ruansy.fnst@cn.fujitsu.com>
References: <20200427084750.136031-1-ruansy.fnst@cn.fujitsu.com>

Memory-failure collects and kills processes that are accessing a poisoned,
file-mapped page.  Add dax-rmap iteration to support the reflink case.
Also add the same iteration to the rmap walk.

Signed-off-by: Shiyang Ruan <ruansy.fnst@cn.fujitsu.com>
---
 mm/memory-failure.c | 63 +++++++++++++++++++++++++++++++++++----------
 mm/rmap.c           | 54 +++++++++++++++++++++++++++-----------
 2 files changed, 88 insertions(+), 29 deletions(-)
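
The hunks below rely on the struct shared_file and the per-page rb-tree
stored in page_private() that an earlier patch of this series introduces
(not shown here).  A minimal sketch of the assumed layout, inferred only
from how the fields are used in this diff:

/*
 * Sketch only, inferred from usage below -- the real definition lives
 * in an earlier patch of this series.  One entry records one
 * (file, offset) pair that shares the same page via reflink.
 */
struct shared_file {
	struct address_space	*mapping;	/* owning file's address_space */
	pgoff_t			index;		/* page offset within that file */
	struct rb_node		node;		/* linkage in the per-page rb-tree */
};

Given that, both hunks follow one pattern: save page->mapping and
page->index, temporarily retarget the page at each sharing file so the
existing interval-tree walk finds every mapping, then restore the saved
values.
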
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index a96364be8ab4..6d7da1fd55fd 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -463,36 +463,71 @@ static void collect_procs_anon(struct page *page, struct list_head *to_kill,
 	page_unlock_anon_vma_read(av);
 }
 
+static void collect_each_procs_file(struct page *page,
+		struct task_struct *task,
+		struct list_head *to_kill)
+{
+	struct vm_area_struct *vma;
+	struct address_space *mapping = page->mapping;
+	struct rb_root_cached *root = (struct rb_root_cached *)page_private(page);
+	struct rb_node *node;
+	struct shared_file *shared;
+	pgoff_t pgoff;
+
+	if (dax_mapping(mapping) && root) {
+		struct shared_file save = {
+			.mapping = mapping,
+			.index = page->index,
+		};
+		for (node = rb_first_cached(root); node; node = rb_next(node)) {
+			shared = container_of(node, struct shared_file, node);
+			mapping = page->mapping = shared->mapping;
+			page->index = shared->index;
+			pgoff = page_to_pgoff(page);
+			vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff,
+						  pgoff) {
+				if (vma->vm_mm == task->mm) {
+					// each vma is unique, so is the vaddr.
+					add_to_kill(task, page, vma, to_kill);
+				}
+			}
+		}
+		// restore the mapping and index.
+		page->mapping = save.mapping;
+		page->index = save.index;
+	} else {
+		pgoff = page_to_pgoff(page);
+		vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, pgoff) {
+			/*
+			 * Send early kill signal to tasks where a vma covers
+			 * the page but the corrupted page is not necessarily
+			 * mapped it in its pte.
+			 * Assume applications who requested early kill want
+			 * to be informed of all such data corruptions.
+			 */
+			if (vma->vm_mm == task->mm)
+				add_to_kill(task, page, vma, to_kill);
+		}
+	}
+}
+
 /*
  * Collect processes when the error hit a file mapped page.
  */
 static void collect_procs_file(struct page *page, struct list_head *to_kill,
 				int force_early)
 {
-	struct vm_area_struct *vma;
 	struct task_struct *tsk;
 	struct address_space *mapping = page->mapping;
 
 	i_mmap_lock_read(mapping);
 	read_lock(&tasklist_lock);
 	for_each_process(tsk) {
-		pgoff_t pgoff = page_to_pgoff(page);
 		struct task_struct *t = task_early_kill(tsk, force_early);
 
 		if (!t)
 			continue;
-		vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff,
-				      pgoff) {
-			/*
-			 * Send early kill signal to tasks where a vma covers
-			 * the page but the corrupted page is not necessarily
-			 * mapped it in its pte.
-			 * Assume applications who requested early kill want
-			 * to be informed of all such data corruptions.
-			 */
-			if (vma->vm_mm == t->mm)
-				add_to_kill(t, page, vma, to_kill);
-		}
+		collect_each_procs_file(page, t, to_kill);
 	}
 	read_unlock(&tasklist_lock);
 	i_mmap_unlock_read(mapping);
diff --git a/mm/rmap.c b/mm/rmap.c
index f79a206b271a..69ea66f9e971 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1870,21 +1870,7 @@ static void rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 	anon_vma_unlock_read(anon_vma);
 }
 
-/*
- * rmap_walk_file - do something to file page using the object-based rmap method
- * @page: the page to be handled
- * @rwc: control variable according to each walk type
- *
- * Find all the mappings of a page using the mapping pointer and the vma chains
- * contained in the address_space struct it points to.
- *
- * When called from try_to_munlock(), the mmap_sem of the mm containing the vma
- * where the page was found will be held for write.  So, we won't recheck
- * vm_flags for that VMA.  That should be OK, because that vma shouldn't be
- * LOCKED.
- */
-static void rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
-		bool locked)
+static void rmap_walk_file_one(struct page *page, struct rmap_walk_control *rwc, bool locked)
 {
 	struct address_space *mapping = page_mapping(page);
 	pgoff_t pgoff_start, pgoff_end;
@@ -1925,6 +1911,44 @@ static void rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
 	i_mmap_unlock_read(mapping);
 }
 
+/*
+ * rmap_walk_file - do something to file page using the object-based rmap method
+ * @page: the page to be handled
+ * @rwc: control variable according to each walk type
+ *
+ * Find all the mappings of a page using the mapping pointer and the vma chains
+ * contained in the address_space struct it points to.
+ *
+ * When called from try_to_munlock(), the mmap_sem of the mm containing the vma
+ * where the page was found will be held for write.  So, we won't recheck
+ * vm_flags for that VMA.  That should be OK, because that vma shouldn't be
+ * LOCKED.
+ */
+static void rmap_walk_file(struct page *page, struct rmap_walk_control *rwc,
+		bool locked)
+{
+	struct rb_root_cached *root = (struct rb_root_cached *)page_private(page);
+	struct rb_node *node;
+	struct shared_file *shared;
+
+	if (dax_mapping(page->mapping) && root) {
+		struct shared_file save = {
+			.mapping = page->mapping,
+			.index = page->index,
+		};
+		for (node = rb_first_cached(root); node; node = rb_next(node)) {
+			shared = container_of(node, struct shared_file, node);
+			page->mapping = shared->mapping;
+			page->index = shared->index;
+			rmap_walk_file_one(page, rwc, locked);
+		}
+		// restore the mapping and index.
+		page->mapping = save.mapping;
+		page->index = save.index;
+	} else
+		rmap_walk_file_one(page, rwc, locked);
+}
+
 void rmap_walk(struct page *page, struct rmap_walk_control *rwc)
 {
 	if (unlikely(PageKsm(page)))
-- 
2.26.2