From: Yang Shi
Date: Wed, 20 Oct 2021 11:32:39 -0700
Subject: Re: [v4 PATCH 5/6] mm: shmem: don't truncate page if memory failure happens
To: Naoya Horiguchi
Cc: HORIGUCHI NAOYA(堀口 直也), Hugh Dickins, "Kirill A. Shutemov", Matthew Wilcox, Peter Xu, Oscar Salvador, Andrew Morton, Linux MM, Linux FS-devel Mailing List, Linux Kernel Mailing List
In-Reply-To: <20211019055221.GC2268449@u2004>
References: <20211014191615.6674-1-shy828301@gmail.com> <20211014191615.6674-6-shy828301@gmail.com> <20211019055221.GC2268449@u2004>

On Mon, Oct 18, 2021 at 10:52 PM Naoya Horiguchi wrote:
>
> On Thu, Oct 14, 2021 at 12:16:14PM -0700, Yang Shi wrote:
> > The current behavior of memory failure is to truncate the page cache
> > regardless of dirty or clean. If the page is dirty, a later access
> > will get the obsolete data from disk without any notification to the
> > user. This may cause silent data loss. It is even worse for shmem:
> > since shmem is an in-memory filesystem, truncating the page cache
> > means discarding the data blocks, and a later read would return all
> > zeroes.
> >
> > The right approach is to keep the corrupted page in the page cache;
> > any later access would return an error for syscalls or SIGBUS for a
> > page fault, until the file is truncated, hole punched or removed.
> > Regular storage-backed filesystems would be more complicated, so this
> > patch focuses on shmem. This also unblocks support for soft
> > offlining shmem THPs.
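The data-loss contrast the commit message describes (truncate-and-read-zeroes versus keep-and-fail) can be sketched with a small userspace C model. All names here (`struct cache_page`, `cache_read`) are illustrative stand-ins, not kernel API:

```c
#include <string.h>
#include <stddef.h>

#define EIO 5

/* Hypothetical stand-in for a page-cache entry: payload plus poison flag. */
struct cache_page {
        char data[8];
        int  hwpoison;
};

/*
 * Sketch of the two behaviours, under the assumptions above:
 *  - old: the poisoned page is truncated (pg == NULL); a later read
 *    silently sees zeroes, indistinguishable from a legitimate hole
 *  - new (shmem, this patch): the page stays cached and reads fail
 *    with -EIO, so corrupt data is never silently handed out
 */
static int cache_read(const struct cache_page *pg, char *buf, size_t len)
{
        if (!pg) {                 /* truncated: hole reads back as zeroes */
                memset(buf, 0, len);
                return 0;
        }
        if (pg->hwpoison)          /* kept in cache: fail loudly instead */
                return -EIO;
        memcpy(buf, pg->data, len);
        return (int)len;
}
```

With the old behaviour a caller cannot distinguish lost data from a hole; keeping the poisoned entry turns silent corruption into a hard -EIO.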
> >
> > Signed-off-by: Yang Shi
> > ---
> >  mm/memory-failure.c | 10 +++++++++-
> >  mm/shmem.c          | 37 ++++++++++++++++++++++++++++++++++---
> >  mm/userfaultfd.c    |  5 +++++
> >  3 files changed, 48 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > index cdf8ccd0865f..f5eab593b2a7 100644
> > --- a/mm/memory-failure.c
> > +++ b/mm/memory-failure.c
> > @@ -57,6 +57,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >  #include "internal.h"
> >  #include "ras/ras_event.h"
> >
> > @@ -866,6 +867,7 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
> >  {
> >          int ret;
> >          struct address_space *mapping;
> > +        bool extra_pins;
> >
> >          delete_from_lru_cache(p);
> >
> > @@ -894,6 +896,12 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
> >                  goto out;
> >          }
> >
> > +        /*
> > +         * The shmem page is kept in the page cache instead of being
> > +         * truncated, so it is expected to have an extra refcount
> > +         * after error-handling.
> > +         */
> > +        extra_pins = shmem_mapping(mapping);
> > +
> >          /*
> >           * Truncation is a bit tricky. Enable it per file system for now.
> >           *
> > @@ -903,7 +911,7 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
> >  out:
> >          unlock_page(p);
> >
> > -        if (has_extra_refcount(ps, p, false))
> > +        if (has_extra_refcount(ps, p, extra_pins))
> >                  ret = MF_FAILED;
> >
> >          return ret;
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index b5860f4a2738..69eaf65409e6 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -2456,6 +2456,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
> >          struct inode *inode = mapping->host;
> >          struct shmem_inode_info *info = SHMEM_I(inode);
> >          pgoff_t index = pos >> PAGE_SHIFT;
> > +        int ret = 0;
> >
> >          /* i_rwsem is held by caller */
> >          if (unlikely(info->seals & (F_SEAL_GROW |
> > @@ -2466,7 +2467,15 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
> >                  return -EPERM;
> >          }
> >
> > -        return shmem_getpage(inode, index, pagep, SGP_WRITE);
> > +        ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
> > +
> > +        if (*pagep && PageHWPoison(*pagep)) {
>
> shmem_getpage() could return with *pagep == NULL, so you need to check ret
> first to avoid a NULL pointer dereference.
>
> > +                unlock_page(*pagep);
> > +                put_page(*pagep);
> > +                ret = -EIO;
> > +        }
> > +
> > +        return ret;
> >  }
> >
> >  static int
> > @@ -2555,6 +2564,11 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
> >                  unlock_page(page);
> >          }
> >
> > +        if (page && PageHWPoison(page)) {
> > +                error = -EIO;
>
> Is it cleaner to add the PageHWPoison() check in the existing "if (page)"
> block just above? Then you don't have to check "page != NULL" twice.
>
> @@ -2562,7 +2562,11 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
>          if (sgp == SGP_CACHE)
>                  set_page_dirty(page);
>          unlock_page(page);
>
> +        if (PageHWPoison(page)) {
> +                error = -EIO;
> +                break;

Further looking shows I missed a "put_page" in the first place. Will fix in
the next version too.

> +        }
>  }
>
>  /*
>
> Thanks,
> Naoya Horiguchi
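Taken together, the two points raised in this review (check the shmem_getpage() return value before dereferencing *pagep, and drop the page reference that was initially missed) suggest an error path like the sketch below. This is a standalone userspace model with stubbed types and hypothetical names (`write_begin_tail`, the simplified `struct page`), not the actual kernel fix:

```c
#include <stdbool.h>
#include <stddef.h>

#define EIO 5

/* Minimal stand-ins for the kernel objects involved (illustrative only). */
struct page { bool hwpoison; bool locked; int refcount; };

static bool PageHWPoison(const struct page *p) { return p->hwpoison; }
static void unlock_page(struct page *p)        { p->locked = false; }
static void put_page(struct page *p)           { p->refcount--; }

/*
 * Model of the corrected tail of shmem_write_begin(): on lookup failure
 * *pagep may be NULL, so bail out on ret before touching it; on a
 * poisoned page, release both the page lock and the reference taken by
 * the lookup before failing with -EIO.
 */
static int write_begin_tail(int ret, struct page **pagep)
{
        if (ret)
                return ret;        /* lookup failed; *pagep may be NULL */

        if (*pagep && PageHWPoison(*pagep)) {
                unlock_page(*pagep);
                put_page(*pagep);
                return -EIO;
        }
        return 0;
}
```

The ordering matters: testing `*pagep` before `ret` is exactly the NULL-dereference hazard pointed out above, and returning -EIO without the put_page() would leak the reference the lookup took.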