Subject: Re: [v4 PATCH 5/6] mm: shmem: don't truncate page if memory failure happens
From: Yang Shi <shy828301@gmail.com>
Date: Tue, 19 Oct 2021 10:29:51 -0700
To: Naoya Horiguchi
Cc: HORIGUCHI NAOYA(堀口 直也), Hugh Dickins, "Kirill A. Shutemov",
 Matthew Wilcox, Peter Xu, Oscar Salvador, Andrew Morton,
 Linux MM <linux-mm@kvack.org>, Linux FS-devel Mailing List,
 Linux Kernel Mailing List
In-Reply-To: <20211019055221.GC2268449@u2004>
References: <20211014191615.6674-1-shy828301@gmail.com>
 <20211014191615.6674-6-shy828301@gmail.com>
 <20211019055221.GC2268449@u2004>

On Mon, Oct 18, 2021 at 10:52 PM Naoya Horiguchi wrote:
>
> On Thu, Oct 14, 2021 at 12:16:14PM -0700, Yang Shi wrote:
> > The current behavior of memory failure is to truncate the page cache
> > regardless of dirty or clean. If the page is dirty, a later access
> > will get the obsolete data from disk without any notification to the
> > user. This may cause silent data loss. It is even worse for shmem:
> > since shmem is an in-memory filesystem, truncating the page cache means
> > discarding the data blocks, and a later read would return all zeroes.
> >
> > The right approach is to keep the corrupted page in the page cache; any
> > later access would return an error for syscalls or SIGBUS for page
> > faults, until the file is truncated, hole punched or removed. Regular
> > storage-backed filesystems would be more complicated, so this patch
> > is focused on shmem. This also unblocks support for soft
> > offlining shmem THP.
> >
> > Signed-off-by: Yang Shi
> > ---
> >  mm/memory-failure.c | 10 +++++++++-
> >  mm/shmem.c          | 37 ++++++++++++++++++++++++++++++++++---
> >  mm/userfaultfd.c    |  5 +++++
> >  3 files changed, 48 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > index cdf8ccd0865f..f5eab593b2a7 100644
> > --- a/mm/memory-failure.c
> > +++ b/mm/memory-failure.c
> > @@ -57,6 +57,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >  #include "internal.h"
> >  #include "ras/ras_event.h"
> >
> > @@ -866,6 +867,7 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
> >  {
> >  	int ret;
> >  	struct address_space *mapping;
> > +	bool extra_pins;
> >
> >  	delete_from_lru_cache(p);
> >
> > @@ -894,6 +896,12 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
> >  		goto out;
> >  	}
> >
> > +	/*
> > +	 * The shmem page is kept in page cache instead of truncating
> > +	 * so is expected to have an extra refcount after error-handling.
> > +	 */
> > +	extra_pins = shmem_mapping(mapping);
> > +
> >  	/*
> >  	 * Truncation is a bit tricky. Enable it per file system for now.
> >  	 *
> > @@ -903,7 +911,7 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p)
> >  out:
> >  	unlock_page(p);
> >
> > -	if (has_extra_refcount(ps, p, false))
> > +	if (has_extra_refcount(ps, p, extra_pins))
> >  		ret = MF_FAILED;
> >
> >  	return ret;
> > diff --git a/mm/shmem.c b/mm/shmem.c
> > index b5860f4a2738..69eaf65409e6 100644
> > --- a/mm/shmem.c
> > +++ b/mm/shmem.c
> > @@ -2456,6 +2456,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
> >  	struct inode *inode = mapping->host;
> >  	struct shmem_inode_info *info = SHMEM_I(inode);
> >  	pgoff_t index = pos >> PAGE_SHIFT;
> > +	int ret = 0;
> >
> >  	/* i_rwsem is held by caller */
> >  	if (unlikely(info->seals & (F_SEAL_GROW |
> > @@ -2466,7 +2467,15 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
> >  		return -EPERM;
> >  	}
> >
> > -	return shmem_getpage(inode, index, pagep, SGP_WRITE);
> > +	ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
> > +
> > +	if (*pagep && PageHWPoison(*pagep)) {
>
> shmem_getpage() could return with pagep == NULL, so you need to check ret
> first to avoid a NULL pointer dereference.

Really? IIUC pagep can't be NULL. It is a pointer to a pointer passed in by
the caller, for example, generic_perform_write(). Of course, "*pagep" could
be NULL.

> > +		unlock_page(*pagep);
> > +		put_page(*pagep);
> > +		ret = -EIO;
> > +	}
> > +
> > +	return ret;
> >  }
> >
> >  static int
> > @@ -2555,6 +2564,11 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
> >  		unlock_page(page);
> >  	}
> >
> > +	if (page && PageHWPoison(page)) {
> > +		error = -EIO;
>
> Is it cleaner to add the PageHWPoison() check in the existing "if (page)"
> block just above? Then you don't have to check "page != NULL" twice.
> > @@ -2562,7 +2562,11 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
>  		if (sgp == SGP_CACHE)
>  			set_page_dirty(page);
>  		unlock_page(page);
>
>  +		if (PageHWPoison(page)) {
>  +			error = -EIO;
>  +			break;
>  +		}

Yeah, it looks better indeed.

> 	}
>
> 	/*
>
> Thanks,
> Naoya Horiguchi