From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Yang Shi <shy828301@gmail.com>, Matthew Wilcox <willy@infradead.org>
Cc: akpm@linux-foundation.org, hughd@google.com, david@redhat.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] mm: shmem: improve the tmpfs large folio read performance
Date: Thu, 17 Oct 2024 11:25:39 +0800	[thread overview]
Message-ID: <2b3572e1-a618-4f86-979d-87f59282fe8f@linux.alibaba.com> (raw)
In-Reply-To: <CAHbLzkogrubD_rPH7zf1T454r-BsxL951YH=rGAfNqPZJSCGow@mail.gmail.com>



On 2024/10/17 01:33, Yang Shi wrote:
> On Wed, Oct 16, 2024 at 8:38 AM Matthew Wilcox <willy@infradead.org> wrote:
>>
>> On Wed, Oct 16, 2024 at 06:09:30PM +0800, Baolin Wang wrote:
>>> @@ -3128,8 +3127,9 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
>>>                if (folio) {
>>>                        folio_unlock(folio);
>>>
>>> -                     page = folio_file_page(folio, index);
>>> -                     if (PageHWPoison(page)) {
>>> +                     if (folio_test_hwpoison(folio) ||
>>> +                         (folio_test_large(folio) &&
>>> +                          folio_test_has_hwpoisoned(folio))) {
>>
>> Hm, so if we have hwpoison set on one page in a folio, we now can't read
>> bytes from any page in the folio?  That seems like we've made a bad
>> situation worse.
> 
> Yeah, I agree. I think we can fall back to page copy if
> folio_test_has_hwpoisoned() is true. The PG_hwpoison flag is per page.
> 
> folio_test_has_hwpoisoned() stays set if the folio split fails in the
> memory failure handler.

Right. I can still keep the page-size copy if 
folio_test_has_hwpoisoned() is true. Some sample changes are below.

Moreover, I noticed that shmem's splice_read() and write() paths also 
simply return an error if folio_test_has_hwpoisoned() is true, without 
any fallback to page granularity. I wonder if it is worth adding 
page-granularity support there as well?
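
Something like the hypothetical helper below might be enough for those 
paths. This is only a rough sketch of the idea (shmem_range_hwpoisoned() 
is a made-up name, not an existing function): check only the subpages 
that the request actually touches, so a poisoned subpage elsewhere in 
the large folio does not fail the whole operation.

static bool shmem_range_hwpoisoned(struct folio *folio, size_t offset,
				   size_t len)
{
	/* @offset/@len are relative to the start of the folio. */
	pgoff_t first = offset >> PAGE_SHIFT;
	pgoff_t last = (offset + len - 1) >> PAGE_SHIFT;
	pgoff_t i;

	if (!folio_test_large(folio))
		return folio_test_hwpoison(folio);

	/*
	 * A large folio keeps PG_has_hwpoisoned set when one of its
	 * subpages is poisoned; only then do the per-page flags need
	 * to be checked at all.
	 */
	if (!folio_test_has_hwpoisoned(folio) && !folio_test_hwpoison(folio))
		return false;

	/* Only the subpages covered by the request need to be readable. */
	for (i = first; i <= last; i++)
		if (PageHWPoison(folio_page(folio, i)))
			return true;

	return false;
}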

diff --git a/mm/shmem.c b/mm/shmem.c
index 7e79b6a96da0..f30e24e529b9 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -3111,9 +3111,11 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)

         for (;;) {
                 struct folio *folio = NULL;
+               struct page *page = NULL;
                 unsigned long nr, ret;
                 loff_t end_offset, i_size = i_size_read(inode);
                 size_t fsize;
+               bool fallback_page_copy = false;

                 if (unlikely(iocb->ki_pos >= i_size))
                         break;
@@ -3127,13 +3129,16 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
                 if (folio) {
                         folio_unlock(folio);

-                       if (folio_test_hwpoison(folio) ||
-                           (folio_test_large(folio) &&
-                            folio_test_has_hwpoisoned(folio))) {
+                       page = folio_file_page(folio, index);
+                       if (PageHWPoison(page)) {
                                 folio_put(folio);
                                 error = -EIO;
                                 break;
                         }
+
+                       if (folio_test_large(folio) &&
+                           folio_test_has_hwpoisoned(folio))
+                               fallback_page_copy = true;
                 }

                 /*
@@ -3147,7 +3152,7 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
                         break;
                 }
                 end_offset = min_t(loff_t, i_size, iocb->ki_pos + to->count);
-               if (folio)
+               if (folio && likely(!fallback_page_copy))
                         fsize = folio_size(folio);
                 else
                         fsize = PAGE_SIZE;
@@ -3160,8 +3165,13 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
                          * virtual addresses, take care about potential aliasing
                          * before reading the page on the kernel side.
                          */
-                       if (mapping_writably_mapped(mapping))
-                               flush_dcache_folio(folio);
+                       if (mapping_writably_mapped(mapping)) {
+                               if (unlikely(fallback_page_copy))
+                                       flush_dcache_page(page);
+                               else
+                                       flush_dcache_folio(folio);
+                       }
+
                         /*
                          * Mark the page accessed if we read the beginning.
                          */
@@ -3171,7 +3181,10 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
                          * Ok, we have the page, and it's up-to-date, so
                          * now we can copy it to user space...
                          */
-                       ret = copy_folio_to_iter(folio, offset, nr, to);
+                       if (unlikely(fallback_page_copy))
+                               ret = copy_page_to_iter(page, offset, nr, to);
+                       else
+                               ret = copy_folio_to_iter(folio, offset, nr, to);
                         folio_put(folio);
                 } else if (user_backed_iter(to)) {
                         /*
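
For reference, the state involved here lives at two granularities: 
PG_hwpoison is a per-page flag, while PG_has_hwpoisoned is a single 
summary bit on the large folio. A debug-style illustration of the 
relationship (dump_poison_state() is just a sketch, not a real 
function):

static void dump_poison_state(struct folio *folio)
{
	long i;

	/* Per-folio summary: some subpage of a large folio is poisoned. */
	pr_info("order %u, has_hwpoisoned %d\n", folio_order(folio),
		folio_test_large(folio) && folio_test_has_hwpoisoned(folio));

	/* Per-page truth: which subpages are actually unreadable. */
	for (i = 0; i < folio_nr_pages(folio); i++)
		pr_info("  subpage %ld: hwpoison %d\n", i,
			PageHWPoison(folio_page(folio, i)));
}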

