From: Miaohe Lin <linmiaohe@huawei.com>
To: Naoya Horiguchi <naoya.horiguchi@linux.dev>
Cc: <akpm@linux-foundation.org>, <naoya.horiguchi@nec.com>,
	<linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
	Matthew Wilcox <willy@infradead.org>
Subject: Re: [PATCH] mm: memory-failure: remove unneeded page state check in shake_page()
Date: Mon, 26 Jun 2023 09:43:44 +0800
Message-ID: <2cd57a67-1cb2-83b8-3f73-6da72cd6159d@huawei.com>
In-Reply-To: <20230626005221.GA353339@ik1-406-35019.vs.sakura.ne.jp>

On 2023/6/26 8:52, Naoya Horiguchi wrote:
> On Sun, Jun 25, 2023 at 07:34:30PM +0800, Miaohe Lin wrote:
>> Remove the unneeded PageLRU(p) and is_free_buddy_page(p) check, as slab
>> caches are not shrunk now. This check can be added back once a lightweight
>> range-based shrinker is available.
>>
>> Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
> 
> This looks to me like a good cleanup, because the result of the
> "if (PageLRU(p) || is_free_buddy_page(p))" check is not used, so the check
> itself is unneeded.
> 
>> ---
>>  mm/memory-failure.c | 9 ++++-----
>>  1 file changed, 4 insertions(+), 5 deletions(-)
>>
>> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
>> index 5b663eca1f29..92f951df3e87 100644
>> --- a/mm/memory-failure.c
>> +++ b/mm/memory-failure.c
>> @@ -373,11 +373,10 @@ void shake_page(struct page *p)
>>  	if (PageHuge(p))
>>  		return;
>>  
>> -	if (!PageSlab(p)) {
>> -		lru_add_drain_all();
>> -		if (PageLRU(p) || is_free_buddy_page(p))
>> -			return;
>> -	}
>> +	if (PageSlab(p))
>> +		return;
>> +
>> +	lru_add_drain_all();
>>  
>>  	/*
>>  	 * TODO: Could shrink slab caches here if a lightweight range-based
> 
> I think this TODO comment can be put together with the "if (PageSlab)" block.

Thanks for your comment and advice. Do you mean something like the diff below?

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 5b663eca1f29..66e7b3ceaf2d 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -372,17 +372,14 @@ void shake_page(struct page *p)
 {
        if (PageHuge(p))
                return;
-
-       if (!PageSlab(p)) {
-               lru_add_drain_all();
-               if (PageLRU(p) || is_free_buddy_page(p))
-                       return;
-       }
-
        /*
         * TODO: Could shrink slab caches here if a lightweight range-based
         * shrinker will be available.
         */
+       if (PageSlab(p))
+               return;
+
+       lru_add_drain_all();
 }
 EXPORT_SYMBOL_GPL(shake_page);
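
To make it explicit why the removed check was dead code: nothing follows
the inner block, so the early return changes nothing observable. A sketch
of the pre-patch control flow, reconstructed from the hunk quoted above
(not the exact tree state):

/* Pre-patch shake_page(), reconstructed from the quoted hunk. */
void shake_page(struct page *p)
{
	if (PageHuge(p))
		return;

	if (!PageSlab(p)) {
		lru_add_drain_all();
		if (PageLRU(p) || is_free_buddy_page(p))
			return;	/* dead: the function ends right after this block anyway */
	}

	/*
	 * TODO: Could shrink slab caches here if a lightweight range-based
	 * shrinker will be available.
	 */
}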

Thanks.

