linux-mm.kvack.org archive mirror
* [PATCH v1] mm/vmscan: Account hwpoisoned folios in reclaim statistics
@ 2025-07-02  9:34 18810879172
  2025-07-02  9:44 ` David Hildenbrand
  0 siblings, 1 reply; 3+ messages in thread
From: 18810879172 @ 2025-07-02  9:34 UTC (permalink / raw)
  To: akpm; +Cc: david, zhengqi.arch, linux-mm, linux-kernel, wangxuewen

From: wangxuewen <wangxuewen@kylinos.cn>

When encountering a hardware-poisoned folio in shrink_folio_list(),
we unmap and release the folio but fail to account it in the reclaim
statistics (sc->nr_reclaimed). This leads to an undercount of
actually reclaimed pages, potentially causing unnecessary additional
reclaim pressure.
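
For reference, the hwpoison branch looks roughly like this (context
reconstructed from the diff below; the guard condition is paraphrased
and may differ across kernel versions):

	/* in the main folio loop of shrink_folio_list(), mm/vmscan.c */
	if (unlikely(folio_test_hwpoison(folio))) {	/* condition paraphrased */
		unmap_poisoned_folio(folio, folio_pfn(folio), false);
		folio_unlock(folio);
		folio_put(folio);
		/* the folio is released; we continue without ever
		 * touching sc->nr_reclaimed */
		continue;
	}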

Fix this by adding sc->nr_reclaimed += folio_nr_pages(folio) in the
hwpoison handling block, before folio_put() drops the last reference
(after which the folio must not be touched). This matches the accounting
done in other reclaim paths.

Signed-off-by: wangxuewen <wangxuewen@kylinos.cn>
---
 mm/vmscan.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index f8dfd2864bbf..4c612f4b6e66 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1141,6 +1141,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
 			unmap_poisoned_folio(folio, folio_pfn(folio), false);
 			folio_unlock(folio);
+			sc->nr_reclaimed += folio_nr_pages(folio);
 			folio_put(folio);
 			continue;
 		}
 
-- 
2.34.1

* Re: [PATCH v1] mm/vmscan: Account hwpoisoned folios in reclaim statistics
  2025-07-02  9:34 [PATCH v1] mm/vmscan: Account hwpoisoned folios in reclaim statistics 18810879172
@ 2025-07-02  9:44 ` David Hildenbrand
  2025-07-04  3:14   ` wangxuewen
  0 siblings, 1 reply; 3+ messages in thread
From: David Hildenbrand @ 2025-07-02  9:44 UTC (permalink / raw)
  To: 18810879172, akpm; +Cc: zhengqi.arch, linux-mm, linux-kernel, wangxuewen

On 02.07.25 11:34, 18810879172@163.com wrote:
> From: wangxuewen <wangxuewen@kylinos.cn>
> 
> When encountering a hardware-poisoned folio in shrink_folio_list(),
> we unmap and release the folio but fail to account it in the reclaim
> statistics (sc->nr_reclaimed). This leads to an undercount of
> actually reclaimed pages, potentially causing unnecessary additional
> reclaim pressure.

I'll just note that this kind-of makes sense: the memory is not actually 
reclaimed -- we don't get free memory back. The hwpoisoned page is lost.
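
To make that concrete, here is a toy userspace model (illustrative only,
not kernel code): if hwpoisoned folios were counted as reclaimed, the
reclaim target could be met on paper while fewer pages actually become
free.

	/*
	 * Toy model: nr_reclaimed is meant to track pages that became
	 * free for reuse. A hwpoisoned folio is unmapped and released,
	 * but its memory is permanently lost, so counting it inflates
	 * the number.
	 */
	#include <stdio.h>

	struct scan_control {
		unsigned long nr_reclaimed;
		unsigned long nr_to_reclaim;
	};

	int main(void)
	{
		struct scan_control sc = { .nr_reclaimed = 0, .nr_to_reclaim = 32 };
		unsigned long actually_freed = 0;

		for (unsigned long i = 0; sc.nr_reclaimed < sc.nr_to_reclaim; i++) {
			if (i % 4 == 0) {
				/* hwpoisoned: unmapped and dropped, but no
				 * free page comes back -- counting it anyway,
				 * as the patch proposed, satisfies the target
				 * on paper only */
				sc.nr_reclaimed++;
				continue;
			}
			sc.nr_reclaimed++;	/* genuinely reclaimed */
			actually_freed++;
		}
		printf("nr_reclaimed=%lu, actually freed=%lu\n",
		       sc.nr_reclaimed, actually_freed);
		return 0;
	}

Running it prints nr_reclaimed=32 but actually freed=24: the loop stops
once the target is "met" even though a quarter of the pages never came
back.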

-- 
Cheers,

David / dhildenb

* Re: [PATCH v1] mm/vmscan: Account hwpoisoned folios in reclaim statistics
  2025-07-02  9:44 ` David Hildenbrand
@ 2025-07-04  3:14   ` wangxuewen
  0 siblings, 0 replies; 3+ messages in thread
From: wangxuewen @ 2025-07-04  3:14 UTC (permalink / raw)
  To: David Hildenbrand, akpm; +Cc: zhengqi.arch, linux-mm, linux-kernel, wangxuewen

Hi David,

Thank you for your insightful feedback. You make an excellent point -
hwpoisoned pages are indeed not truly "reclaimed" as they don't contribute
to available memory but represent permanently lost capacity.

I will drop this patch.

Best regards,
wangxuewen



On 2025/7/2 17:44, David Hildenbrand wrote:
> On 02.07.25 11:34, 18810879172@163.com wrote:
>> From: wangxuewen <wangxuewen@kylinos.cn>
>>
>> When encountering a hardware-poisoned folio in shrink_folio_list(),
>> we unmap and release the folio but fail to account it in the reclaim
>> statistics (sc->nr_reclaimed). This leads to an undercount of
>> actually reclaimed pages, potentially causing unnecessary additional
>> reclaim pressure.
> 
> I'll just note that this kind-of makes sense: the memory is not actually 
> reclaimed -- we don't get free memory back. The hwpoisoned page is lost.