linux-mm.kvack.org archive mirror
* [PATCH] mm: fix shrink nr.unqueued_dirty counter issue
@ 2023-12-08 11:29 Zhiguo Jiang
  0 siblings, 0 replies; 3+ messages in thread
From: Zhiguo Jiang @ 2023-12-08 11:29 UTC (permalink / raw)
  To: Andrew Morton, linux-mm, linux-kernel; +Cc: opensource.kernel, Zhiguo Jiang

Ensure that sc->nr.unqueued_dirty > 0 before setting the PGDAT_DIRTY
flag, which avoids setting the flag when sc->nr.unqueued_dirty and
sc->nr.file_taken are both zero.

Signed-off-by: Zhiguo Jiang <justinjiang@vivo.com>
---
 mm/vmscan.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
 mode change 100644 => 100755 mm/vmscan.c

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4e3b835c6b4a..12680c392bb0
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -5908,7 +5908,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 			set_bit(PGDAT_WRITEBACK, &pgdat->flags);
 
 		/* Allow kswapd to start writing pages during reclaim.*/
-		if (sc->nr.unqueued_dirty == sc->nr.file_taken)
+		if (sc->nr.unqueued_dirty && sc->nr.unqueued_dirty == sc->nr.file_taken)
 			set_bit(PGDAT_DIRTY, &pgdat->flags);
 
 		/*
-- 
2.39.0
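
For illustration, here is a minimal user-space sketch of the old and
new conditions. The struct and values are hypothetical stand-ins for
the sc->nr counters, not kernel code:

	/* sketch.c: shows why "0 == 0" spuriously satisfies the old
	 * check when no file pages were taken at all. */
	#include <stdbool.h>
	#include <stdio.h>

	struct nr_stats {
		unsigned long unqueued_dirty;
		unsigned long file_taken;
	};

	/* Old condition: true whenever the counters are equal,
	 * including the degenerate 0 == 0 case. */
	static bool old_check(const struct nr_stats *nr)
	{
		return nr->unqueued_dirty == nr->file_taken;
	}

	/* New condition: additionally requires that unqueued dirty
	 * pages were actually seen. */
	static bool new_check(const struct nr_stats *nr)
	{
		return nr->unqueued_dirty &&
			nr->unqueued_dirty == nr->file_taken;
	}

	int main(void)
	{
		struct nr_stats idle  = { .unqueued_dirty = 0,  .file_taken = 0 };
		struct nr_stats dirty = { .unqueued_dirty = 32, .file_taken = 32 };

		/* Old check fires on the idle node; the new one does not. */
		printf("idle node:  old=%d new=%d\n",
		       old_check(&idle), new_check(&idle));
		/* Both checks fire when every taken page is unqueued dirty. */
		printf("dirty node: old=%d new=%d\n",
		       old_check(&dirty), new_check(&dirty));
		return 0;
	}

With the old condition, an idle node on which no file pages were taken
would be flagged PGDAT_DIRTY; the added check restricts the flag to the
case where unqueued dirty pages were actually encountered.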




* Re: [PATCH] mm: fix shrink nr.unqueued_dirty counter issue
  2024-01-12  1:23 Zhiguo Jiang
@ 2024-09-25 20:51 ` Andrew Morton
  0 siblings, 0 replies; 3+ messages in thread
From: Andrew Morton @ 2024-09-25 20:51 UTC (permalink / raw)
  To: Zhiguo Jiang; +Cc: linux-mm, linux-kernel, opensource.kernel

On Fri, 12 Jan 2024 09:23:52 +0800 Zhiguo Jiang <justinjiang@vivo.com> wrote:

> Ensure that sc->nr.unqueued_dirty > 0 before setting the PGDAT_DIRTY
> flag, which avoids setting the flag when sc->nr.unqueued_dirty and
> sc->nr.file_taken are both zero.
> 
> ...
>
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -5957,7 +5957,8 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>  			set_bit(PGDAT_WRITEBACK, &pgdat->flags);
>  
>  		/* Allow kswapd to start writing pages during reclaim.*/
> -		if (sc->nr.unqueued_dirty == sc->nr.file_taken)
> +		if (sc->nr.unqueued_dirty &&
> +			sc->nr.unqueued_dirty == sc->nr.file_taken)
>  			set_bit(PGDAT_DIRTY, &pgdat->flags);
>  

Seems sensible.  Was this discovered by code inspection, or is there
some observable runtime effect?  If the latter, can you please describe
that effect?



* [PATCH] mm: fix shrink nr.unqueued_dirty counter issue
@ 2024-01-12  1:23 Zhiguo Jiang
  2024-09-25 20:51 ` Andrew Morton
  0 siblings, 1 reply; 3+ messages in thread
From: Zhiguo Jiang @ 2024-01-12  1:23 UTC (permalink / raw)
  To: Andrew Morton, linux-mm, linux-kernel; +Cc: opensource.kernel, Zhiguo Jiang

Ensure that sc->nr.unqueued_dirty > 0 before setting the PGDAT_DIRTY
flag, which avoids setting the flag when sc->nr.unqueued_dirty and
sc->nr.file_taken are both zero.

Signed-off-by: Zhiguo Jiang <justinjiang@vivo.com>
---
 mm/vmscan.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
 mode change 100644 => 100755 mm/vmscan.c

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 91e7d334a7ca..7c0cd7ecdfab
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -5957,7 +5957,8 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 			set_bit(PGDAT_WRITEBACK, &pgdat->flags);
 
 		/* Allow kswapd to start writing pages during reclaim.*/
-		if (sc->nr.unqueued_dirty == sc->nr.file_taken)
+		if (sc->nr.unqueued_dirty &&
+			sc->nr.unqueued_dirty == sc->nr.file_taken)
 			set_bit(PGDAT_DIRTY, &pgdat->flags);
 
 		/*
-- 
2.39.0




