linux-mm.kvack.org archive mirror
* [PATCH] mm: vmscan: reset sc->priority on retry
@ 2024-05-29 15:49 Shakeel Butt
  2024-05-29 16:20 ` Roman Gushchin
  0 siblings, 1 reply; 3+ messages in thread
From: Shakeel Butt @ 2024-05-29 15:49 UTC (permalink / raw)
  To: Andrew Morton, Johannes Weiner
  Cc: Rik van Riel, Roman Gushchin, Michal Hocko, Facebook Kernel Team,
	linux-mm, linux-kernel, syzbot+17416257cb95200cba44

Commit 6be5e186fd65 ("mm: vmscan: restore incremental cgroup
iteration") added a retry-reclaim heuristic that iterates over all the
cgroups before returning an unsuccessful reclaim, but it missed
resetting sc->priority for the retry. Let's fix it.

Reported-and-tested-by: syzbot+17416257cb95200cba44@syzkaller.appspotmail.com
Fixes: 6be5e186fd65 ("mm: vmscan: restore incremental cgroup iteration")
Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
---
 mm/vmscan.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b9170f767353..731b009a142b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6317,6 +6317,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
 	 * meaningful forward progress. Avoid false OOMs in this case.
 	 */
 	if (!sc->memcg_full_walk) {
+		sc->priority = initial_priority;
 		sc->memcg_full_walk = 1;
 		goto retry;
 	}
-- 
2.43.0
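For readers outside the kernel tree, the failure mode the one-liner fixes can be illustrated with a small standalone sketch. This is a hypothetical, heavily simplified analogue of the retry path in do_try_to_free_pages(), not the actual kernel code; the names mirror the kernel's but shrink() is a stand-in that only "succeeds" on a full-walk pass at full priority, to show why retrying with a stale priority of -1 goes nowhere:

```c
#include <assert.h>

/* Simplified analogue of the kernel's reclaim priority range. */
#define DEF_PRIORITY 12

struct scan_control {
	int priority;
	int memcg_full_walk;
};

/* Stand-in for shrink_zones(): pretend reclaim only makes progress
 * on the full-walk retry, and only at high scan priority. */
static int shrink(struct scan_control *sc)
{
	return sc->memcg_full_walk && sc->priority == DEF_PRIORITY;
}

static int try_to_free_pages(struct scan_control *sc, int apply_fix)
{
	int initial_priority = sc->priority;

retry:
	do {
		if (shrink(sc))
			return 1;	/* progress: no false OOM */
	} while (--sc->priority >= 0);

	/* Retry with a full cgroup walk before giving up. */
	if (!sc->memcg_full_walk) {
		if (apply_fix)
			sc->priority = initial_priority; /* the one-line fix */
		sc->memcg_full_walk = 1;
		goto retry;
	}
	return 0;	/* no progress: false OOM risk */
}
```

Without the fix, the retry re-enters the do-while with sc->priority already at -1, so the full walk gets exactly one useless low-priority pass; with the reset, the retry scans from DEF_PRIORITY down again and succeeds.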




* Re: [PATCH] mm: vmscan: reset sc->priority on retry
  2024-05-29 15:49 [PATCH] mm: vmscan: reset sc->priority on retry Shakeel Butt
@ 2024-05-29 16:20 ` Roman Gushchin
  2024-05-29 17:08   ` Shakeel Butt
  0 siblings, 1 reply; 3+ messages in thread
From: Roman Gushchin @ 2024-05-29 16:20 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Andrew Morton, Johannes Weiner, Rik van Riel, Michal Hocko,
	Facebook Kernel Team, linux-mm, linux-kernel,
	syzbot+17416257cb95200cba44

On Wed, May 29, 2024 at 08:49:11AM -0700, Shakeel Butt wrote:
> Commit 6be5e186fd65 ("mm: vmscan: restore incremental cgroup
> iteration") added a retry-reclaim heuristic that iterates over all the
> cgroups before returning an unsuccessful reclaim, but it missed
> resetting sc->priority for the retry. Let's fix it.
> 
> Reported-and-tested-by: syzbot+17416257cb95200cba44@syzkaller.appspotmail.com
> Fixes: 6be5e186fd65 ("mm: vmscan: restore incremental cgroup iteration")
> Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>

Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev>

Good catch!

> ---
>  mm/vmscan.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index b9170f767353..731b009a142b 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -6317,6 +6317,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
>  	 * meaningful forward progress. Avoid false OOMs in this case.
>  	 */
>  	if (!sc->memcg_full_walk) {
> +		sc->priority = initial_priority;
>  		sc->memcg_full_walk = 1;
>  		goto retry;
>  	}
> -- 
> 2.43.0
> 

I wonder if it makes sense to refactor things to be more robust like this:

diff --git a/mm/vmscan.c b/mm/vmscan.c
index d3ae6bf1b65c7..f150e79f736da 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6246,7 +6246,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
        if (!cgroup_reclaim(sc))
                __count_zid_vm_events(ALLOCSTALL, sc->reclaim_idx, 1);

-       do {
+       for (sc->priority = initial_priority; sc->priority >= 0; sc->priority--) {
                if (!sc->proactive)
                        vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
                                        sc->priority);
@@ -6265,7 +6265,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
                 */
                if (sc->priority < DEF_PRIORITY - 2)
                        sc->may_writepage = 1;
-       } while (--sc->priority >= 0);
+       }

        last_pgdat = NULL;
        for_each_zone_zonelist_nodemask(zone, z, zonelist, sc->reclaim_idx,
@@ -6318,7 +6318,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
         * good, and retry with forcible deactivation if that fails.
         */
        if (sc->skipped_deactivate) {
-               sc->priority = initial_priority;
                sc->force_deactivate = 1;
                sc->skipped_deactivate = 0;
                goto retry;
@@ -6326,7 +6325,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,

        /* Untapped cgroup reserves?  Don't OOM, retry. */
        if (sc->memcg_low_skipped) {
-               sc->priority = initial_priority;
                sc->force_deactivate = 0;
                sc->memcg_low_reclaim = 1;
                sc->memcg_low_skipped = 0;



* Re: [PATCH] mm: vmscan: reset sc->priority on retry
  2024-05-29 16:20 ` Roman Gushchin
@ 2024-05-29 17:08   ` Shakeel Butt
  0 siblings, 0 replies; 3+ messages in thread
From: Shakeel Butt @ 2024-05-29 17:08 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Andrew Morton, Johannes Weiner, Rik van Riel, Michal Hocko,
	Facebook Kernel Team, linux-mm, linux-kernel,
	syzbot+17416257cb95200cba44

On Wed, May 29, 2024 at 09:20:46AM GMT, Roman Gushchin wrote:
> On Wed, May 29, 2024 at 08:49:11AM -0700, Shakeel Butt wrote:
> > Commit 6be5e186fd65 ("mm: vmscan: restore incremental cgroup
> > iteration") added a retry-reclaim heuristic that iterates over all the
> > cgroups before returning an unsuccessful reclaim, but it missed
> > resetting sc->priority for the retry. Let's fix it.
> > 
> > Reported-and-tested-by: syzbot+17416257cb95200cba44@syzkaller.appspotmail.com
> > Fixes: 6be5e186fd65 ("mm: vmscan: restore incremental cgroup iteration")
> > Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
> 
> Reviewed-by: Roman Gushchin <roman.gushchin@linux.dev>
> 
> Good catch!

Thanks.

> 
> > ---
> >  mm/vmscan.c | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index b9170f767353..731b009a142b 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -6317,6 +6317,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> >  	 * meaningful forward progress. Avoid false OOMs in this case.
> >  	 */
> >  	if (!sc->memcg_full_walk) {
> > +		sc->priority = initial_priority;
> >  		sc->memcg_full_walk = 1;
> >  		goto retry;
> >  	}
> > -- 
> > 2.43.0
> > 
> 
> I wonder if it makes sense to refactor things to be more robust like this:

Oh I like this as it will make sc->priority values explicit. I hope we
don't have any hidden dependency on do-while semantics for this code
path.
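As a sanity check on that worry, the two loop forms can be compared in isolation. A hypothetical standalone sketch (not kernel code) recording the priority values each body sees:

```c
#include <assert.h>

#define DEF_PRIORITY 12

/* Original form: do { ... } while (--priority >= 0); */
static int walk_do_while(int initial_priority, int *seen)
{
	int n = 0, priority = initial_priority;

	do {
		seen[n++] = priority;	/* body sees initial..0 */
	} while (--priority >= 0);
	return n;
}

/* Proposed form: for (priority = initial; priority >= 0; priority--) */
static int walk_for(int initial_priority, int *seen)
{
	int n = 0, priority;

	for (priority = initial_priority; priority >= 0; priority--)
		seen[n++] = priority;	/* same sequence: initial..0 */
	return n;
}
```

For any non-negative initial_priority both forms visit the identical sequence initial..0 and leave priority at -1 afterwards; the only behavioral difference is that the do-while would still run the body once if initial_priority were negative, which cannot happen here since the walk always starts from DEF_PRIORITY.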

> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index d3ae6bf1b65c7..f150e79f736da 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -6246,7 +6246,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
>         if (!cgroup_reclaim(sc))
>                 __count_zid_vm_events(ALLOCSTALL, sc->reclaim_idx, 1);
> 
> -       do {
> +       for (sc->priority = initial_priority; sc->priority >= 0; sc->priority--) {
>                 if (!sc->proactive)
>                         vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
>                                         sc->priority);
> @@ -6265,7 +6265,7 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
>                  */
>                 if (sc->priority < DEF_PRIORITY - 2)
>                         sc->may_writepage = 1;
> -       } while (--sc->priority >= 0);
> +       }
> 
>         last_pgdat = NULL;
>         for_each_zone_zonelist_nodemask(zone, z, zonelist, sc->reclaim_idx,
> @@ -6318,7 +6318,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
>          * good, and retry with forcible deactivation if that fails.
>          */
>         if (sc->skipped_deactivate) {
> -               sc->priority = initial_priority;
>                 sc->force_deactivate = 1;
>                 sc->skipped_deactivate = 0;
>                 goto retry;
> @@ -6326,7 +6325,6 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
> 
>         /* Untapped cgroup reserves?  Don't OOM, retry. */
>         if (sc->memcg_low_skipped) {
> -               sc->priority = initial_priority;
>                 sc->force_deactivate = 0;
>                 sc->memcg_low_reclaim = 1;
>                 sc->memcg_low_skipped = 0;


