From: Hugh Dickins <hughd@google.com>
To: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, Vlastimil Babka <vbabka@suse.cz>,
linux-kernel@vger.kernel.org, stable@vger.kernel.org,
Hugh Dickins <hughd@google.com>,
Matthew Wilcox <willy@infradead.org>,
Shakeel Butt <shakeel.butt@linux.dev>
Subject: Re: [PATCH v2] mm: page_alloc: move mlocked flag clearance into free_pages_prepare()
Date: Mon, 21 Oct 2024 12:49:28 -0700 (PDT)
Message-ID: <d50407d4-5a4e-de0c-9f70-222eef9a9f67@google.com>
In-Reply-To: <20241021173455.2691973-1-roman.gushchin@linux.dev>

On Mon, 21 Oct 2024, Roman Gushchin wrote:
> Syzbot reported a bad page state problem caused by a page
> being freed using free_page() still having a mlocked flag at
> free_pages_prepare() stage:
>
> BUG: Bad page state in process syz.0.15 pfn:1137bb
> page: refcount:0 mapcount:0 mapping:0000000000000000 index:0xffff8881137bb870 pfn:0x1137bb
> flags: 0x400000000080000(mlocked|node=0|zone=1)
> raw: 0400000000080000 0000000000000000 dead000000000122 0000000000000000
> raw: ffff8881137bb870 0000000000000000 00000000ffffffff 0000000000000000
> page dumped because: PAGE_FLAGS_CHECK_AT_FREE flag(s) set
> page_owner tracks the page as allocated
> page last allocated via order 0, migratetype Unmovable, gfp_mask
> 0x400dc0(GFP_KERNEL_ACCOUNT|__GFP_ZERO), pid 3005, tgid
> 3004 (syz.0.15), ts 61546 608067, free_ts 61390082085
> set_page_owner include/linux/page_owner.h:32 [inline]
> post_alloc_hook+0x1f3/0x230 mm/page_alloc.c:1537
> prep_new_page mm/page_alloc.c:1545 [inline]
> get_page_from_freelist+0x3008/0x31f0 mm/page_alloc.c:3457
> __alloc_pages_noprof+0x292/0x7b0 mm/page_alloc.c:4733
> alloc_pages_mpol_noprof+0x3e8/0x630 mm/mempolicy.c:2265
> kvm_coalesced_mmio_init+0x1f/0xf0 virt/kvm/coalesced_mmio.c:99
> kvm_create_vm virt/kvm/kvm_main.c:1235 [inline]
> kvm_dev_ioctl_create_vm virt/kvm/kvm_main.c:5500 [inline]
> kvm_dev_ioctl+0x13bb/0x2320 virt/kvm/kvm_main.c:5542
> vfs_ioctl fs/ioctl.c:51 [inline]
> __do_sys_ioctl fs/ioctl.c:907 [inline]
> __se_sys_ioctl+0xf9/0x170 fs/ioctl.c:893
> do_syscall_x64 arch/x86/entry/common.c:52 [inline]
> do_syscall_64+0x69/0x110 arch/x86/entry/common.c:83
> entry_SYSCALL_64_after_hwframe+0x76/0x7e
> page last free pid 951 tgid 951 stack trace:
> reset_page_owner include/linux/page_owner.h:25 [inline]
> free_pages_prepare mm/page_alloc.c:1108 [inline]
> free_unref_page+0xcb1/0xf00 mm/page_alloc.c:2638
> vfree+0x181/0x2e0 mm/vmalloc.c:3361
> delayed_vfree_work+0x56/0x80 mm/vmalloc.c:3282
> process_one_work kernel/workqueue.c:3229 [inline]
> process_scheduled_works+0xa5c/0x17a0 kernel/workqueue.c:3310
> worker_thread+0xa2b/0xf70 kernel/workqueue.c:3391
> kthread+0x2df/0x370 kernel/kthread.c:389
> ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
> ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
>
> A reproducer is available here:
> https://syzkaller.appspot.com/x/repro.c?x=1437939f980000
>
> The problem was originally introduced by
> commit b109b87050df ("mm/munlock: replace clear_page_mlock() by final
> clearance"): its handling was focused on pagecache and anonymous
> memory and wasn't suitable for the lower-level get_page()/free_page()
> APIs used, for example, by KVM, as in this reproducer.
>
> Fix it by moving the mlocked flag clearance down to
> free_pages_prepare().
>
> The bug itself is fairly old and harmless (aside from generating these
> warnings).
>
> Closes: https://syzkaller.appspot.com/x/report.txt?x=169a47d0580000
> Fixes: b109b87050df ("mm/munlock: replace clear_page_mlock() by final clearance")
> Signed-off-by: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: <stable@vger.kernel.org>
> Cc: Hugh Dickins <hughd@google.com>

Acked-by: Hugh Dickins <hughd@google.com>

Thanks Roman - I'd been preparing a similar patch, so I agree that this is
the right fix. I don't think there's any need to change your text, but
let me remind us that any "Bad page" report stops that page from being
allocated again (because it's in an undefined, potentially dangerous
state): so it does amount to a small memory leak, even if otherwise harmless.
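
For anyone later wondering how a driver-owned page can reach
free_pages_prepare() with PG_mlocked still set: the pattern is roughly a
driver that hands its own alloc_page() page to userspace (which may then
mlock() the mapping) and later frees the page directly, never going
through the pagecache/anon folio release path. A minimal sketch of that
pattern - hypothetical example code wired into a char device's mmap in
the obvious way, not KVM's actual coalesced-MMIO implementation - might
look like:

	/*
	 * Hypothetical sketch only: a driver allocates a page, maps it
	 * into a user VMA with vm_insert_page(), and later frees it
	 * directly with __free_page().  If userspace mlock()ed the
	 * mapping, PG_mlocked may still be set at free time, since
	 * nothing here goes through __page_cache_release().
	 */
	#include <linux/mm.h>
	#include <linux/fs.h>
	#include <linux/gfp.h>

	static struct page *demo_page;

	static int demo_mmap(struct file *file, struct vm_area_struct *vma)
	{
		/* Insert the kernel-owned page into the user mapping. */
		return vm_insert_page(vma, vma->vm_start, demo_page);
	}

	static int demo_init(void)
	{
		demo_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
		return demo_page ? 0 : -ENOMEM;
	}

	static void demo_exit(void)
	{
		/*
		 * Freed directly: before this patch, a still-set
		 * PG_mlocked here tripped PAGE_FLAGS_CHECK_AT_FREE and
		 * the page was quarantined rather than returned to the
		 * allocator - the small leak mentioned above.
		 */
		__free_page(demo_page);
	}
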
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> ---
> mm/page_alloc.c | 15 +++++++++++++++
> mm/swap.c | 14 --------------
> 2 files changed, 15 insertions(+), 14 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index bc55d39eb372..7535d78862ab 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1044,6 +1044,7 @@ __always_inline bool free_pages_prepare(struct page *page,
> bool skip_kasan_poison = should_skip_kasan_poison(page);
> bool init = want_init_on_free();
> bool compound = PageCompound(page);
> + struct folio *folio = page_folio(page);
>
> VM_BUG_ON_PAGE(PageTail(page), page);
>
> @@ -1053,6 +1054,20 @@ __always_inline bool free_pages_prepare(struct page *page,
> if (memcg_kmem_online() && PageMemcgKmem(page))
> __memcg_kmem_uncharge_page(page, order);
>
> + /*
> + * In rare cases, when truncation or holepunching raced with
> + * munlock after VM_LOCKED was cleared, Mlocked may still be
> + * found set here. This does not indicate a problem, unless
> + * "unevictable_pgs_cleared" appears worryingly large.
> + */
> + if (unlikely(folio_test_mlocked(folio))) {
> + long nr_pages = folio_nr_pages(folio);
> +
> + __folio_clear_mlocked(folio);
> + zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);
> + count_vm_events(UNEVICTABLE_PGCLEARED, nr_pages);
> + }
> +
> if (unlikely(PageHWPoison(page)) && !order) {
> /* Do not let hwpoison pages hit pcplists/buddy */
> reset_page_owner(page, order);
> diff --git a/mm/swap.c b/mm/swap.c
> index 835bdf324b76..7cd0f4719423 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -78,20 +78,6 @@ static void __page_cache_release(struct folio *folio, struct lruvec **lruvecp,
> lruvec_del_folio(*lruvecp, folio);
> __folio_clear_lru_flags(folio);
> }
> -
> - /*
> - * In rare cases, when truncation or holepunching raced with
> - * munlock after VM_LOCKED was cleared, Mlocked may still be
> - * found set here. This does not indicate a problem, unless
> - * "unevictable_pgs_cleared" appears worryingly large.
> - */
> - if (unlikely(folio_test_mlocked(folio))) {
> - long nr_pages = folio_nr_pages(folio);
> -
> - __folio_clear_mlocked(folio);
> - zone_stat_mod_folio(folio, NR_MLOCK, -nr_pages);
> - count_vm_events(UNEVICTABLE_PGCLEARED, nr_pages);
> - }
> }
>
> /*
> --
> 2.47.0.105.g07ac214952-goog
>
>