From: David Hildenbrand <david@redhat.com>
To: Mikhail Gavrilov
Cc: Linux List Kernel Mailing, Linux Memory Management List
Subject: Re: 6.9/BUG: Bad page state in process kswapd0 pfn:d6e840
Date: Tue, 28 May 2024 15:57:58 +0200
Message-ID: <162cb2a8-1b53-4e86-8d49-f4e09b3255a4@redhat.com>
References: <0672f0b7-36f5-4322-80e6-2da0f24c101b@redhat.com>
 <6b42ad9a-1f15-439a-8a42-34052fec017e@redhat.com>

On 28.05.24 08:05, Mikhail Gavrilov wrote:
> On Thu, May 23, 2024 at 12:05 PM Mikhail Gavrilov wrote:
>>
>> On Thu, May 9, 2024 at 10:50 PM David Hildenbrand wrote:
>>>
>>> Do you have the other stacktrace as well?
>>>
>>> Maybe triggering memory reclaim (e.g., using "stress" or "memhog")
>>> could trigger it; that might be reasonable to try. Once we have a
>>> reproducer, we could at least bisect.
>>>
>>
>> The only known workload that causes this is updating a large
>> container. Unfortunately, not every container update reproduces the
>> problem.
>
> Is it possible to add more debugging information to make it clearer
> what's going on?

If we knew who originally allocated the problematic page, that might
help. Maybe page_owner could give some hints?
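(For reference, a typical way to collect such hints, assuming the
kernel was built with CONFIG_PAGE_OWNER=y and booted with
page_owner=on, would be roughly:

  cat /sys/kernel/debug/page_owner > page_owner_full.txt
  # the sort/aggregate helper lives in tools/mm in current trees
  # (tools/vm in older ones)
  make -C tools/mm page_owner_sort
  ./tools/mm/page_owner_sort page_owner_full.txt sorted_page_owner.txt

Each entry records the allocating stack trace together with the PFN,
so grepping the dump for the bad pfn, 0x605811 here, might show who
allocated that page.)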
>
> BUG: Bad page state in process kcompactd0  pfn:605811
> page: refcount:0 mapcount:0 mapping:0000000082d91e3e index:0x1045efc4f
> pfn:0x605811
> aops:btree_aops ino:1
> flags: 0x17ffffc600020c(referenced|uptodate|workingset|node=0|zone=2|lastcpupid=0x1fffff)
> raw: 0017ffffc600020c dead000000000100 dead000000000122 ffff888159075220
> raw: 00000001045efc4f 0000000000000000 00000000ffffffff 0000000000000000
> page dumped because: non-NULL mapping

Seems to be an order-0 page, otherwise we would have another
"head: ..." report.

It's not an anon/ksm/non-lru-migration folio, because we clear the
page->mapping field for those manually on the page freeing path.
Likely it's a pagecache folio.

So one option is that something does not properly set folio->mapping
to NULL. But that problem would then also show up without page
migration? Hmm.

> Hardware name: ASUS System Product Name/ROG STRIX B650E-I GAMING WIFI,
> BIOS 2611 04/07/2024
> Call Trace:
>  <TASK>
>  dump_stack_lvl+0x84/0xd0
>  bad_page.cold+0xbe/0xe0
>  ? __pfx_bad_page+0x10/0x10
>  ? page_bad_reason+0x9d/0x1f0
>  free_unref_page+0x838/0x10e0
>  __folio_put+0x1ba/0x2b0
>  ? __pfx___folio_put+0x10/0x10
>  ? __pfx___might_resched+0x10/0x10

I suspect we come via
migrate_pages_batch()->migrate_folio_unmap()->migrate_folio_done().
Maybe this is the "Folio was freed from under us. So we are done."
path that is taken when "folio_ref_count(src) == 1".

Alternatively, we might come via
migrate_pages_batch()->migrate_folio_move()->migrate_folio_done().
For ordinary migration, move_to_new_folio() will clear src->mapping if
the folio was migrated successfully, and that is the very first thing
migrate_folio_move() does, so I doubt that is the problem.

So I suspect we are in the migrate_folio_unmap() path. But for a !anon
folio, who would be freeing the folio concurrently (and not clearing
folio->mapping)? After all, we have to hold the folio lock while
migrating.

In khugepaged:collapse_file() we manually set folio->mapping = NULL
before dropping the reference.

Something to try (to see whether the problem goes away) might be:

diff --git a/mm/migrate.c b/mm/migrate.c
index dd04f578c19c..45e92e14c904 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1124,6 +1124,13 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 		/* Folio was freed from under us. So we are done. */
 		folio_clear_active(src);
 		folio_clear_unevictable(src);
+		/*
+		 * Anonymous and movable src->mapping will be cleared by
+		 * free_pages_prepare(), so don't reset it here, to keep
+		 * the type working (PageAnon(), for example).
+		 */
+		if (!folio_mapping_flags(src))
+			src->mapping = NULL;
 		/* free_pages_prepare() will clear PG_isolated. */
 		list_del(&src->lru);
 		migrate_folio_done(src, reason);

But it does feel weird: who freed the page concurrently and didn't
clear folio->mapping ...

We don't hold the folio lock of src, though; we only hold the sole
reference. So another possibility might be folio refcount
mis-counting: folio_ref_count() == 1 while there are other references
(e.g., from the pagecache).
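(As background: the "page dumped because: non-NULL mapping" line above
comes from the sanity checks on the page freeing path. Paraphrased and
shortened from mm/page_alloc.c, details vary between versions:

static const char *page_bad_reason(struct page *page, unsigned long flags)
{
	const char *bad_reason = NULL;

	if (unlikely(atomic_read(&page->_mapcount) != -1))
		bad_reason = "nonzero mapcount";
	if (unlikely(page->mapping != NULL))
		bad_reason = "non-NULL mapping";
	if (unlikely(page_ref_count(page) != 0))
		bad_reason = "nonzero _refcount";
	/* ... flag and memcg checks elided ... */
	return bad_reason;
}

So the freeing path found a page whose ->mapping was still set, which
is what the migrate_folio_unmap() theory above would explain.)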
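(One way to probe the refcount theory might be an untested debug
assertion in that same path, before we treat the folio as freed; a
sketch only, using VM_WARN_ON_ONCE_FOLIO() from mmdebug.h:

	if (folio_ref_count(src) == 1) {
		/*
		 * Debug sketch: a !anon/!movable folio that was truly
		 * freed from under us must already have been removed
		 * from the pagecache, which clears folio->mapping. If
		 * the mapping is still set here, either someone forgot
		 * to clear it or the refcount is wrong.
		 */
		VM_WARN_ON_ONCE_FOLIO(!folio_mapping_flags(src) &&
				      src->mapping, src);
		/* Folio was freed from under us. So we are done. */
		...
	}

If that fires long before the bad-page report, it would narrow things
down.)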
> ? migrate_folio_done+0x1de/0x2b0
> migrate_pages_batch+0xe73/0x2880
> ? __pfx_compaction_alloc+0x10/0x10
> ? __pfx_compaction_free+0x10/0x10
> ? __pfx_migrate_pages_batch+0x10/0x10
> ? trace_irq_enable.constprop.0+0xce/0x110
> ? __pfx_remove_migration_pte+0x10/0x10
> ? rcu_is_watching+0x12/0xc0
> migrate_pages+0x194f/0x22f0
> ? __pfx_compaction_alloc+0x10/0x10
> ? __pfx_compaction_free+0x10/0x10
> ? __pfx_migrate_pages+0x10/0x10
> ? trace_irq_enable.constprop.0+0xce/0x110
> ? rcu_is_watching+0x12/0xc0
> ? isolate_migratepages_block+0x2b02/0x4560
> ? __pfx_isolate_migratepages_block+0x10/0x10
> ? __pfx___might_resched+0x10/0x10
> compact_zone+0x1a7c/0x3860
> ? rcu_is_watching+0x12/0xc0
> ? __pfx___free_object+0x10/0x10
> ? __pfx_compact_zone+0x10/0x10
> ? rcu_is_watching+0x12/0xc0
> ? lock_acquire+0x457/0x540
> ? kcompactd+0x2fa/0xc70
> ? rcu_is_watching+0x12/0xc0
> compact_node+0x144/0x240
> ? __pfx_compact_node+0x10/0x10
> ? rcu_is_watching+0x12/0xc0
> kcompactd+0x686/0xc70
> ? __pfx_kcompactd+0x10/0x10
> ? __pfx_autoremove_wake_function+0x10/0x10
> ? __kthread_parkme+0xb1/0x1d0
> ? __pfx_kcompactd+0x10/0x10
> ? __pfx_kcompactd+0x10/0x10
> kthread+0x2d2/0x3a0
> ? _raw_spin_unlock_irq+0x28/0x60
> ? __pfx_kthread+0x10/0x10
> ret_from_fork+0x31/0x70
> ? __pfx_kthread+0x10/0x10
> ret_from_fork_asm+0x1a/0x30
> </TASK>
>

--
Thanks,

David / dhildenb