Message-ID: <209ff705-fe6e-4d6d-9d08-201afba7d74b@redhat.com>
Date: Tue, 28 May 2024 16:24:05 +0200
Subject: Re: 6.9/BUG: Bad page state in process kswapd0 pfn:d6e840
From: David Hildenbrand <david@redhat.com>
To: Mikhail Gavrilov, Matthew Wilcox
Cc: Linux List Kernel Mailing, Linux Memory Management List
References: <0672f0b7-36f5-4322-80e6-2da0f24c101b@redhat.com> <6b42ad9a-1f15-439a-8a42-34052fec017e@redhat.com> <162cb2a8-1b53-4e86-8d49-f4e09b3255a4@redhat.com>
In-Reply-To: <162cb2a8-1b53-4e86-8d49-f4e09b3255a4@redhat.com>
On 28.05.24 15:57, David Hildenbrand wrote:
> On 28.05.24 08:05, Mikhail Gavrilov wrote:
>> On Thu, May 23, 2024 at 12:05 PM Mikhail Gavrilov
>> wrote:
>>>
>>> On Thu, May 9, 2024 at 10:50 PM David Hildenbrand wrote:
>>>
>>> The only known workload that causes this is updating a large
>>> container. Unfortunately, not every container update reproduces the
>>> problem.
>>
>> Is it possible to add more debugging information to make it clearer
>> what's going on?
>
> If we knew who originally allocated that problematic page, that might help.
> Maybe page_owner could give some hints?
>
>>
>> BUG: Bad page state in process kcompactd0  pfn:605811
>> page: refcount:0 mapcount:0 mapping:0000000082d91e3e index:0x1045efc4f
>> pfn:0x605811
>> aops:btree_aops ino:1
>> flags: 0x17ffffc600020c(referenced|uptodate|workingset|node=0|zone=2|lastcpupid=0x1fffff)
>> raw: 0017ffffc600020c dead000000000100 dead000000000122 ffff888159075220
>> raw: 00000001045efc4f 0000000000000000 00000000ffffffff 0000000000000000
>> page dumped because: non-NULL mapping
>
> Seems to be an order-0 page, otherwise we would have another "head: ..." report.
>
> It's not an anon/ksm/non-lru migration folio, because we clear the page->mapping
> field for them manually on the page freeing path. Likely it's a pagecache folio.
>
> So one option is that something seems to not properly set folio->mapping to
> NULL. But that problem would then also show up without page migration? Hmm.
>
>> Hardware name: ASUS System Product Name/ROG STRIX B650E-I GAMING WIFI,
>> BIOS 2611 04/07/2024
>> Call Trace:
>>
>>   dump_stack_lvl+0x84/0xd0
>>   bad_page.cold+0xbe/0xe0
>>   ? __pfx_bad_page+0x10/0x10
>>   ? page_bad_reason+0x9d/0x1f0
>>   free_unref_page+0x838/0x10e0
>>   __folio_put+0x1ba/0x2b0
>>   ? __pfx___folio_put+0x10/0x10
>>   ? __pfx___might_resched+0x10/0x10
>
> I suspect we come via
>     migrate_pages_batch()->migrate_folio_unmap()->migrate_folio_done().
>
> Maybe this is the "Folio was freed from under us. So we are done." path
> when "folio_ref_count(src) == 1".
>
> Alternatively, we might come via
>     migrate_pages_batch()->migrate_folio_move()->migrate_folio_done().
>
> For ordinary migration, move_to_new_folio() will clear src->mapping if
> the folio was migrated successfully. That's the very first thing that
> migrate_folio_move() does, so I doubt that is the problem.
>
> So I suspect we are in the migrate_folio_unmap() path. But for
> a !anon folio, who should be freeing the folio concurrently (and not clearing
> folio->mapping?)? After all, we have to hold the folio lock while migrating.
>
> In khugepaged:collapse_file() we manually set folio->mapping = NULL, before
> dropping the reference.
>
> Something to try might be (to see if the problem goes away):
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index dd04f578c19c..45e92e14c904 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1124,6 +1124,13 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>                 /* Folio was freed from under us. So we are done. */
>                 folio_clear_active(src);
>                 folio_clear_unevictable(src);
> +               /*
> +                * Anonymous and movable src->mapping will be cleared by
> +                * free_pages_prepare so don't reset it here for keeping
> +                * the type to work PageAnon, for example.
> +                */
> +               if (!folio_mapping_flags(src))
> +                       src->mapping = NULL;
>                 /* free_pages_prepare() will clear PG_isolated. */
>                 list_del(&src->lru);
>                 migrate_folio_done(src, reason);
>
> But it does feel weird: who freed the page concurrently and didn't clear
> folio->mapping ...
>
> We don't hold the folio lock of src, though, but have the only reference. So
> another possible thing might be folio refcount mis-counting: folio_ref_count()
> == 1 but there are other references (e.g., from the pagecache).
Hmm, your original report mentions kswapd, so I'm getting the feeling someone
does one folio_put() too much and we are freeing a pagecache folio that is
still in the pagecache and, therefore, has folio->mapping set ... bisecting
would really help.

-- 
Thanks,

David / dhildenb