Date: Wed, 29 May 2024 08:57:48 +0200
From: David Hildenbrand <david@redhat.com>
To: Mikhail Gavrilov, Chris Mason, Josef Bacik, David Sterba
Cc: Linux List Kernel Mailing, Linux Memory Management List,
 Matthew Wilcox, linux-btrfs
Subject: Re: 6.9/BUG: Bad page state in process kswapd0 pfn:d6e840
In-Reply-To: <209ff705-fe6e-4d6d-9d08-201afba7d74b@redhat.com>
References: <0672f0b7-36f5-4322-80e6-2da0f24c101b@redhat.com>
 <6b42ad9a-1f15-439a-8a42-34052fec017e@redhat.com>
 <162cb2a8-1b53-4e86-8d49-f4e09b3255a4@redhat.com>
 <209ff705-fe6e-4d6d-9d08-201afba7d74b@redhat.com>
On 28.05.24 16:24, David Hildenbrand wrote:
> On 28.05.24 at 15:57, David Hildenbrand wrote:
>> On 28.05.24 at 08:05, Mikhail Gavrilov wrote:
>>> On Thu, May 23, 2024 at 12:05 PM Mikhail Gavrilov
>>> wrote:
>>>>
>>>> On Thu, May 9, 2024 at 10:50 PM David Hildenbrand wrote:
>>>>
>>>> The only known workload that causes this is updating a large
>>>> container. Unfortunately, not every container update reproduces the
>>>> problem.
>>>
>>> Is it possible to add more debugging information to make it clearer
>>> what's going on?
>>
>> If we knew who originally allocated that problematic page, that might
>> help. Maybe page_owner could give some hints?
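
(For reference, page_owner requires a kernel built with
CONFIG_PAGE_OWNER=y; a rough sketch of how it could be used here,
assuming debugfs is available:

	# add "page_owner=on" to the kernel command line, reboot, then:
	mount -t debugfs none /sys/kernel/debug
	cat /sys/kernel/debug/page_owner > page_owner_full.txt

Each record contains the allocation and free stack traces plus the PFN,
so the dump could be matched against the PFN from the bad-page report.)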
>>
>>> BUG: Bad page state in process kcompactd0  pfn:605811
>>> page: refcount:0 mapcount:0 mapping:0000000082d91e3e index:0x1045efc4f
>>> pfn:0x605811
>>> aops:btree_aops ino:1
>>> flags: 0x17ffffc600020c(referenced|uptodate|workingset|node=0|zone=2|lastcpupid=0x1fffff)
>>> raw: 0017ffffc600020c dead000000000100 dead000000000122 ffff888159075220
>>> raw: 00000001045efc4f 0000000000000000 00000000ffffffff 0000000000000000
>>> page dumped because: non-NULL mapping
>>
>> Seems to be an order-0 page, otherwise we would have another
>> "head: ..." report.
>>
>> It's not an anon/ksm/non-lru migration folio, because we clear the
>> page->mapping field for them manually on the page freeing path. Likely
>> it's a pagecache folio.
>>
>> So one option is that something does not properly set folio->mapping
>> to NULL. But then that problem would also show up without page
>> migration? Hmm.
>>
>>> Hardware name: ASUS System Product Name/ROG STRIX B650E-I GAMING WIFI,
>>> BIOS 2611 04/07/2024
>>> Call Trace:
>>>  <TASK>
>>>  dump_stack_lvl+0x84/0xd0
>>>  bad_page.cold+0xbe/0xe0
>>>  ? __pfx_bad_page+0x10/0x10
>>>  ? page_bad_reason+0x9d/0x1f0
>>>  free_unref_page+0x838/0x10e0
>>>  __folio_put+0x1ba/0x2b0
>>>  ? __pfx___folio_put+0x10/0x10
>>>  ? __pfx___might_resched+0x10/0x10
>>
>> I suspect we come via
>>    migrate_pages_batch()->migrate_folio_unmap()->migrate_folio_done().
>>
>> Maybe this is the "Folio was freed from under us. So we are done." path
>> when "folio_ref_count(src) == 1".
>>
>> Alternatively, we might come via
>>    migrate_pages_batch()->migrate_folio_move()->migrate_folio_done().
>>
>> For ordinary migration, move_to_new_folio() will clear src->mapping if
>> the folio was migrated successfully. That's the very first thing that
>> migrate_folio_move() does, so I doubt that is the problem.
>>
>> So I suspect we are in the migrate_folio_unmap() path. But for a !anon
>> folio, who should be freeing the folio concurrently (and not clearing
>> folio->mapping)? After all, we have to hold the folio lock while
>> migrating.
>>
>> In khugepaged:collapse_file() we manually set folio->mapping = NULL
>> before dropping the reference.
>>
>> Something to try might be (to see if the problem goes away):
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index dd04f578c19c..45e92e14c904 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1124,6 +1124,13 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>>                 /* Folio was freed from under us. So we are done. */
>>                 folio_clear_active(src);
>>                 folio_clear_unevictable(src);
>> +               /*
>> +                * Anonymous and movable src->mapping will be cleared by
>> +                * free_pages_prepare(), so don't reset it here, to keep
>> +                * the type checks (e.g., PageAnon) working.
>> +                */
>> +               if (!folio_mapping_flags(src))
>> +                       src->mapping = NULL;
>>                 /* free_pages_prepare() will clear PG_isolated. */
>>                 list_del(&src->lru);
>>                 migrate_folio_done(src, reason);
>>
>> But it does feel weird: who freed the page concurrently and didn't
>> clear folio->mapping ...
>>
>> We don't hold the folio lock of src, though, but have the only
>> reference. So another possible thing might be folio refcount
>> mis-counting: folio_ref_count() == 1 but there are other references
>> (e.g., from the pagecache).
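
(Side note on why this fires at free time at all: free_pages_prepare()
runs sanity checks on every freed page; a minimal sketch of the relevant
check, paraphrased from page_bad_reason() in mm/page_alloc.c rather than
quoted verbatim:

	/* a page must be detached from its mapping before it is freed */
	if (unlikely(page->mapping != NULL))
		bad_reason = "non-NULL mapping";

So whoever drops the last reference to a pagecache folio without first
detaching it from its mapping will produce exactly this "non-NULL
mapping" report.)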
> Hmm, your original report mentions kswapd, so I'm getting the feeling
> someone does one folio_put() too much and we are freeing a pagecache
> folio that is still in the pagecache and, therefore, has folio->mapping
> set ... bisecting would really help.

A little bird just told me that I missed an important piece in the dmesg
output: "aops:btree_aops ino:1" from dump_mapping().

This is btrfs, i_ino is 1, and we don't have a dentry. Is that
BTRFS_BTREE_INODE_OBJECTID?

Summarizing what we know so far:

(1) Freeing an order-0 btrfs folio where folio->mapping is still set

(2) Triggered by kswapd and kcompactd; not triggered by other means of
    page freeing so far

Possible theories:

(A) folio->mapping is not cleared when freeing the folio. But shouldn't
    that also happen on other freeing paths? Or are we simply lucky to
    never trigger it for that folio?

(B) Messed-up refcounting: freeing a folio that is still in use (and
    therefore still has folio->mapping set)

I was briefly wondering if large folio splitting could be involved.

CCing btrfs maintainers.

-- 
Cheers,

David / dhildenb