From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <45d43b69-23a2-4bb6-ac86-8587dcb67173@redhat.com>
Date: Tue, 29 Apr 2025 13:31:01 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH V4 1/2] mm: add folio_migration_expected_refs() as inline function
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Shivank Garg, Matthew Wilcox
Cc: Andrew Morton, shaggy@kernel.org, wangkefeng.wang@huawei.com,
 jane.chu@oracle.com, ziy@nvidia.com, donettom@linux.ibm.com,
 apopple@nvidia.com, jfs-discussion@lists.sourceforge.net,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 syzbot+8bb6fd945af4e0ad9299@syzkaller.appspotmail.com
References: <20250422114000.15003-1-shivankg@amd.com>
 <20250422114000.15003-2-shivankg@amd.com>
 <20250422164111.f5d3f0756ad94d012180ece5@linux-foundation.org>
 <8f24de4d-5088-498a-968d-9e8bb85201ac@redhat.com>
 <7ff5b149-534d-4bb7-8c6d-3147279d3fae@amd.com>
 <604a1db2-bd64-455e-9cf7-968cacef0122@redhat.com>
 <57536c5a-23dd-4f14-af35-9c5523000e80@amd.com>
In-Reply-To: <57536c5a-23dd-4f14-af35-9c5523000e80@amd.com>
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Sender: owner-linux-mm@kvack.org
Precedence: bulk
On 29.04.25 12:57, Shivank Garg wrote:
>
>
> On 4/25/2025 1:17 PM, David Hildenbrand wrote:
>> On 24.04.25 13:57, Shivank Garg wrote:
>>> Hi All,
>>>
>>> Thank you for reviewing my patch and providing feedback.
>>>
>>> On 4/24/2025 8:49 AM, Matthew Wilcox wrote:
>>>> On Wed, Apr 23, 2025 at 09:25:05AM +0200, David Hildenbrand wrote:
>>>>> On 23.04.25 09:22, David Hildenbrand wrote:
>>>>>> On 23.04.25 02:36, Matthew Wilcox wrote:
>>>>>>> On Tue, Apr 22, 2025 at 04:41:11PM -0700, Andrew Morton wrote:
>>>>>>>>> +/**
>>>>>>>>> + * folio_migrate_expected_refs - Count expected references for an unmapped folio.
>>>>>>>>
>>>>>>>> "folio_migration_expected_refs"
>>>
>>> Thank you for catching this, I'll fix it.
>>>
>>> I wasn't previously aware of using "make W=1" to build kernel-docs and
>>> check for warnings - this is very useful information for me.
>>>
>>> I'll add to the changelog to better explain why this is needed for JFS.
>>>
>>>>>>>
>>>>>>> What I do wonder is whether we want to have such a specialised
>>>>>>> function existing.  We have can_split_folio() in huge_memory.c,
>>>>>>> which is somewhat more comprehensive and doesn't require the folio
>>>>>>> to be unmapped first.
>>>>>>
>>>>>> I was debating with myself whether we should do the usual "refs from
>>>>>> ->private, refs from page table mappings" ... dance, and look up the
>>>>>> mapping from the folio instead of passing it in.
>>>>>>
>>>>>> I concluded that for this (migration) purpose the function is good
>>>>>> enough as it is: if abused in the wrong context (e.g., still ->private,
>>>>>> still page table mappings), it would not fake that there are no
>>>>>> unexpected references.
>>>>>
>>>>> Sorry, I forgot that we still care about the reference from ->private
>>>>> here. We expect the folio to be unmapped.
>>>>
>>>> Right, so just adding in folio_mapcount() will be a no-op for migration,
>>>> but enable its reuse by can_split_folio().  Maybe.  Anyway, the way I
>>>> explain page refcounts to people (and I need to put this in a document
>>>> somewhere):
>>>>
>>>> There are three types of contribution to the refcount:
>>>>
>>>>  - Expected.  These are deducible from the folio itself, and they're all
>>>>    findable.  You need to figure out what the expected number of
>>>>    references to a folio is if you're going to try to freeze it.
>>>>    These can be references from the mapcount, the page cache, the swap
>>>>    cache, the private data, your call chain.
>>>>  - Temporary.  Someone else has found the folio somehow; perhaps through
>>>>    the page cache, or by calling GUP or something.  They mean you can't
>>>>    freeze the folio because you don't know who has the reference or how
>>>>    long they might hold it for.
>>>>  - Spurious.  This is like a temporary reference, but worse, because if
>>>>    you read the code, there should be no way for there to be any
>>>>    temporary references to the folio.  Someone's found a stale pointer
>>>>    to this folio and has bumped the reference count while they check
>>>>    that the folio they have is the one they expected to find.
>>>>    They're going to find out that the pointer they followed is stale
>>>>    and put their refcount soon, but in the meantime you still can't
>>>>    freeze the folio.
>>>>
>>>> So I don't love the idea of having a function with the word "expected"
>>>> in the name that returns a value which doesn't take into account all
>>>> the potential contributors to the expected value.  And sure, we can keep
>>>> adding qualifiers to the function name to indicate how it is to be used,
>>>> but at some point I think we should say "It's OK for this to be a little
>>>> less efficient so we can understand what it means".
>>>
>>> Thank you, Willy, for the detailed explanation about page reference
>>> counting. This has helped me understand the concept much better.
>>>
>>> Based on your explanation and the discussion, I'm summarizing the two
>>> approaches:
>>>
>>> 1. Rename folio_migration_expected_refs to
>>>    folio_migration_expected_base_refs, to clarify that it does not
>>>    account for other potential contributors. Or folio_unmapped_base_refs?
>>> 2. Account for all possible contributors to the expected refs:
>>>
>>> folio_expected_refs(mapping, folio)
>>> {
>>>     int refs = 1;
>>>
>>>     if (mapping) {
>>>         if (folio_test_anon(folio))
>>>             refs += folio_test_swapcache(folio) ?
>>>                 folio_nr_pages(folio) : 0;
>>>         else
>>>             refs += folio_nr_pages(folio);
>>>
>>>         if (folio_test_private(folio))
>>>             refs++;
>>>     }
>>>     /* Takes mapped folios into account; a no-op for unmapped folios
>>>        during migration. */
>>>     refs += folio_mapcount(folio);
>>>     return refs;
>>> }
>>>
>>> Please let me know if this approach is acceptable or if you have
>>> other suggestions for improvement.
>>
>> A couple of points:
>>
>> 1) Can we name it folio_expected_ref_count()?
>>
>> 2) Can we avoid passing in the mapping? Might not be expensive to look it
>>    up again. Below I avoid calling folio_mapping().
>>
>> 3) Can we delegate adding the additional reference to the caller? That
>>    will make it easier to use elsewhere (e.g., no additional reference
>>    because we are holding the page table lock).
>>
>> 4) Can we add kerneldoc, and in particular document the semantics?
>>
>> Not sure if we should inline this function or put it into mm/utils.c.
>>
>
> Hi David,
>
> Thank you for the detailed suggestions. They all make sense to me.
>
> I did not understand a few changes in your patch below:
>>
>> I'm thinking of something like (completely untested):
>>
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index a205020e2a58b..a0ad4ed9a75ff 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -2112,6 +2112,61 @@ static inline bool folio_maybe_mapped_shared(struct folio *folio)
>>      return folio_test_large_maybe_mapped_shared(folio);
>>  }
>>
>> +/**
>> + * folio_expected_ref_count - calculate the expected folio refcount
>> + * @folio: the folio
>> + *
>> + * Calculate the expected folio refcount, taking references from the
>> + * pagecache, swapcache, PG_private and page table mappings into account.
>> + * Useful in combination with folio_ref_count() to detect unexpected
>> + * references (e.g., GUP or other temporary references).
>> + *
>> + * Currently does not consider references from the LRU cache. If the folio
>> + * was already isolated from the LRU (which is the case during migration
>> + * or split), the LRU cache does not apply.
>> + *
>> + * Calling this function on an unmapped folio -- !folio_mapped() -- that is
>> + * locked will return a stable result.
>> + *
>> + * Calling this function on a mapped folio will not result in a stable
>> + * result, because nothing stops additional page table mappings from
>> + * coming (e.g., fork()) or going (e.g., munmap()).
>> + *
>> + * Calling this function without the folio lock will also not result in a
>> + * stable result: for example, the folio might get dropped from the
>> + * swapcache concurrently.
>> + *
>> + * However, even when called without the folio lock or on a mapped folio,
>> + * this function can be used to detect unexpected references early (for
>> + * example, if it makes sense to even lock the folio and unmap it).
>> + *
>> + * The caller must add any reference (e.g., from folio_try_get()) it might
>> + * be holding itself to the result.
>> + *
>> + * Returns the expected folio refcount.
>> + */
>> +static inline int folio_expected_ref_count(const struct folio *folio)
>> +{
>> +    const int order = folio_order(folio);
>> +    int ref_count = 0;
>
> Why are we not taking the base ref_count as 1, like it's done in the
> original folio_expected_refs implementation?

The idea is that this is the responsibility of the caller, which will make
this function more versatile. For example, when we're holding the page table
lock and want to check for unexpected references, we wouldn't be holding any
additional reference from a folio_try_get() like the migration code would.

>
>> +
>> +    if (WARN_ON_ONCE(folio_test_slab(folio)))
>> +        return 0;
>> +
>> +    if (folio_test_anon(folio)) {
>> +        /* One reference per page from the swapcache. */
>> +        ref_count += folio_test_swapcache(folio) << order;
>
> Why not use folio_nr_pages() here instead of 1 << order?
> Something like folio_test_swapcache(folio) * folio_nr_pages(folio).

A shift is typically cheaper than a multiplication, so it looked like
low-hanging fruit to use a shift here.

>
>> +    } else if (!((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS)) {
>> +        /* One reference per page from the pagecache. */
>> +        ref_count += !!folio->mapping << order;
>> +        /* One reference from PG_private.
>> +         */
>> +        ref_count += folio_test_private(folio);
>> +    }
>> +
>> +    /* One reference per page table mapping. */
>> +    return ref_count + folio_mapcount(folio);
>> +}
>> +
>>  #ifndef HAVE_ARCH_MAKE_FOLIO_ACCESSIBLE
>>  static inline int arch_make_folio_accessible(struct folio *folio)
>>  {
>
> I tested your patch with stress-ng and my move-pages test code. I did not
> see any bugs/errors.

Cool! It would be good to get some feedback from Willy on the kerneldoc, if
he's aware of other constraints etc.

-- 
Cheers,

David / dhildenb