Date: Thu, 6 Jun 2024 09:39:14 +0200
From: David Hildenbrand <david@redhat.com>
To: yangge1116, akpm@linux-foundation.org, Matthew Wilcox
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, baolin.wang@linux.alibaba.com, liuzixing@hygon.cn
Subject: Re: [PATCH] mm/gup: don't check page lru flag before draining it
Message-ID: <19590664-5190-4d30-ba0d-ec9d0ea373d3@redhat.com>
References: <1717498121-20926-1-git-send-email-yangge1116@126.com>
 <0d7a4405-9a2e-4bd1-ba89-a31486155233@redhat.com>
 <776de760-e817-43b2-bd00-8ce96f4e37a8@redhat.com>
 <7063920f-963a-4b3e-a3f3-c5cc227bc877@redhat.com>
Organization: Red Hat

On 06.06.24 03:35, yangge1116 wrote:
> 
> 
> On 2024/6/5 17:53, David Hildenbrand wrote:
>> On 05.06.24 11:41, David Hildenbrand wrote:
>>> On 05.06.24 03:18, yangge1116 wrote:
>>>>
>>>>
>>>> On 2024/6/4 21:47, David Hildenbrand wrote:
>>>>> On 04.06.24 12:48, yangge1116@126.com wrote:
>>>>>> From: yangge
>>>>>>
>>>>>> If a page is added to a pagevec, its refcount increases by one;
>>>>>> removing the page from the pagevec decreases it by one. Page
>>>>>> migration requires that the page not be referenced by anything
>>>>>> other than the page mapping. Before migrating a page, we should
>>>>>> try to drain the page from the pagevec in case it is in one.
>>>>>> However, folio_test_lru() is not sufficient to tell whether the
>>>>>> page is in a pagevec or not; if the page is in a pagevec, the
>>>>>> migration will fail.
>>>>>>
>>>>>> Remove the condition and drain the LRU once to ensure the page is
>>>>>> not referenced by a pagevec.
>>>>>
>>>>> What you are saying is that we might have a page on which
>>>>> folio_test_lru() succeeds, that was added to one of the cpu_fbatches,
>>>>> correct?
>>>>
>>>> Yes
>>>>
>>>>>
>>>>> Can you describe under which circumstances that happens?
>>>>>
>>>>
>>>> If we call folio_activate() to move a page from the inactive LRU list
>>>> to the active LRU list, the page is not only on an LRU list but also
>>>> in one of the cpu_fbatches.
>>>>
>>>> void folio_activate(struct folio *folio)
>>>> {
>>>>        if (folio_test_lru(folio) && !folio_test_active(folio) &&
>>>>            !folio_test_unevictable(folio)) {
>>>>            struct folio_batch *fbatch;
>>>>
>>>>            folio_get(folio);
>>>>            // After this, the folio is on the LRU list, and its
>>>>            // refcount has been increased by one.
>>>>
>>>>            local_lock(&cpu_fbatches.lock);
>>>>            fbatch = this_cpu_ptr(&cpu_fbatches.activate);
>>>>            folio_batch_add_and_move(fbatch, folio, folio_activate_fn);
>>>>            local_unlock(&cpu_fbatches.lock);
>>>>        }
>>>> }
>>>
>>> Interesting, the !SMP variant does the folio_test_clear_lru().
>>>
>>> It would be really helpful if we could reliably identify whether the
>>> LRU batching code has a raised reference on a folio.
>>>
>>> We have the same scenario in
>>> * folio_deactivate()
>>> * folio_mark_lazyfree()
>>>
>>> In folio_batch_move_lru() we do the folio_test_clear_lru(folio).
>>>
>>> No expert on that code, but I'm wondering if we could move the
>>> folio_test_clear_lru() out, such that we can more reliably identify
>>> whether a folio is on the LRU batch or not.
>>
>> I'm sure there would be something extremely broken with the following
>> (I don't know what I'm doing ;) ), but I wonder if there would be a way
>> to make something like that work (and perform well enough?).
>>
>> diff --git a/mm/swap.c b/mm/swap.c
>> index 67786cb771305..642e471c3ec5a 100644
>> --- a/mm/swap.c
>> +++ b/mm/swap.c
>> @@ -212,10 +212,6 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
>>         for (i = 0; i < folio_batch_count(fbatch); i++) {
>>                 struct folio *folio = fbatch->folios[i];
>>
>> -               /* block memcg migration while the folio moves between lru */
>> -               if (move_fn != lru_add_fn && !folio_test_clear_lru(folio))
>> -                       continue;
>> -
>>                 folio_lruvec_relock_irqsave(folio, &lruvec, &flags);
>>                 move_fn(lruvec, folio);
>>
>> @@ -255,8 +251,9 @@ static void lru_move_tail_fn(struct lruvec *lruvec, struct folio *folio)
>>   */
>>  void folio_rotate_reclaimable(struct folio *folio)
>>  {
>> -       if (!folio_test_locked(folio) && !folio_test_dirty(folio) &&
>> -           !folio_test_unevictable(folio) && folio_test_lru(folio)) {
>> +       if (folio_test_lru(folio) && !folio_test_locked(folio) &&
>> +           !folio_test_dirty(folio) && !folio_test_unevictable(folio) &&
>> +           folio_test_clear_lru(folio)) {
>>                 struct folio_batch *fbatch;
>>                 unsigned long flags;
>>
>> @@ -354,7 +351,7 @@ static void folio_activate_drain(int cpu)
>>  void folio_activate(struct folio *folio)
>>  {
>>         if (folio_test_lru(folio) && !folio_test_active(folio) &&
>> -           !folio_test_unevictable(folio)) {
>> +           !folio_test_unevictable(folio) && folio_test_clear_lru(folio)) {
>>                 struct folio_batch *fbatch;
>>
>>                 folio_get(folio);
>> @@ -699,6 +696,8 @@ void deactivate_file_folio(struct folio *folio)
>>         /* Deactivating an unevictable folio will not accelerate reclaim */
>>         if (folio_test_unevictable(folio))
>>                 return;
>> +       if (!folio_test_clear_lru(folio))
>> +               return;
>>
>>         folio_get(folio);
>>         local_lock(&cpu_fbatches.lock);
>> @@ -718,7 +717,8 @@ void deactivate_file_folio(struct folio *folio)
>>  void folio_deactivate(struct folio *folio)
>>  {
>>         if (folio_test_lru(folio) && !folio_test_unevictable(folio) &&
>> -           (folio_test_active(folio) || lru_gen_enabled())) {
>> +           (folio_test_active(folio) || lru_gen_enabled()) &&
>> +           folio_test_clear_lru(folio)) {
>>                 struct folio_batch *fbatch;
>>
>>                 folio_get(folio);
>> @@ -740,7 +740,8 @@ void folio_mark_lazyfree(struct folio *folio)
>>  {
>>         if (folio_test_lru(folio) && folio_test_anon(folio) &&
>>             folio_test_swapbacked(folio) && !folio_test_swapcache(folio) &&
>> -           !folio_test_unevictable(folio)) {
>> +           !folio_test_unevictable(folio) &&
>> +           folio_test_clear_lru(folio)) {
>>                 struct folio_batch *fbatch;
>>
>>                 folio_get(folio);
> 
> With your changes, folio_test_clear_lru(folio) is called first to clear
> the LRU flag, and then folio_get(folio) is called to pin the folio, which
> seems a little unreasonable. Normally, folio_get(folio) is called first
> to pin the page, and then other functions are called to handle the folio.

Right, if that really matters (not sure if it does) we could do

if (folio_test_lru(folio) && ...
	folio_get(folio);
	if (!folio_test_clear_lru(folio)) {
		folio_put(folio);
	} else {
		struct folio_batch *fbatch;
		...
	}
}

-- 
Cheers,

David / dhildenb
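For reference, a minimal, untested sketch of what folio_activate() might
look like with the ordering discussed above: pin the folio first, then
test-and-clear the LRU flag, and drop the extra reference again if the
folio turns out not to be on an LRU. It only reuses the helpers already
quoted in this thread (cpu_fbatches, folio_activate_fn) and is an
illustration of the idea, not the actual patch.

	/*
	 * Illustrative sketch only, based on the folio_activate() quoted
	 * above and on the ordering suggested at the end of the thread.
	 */
	void folio_activate(struct folio *folio)
	{
		struct folio_batch *fbatch;

		if (!folio_test_lru(folio) || folio_test_active(folio) ||
		    folio_test_unevictable(folio))
			return;

		/* Pin the folio first ... */
		folio_get(folio);

		/* ... then try to claim the LRU flag for the batch. */
		if (!folio_test_clear_lru(folio)) {
			/* Not on an LRU (or already claimed): undo the pin. */
			folio_put(folio);
			return;
		}

		local_lock(&cpu_fbatches.lock);
		fbatch = this_cpu_ptr(&cpu_fbatches.activate);
		folio_batch_add_and_move(fbatch, folio, folio_activate_fn);
		local_unlock(&cpu_fbatches.lock);
	}

Whether the extra get/put pair in the failure path matters in practice is
exactly the open question raised above.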