Message-ID: <5c929e0d-b3b5-409d-8cf6-38347357eb04@redhat.com>
Date: Fri, 1 Mar 2024 17:32:01 +0100
Subject: Re: [PATCH v3 1/4] mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags
To: Ryan Roberts, Andrew Morton, Matthew Wilcox, Huang Ying, Gao Xiang, Yu Zhao, Yang Shi, Michal Hocko, Kefeng Wang
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
References: <20231025144546.577640-1-ryan.roberts@arm.com> <20231025144546.577640-2-ryan.roberts@arm.com> <6541e29b-f25a-48b8-a553-fd8febe85e5a@redhat.com> <2934125a-f2e2-417c-a9f9-3cb1e074a44f@redhat.com> <049818ca-e656-44e4-b336-934992c16028@arm.com> <4a73b16e-9317-477a-ac23-8033004b0637@arm.com> <1195531c-d985-47e2-b7a2-8895fbb49129@redhat.com> <5ebac77a-5c61-481f-8ac1-03bc4f4e2b1d@arm.com>
From: David Hildenbrand
Organization: Red Hat
In-Reply-To: <5ebac77a-5c61-481f-8ac1-03bc4f4e2b1d@arm.com>

On 01.03.24 17:27, Ryan Roberts wrote:
> On 28/02/2024 15:12, David Hildenbrand wrote:
>> On 28.02.24 15:57, Ryan Roberts wrote:
>>> On 28/02/2024 12:12, David Hildenbrand wrote:
>>>>>> How relevant is it? Relevant enough that someone decided to put that
>>>>>> optimization in? I don't know :)
>>>>>
>>>>> I'll have one last go at convincing you: Huang Ying (original author) commented
>>>>> "I believe this should be OK. Better to compare the performance too." at [1].
>>>>> That implies to me that perhaps the optimization wasn't in response to a
>>>>> specific problem after all. Do you have any thoughts, Huang?
>>>>
>>>> Might make sense to include that in the patch description!
>>>>
>>>>> OK so if we really do need to keep this optimization, here are some ideas:
>>>>>
>>>>> Fundamentally, we would like to be able to figure out the size of the swap slot
>>>>> from the swap entry. Today swap supports 2 sizes: PAGE_SIZE and PMD_SIZE. For
>>>>> PMD_SIZE, it always uses a full cluster, so we can easily add a flag to the
>>>>> cluster to mark it as PMD_SIZE.
>>>>>
>>>>> Going forwards, we want to support all sizes (power-of-2). Most of the time, a
>>>>> cluster will contain only one size of THPs, but this is not the case when a THP
>>>>> in the swapcache gets split or when an order-0 slot gets stolen. We expect these
>>>>> cases to be rare.
>>>>>
>>>>> 1) Keep the size of the smallest swap entry in the cluster header. Most of the
>>>>> time it will be the full size of the swap entry, but sometimes it will cover
>>>>> only a portion. In the latter case you may see a false negative for
>>>>> swap_page_trans_huge_swapped(), meaning we take the slow path, but that is rare.
>>>>> There is one wrinkle: currently the HUGE flag is cleared in put_swap_folio(). We
>>>>> wouldn't want to do the equivalent in the new scheme (i.e. set the whole cluster
>>>>> to order-0). I think that is safe, but I haven't completely convinced myself yet.
>>>>>
>>>>> 2) Allocate 4 bits per (small) swap slot to hold the order. This will give
>>>>> precise information and is conceptually simpler to understand, but will cost
>>>>> more memory (half as much as the initial swap_map[] again).
>>>>>
>>>>> I still prefer to avoid this at all if we can (and would like to hear Huang's
>>>>> thoughts). But if it's a choice between 1 and 2, I prefer 1 - I'll do some
>>>>> prototyping.
>>>>
>>>> Taking a step back: what about we simply batch the unmapping of swap entries?
>>>>
>>>> That is, if we're unmapping a PTE range, we'll collect swap entries (under PT
>>>> lock) that reference consecutive swap offsets in the same swap file.
>>>
>>> Yes in principle, but there are 4 places where free_swap_and_cache() is called,
>>> and only 2 of those are really amenable to batching (zap_pte_range() and
>>> madvise_free_pte_range()). So the other two users will still take the "slow"
>>> path. Maybe those 2 callsites are the only ones that really matter? I can
>>> certainly have a stab at this approach.
>>
>> We can ignore the s390x one. That s390x code should only apply to KVM guest
>> memory where ordinary THP are not even supported (and nobody uses mTHP there yet).
>>
>> Long story short: the VM can hint that some memory pages are now unused and the
>> hypervisor can reclaim them. That's what that callback does (zap guest-provided
>> guest memory). No need to worry about any batching for now.
>>
>> Then, there is the shmem one in shmem_free_swap(). I really don't know how shmem
>> handles THP+swapout.
>>
>> But looking at shmem_writepage(), we split any large folios before moving them
>> to the swapcache, so likely we don't care at all, because THP don't apply.
>>
>>>
>>>>
>>>> There, we can then first decrement all the swap counts, and then try minimizing
>>>> how often we actually have to try reclaiming swap space (lookup folio, see it's
>>>> a large folio that we cannot reclaim or could reclaim, ...).
>>>>
>>>> Might need some fine-tuning in swap code to "advance" to the next entry to try
>>>> freeing up, but we certainly can do better than what we would do right now.
>>>
>>> I'm not sure I've understood this.
>>> Isn't advancing just a matter of:
>>>
>>> entry = swp_entry(swp_type(entry), swp_offset(entry) + 1);
>>
>> I was talking about advancing the swapslot processing after decrementing the
>> swapcounts.
>>
>> Assume you decremented 512 swapcounts and some of them went to 0. AFAIU, you'd
>> have to start with the first swapslot that now has a swapcount of 0 and try to
>> reclaim swap.
>>
>> Assume you get a small folio, then you'll have to proceed with the next swap
>> slot and try to reclaim swap.
>>
>> Assume you get a large folio, then you can skip more swapslots (depending on
>> the offset into the folio etc).
>>
>> If you get what I mean. :)
>>
>
> I've implemented the batching as David suggested, and I'm pretty confident it's
> correct. The only problem is that during testing I can't provoke the code to
> take the path. I've been poring over the code but struggling to figure out under
> what circumstances the swap entry passed to free_swap_and_cache() would still
> have a cached folio. Does anyone have any idea?
>
> This is the original (unbatched) function, after my change, which caused David's
> concern that we would end up calling __try_to_reclaim_swap() far too much:
>
> int free_swap_and_cache(swp_entry_t entry)
> {
> 	struct swap_info_struct *p;
> 	unsigned char count;
>
> 	if (non_swap_entry(entry))
> 		return 1;
>
> 	p = _swap_info_get(entry);
> 	if (p) {
> 		count = __swap_entry_free(p, entry);
> 		if (count == SWAP_HAS_CACHE)
> 			__try_to_reclaim_swap(p, swp_offset(entry),
> 					      TTRS_UNMAPPED | TTRS_FULL);
> 	}
> 	return p != NULL;
> }
>
> The trouble is, whenever it's called, count is always 0, so
> __try_to_reclaim_swap() never gets called.
>
> My test case is allocating 1G of anon memory, then doing madvise(MADV_PAGEOUT)
> over it. Then doing either a munmap() or madvise(MADV_FREE), both of which cause
> this function to be called for every PTE, but count is always 0 after
> __swap_entry_free(), so __try_to_reclaim_swap() is never called. I've tried for
> order-0 as well as PTE- and PMD-mapped 2M THP.
>
> I'm guessing the swapcache was already reclaimed as part of MADV_PAGEOUT? I'm
> using a block ram device as my backing store - I think this does synchronous IO,
> so perhaps if I have a real block device with async IO I might have more luck?
> Just a guess...
>
> Or perhaps this code path is a corner case? In which case, perhaps it's not worth
> adding the batching optimization after all?

I had to disable zswap in the past and was able to trigger this reliably with an
ordinary swap backend (e.g., a proper disk).

Whenever you involve swap-to-ram, you might just get it reclaimed immediately.

-- 
Cheers,

David / dhildenb
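
For readers following the thread, below is a minimal sketch of the batching idea
discussed above: first decrement the swap counts for a run of consecutive swap
entries, then walk the range once and attempt swap-cache reclaim, advancing by
the size of any large folio that is found. This is not the actual patch being
discussed; the function name free_swap_and_cache_nr() is hypothetical, the folio
lookup via swap_address_space()/filemap_get_folio() is only one possible way to
peek at the cached folio's size, and locking, per-slot count checks and error
handling are omitted.

/*
 * Rough sketch only (not the posted patch). Conceptually this would sit in
 * mm/swapfile.c next to free_swap_and_cache(), since it relies on its static
 * helpers. Batch-free `nr` consecutive swap entries starting at `entry`,
 * then lazily try to reclaim swap cache, skipping over large folios.
 */
static void free_swap_and_cache_nr(swp_entry_t entry, int nr)
{
	unsigned long offset = swp_offset(entry);
	unsigned long end = offset + nr;
	struct swap_info_struct *si;
	unsigned long i;

	if (non_swap_entry(entry))
		return;

	si = _swap_info_get(entry);
	if (!si)
		return;

	/* Pass 1: drop the swap count of every entry in the batch. */
	for (i = offset; i < end; i++)
		__swap_entry_free(si, swp_entry(swp_type(entry), i));

	/* Pass 2: reclaim swap cache, skipping a large folio in one step. */
	for (i = offset; i < end; ) {
		swp_entry_t cur = swp_entry(swp_type(entry), i);
		struct folio *folio;

		folio = filemap_get_folio(swap_address_space(cur), i);
		if (IS_ERR_OR_NULL(folio)) {
			i++;
			continue;
		}

		/*
		 * A real implementation would only reclaim slots whose count
		 * dropped to SWAP_HAS_CACHE in pass 1; that check is omitted.
		 */
		__try_to_reclaim_swap(si, i, TTRS_UNMAPPED | TTRS_FULL);
		i += folio_nr_pages(folio);	/* large folio spans many slots */
		folio_put(folio);
	}
}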
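
And since the test case above is described only in prose, here is roughly what it
looks like as a standalone program. This is an illustrative reconstruction, not
Ryan's actual test; it assumes a swap device is configured and keeps error
handling minimal.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT 21		/* fallback for older libc headers */
#endif

int main(void)
{
	size_t size = 1UL << 30;	/* 1 GiB of anonymous memory */
	char *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	memset(buf, 0x5a, size);		/* fault everything in */

	if (madvise(buf, size, MADV_PAGEOUT))	/* push it all out to swap */
		perror("madvise(MADV_PAGEOUT)");

	/*
	 * Either of these paths ends up in free_swap_and_cache() for every
	 * swap PTE; madvise(buf, size, MADV_FREE) could be used instead.
	 */
	munmap(buf, size);
	return 0;
}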