From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <3be98b72-eb54-4963-a130-ec6033ab1403@redhat.com>
Date: Sat, 1 Jun 2024 08:44:42 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC PATCH] mm: swap: reuse exclusive folio directly instead of
 wp page faults
To: Barry Song <21cnbao@gmail.com>
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, chrisl@kernel.org,
 surenb@google.com, kasong@tencent.com, minchan@kernel.org,
 willy@infradead.org, ryan.roberts@arm.com, linux-kernel@vger.kernel.org,
 Barry Song
References: <20240531104819.140218-1-21cnbao@gmail.com>
 <87ac9610-5650-451f-aa54-e634a6310af4@redhat.com>
 <821d29b9-cb06-49db-9fe8-6c054c8787fb@redhat.com>
From: David Hildenbrand
Organization: Red Hat
In-Reply-To:
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 01.06.24 02:48, Barry Song wrote:
> On Sat, Jun 1, 2024 at 12:35 AM David Hildenbrand wrote:
>>
>> On 31.05.24 14:30, Barry Song wrote:
>>> On Sat, Jun 1, 2024 at 12:20 AM Barry Song <21cnbao@gmail.com> wrote:
>>>>
>>>> On Sat, Jun 1, 2024 at 12:10 AM David Hildenbrand wrote:
>>>>>
>>>>> On 31.05.24 13:55, Barry Song wrote:
>>>>>> On Fri, May 31, 2024 at 11:08 PM David Hildenbrand wrote:
>>>>>>>
>>>>>>> On 31.05.24 12:48, Barry Song wrote:
>>>>>>>> From: Barry Song
>>>>>>>>
>>>>>>>> After swapping out, we perform a swap-in operation. If we first read
>>>>>>>> and then write, we encounter a major fault in do_swap_page for reading,
>>>>>>>> along with additional minor faults in do_wp_page for writing. However,
>>>>>>>> the latter appears to be unnecessary and inefficient.
>>>>>>>> Instead, we can directly reuse the folio in do_swap_page and
>>>>>>>> completely eliminate the need for do_wp_page.
>>>>>>>>
>>>>>>>> This patch achieves that optimization specifically for exclusive
>>>>>>>> folios. The following microbenchmark demonstrates the significant
>>>>>>>> reduction in minor faults.
>>>>>>>>
>>>>>>>> #define DATA_SIZE (2UL * 1024 * 1024)
>>>>>>>> #define PAGE_SIZE (4UL * 1024)
>>>>>>>>
>>>>>>>> static void *read_write_data(char *addr)
>>>>>>>> {
>>>>>>>>         char tmp;
>>>>>>>>
>>>>>>>>         for (int i = 0; i < DATA_SIZE; i += PAGE_SIZE) {
>>>>>>>>                 tmp = *(volatile char *)(addr + i);
>>>>>>>>                 *(volatile char *)(addr + i) = tmp;
>>>>>>>>         }
>>>>>>>> }
>>>>>>>>
>>>>>>>> int main(int argc, char **argv)
>>>>>>>> {
>>>>>>>>         struct rusage ru;
>>>>>>>>
>>>>>>>>         char *addr = mmap(NULL, DATA_SIZE, PROT_READ | PROT_WRITE,
>>>>>>>>                           MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
>>>>>>>>         memset(addr, 0x11, DATA_SIZE);
>>>>>>>>
>>>>>>>>         do {
>>>>>>>>                 long old_ru_minflt, old_ru_majflt;
>>>>>>>>                 long new_ru_minflt, new_ru_majflt;
>>>>>>>>
>>>>>>>>                 madvise(addr, DATA_SIZE, MADV_PAGEOUT);
>>>>>>>>
>>>>>>>>                 getrusage(RUSAGE_SELF, &ru);
>>>>>>>>                 old_ru_minflt = ru.ru_minflt;
>>>>>>>>                 old_ru_majflt = ru.ru_majflt;
>>>>>>>>
>>>>>>>>                 read_write_data(addr);
>>>>>>>>                 getrusage(RUSAGE_SELF, &ru);
>>>>>>>>                 new_ru_minflt = ru.ru_minflt;
>>>>>>>>                 new_ru_majflt = ru.ru_majflt;
>>>>>>>>
>>>>>>>>                 printf("minor faults:%ld major faults:%ld\n",
>>>>>>>>                        new_ru_minflt - old_ru_minflt,
>>>>>>>>                        new_ru_majflt - old_ru_majflt);
>>>>>>>>         } while(0);
>>>>>>>>
>>>>>>>>         return 0;
>>>>>>>> }
>>>>>>>>
>>>>>>>> w/o patch,
>>>>>>>> / # ~/a.out
>>>>>>>> minor faults:512 major faults:512
>>>>>>>>
>>>>>>>> w/ patch,
>>>>>>>> / # ~/a.out
>>>>>>>> minor faults:0 major faults:512
>>>>>>>>
>>>>>>>> Minor faults decrease to 0!
>>>>>>>>
>>>>>>>> Signed-off-by: Barry Song
>>>>>>>> ---
>>>>>>>>  mm/memory.c | 7 ++++---
>>>>>>>>  1 file changed, 4 insertions(+), 3 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>>>>>> index eef4e482c0c2..e1d2e339958e 100644
>>>>>>>> --- a/mm/memory.c
>>>>>>>> +++ b/mm/memory.c
>>>>>>>> @@ -4325,9 +4325,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>>>>>          */
>>>>>>>>         if (!folio_test_ksm(folio) &&
>>>>>>>>             (exclusive || folio_ref_count(folio) == 1)) {
>>>>>>>> -               if (vmf->flags & FAULT_FLAG_WRITE) {
>>>>>>>> -                       pte = maybe_mkwrite(pte_mkdirty(pte), vma);
>>>>>>>> -                       vmf->flags &= ~FAULT_FLAG_WRITE;
>>>>>>>> +               if (vma->vm_flags & VM_WRITE) {
>>>>>>>> +                       pte = pte_mkwrite(pte_mkdirty(pte), vma);
>>>>>>>> +                       if (vmf->flags & FAULT_FLAG_WRITE)
>>>>>>>> +                               vmf->flags &= ~FAULT_FLAG_WRITE;
>>>>>>>
>>>>>>> This implies that, even on a read fault, you would mark the pte dirty
>>>>>>> and it would have to be written back to swap if still in the swap cache
>>>>>>> and only read.
>>>>>>>
>>>>>>> That is controversial.
>>>>>>>
>>>>>>> What is less controversial is doing what mprotect() via
>>>>>>> change_pte_range()/can_change_pte_writable() would do: mark the PTE
>>>>>>> writable but not dirty.
>>>>>>>
>>>>>>> I suggest setting the pte dirty only if FAULT_FLAG_WRITE is set.
>>>>>>
>>>>>> Thanks!
>>>>>>
>>>>>> I assume you mean something like below?
>>>>>
>>>>> It raises an important point: uffd-wp must be handled accordingly.
>>>>>
>>>>>>
>>>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>>>> index eef4e482c0c2..dbf1ba8ccfd6 100644
>>>>>> --- a/mm/memory.c
>>>>>> +++ b/mm/memory.c
>>>>>> @@ -4317,6 +4317,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>>>         add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
>>>>>>         pte = mk_pte(page, vma->vm_page_prot);
>>>>>>
>>>>>> +       if (pte_swp_soft_dirty(vmf->orig_pte))
>>>>>> +               pte = pte_mksoft_dirty(pte);
>>>>>> +       if (pte_swp_uffd_wp(vmf->orig_pte))
>>>>>> +               pte = pte_mkuffd_wp(pte);
>>>>>>         /*
>>>>>>          * Same logic as in do_wp_page(); however, optimize for pages that are
>>>>>>          * certainly not shared either because we just allocated them without
>>>>>> @@ -4325,18 +4329,19 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>>>          */
>>>>>>         if (!folio_test_ksm(folio) &&
>>>>>>             (exclusive || folio_ref_count(folio) == 1)) {
>>>>>> -               if (vmf->flags & FAULT_FLAG_WRITE) {
>>>>>> -                       pte = maybe_mkwrite(pte_mkdirty(pte), vma);
>>>>>> -                       vmf->flags &= ~FAULT_FLAG_WRITE;
>>>>>> +               if (vma->vm_flags & VM_WRITE) {
>>>>>> +                       if (vmf->flags & FAULT_FLAG_WRITE) {
>>>>>> +                               pte = pte_mkwrite(pte_mkdirty(pte), vma);
>>>>>> +                               vmf->flags &= ~FAULT_FLAG_WRITE;
>>>>>> +                       } else if ((!vma_soft_dirty_enabled(vma) || pte_soft_dirty(pte))
>>>>>> +                                  && !userfaultfd_pte_wp(vma, pte)) {
>>>>>> +                               pte = pte_mkwrite(pte, vma);
>>>>>
>>>>> Even with FAULT_FLAG_WRITE we must respect uffd-wp and *not* do a
>>>>> pte_mkwrite(pte). So we have to catch and handle that earlier (I could
>>>>> have sworn we handle that somehow).
>>>>>
>>>>> Note that the existing
>>>>>         pte = pte_mkuffd_wp(pte);
>>>>> will fix that up, because it does an implicit pte_wrprotect().
>>>>
>>>> This is exactly what I have missed, as I was struggling with why
>>>> WRITE_FAULT blindly does mkwrite without checking userfaultfd_pte_wp().
>>>>
>>>>>
>>>>> So maybe what would work is
>>>>>
>>>>> if ((vma->vm_flags & VM_WRITE) && !userfaultfd_pte_wp(vma, pte) &&
>>>>>     !vma_soft_dirty_enabled(vma)) {
>>>>>         pte = pte_mkwrite(pte);
>>>>>
>>>>>         /* Only set the PTE dirty on write fault. */
>>>>>         if (vmf->flags & FAULT_FLAG_WRITE) {
>>>>>                 pte = pte_mkdirty(pte);
>>>>>                 vmf->flags &= ~FAULT_FLAG_WRITE;
>>>>>         }
>>>
>>> WRITE_FAULT has a pte_mkdirty, so it doesn't need to check
>>> "!vma_soft_dirty_enabled(vma)"?
>>> Maybe I thought too much; just the simple code below should work?
>>
>> That would likely not handle softdirty correctly in case we end up in
>> pte_mkwrite(pte, vma); note that pte_mksoft_dirty() will not wrprotect ...
>
> If SOFTDIRTY has been set, we shouldn't do wrprotect? Till the dirty bit
> is cleared, we don't rely on a further write fault to set soft dirty, right?

If softdirty is enabled for the VMA and the PTE is softdirty, we can (not
should) map it writable. My point is that softdirty tracking is so underused
that optimizing for it here is likely not required.

>
> so we should rather do pte_mkwrite if pte_soft_dirty(pte) == true?
>
> if ((vma->vm_flags & VM_WRITE) && !userfaultfd_pte_wp(vma, pte) &&
>     (!vma_soft_dirty_enabled(vma) || pte_soft_dirty(pte)))
>
>>
>> (note that we shouldn't optimize for softdirty handling)
>>
>>>
>>> if (!folio_test_ksm(folio) &&
>>>     (exclusive || folio_ref_count(folio) == 1)) {
>>>         if (vma->vm_flags & VM_WRITE) {
>>>                 if (vmf->flags & FAULT_FLAG_WRITE) {
>>>                         pte = pte_mkwrite(pte_mkdirty(pte), vma);
>>>                         vmf->flags &= ~FAULT_FLAG_WRITE;
>>>                 } else {
>>>                         pte = pte_mkwrite(pte, vma);
>>>                 }
>>>         }
>>>         rmap_flags |= RMAP_EXCLUSIVE;
>>> }
>>>
>>> if (pte_swp_soft_dirty(vmf->orig_pte))
>>>         pte = pte_mksoft_dirty(pte);
>>> if (pte_swp_uffd_wp(vmf->orig_pte))
>>>         pte = pte_mkuffd_wp(pte);
>>>
>>> This still uses the implicit wrprotect of pte_mkuffd_wp.
>>
>> But the wrprotected->writable->wrprotected path really is confusing. I'd
>> prefer to set these bits ahead of time instead, so we can properly rely
>> on them -- like we do in other code.
>
> I agree. We are setting WRITE and then reverting the WRITE; it is confusing.
>
> So, in conclusion, we get the below?
>
> if (pte_swp_soft_dirty(vmf->orig_pte))
>         pte = pte_mksoft_dirty(pte);
> if (pte_swp_uffd_wp(vmf->orig_pte))
>         pte = pte_mkuffd_wp(pte);
>
> /*
>  * Same logic as in do_wp_page(); however, optimize for pages that are
>  * certainly not shared either because we just allocated them without
>  * exposing them to the swapcache or because the swap entry indicates
>  * exclusivity.
>  */
> if (!folio_test_ksm(folio) &&
>     (exclusive || folio_ref_count(folio) == 1)) {
>         if (vma->vm_flags & VM_WRITE && !userfaultfd_pte_wp(vma, pte)
>             && (!vma_soft_dirty_enabled(vma) || pte_soft_dirty(pte))) {

And here the code gets ugly. Just do

        if ((vma->vm_flags & VM_WRITE) && !userfaultfd_pte_wp(vma, pte) &&
            !vma_soft_dirty_enabled(vma)) {
                ...
        }

and don't optimize softdirty. Or, if you really want to, have a helper
function like userfaultfd_pte_wp() that wraps both checks.

>                 if (vmf->flags & FAULT_FLAG_WRITE) {
>                         pte = pte_mkwrite(pte_mkdirty(pte), vma);
>                         vmf->flags &= ~FAULT_FLAG_WRITE;
>                 } else {
>                         pte = pte_mkwrite(pte, vma);
>                 }
> }

Why not

        pte = pte_mkwrite(pte, vma);
        if (vmf->flags & FAULT_FLAG_WRITE) {
                pte = pte_mkdirty(pte);
                vmf->flags &= ~FAULT_FLAG_WRITE;
        }

Conceptually, I think this should be fine.

-- 
Cheers,

David / dhildenb
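Assembled from the snippets quoted above, the do_swap_page() hunk this
exchange converges on would look roughly like the following. This is only a
sketch under the assumptions discussed in the thread (set the swp pte bits up
front, respect uffd-wp, and drop the softdirty special case), not a tested or
posted patch:

        if (pte_swp_soft_dirty(vmf->orig_pte))
                pte = pte_mksoft_dirty(pte);
        if (pte_swp_uffd_wp(vmf->orig_pte))
                pte = pte_mkuffd_wp(pte);

        /*
         * Same logic as in do_wp_page(); however, optimize for pages that are
         * certainly not shared either because we just allocated them without
         * exposing them to the swapcache or because the swap entry indicates
         * exclusivity.
         */
        if (!folio_test_ksm(folio) &&
            (exclusive || folio_ref_count(folio) == 1)) {
                /*
                 * Map the folio writable on read faults too, but respect
                 * uffd-wp and don't optimize for softdirty tracking.
                 */
                if ((vma->vm_flags & VM_WRITE) &&
                    !userfaultfd_pte_wp(vma, pte) &&
                    !vma_soft_dirty_enabled(vma)) {
                        pte = pte_mkwrite(pte, vma);

                        /* Only set the PTE dirty on an actual write fault. */
                        if (vmf->flags & FAULT_FLAG_WRITE) {
                                pte = pte_mkdirty(pte);
                                vmf->flags &= ~FAULT_FLAG_WRITE;
                        }
                }
                rmap_flags |= RMAP_EXCLUSIVE;
        }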