From: Barry Song <21cnbao@gmail.com>
Date: Sat, 1 Jun 2024 00:20:13 +1200
Subject: Re: [RFC PATCH] mm: swap: reuse exclusive folio directly instead of wp page faults
To: David Hildenbrand
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, chrisl@kernel.org,
	surenb@google.com, kasong@tencent.com, minchan@kernel.org,
	willy@infradead.org, ryan.roberts@arm.com, linux-kernel@vger.kernel.org,
	Barry Song
References: <20240531104819.140218-1-21cnbao@gmail.com>
	<87ac9610-5650-451f-aa54-e634a6310af4@redhat.com>

On Sat, Jun 1, 2024 at 12:10 AM David Hildenbrand wrote:
>
> On 31.05.24 13:55, Barry Song wrote:
> > On Fri, May 31, 2024 at 11:08 PM David Hildenbrand wrote:
> >>
> >> On 31.05.24 12:48, Barry Song wrote:
> >>> From: Barry Song
> >>>
> >>> After swapping out, we perform a swap-in operation. If we first read
> >>> and then write, we encounter a major fault in do_swap_page for reading,
> >>> along with additional minor faults in do_wp_page for writing. However,
> >>> the latter appears to be unnecessary and inefficient. Instead, we can
> >>> reuse the folio directly in do_swap_page and completely eliminate the
> >>> need for do_wp_page.
> >>>
> >>> This patch achieves that optimization specifically for exclusive folios.
> >>> The following microbenchmark demonstrates the significant reduction in
> >>> minor faults.
> >>>
> >>> #include <stdio.h>
> >>> #include <string.h>
> >>> #include <sys/mman.h>
> >>> #include <sys/resource.h>
> >>>
> >>> #define DATA_SIZE (2UL * 1024 * 1024)
> >>> #define PAGE_SIZE (4UL * 1024)
> >>>
> >>> static void read_write_data(char *addr)
> >>> {
> >>> 	char tmp;
> >>>
> >>> 	for (int i = 0; i < DATA_SIZE; i += PAGE_SIZE) {
> >>> 		tmp = *(volatile char *)(addr + i);
> >>> 		*(volatile char *)(addr + i) = tmp;
> >>> 	}
> >>> }
> >>>
> >>> int main(int argc, char **argv)
> >>> {
> >>> 	struct rusage ru;
> >>>
> >>> 	char *addr = mmap(NULL, DATA_SIZE, PROT_READ | PROT_WRITE,
> >>> 			MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> >>> 	memset(addr, 0x11, DATA_SIZE);
> >>>
> >>> 	do {
> >>> 		long old_ru_minflt, old_ru_majflt;
> >>> 		long new_ru_minflt, new_ru_majflt;
> >>>
> >>> 		madvise(addr, DATA_SIZE, MADV_PAGEOUT);
> >>>
> >>> 		getrusage(RUSAGE_SELF, &ru);
> >>> 		old_ru_minflt = ru.ru_minflt;
> >>> 		old_ru_majflt = ru.ru_majflt;
> >>>
> >>> 		read_write_data(addr);
> >>> 		getrusage(RUSAGE_SELF, &ru);
> >>> 		new_ru_minflt = ru.ru_minflt;
> >>> 		new_ru_majflt = ru.ru_majflt;
> >>>
> >>> 		printf("minor faults:%ld major faults:%ld\n",
> >>> 			new_ru_minflt - old_ru_minflt,
> >>> 			new_ru_majflt - old_ru_majflt);
> >>> 	} while(0);
> >>>
> >>> 	return 0;
> >>> }
> >>>
> >>> w/o patch,
> >>> / # ~/a.out
> >>> minor faults:512 major faults:512
> >>>
> >>> w/ patch,
> >>> / # ~/a.out
> >>> minor faults:0 major faults:512
> >>>
> >>> Minor faults decrease to 0!
> >>>
> >>> Signed-off-by: Barry Song
> >>> ---
> >>>   mm/memory.c | 7 ++++---
> >>>   1 file changed, 4 insertions(+), 3 deletions(-)
> >>>
> >>> diff --git a/mm/memory.c b/mm/memory.c
> >>> index eef4e482c0c2..e1d2e339958e 100644
> >>> --- a/mm/memory.c
> >>> +++ b/mm/memory.c
> >>> @@ -4325,9 +4325,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >>>  	 */
> >>>  	if (!folio_test_ksm(folio) &&
> >>>  	    (exclusive || folio_ref_count(folio) == 1)) {
> >>> -		if (vmf->flags & FAULT_FLAG_WRITE) {
> >>> -			pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> >>> -			vmf->flags &= ~FAULT_FLAG_WRITE;
> >>> +		if (vma->vm_flags & VM_WRITE) {
> >>> +			pte = pte_mkwrite(pte_mkdirty(pte), vma);
> >>> +			if (vmf->flags & FAULT_FLAG_WRITE)
> >>> +				vmf->flags &= ~FAULT_FLAG_WRITE;
> >>
> >> This implies that, even on a read fault, you would mark the pte dirty
> >> and it would have to be written back to swap if still in the swap cache
> >> and only read.
> >>
> >> That is controversial.
> >>
> >> What is less controversial is doing what mprotect() via
> >> change_pte_range()/can_change_pte_writable() would do: mark the PTE
> >> writable but not dirty.
> >>
> >> I suggest setting the pte dirty only if FAULT_FLAG_WRITE is set.
> >
> > Thanks!
> >
> > I assume you mean something like the below?
> > It raises an important point: uffd-wp must be handled accordingly.
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index eef4e482c0c2..dbf1ba8ccfd6 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -4317,6 +4317,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >  		add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
> >  	pte = mk_pte(page, vma->vm_page_prot);
> >
> > +	if (pte_swp_soft_dirty(vmf->orig_pte))
> > +		pte = pte_mksoft_dirty(pte);
> > +	if (pte_swp_uffd_wp(vmf->orig_pte))
> > +		pte = pte_mkuffd_wp(pte);
> >  	/*
> >  	 * Same logic as in do_wp_page(); however, optimize for pages that are
> >  	 * certainly not shared either because we just allocated them without
> > @@ -4325,18 +4329,19 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >  	 */
> >  	if (!folio_test_ksm(folio) &&
> >  	    (exclusive || folio_ref_count(folio) == 1)) {
> > -		if (vmf->flags & FAULT_FLAG_WRITE) {
> > -			pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> > -			vmf->flags &= ~FAULT_FLAG_WRITE;
> > +		if (vma->vm_flags & VM_WRITE) {
> > +			if (vmf->flags & FAULT_FLAG_WRITE) {
> > +				pte = pte_mkwrite(pte_mkdirty(pte), vma);
> > +				vmf->flags &= ~FAULT_FLAG_WRITE;
> > +			} else if ((!vma_soft_dirty_enabled(vma) ||
> > +				    pte_soft_dirty(pte)) &&
> > +				   !userfaultfd_pte_wp(vma, pte)) {
> > +				pte = pte_mkwrite(pte, vma);
>
> Even with FAULT_FLAG_WRITE we must respect uffd-wp and *not* do a
> pte_mkwrite(pte). So we have to catch and handle that earlier (I could
> have sworn we handle that somehow).
>
> Note that the existing
> 	pte = pte_mkuffd_wp(pte);
>
> will fix that up, because it does an implicit pte_wrprotect().

This is exactly what I had missed; I was struggling to understand why
the write-fault path blindly does mkwrite without checking
userfaultfd_pte_wp().

>
> So maybe what would work is
>
> if ((vma->vm_flags & VM_WRITE) && !userfaultfd_pte_wp(vma, pte) &&
>     !vma_soft_dirty_enabled(vma)) {
> 	pte = pte_mkwrite(pte, vma);
>
> 	/* Only set the PTE dirty on write fault. */
> 	if (vmf->flags & FAULT_FLAG_WRITE) {
> 		pte = pte_mkdirty(pte);
> 		vmf->flags &= ~FAULT_FLAG_WRITE;
> 	}
> }

Looks good!

> --
> Cheers,
>
> David / dhildenb

Thanks
Barry
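
Putting the two diffs together, the reuse branch in do_swap_page() would
end up looking roughly like the sketch below. This is only an illustration
of the direction agreed on in the thread, not the merged patch: it assumes
the two-argument pte_mkwrite(pte, vma) helper used in Barry's diff, and
that the soft-dirty and uffd-wp bits have already been transferred from
the swap pte a few lines earlier.

	/*
	 * Illustrative sketch only, not the merged patch: the reuse
	 * branch of do_swap_page() with David's suggestion folded in.
	 * Assumes soft-dirty/uffd-wp were already copied over from the
	 * swap pte via pte_mksoft_dirty()/pte_mkuffd_wp() above.
	 */
	if (!folio_test_ksm(folio) &&
	    (exclusive || folio_ref_count(folio) == 1)) {
		/*
		 * Map the exclusive folio writable right away, but never
		 * override uffd-wp, and leave the PTE read-only while
		 * soft-dirty tracking still needs to catch the next write.
		 */
		if ((vma->vm_flags & VM_WRITE) &&
		    !userfaultfd_pte_wp(vma, pte) &&
		    !vma_soft_dirty_enabled(vma)) {
			pte = pte_mkwrite(pte, vma);

			/* Only set the PTE dirty on a write fault. */
			if (vmf->flags & FAULT_FLAG_WRITE) {
				pte = pte_mkdirty(pte);
				vmf->flags &= ~FAULT_FLAG_WRITE;
			}
		}
	}

Keeping the PTE clean on a read fault is what avoids the write-back David
objected to: a folio still in the swap cache that was only read never
looks dirty, so it can be reclaimed without extra I/O.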