From: Barry Song <21cnbao@gmail.com>
Date: Sat, 1 Jun 2024 00:30:22 +1200
Subject: Re: [RFC PATCH] mm: swap: reuse exclusive folio directly instead of wp page faults
To: David Hildenbrand
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, chrisl@kernel.org,
    surenb@google.com, kasong@tencent.com, minchan@kernel.org,
    willy@infradead.org, ryan.roberts@arm.com, linux-kernel@vger.kernel.org,
    Barry Song
References: <20240531104819.140218-1-21cnbao@gmail.com>
    <87ac9610-5650-451f-aa54-e634a6310af4@redhat.com>

On Sat, Jun 1, 2024 at 12:20 AM Barry Song <21cnbao@gmail.com> wrote:
>
> On Sat, Jun 1, 2024 at 12:10 AM David Hildenbrand wrote:
> >
> > On 31.05.24 13:55, Barry Song wrote:
> > > On Fri, May 31, 2024 at 11:08 PM David Hildenbrand wrote:
> > >>
> > >> On 31.05.24 12:48, Barry Song wrote:
> > >>> From: Barry Song
> > >>>
> > >>> After swapping out, we perform a swap-in operation. If we first read
> > >>> and then write, we encounter a major fault in do_swap_page for reading,
> > >>> along with additional minor faults in do_wp_page for writing. However,
> > >>> the latter appears to be unnecessary and inefficient. Instead, we can
> > >>> directly reuse in do_swap_page and completely eliminate the need for
> > >>> do_wp_page.
> > >>>
> > >>> This patch achieves that optimization specifically for exclusive folios.
> > >>> The following microbenchmark demonstrates the significant reduction in
> > >>> minor faults.
> > >>>
> > >>>  #define DATA_SIZE (2UL * 1024 * 1024)
> > >>>  #define PAGE_SIZE (4UL * 1024)
> > >>>
> > >>>  static void *read_write_data(char *addr)
> > >>>  {
> > >>>          char tmp;
> > >>>
> > >>>          for (int i = 0; i < DATA_SIZE; i += PAGE_SIZE) {
> > >>>                  tmp = *(volatile char *)(addr + i);
> > >>>                  *(volatile char *)(addr + i) = tmp;
> > >>>          }
> > >>>  }
> > >>>
> > >>>  int main(int argc, char **argv)
> > >>>  {
> > >>>          struct rusage ru;
> > >>>
> > >>>          char *addr = mmap(NULL, DATA_SIZE, PROT_READ | PROT_WRITE,
> > >>>                          MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> > >>>          memset(addr, 0x11, DATA_SIZE);
> > >>>
> > >>>          do {
> > >>>                  long old_ru_minflt, old_ru_majflt;
> > >>>                  long new_ru_minflt, new_ru_majflt;
> > >>>
> > >>>                  madvise(addr, DATA_SIZE, MADV_PAGEOUT);
> > >>>
> > >>>                  getrusage(RUSAGE_SELF, &ru);
> > >>>                  old_ru_minflt = ru.ru_minflt;
> > >>>                  old_ru_majflt = ru.ru_majflt;
> > >>>
> > >>>                  read_write_data(addr);
> > >>>                  getrusage(RUSAGE_SELF, &ru);
> > >>>                  new_ru_minflt = ru.ru_minflt;
> > >>>                  new_ru_majflt = ru.ru_majflt;
> > >>>
> > >>>                  printf("minor faults:%ld major faults:%ld\n",
> > >>>                          new_ru_minflt - old_ru_minflt,
> > >>>                          new_ru_majflt - old_ru_majflt);
> > >>>          } while(0);
> > >>>
> > >>>          return 0;
> > >>>  }
> > >>>
> > >>> w/o patch,
> > >>> / # ~/a.out
> > >>> minor faults:512 major faults:512
> > >>>
> > >>> w/ patch,
> > >>> / # ~/a.out
> > >>> minor faults:0 major faults:512
> > >>>
> > >>> Minor faults decrease to 0!
> > >>>
> > >>> Signed-off-by: Barry Song
> > >>> ---
> > >>>  mm/memory.c | 7 ++++---
> > >>>  1 file changed, 4 insertions(+), 3 deletions(-)
> > >>>
> > >>> diff --git a/mm/memory.c b/mm/memory.c
> > >>> index eef4e482c0c2..e1d2e339958e 100644
> > >>> --- a/mm/memory.c
> > >>> +++ b/mm/memory.c
> > >>> @@ -4325,9 +4325,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > >>>           */
> > >>>          if (!folio_test_ksm(folio) &&
> > >>>              (exclusive || folio_ref_count(folio) == 1)) {
> > >>> -                if (vmf->flags & FAULT_FLAG_WRITE) {
> > >>> -                        pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> > >>> -                        vmf->flags &= ~FAULT_FLAG_WRITE;
> > >>> +                if (vma->vm_flags & VM_WRITE) {
> > >>> +                        pte = pte_mkwrite(pte_mkdirty(pte), vma);
> > >>> +                        if (vmf->flags & FAULT_FLAG_WRITE)
> > >>> +                                vmf->flags &= ~FAULT_FLAG_WRITE;
> > >>
> > >> This implies, that even on a read fault, you would mark the pte dirty
> > >> and it would have to be written back to swap if still in the swap cache
> > >> and only read.
> > >>
> > >> That is controversial.
> > >>
> > >> What is less controversial is doing what mprotect() via
> > >> change_pte_range()/can_change_pte_writable() would do: mark the PTE
> > >> writable but not dirty.
> > >>
> > >> I suggest setting the pte only dirty if FAULT_FLAG_WRITE is set.
> > >
> > > Thanks!
> > >
> > > I assume you mean something as below?
> >
> > It raises an important point: uffd-wp must be handled accordingly.
> >
> > >
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index eef4e482c0c2..dbf1ba8ccfd6 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -4317,6 +4317,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > >                  add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
> > >          pte = mk_pte(page, vma->vm_page_prot);
> > >
> > > +        if (pte_swp_soft_dirty(vmf->orig_pte))
> > > +                pte = pte_mksoft_dirty(pte);
> > > +        if (pte_swp_uffd_wp(vmf->orig_pte))
> > > +                pte = pte_mkuffd_wp(pte);
> > >          /*
> > >           * Same logic as in do_wp_page(); however, optimize for pages that are
> > >           * certainly not shared either because we just allocated them without
> > > @@ -4325,18 +4329,19 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > >           */
> > >          if (!folio_test_ksm(folio) &&
> > >              (exclusive || folio_ref_count(folio) == 1)) {
> > > -                if (vmf->flags & FAULT_FLAG_WRITE) {
> > > -                        pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> > > -                        vmf->flags &= ~FAULT_FLAG_WRITE;
> > > +                if (vma->vm_flags & VM_WRITE) {
> > > +                        if (vmf->flags & FAULT_FLAG_WRITE) {
> > > +                                pte = pte_mkwrite(pte_mkdirty(pte), vma);
> > > +                                vmf->flags &= ~FAULT_FLAG_WRITE;
> > > +                        } else if ((!vma_soft_dirty_enabled(vma) || pte_soft_dirty(pte)) &&
> > > +                                   !userfaultfd_pte_wp(vma, pte)) {
> > > +                                pte = pte_mkwrite(pte, vma);
> >
> > Even with FAULT_FLAG_WRITE we must respect uffd-wp and *not* do a
> > pte_mkwrite(pte). So we have to catch and handle that earlier (I could
> > have sworn we handle that somehow).
> >
> > Note that the existing
> >          pte = pte_mkuffd_wp(pte);
> >
> > Will fix that up because it does an implicit pte_wrprotect().
>
> This is exactly what I have missed as I am struggling with why WRITE_FAULT
> blindly does mkwrite without checking userfaultfd_pte_wp().
>
> >
> > So maybe what would work is
> >
> > if ((vma->vm_flags & VM_WRITE) && !userfaultfd_pte_wp(vma, pte) &&
> >     !vma_soft_dirty_enabled(vma)) {
> >         pte = pte_mkwrite(pte);
> >
> >         /* Only set the PTE dirty on write fault. */
> >         if (vmf->flags & FAULT_FLAG_WRITE) {
> >                 pte = pte_mkdirty(pte);
> >                 vmf->flags &= ~FAULT_FLAG_WRITE;
> >         }

The WRITE_FAULT path does pte_mkdirty() anyway, so it shouldn't need the
"!vma_soft_dirty_enabled(vma)" check? Maybe I'm overthinking this; wouldn't
the simple code below be enough?

        if (!folio_test_ksm(folio) &&
            (exclusive || folio_ref_count(folio) == 1)) {
                if (vma->vm_flags & VM_WRITE) {
                        if (vmf->flags & FAULT_FLAG_WRITE) {
                                pte = pte_mkwrite(pte_mkdirty(pte), vma);
                                vmf->flags &= ~FAULT_FLAG_WRITE;
                        } else {
                                pte = pte_mkwrite(pte, vma);
                        }
                }
                rmap_flags |= RMAP_EXCLUSIVE;
        }

        if (pte_swp_soft_dirty(vmf->orig_pte))
                pte = pte_mksoft_dirty(pte);
        if (pte_swp_uffd_wp(vmf->orig_pte))
                pte = pte_mkuffd_wp(pte);

This still relies on the implicit wrprotect done by pte_mkuffd_wp().

> >
> > }
> >
>
> looks good!
>
> > --
> > Cheers,
> >
> > David / dhildenb
> >
>
> Thanks
> Barry
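
Aside: the following is a minimal userspace toy model, not kernel code, of
the ordering argument above; the TOY_PTE_* bits and toy_pte_* helpers are
invented purely for illustration. It sketches why applying the uffd-wp fixup
after pte_mkwrite() leaves the PTE write-protected, which is the property the
"simple code" above relies on (per the point that pte_mkuffd_wp() does an
implicit pte_wrprotect()).

#include <stdbool.h>
#include <stdio.h>

#define TOY_PTE_WRITE   (1u << 0)
#define TOY_PTE_DIRTY   (1u << 1)
#define TOY_PTE_UFFD_WP (1u << 2)

typedef unsigned int toy_pte_t;

static toy_pte_t toy_pte_mkwrite(toy_pte_t pte)   { return pte | TOY_PTE_WRITE; }
static toy_pte_t toy_pte_mkdirty(toy_pte_t pte)   { return pte | TOY_PTE_DIRTY; }
static toy_pte_t toy_pte_wrprotect(toy_pte_t pte) { return pte & ~TOY_PTE_WRITE; }

/* Models the "implicit pte_wrprotect()" behaviour discussed above. */
static toy_pte_t toy_pte_mkuffd_wp(toy_pte_t pte)
{
        return toy_pte_wrprotect(pte | TOY_PTE_UFFD_WP);
}

int main(void)
{
        bool swp_uffd_wp = true;   /* entry was uffd-wp'ed before swap-out */
        bool write_fault = false;  /* read fault on an exclusive folio */
        toy_pte_t pte = 0;

        /* "Simple code": map writable, mark dirty only on a write fault. */
        pte = write_fault ? toy_pte_mkwrite(toy_pte_mkdirty(pte))
                          : toy_pte_mkwrite(pte);

        /* uffd-wp fixup applied afterwards, clearing the write bit again. */
        if (swp_uffd_wp)
                pte = toy_pte_mkuffd_wp(pte);

        /* Prints writable=0 dirty=0 uffd_wp=1 for this configuration. */
        printf("writable=%d dirty=%d uffd_wp=%d\n",
               !!(pte & TOY_PTE_WRITE), !!(pte & TOY_PTE_DIRTY),
               !!(pte & TOY_PTE_UFFD_WP));
        return 0;
}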