From: Barry Song <21cnbao@gmail.com>
Date: Thu, 13 Jun 2024 20:39:26 +1200
Subject: Re: [PATCH RFC 1/3] mm: extend rmap flags arguments for folio_add_new_anon_rmap
To: david@redhat.com, akpm@linux-foundation.org, linux-mm@kvack.org
Cc: chrisl@kernel.org, linux-kernel@vger.kernel.org, mhocko@suse.com, ryan.roberts@arm.com, baolin.wang@linux.alibaba.com, yosryahmed@google.com, shy828301@gmail.com, surenb@google.com, v-songbaohua@oppo.com, willy@infradead.org, ying.huang@intel.com, yuzhao@google.com
In-Reply-To: <20240613000721.23093-2-21cnbao@gmail.com>
References: <20240613000721.23093-1-21cnbao@gmail.com> <20240613000721.23093-2-21cnbao@gmail.com>
On Thu, Jun 13, 2024 at 12:07 PM Barry Song <21cnbao@gmail.com> wrote:
>
> From: Barry Song
>
> In the case of do_swap_page(), a new anonymous folio isn't necessarily
> exclusive. This patch extends the rmap flags to allow treating a new
> anon folio as either exclusive or non-exclusive.
> To maintain the current behavior, we always pass RMAP_EXCLUSIVE as the
> argument for now.
>
> Suggested-by: David Hildenbrand
> Signed-off-by: Barry Song
> ---
>  include/linux/rmap.h    |  2 +-
>  kernel/events/uprobes.c |  2 +-
>  mm/huge_memory.c        |  2 +-
>  mm/khugepaged.c         |  2 +-
>  mm/memory.c             | 10 +++++-----
>  mm/migrate_device.c     |  2 +-
>  mm/rmap.c               | 15 +++++++++------
>  mm/swapfile.c           |  2 +-
>  mm/userfaultfd.c        |  2 +-
>  9 files changed, 21 insertions(+), 18 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index cae38a2a643d..01a64e7e72b9 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -244,7 +244,7 @@ void folio_add_anon_rmap_ptes(struct folio *, struct page *, int nr_pages,
>  void folio_add_anon_rmap_pmd(struct folio *, struct page *,
>                 struct vm_area_struct *, unsigned long address, rmap_t flags);
>  void folio_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
> -               unsigned long address);
> +               unsigned long address, rmap_t flags);
>  void folio_add_file_rmap_ptes(struct folio *, struct page *, int nr_pages,
>                 struct vm_area_struct *);
>  #define folio_add_file_rmap_pte(folio, page, vma) \
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index 2c83ba776fc7..c20368aa33dd 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -181,7 +181,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
>
>         if (new_page) {
>                 folio_get(new_folio);
> -               folio_add_new_anon_rmap(new_folio, vma, addr);
> +               folio_add_new_anon_rmap(new_folio, vma, addr, RMAP_EXCLUSIVE);
>                 folio_add_lru_vma(new_folio, vma);
>         } else
>                 /* no new page, just dec_mm_counter for old_page */
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index f409ea9fcc18..09a83e43c71a 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -973,7 +973,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
>
>         entry = mk_huge_pmd(page, vma->vm_page_prot);
>         entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
> -       folio_add_new_anon_rmap(folio, vma, haddr);
> +       folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
>         folio_add_lru_vma(folio, vma);
>         pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
>         set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 774a97e6e2da..4d759a7487d0 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1213,7 +1213,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>
>         spin_lock(pmd_ptl);
>         BUG_ON(!pmd_none(*pmd));
> -       folio_add_new_anon_rmap(folio, vma, address);
> +       folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
>         folio_add_lru_vma(folio, vma);
>         pgtable_trans_huge_deposit(mm, pmd, pgtable);
>         set_pmd_at(mm, address, pmd, _pmd);
> diff --git a/mm/memory.c b/mm/memory.c
> index 54d7d2acdf39..2f94921091fb 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -930,7 +930,7 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
>         *prealloc = NULL;
>         copy_user_highpage(&new_folio->page, page, addr, src_vma);
>         __folio_mark_uptodate(new_folio);
> -       folio_add_new_anon_rmap(new_folio, dst_vma, addr);
> +       folio_add_new_anon_rmap(new_folio, dst_vma, addr, RMAP_EXCLUSIVE);
>         folio_add_lru_vma(new_folio, dst_vma);
>         rss[MM_ANONPAGES]++;
>
> @@ -3400,7 +3400,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
>          * some TLBs while the old PTE remains in others.
>          */
>         ptep_clear_flush(vma, vmf->address, vmf->pte);
> -       folio_add_new_anon_rmap(new_folio, vma, vmf->address);
> +       folio_add_new_anon_rmap(new_folio, vma, vmf->address, RMAP_EXCLUSIVE);
>         folio_add_lru_vma(new_folio, vma);
>         BUG_ON(unshare && pte_write(entry));
>         set_pte_at(mm, vmf->address, vmf->pte, entry);
> @@ -4337,7 +4337,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>
>         /* ksm created a completely new copy */
>         if (unlikely(folio != swapcache && swapcache)) {
> -               folio_add_new_anon_rmap(folio, vma, address);
> +               folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
>                 folio_add_lru_vma(folio, vma);
>         } else {
>                 folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, address,
> @@ -4592,7 +4592,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>         count_mthp_stat(folio_order(folio), MTHP_STAT_ANON_FAULT_ALLOC);
>  #endif
> -       folio_add_new_anon_rmap(folio, vma, addr);
> +       folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
>         folio_add_lru_vma(folio, vma);
>  setpte:
>         if (vmf_orig_pte_uffd_wp(vmf))
> @@ -4790,7 +4790,7 @@ void set_pte_range(struct vm_fault *vmf, struct folio *folio,
>         /* copy-on-write page */
>         if (write && !(vma->vm_flags & VM_SHARED)) {
>                 VM_BUG_ON_FOLIO(nr != 1, folio);
> -               folio_add_new_anon_rmap(folio, vma, addr);
> +               folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
>                 folio_add_lru_vma(folio, vma);
>         } else {
>                 folio_add_file_rmap_ptes(folio, page, nr, vma);
> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
> index 051d0a3ccbee..6d66dc1c6ffa 100644
> --- a/mm/migrate_device.c
> +++ b/mm/migrate_device.c
> @@ -658,7 +658,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
>                 goto unlock_abort;
>
>         inc_mm_counter(mm, MM_ANONPAGES);
> -       folio_add_new_anon_rmap(folio, vma, addr);
> +       folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
>         if (!folio_is_zone_device(folio))
>                 folio_add_lru_vma(folio, vma);
>         folio_get(folio);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index b9e5943c8349..e612d999811a 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1406,14 +1406,14 @@ void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
>   * This means the inc-and-test can be bypassed.
>   * The folio does not have to be locked.
>   *
> - * If the folio is pmd-mappable, it is accounted as a THP. As the folio
> - * is new, it's assumed to be mapped exclusively by a single process.
> + * If the folio is pmd-mappable, it is accounted as a THP.
>   */
>  void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
> -               unsigned long address)
> +               unsigned long address, rmap_t flags)
>  {
>         int nr = folio_nr_pages(folio);
>         int nr_pmdmapped = 0;
> +       bool exclusive = flags & RMAP_EXCLUSIVE;
>
>         VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
>         VM_BUG_ON_VMA(address < vma->vm_start ||
> @@ -1424,7 +1424,8 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>         if (likely(!folio_test_large(folio))) {
>                 /* increment count (starts at -1) */
>                 atomic_set(&folio->_mapcount, 0);
> -               SetPageAnonExclusive(&folio->page);
> +               if (exclusive)
> +                       SetPageAnonExclusive(&folio->page);
>         } else if (!folio_test_pmd_mappable(folio)) {
>                 int i;
>
> @@ -1433,7 +1434,8 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>
>                         /* increment count (starts at -1) */
>                         atomic_set(&page->_mapcount, 0);
> -                       SetPageAnonExclusive(page);
> +                       if (exclusive)
> +                               SetPageAnonExclusive(page);
>                 }
>
>                 /* increment count (starts at -1) */
> @@ -1445,7 +1447,8 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
>                 /* increment count (starts at -1) */
>                 atomic_set(&folio->_large_mapcount, 0);
>                 atomic_set(&folio->_nr_pages_mapped, ENTIRELY_MAPPED);
> -               SetPageAnonExclusive(&folio->page);
> +               if (exclusive)
> +                       SetPageAnonExclusive(&folio->page);
>                 nr_pmdmapped = nr;
>         }
>

I am missing this:

--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1408,7 +1408,7 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
        VM_BUG_ON_VMA(address < vma->vm_start ||
                        address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
        __folio_set_swapbacked(folio);
-       __folio_set_anon(folio, vma, address, true);
+       __folio_set_anon(folio, vma, address, exclusive);

        if (likely(!folio_test_large(folio))) {
                /* increment count (starts at -1) */

> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 9c6d8e557c0f..ae1d2700f6a3 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1911,7 +1911,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
>
>                 folio_add_anon_rmap_pte(folio, page, vma, addr, rmap_flags);
>         } else { /* ksm created a completely new copy */
> -               folio_add_new_anon_rmap(folio, vma, addr);
> +               folio_add_new_anon_rmap(folio, vma, addr, RMAP_EXCLUSIVE);
>                 folio_add_lru_vma(folio, vma);
>         }
>         new_pte = pte_mkold(mk_pte(page, vma->vm_page_prot));
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index 5e7f2801698a..8dedaec00486 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -216,7 +216,7 @@ int mfill_atomic_install_pte(pmd_t *dst_pmd,
>                 folio_add_lru(folio);
>                 folio_add_file_rmap_pte(folio, page, dst_vma);
>         } else {
> -               folio_add_new_anon_rmap(folio, dst_vma, dst_addr);
> +               folio_add_new_anon_rmap(folio, dst_vma, dst_addr, RMAP_EXCLUSIVE);
>                 folio_add_lru_vma(folio, dst_vma);
>         }
>
> --
> 2.34.1
>