From: Lance Yang <ioworker0@gmail.com>
Date: Thu, 11 Apr 2024 20:46:43 +0800
Subject: Re: [PATCH v5 1/2] mm/madvise: optimize lazyfreeing with mTHP in madvise_free
To: Ryan Roberts <ryan.roberts@arm.com>
Cc: akpm@linux-foundation.org, david@redhat.com, 21cnbao@gmail.com,
	mhocko@suse.com, fengwei.yin@intel.com, zokeefe@google.com,
	shy828301@gmail.com, xiehuan09@gmail.com, wangkefeng.wang@huawei.com,
	songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
In-Reply-To: <38c4add8-53a2-49ca-9f1b-f62c2ee3e764@arm.com>
References: <20240408042437.10951-1-ioworker0@gmail.com>
	<20240408042437.10951-2-ioworker0@gmail.com>
	<38c4add8-53a2-49ca-9f1b-f62c2ee3e764@arm.com>

On Thu, Apr 11, 2024 at 7:11 PM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> On 08/04/2024 05:24, Lance Yang wrote:
> > This patch optimizes lazyfreeing with PTE-mapped mTHP[1]
> > (inspired by David Hildenbrand[2]). We aim to avoid unnecessary folio
> > splitting if the large folio is fully mapped within the target range.
> >
> > If a large folio is locked or shared, or if we fail to split it, we just
> > leave it in place and advance to the next PTE in the range. Note that this
> > changes the behavior: previously, any failure of this sort would cause the
> > entire operation to give up. As large folios become more common, sticking
> > to the old way could result in wasted opportunities.
> >
> > On an Intel i5 CPU, lazyfreeing a 1GiB VMA backed by PTE-mapped folios of
> > the same size results in the following runtimes for madvise(MADV_FREE),
> > in seconds (shorter is better):
> >
> > Folio Size |   Old    |   New    | Change
> > ------------------------------------------
> >       4KiB | 0.590251 | 0.590259 |    0%
> >      16KiB | 2.990447 | 0.185655 |  -94%
> >      32KiB | 2.547831 | 0.104870 |  -95%
> >      64KiB | 2.457796 | 0.052812 |  -97%
> >     128KiB | 2.281034 | 0.032777 |  -99%
> >     256KiB | 2.230387 | 0.017496 |  -99%
> >     512KiB | 2.189106 | 0.010781 |  -99%
> >    1024KiB | 2.183949 | 0.007753 |  -99%
> >    2048KiB | 0.002799 | 0.002804 |    0%
> >
> > [1] https://lkml.kernel.org/r/20231207161211.2374093-5-ryan.roberts@arm.com
> > [2] https://lore.kernel.org/linux-mm/20240214204435.167852-1-david@redhat.com
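
(For anyone who wants to reproduce the numbers: the measurement is
essentially the loop below. This is a simplified sketch, not the exact
harness; it assumes the mTHP size under test was already enabled via
/sys/kernel/mm/transparent_hugepage/ beforehand, and error handling is
omitted.)

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define LEN (1UL << 30)	/* 1GiB VMA, as in the table above */

int main(void)
{
	struct timespec t0, t1;
	char *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;
	/* Touch every page so the whole range is populated with folios. */
	memset(buf, 1, LEN);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	madvise(buf, LEN, MADV_FREE);	/* the operation being timed */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.6f\n", (t1.tv_sec - t0.tv_sec) +
			 (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}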
> >
> > Signed-off-by: Lance Yang <ioworker0@gmail.com>
> > ---
> >  include/linux/pgtable.h |  34 +++++++++
> >  mm/internal.h           |  12 +++-
> >  mm/madvise.c            | 149 ++++++++++++++++++++++------------
> >  mm/memory.c             |   4 +-
> >  4 files changed, 129 insertions(+), 70 deletions(-)
> >
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 0f4b2faa1d71..4dd442787420 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -489,6 +489,40 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
> >  }
> >  #endif
> >
> > +#ifndef mkold_clean_ptes
> > +/**
> > + * mkold_clean_ptes - Mark PTEs that map consecutive pages of the same folio
> > + *			as old and clean.
> > + * @mm: Address space the pages are mapped into.
> > + * @addr: Address the first page is mapped at.
> > + * @ptep: Page table pointer for the first entry.
> > + * @nr: Number of entries to mark old and clean.
> > + *
> > + * May be overridden by the architecture; otherwise, implemented by
> > + * get_and_clear/modify/set for each pte in the range.
> > + *
> > + * Note that PTE bits in the PTE range besides the PFN can differ. For example,
> > + * some PTEs might be write-protected.
> > + *
> > + * Context: The caller holds the page table lock. The PTEs map consecutive
> > + * pages that belong to the same folio. The PTEs are all in the same PMD.
> > + */
> > +static inline void mkold_clean_ptes(struct mm_struct *mm, unsigned long addr,
> > +				    pte_t *ptep, unsigned int nr)
>
> Just thinking out loud, I wonder if it would be cleaner to convert mkold_ptes()
> (which I added as part of swap-out) into something like:
>
>   clear_young_dirty_ptes(struct mm_struct *mm, unsigned long addr,
>                          pte_t *ptep, unsigned int nr,
>                          bool clear_young, bool clear_dirty);
>
> Then we can use the same function for both use cases, and we also gain the
> ability to clear only the dirty bit in the future if we ever need it. The
> other advantage is that we only need to plumb a single function down to the
> arm64 arch code; as it currently stands, those two functions would duplicate
> most of their code.
>
> Generated code would still be the same, since I'd expect the call sites to
> pass constants for clear_young and clear_dirty.
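
Good idea! Just to check I've understood, the generic fallback would then
look roughly like this (untested sketch; the name and the two bools are
taken from your prototype above):

static inline void clear_young_dirty_ptes(struct mm_struct *mm,
					  unsigned long addr, pte_t *ptep,
					  unsigned int nr, bool clear_young,
					  bool clear_dirty)
{
	pte_t pte;

	for (;;) {
		/*
		 * Clear and rewrite the PTE rather than modifying it in
		 * place, so architectures (e.g. PPC) that don't pick up
		 * a plain set_pte_at() in the TLB still see the update.
		 */
		pte = ptep_get_and_clear(mm, addr, ptep);
		if (clear_young)
			pte = pte_mkold(pte);
		if (clear_dirty)
			pte = pte_mkclean(pte);
		set_pte_at(mm, addr, ptep, pte);
		if (--nr == 0)
			break;
		ptep++;
		addr += PAGE_SIZE;
	}
}

With constant clear_young/clear_dirty at the call sites, the compiler
should fold the branches away, as you say.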
> > +{
> > +	pte_t pte;
> > +
> > +	for (;;) {
> > +		pte = ptep_get_and_clear(mm, addr, ptep);
> > +		set_pte_at(mm, addr, ptep, pte_mkclean(pte_mkold(pte)));
> > +		if (--nr == 0)
> > +			break;
> > +		ptep++;
> > +		addr += PAGE_SIZE;
> > +	}
> > +}
> > +#endif
> > +
> >  static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
> >  			      pte_t *ptep)
> >  {
> > diff --git a/mm/internal.h b/mm/internal.h
> > index 57c1055d5568..792a9baf0d14 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -132,6 +132,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> >   *		  first one is writable.
> >   * @any_young: Optional pointer to indicate whether any entry except the
> >   *		  first one is young.
> > + * @any_dirty: Optional pointer to indicate whether any entry except the
> > + *		  first one is dirty.
> >   *
> >   * Detect a PTE batch: consecutive (present) PTEs that map consecutive
> >   * pages of the same large folio.
> > @@ -147,18 +149,20 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> >   */
> >  static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >  		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
> > -		bool *any_writable, bool *any_young)
> > +		bool *any_writable, bool *any_young, bool *any_dirty)
> >  {
> >  	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
> >  	const pte_t *end_ptep = start_ptep + max_nr;
> >  	pte_t expected_pte, *ptep;
> > -	bool writable, young;
> > +	bool writable, young, dirty;
> >  	int nr;
> >
> >  	if (any_writable)
> >  		*any_writable = false;
> >  	if (any_young)
> >  		*any_young = false;
> > +	if (any_dirty)
> > +		*any_dirty = false;
> >
> >  	VM_WARN_ON_FOLIO(!pte_present(pte), folio);
> >  	VM_WARN_ON_FOLIO(!folio_test_large(folio) || max_nr < 1, folio);
> > @@ -174,6 +178,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >  			writable = !!pte_write(pte);
> >  		if (any_young)
> >  			young = !!pte_young(pte);
> > +		if (any_dirty)
> > +			dirty = !!pte_dirty(pte);
> >  		pte = __pte_batch_clear_ignored(pte, flags);
> >
> >  		if (!pte_same(pte, expected_pte))
> > @@ -191,6 +197,8 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
> >  			*any_writable |= writable;
> >  		if (any_young)
> >  			*any_young |= young;
> > +		if (any_dirty)
> > +			*any_dirty |= dirty;
> >
> >  		nr = pte_batch_hint(ptep, pte);
> >  		expected_pte = pte_advance_pfn(expected_pte, nr);
> > diff --git a/mm/madvise.c b/mm/madvise.c
> > index bf26cf2b7715..0777df2e3691 100644
> > --- a/mm/madvise.c
> > +++ b/mm/madvise.c
> > @@ -321,6 +321,39 @@ static inline bool can_do_file_pageout(struct vm_area_struct *vma)
> >  		file_permission(vma->vm_file, MAY_WRITE) == 0;
> >  }
> >
> > +static inline int madvise_folio_pte_batch(unsigned long addr, unsigned long end,
> > +					   struct folio *folio, pte_t *ptep,
> > +					   pte_t pte, bool *any_young,
> > +					   bool *any_dirty)
> > +{
> > +	int max_nr = (end - addr) / PAGE_SIZE;
> > +	const fpb_t fpb_flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> > +
> > +	return folio_pte_batch(folio, addr, ptep, pte, max_nr, fpb_flags, NULL,
> > +			       any_young, any_dirty);
> > +}
> > +
> > +static inline bool madvise_pte_split_folio(struct mm_struct *mm, pmd_t *pmd,
> > +					    unsigned long addr,
> > +					    struct folio *folio, pte_t **pte,
> > +					    spinlock_t **ptl)
> > +{
> > +	int err;
> > +
> > +	if (!folio_trylock(folio))
> > +		return false;
> > +
> > +	folio_get(folio);
> > +	pte_unmap_unlock(*pte, *ptl);
> > +	err = split_folio(folio);
> > +	folio_unlock(folio);
> > +	folio_put(folio);
> > +
> > +	*pte = pte_offset_map_lock(mm, pmd, addr, ptl);
> > +
> > +	return err == 0;
> > +}
> > +
> >  static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >  				unsigned long addr, unsigned long end,
> >  				struct mm_walk *walk)
> > @@ -456,41 +489,29 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >  		 * next pte in the range.
> >  		 */
> >  		if (folio_test_large(folio)) {
> > -			const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
> > -						FPB_IGNORE_SOFT_DIRTY;
> > -			int max_nr = (end - addr) / PAGE_SIZE;
> >  			bool any_young;
> > -
>
> nit: there should be a blank line between variable declarations and the
> following code. You have removed it here (and similarly in the free
> function). Did you run checkpatch.pl? It would have caught these things.

Sorry about that. I did see the warning, but I didn't take it seriously :(

> > -			nr = folio_pte_batch(folio, addr, pte, ptent, max_nr,
> > -					     fpb_flags, NULL, &any_young);
> > -			if (any_young)
> > -				ptent = pte_mkyoung(ptent);
> > +			nr = madvise_folio_pte_batch(addr, end, folio, pte,
> > +						     ptent, &any_young, NULL);
> >
> >  			if (nr < folio_nr_pages(folio)) {
> > -				int err;
> > -
> >  				if (folio_likely_mapped_shared(folio))
> >  					continue;
> >  				if (pageout_anon_only_filter && !folio_test_anon(folio))
> >  					continue;
> > -				if (!folio_trylock(folio))
> > -					continue;
> > -				folio_get(folio);
> > +
> >  				arch_leave_lazy_mmu_mode();
> > -				pte_unmap_unlock(start_pte, ptl);
> > -				start_pte = NULL;
> > -				err = split_folio(folio);
> > -				folio_unlock(folio);
> > -				folio_put(folio);
> > -				start_pte = pte =
> > -					pte_offset_map_lock(mm, pmd, addr, &ptl);
> > +				if (madvise_pte_split_folio(mm, pmd, addr,
> > +							    folio, &start_pte, &ptl))
> > +					nr = 0;
> >  				if (!start_pte)
> >  					break;
> > +				pte = start_pte;
> >  				arch_enter_lazy_mmu_mode();
> > -				if (!err)
> > -					nr = 0;
> >  				continue;
> >  			}
> > +
> > +			if (any_young)
> > +				ptent = pte_mkyoung(ptent);
> >  		}
> >
> >  		/*
> > @@ -687,47 +708,54 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >  			continue;
> >
> >  		/*
> > -		 * If pmd isn't transhuge but the folio is large and
> > -		 * is owned by only this process, split it and
> > -		 * deactivate all pages.
> > +		 * If we encounter a large folio, only split it if it is not
> > +		 * fully mapped within the range we are operating on. Otherwise
> > +		 * leave it as is so that it can be marked as lazyfree. If we
> > +		 * fail to split a folio, leave it in place and advance to the
> > +		 * next pte in the range.
> >  		 */
> >  		if (folio_test_large(folio)) {
> > -			int err;
> > +			bool any_young, any_dirty;
> > +			nr = madvise_folio_pte_batch(addr, end, folio, pte,
> > +						     ptent, &any_young, &any_dirty);
> >
> > -			if (folio_likely_mapped_shared(folio))
> > -				break;
> > -			if (!folio_trylock(folio))
> > -				break;
> > -			folio_get(folio);
> > -			arch_leave_lazy_mmu_mode();
> > -			pte_unmap_unlock(start_pte, ptl);
> > -			start_pte = NULL;
> > -			err = split_folio(folio);
> > +			if (nr < folio_nr_pages(folio)) {
> > +				if (folio_likely_mapped_shared(folio))
> > +					continue;
> > +
> > +				arch_leave_lazy_mmu_mode();
> > +				if (madvise_pte_split_folio(mm, pmd, addr,
> > +							    folio, &start_pte, &ptl))
> > +					nr = 0;
> > +				if (!start_pte)
> > +					break;
> > +				pte = start_pte;
> > +				arch_enter_lazy_mmu_mode();
> > +				continue;
> > +			}
> > +
> > +			if (any_young)
> > +				ptent = pte_mkyoung(ptent);
> > +			if (any_dirty)
> > +				ptent = pte_mkdirty(ptent);
> > +		}
> > +
> > +		if (!folio_trylock(folio))
> > +			continue;
>
> This is still wrong. This should all be protected by the "if
> (folio_test_swapcache(folio) || folio_test_dirty(folio))" as it was
> previously, so that you only call folio_trylock() if that condition is
> true. You are unconditionally locking here, then unlocking, then relocking
> below if the condition is met. Just put everything inside the condition
> and lock once.
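
If I understand correctly, the structure you're suggesting is something
like this (rough sketch, untested), with the folio locked only once and
only when the swapcache/dirty condition holds:

		if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
			if (!folio_trylock(folio))
				continue;
			/*
			 * A fully-mapped large folio is exclusive if its
			 * mapcount equals its number of pages; check that
			 * under the folio lock.
			 */
			if (folio_mapcount(folio) != folio_nr_pages(folio)) {
				folio_unlock(folio);
				continue;
			}
			if (folio_test_swapcache(folio) &&
			    !folio_free_swap(folio)) {
				folio_unlock(folio);
				continue;
			}
			folio_clear_dirty(folio);
			folio_unlock(folio);
		}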
The reason I lock unconditionally is the exclusivity check itself: I'm not
sure it's safe to call folio_mapcount() without holding the folio lock. As
David mentioned earlier in v2[1]:

> What could work for large folios is making sure that #ptes that map the
> folio here correspond to the folio_mapcount(). And folio_mapcount()
> should be called under folio lock, to avoid racing with swapout/migration.

[1] https://lore.kernel.org/all/5cc05529-eb80-410e-bc26-233b0ba0b21f@redhat.com/
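
So the ordering I'm relying on is roughly this (the same checks as in the
hunk quoted below, with the reasoning spelled out in comments):

		if (!folio_trylock(folio))
			continue;
		/*
		 * The folio is fully mapped here, so a mapcount equal to
		 * folio_nr_pages() means it is mapped by this process only;
		 * taking the lock first keeps the mapcount stable against
		 * swapout/migration while we test it.
		 */
		if (folio_mapcount(folio) != folio_nr_pages(folio)) {
			folio_unlock(folio);
			continue;
		}
		folio_unlock(folio);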
Thanks,
Lance

>
> Thanks,
> Ryan
>
> > +		/*
> > +		 * If we have a large folio at this point, we know it is fully mapped
> > +		 * so if its mapcount is the same as its number of pages, it must be
> > +		 * exclusive.
> > +		 */
> > +		if (folio_mapcount(folio) != folio_nr_pages(folio)) {
> > +			folio_unlock(folio);
> >  			continue;
> >  		}
> > +		folio_unlock(folio);
> >
> >  		if (folio_test_swapcache(folio) || folio_test_dirty(folio)) {
> >  			if (!folio_trylock(folio))
> >  				continue;
> > -			/*
> > -			 * If folio is shared with others, we mustn't clear
> > -			 * the folio's dirty flag.
> > -			 */
> > -			if (folio_mapcount(folio) != 1) {
> > -				folio_unlock(folio);
> > -				continue;
> > -			}
> >
> >  			if (folio_test_swapcache(folio) &&
> >  			    !folio_free_swap(folio)) {
> > @@ -740,19 +768,8 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >  		}
> >
> >  		if (pte_young(ptent) || pte_dirty(ptent)) {
> > -			/*
> > -			 * Some of architecture(ex, PPC) don't update TLB
> > -			 * with set_pte_at and tlb_remove_tlb_entry so for
> > -			 * the portability, remap the pte with old|clean
> > -			 * after pte clearing.
> > -			 */
> > -			ptent = ptep_get_and_clear_full(mm, addr, pte,
> > -							tlb->fullmm);
> > -
> > -			ptent = pte_mkold(ptent);
> > -			ptent = pte_mkclean(ptent);
> > -			set_pte_at(mm, addr, pte, ptent);
> > -			tlb_remove_tlb_entry(tlb, pte, addr);
> > +			mkold_clean_ptes(mm, addr, pte, nr);
> > +			tlb_remove_tlb_entries(tlb, pte, nr, addr);
> >  		}
> >  		folio_mark_lazyfree(folio);
> >  	}
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 1723c8ddf9cb..fe9d4d64c627 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -989,7 +989,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
> >  			flags |= FPB_IGNORE_SOFT_DIRTY;
> >
> >  		nr = folio_pte_batch(folio, addr, src_pte, pte, max_nr, flags,
> > -				     &any_writable, NULL);
> > +				     &any_writable, NULL, NULL);
> >  		folio_ref_add(folio, nr);
> >  		if (folio_test_anon(folio)) {
> >  			if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
> > @@ -1559,7 +1559,7 @@ static inline int zap_present_ptes(struct mmu_gather *tlb,
> >  		 */
> >  		if (unlikely(folio_test_large(folio) && max_nr != 1)) {
> >  			nr = folio_pte_batch(folio, addr, pte, ptent, max_nr, fpb_flags,
> > -					     NULL, NULL);
> > +					     NULL, NULL, NULL);
> >
> >  			zap_present_folio_ptes(tlb, vma, folio, page, pte, ptent, nr,
> >  					       addr, details, rss, force_flush,
>