From: Lance Yang <ioworker0@gmail.com>
Date: Tue, 12 Mar 2024 18:20:45 +0800
Subject: Re: [PATCH v2 1/1] mm/madvise: enhance lazyfreeing with mTHP in madvise_free
To: Ryan Roberts
Cc: Barry Song <21cnbao@gmail.com>, david@redhat.com, Vishal Moola, akpm@linux-foundation.org, zokeefe@google.com, shy828301@gmail.com, mhocko@suse.com, fengwei.yin@intel.com, xiehuan09@gmail.com, wangkefeng.wang@huawei.com, songmuchun@bytedance.com, peterx@redhat.com, minchan@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <20240307061425.21013-1-ioworker0@gmail.com> <03458c20-5544-411b-9b8d-b4600a9b802f@arm.com> <4090ae12-8fb9-4e58-a093-86c13cca1d47@arm.com>
In-Reply-To: <4090ae12-8fb9-4e58-a093-86c13cca1d47@arm.com>
On Mon, Mar 11, 2024 at 11:07 PM Ryan Roberts wrote:
>
> On 07/03/2024 09:07, Ryan Roberts wrote:
> > On 07/03/2024 08:10, Barry Song wrote:
> >> On Thu, Mar 7, 2024 at 9:00 PM Lance Yang wrote:
> >>>
> >>> Hey Barry,
> >>>
> >>> Thanks for taking time to review!
> >>>
> >>> On Thu, Mar 7, 2024 at 3:00 PM Barry Song <21cnbao@gmail.com> wrote:
> >>>>
> >>>> On Thu, Mar 7, 2024 at 7:15 PM Lance Yang wrote:
> >>>>>
> >>> [...]
> >>>>> +static inline bool can_mark_large_folio_lazyfree(unsigned long addr,
> >>>>> +                                                 struct folio *folio, pte_t *start_pte)
> >>>>> +{
> >>>>> +       int nr_pages = folio_nr_pages(folio);
> >>>>> +       fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> >>>>> +
> >>>>> +       for (int i = 0; i < nr_pages; i++)
> >>>>> +               if (page_mapcount(folio_page(folio, i)) != 1)
> >>>>> +                       return false;
> >>>>
> >>>> we have moved to folio_estimated_sharers though it is not precise, so
> >>>> we don't do this check with lots of loops and depending on the
> >>>> subpage's mapcount.
> >>>
> >>> If we don't check the subpage's mapcount, and there is a cow folio
> >>> associated with this folio and the cow folio has smaller size than this
> >>> folio, should we still mark this folio as lazyfree?
> >>
> >> I agree, this is true. However, we've somehow accepted the fact that
> >> folio_likely_mapped_shared can result in false negatives or false
> >> positives to balance the overhead. So I really don't know :-)
> >>
> >> Maybe David and Vishal can give some comments here.
> >>
> >>>
> >>>> BTW, do we need to rebase our work against David's changes[1]?
> >>>> [1] https://lore.kernel.org/linux-mm/20240227201548.857831-1-david@redhat.com/
> >>>
> >>> Yes, we should rebase our work against David's changes.
> >>>
> >>>>
> >>>>> +
> >>>>> +       return nr_pages == folio_pte_batch(folio, addr, start_pte,
> >>>>> +                               ptep_get(start_pte), nr_pages, flags, NULL);
> >>>>> +}
> >>>>> +
> >>>>>  static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >>>>>                                   unsigned long end, struct mm_walk *walk)
> >>>>>
> >>>>> @@ -676,11 +690,45 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >>>>>                  */
> >>>>>                 if (folio_test_large(folio)) {
> >>>>>                         int err;
> >>>>> +                       unsigned long next_addr, align;
> >>>>>
> >>>>> -                       if (folio_estimated_sharers(folio) != 1)
> >>>>> -                               break;
> >>>>> -                       if (!folio_trylock(folio))
> >>>>> -                               break;
> >>>>> +                       if (folio_estimated_sharers(folio) != 1 ||
> >>>>> +                           !folio_trylock(folio))
> >>>>> +                               goto skip_large_folio;
> >>>>
> >>>>
> >>>> I don't think we can skip all the PTEs for nr_pages, as some of them
> >>>> might be pointing to other folios.
> >>>>
> >>>> for example, for a large folio with 16 PTEs, you do MADV_DONTNEED(15-16),
> >>>> and write the memory of PTE15 and PTE16, you get page faults, thus PTE15
> >>>> and PTE16 will point to two different small folios. We can only skip when
> >>>> we are sure that nr_pages == folio_pte_batch().
> >>>
> >>> Agreed. Thanks for pointing that out.
> >>>
> >>>>
> >>>>> +
> >>>>> +                       align = folio_nr_pages(folio) * PAGE_SIZE;
> >>>>> +                       next_addr = ALIGN_DOWN(addr + align, align);
> >>>>> +
> >>>>> +                       /*
> >>>>> +                        * If we mark only the subpages as lazyfree, or
> >>>>> +                        * cannot mark the entire large folio as lazyfree,
> >>>>> +                        * then just split it.
> >>>>> +                        */
> >>>>> +                       if (next_addr > end || next_addr - addr != align ||
> >>>>> +                           !can_mark_large_folio_lazyfree(addr, folio, pte))
> >>>>> +                               goto split_large_folio;
> >>>>> +
> >>>>> +                       /*
> >>>>> +                        * Avoid unnecessary folio splitting if the large
> >>>>> +                        * folio is entirely within the given range.
> >>>>> +                        */
> >>>>> +                       folio_clear_dirty(folio);
> >>>>> +                       folio_unlock(folio);
> >>>>> +                       for (; addr != next_addr; pte++, addr += PAGE_SIZE) {
> >>>>> +                               ptent = ptep_get(pte);
> >>>>> +                               if (pte_young(ptent) || pte_dirty(ptent)) {
> >>>>> +                                       ptent = ptep_get_and_clear_full(
> >>>>> +                                               mm, addr, pte, tlb->fullmm);
> >>>>> +                                       ptent = pte_mkold(ptent);
> >>>>> +                                       ptent = pte_mkclean(ptent);
> >>>>> +                                       set_pte_at(mm, addr, pte, ptent);
> >>>>> +                                       tlb_remove_tlb_entry(tlb, pte, addr);
> >>>>> +                               }
> >>>>
> >>>> Can we do this in batches? for a CONT-PTE mapped large folio, you are
> >>>> unfolding and folding again. It seems quite expensive.
> >
> > I'm not convinced we should be doing this in batches. We want the initial
> > folio_pte_batch() to be as loose as possible regarding permissions so that
> > we reduce our chances of splitting folios to the min. (e.g. ignore SW bits
> > like soft dirty, etc). I think it might be possible that some PTEs are RO
> > and others RW too (e.g. due to cow - although with the current cow impl,
> > probably not. But it's fragile to assume that). Anyway, if we do an initial
> > batch that ignores all that then do this bit as a batch, you will end up
> > smearing all the ptes with whatever properties were set on the first pte,
> > which probably isn't right.
> >
> > I've done a similar conversion for madvise_cold_or_pageout_pte_range() as
> > part of my swap-out series v4 (hoping to post imminently, but still working
> > out a latent bug that it triggers). I use ptep_test_and_clear_young() in
> > that, which arm64 can apply per-pte but avoid doing a contpte unfold/fold.
> > I know you have to clear dirty here too, but I think this pattern is
> > preferable.
> >
> > FYI, my swap-out series also halfway-batches madvise_free_pte_range() so
> > that I can batch free_swap_and_cache() for the swap entry case.
> > Ideally the work you are doing here would be rebased on top of that and
> > plug in to the approach implemented there. (subject to others' views of
> > course).
> >
> > I'll cc you when I post it.
>
> I just sent out the swap-out series v4; as I pressed the button I realized
> I forgot to cc you - sorry about that! It's at [1]. Patch 2 and 6 are the

No worries about that. Thanks for letting me know!
I will rebase our work on Patch 2 and 6.

Thanks,
Lance

> interesting ones from this PoV.
>
> [1] https://lore.kernel.org/linux-mm/20240311150058.1122862-1-ryan.roberts@arm.com/
>
> >
> >
> >>>
> >>> Thanks for your suggestion. I'll do this in batches in v3.
> >>>
> >>> Thanks again for your time!
> >>>
> >>> Best,
> >>> Lance
> >>>
> >>>>
> >>>>> +                       }
> >>>>> +                       folio_mark_lazyfree(folio);
> >>>>> +                       goto next_folio;
> >>>>> +
> >>>>> +split_large_folio:
> >>>>>                         folio_get(folio);
> >>>>>                         arch_leave_lazy_mmu_mode();
> >>>>>                         pte_unmap_unlock(start_pte, ptl);
> >>>>> @@ -688,13 +736,28 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
> >>>>>                         err = split_folio(folio);
> >>>>>                         folio_unlock(folio);
> >>>>>                         folio_put(folio);
> >>>>> -                       if (err)
> >>>>> -                               break;
> >>>>> -                       start_pte = pte =
> >>>>> -                               pte_offset_map_lock(mm, pmd, addr, &ptl);
> >>>>> -                       if (!start_pte)
> >>>>> -                               break;
> >>>>> -                       arch_enter_lazy_mmu_mode();
> >>>>> +
> >>>>> +                       /*
> >>>>> +                        * If the large folio is locked or cannot be split,
> >>>>> +                        * we just skip it.
> >>>>> +                        */
> >>>>> +                       if (err) {
> >>>>> +skip_large_folio:
> >>>>> +                               if (next_addr >= end)
> >>>>> +                                       break;
> >>>>> +                               pte += (next_addr - addr) / PAGE_SIZE;
> >>>>> +                               addr = next_addr;
> >>>>> +                       }
> >>>>> +
> >>>>> +                       if (!start_pte) {
> >>>>> +                               start_pte = pte = pte_offset_map_lock(
> >>>>> +                                               mm, pmd, addr, &ptl);
> >>>>> +                               if (!start_pte)
> >>>>> +                                       break;
> >>>>> +                               arch_enter_lazy_mmu_mode();
> >>>>> +                       }
> >>>>> +
> >>>>> +next_folio:
> >>>>>                         pte--;
> >>>>>                         addr -= PAGE_SIZE;
> >>>>>                         continue;
> >>>>> --
> >>>>> 2.33.1
> >>>>>
> >>
> >> Thanks
> >> Barry
> >
> >