From mboxrd@z Thu Jan 1 00:00:00 1970
References: <20240521040244.48760-1-ioworker0@gmail.com>
 <20240521040244.48760-3-ioworker0@gmail.com>
 <8580a462-eadc-4fa5-b01a-c0b8c3ae644d@redhat.com>
 <7f2ab112-5916-422c-b29f-343cc0d6d754@redhat.com>
From: Lance Yang
Date: Thu, 6 Jun 2024 11:57:01 +0800
Subject: Re: [PATCH v6 2/3] mm/rmap: integrate PMD-mapped folio splitting into pagewalk loop
To: David Hildenbrand
Cc: Yin Fengwei, akpm@linux-foundation.org, willy@infradead.org, sj@kernel.org,
 baolin.wang@linux.alibaba.com, maskray@google.com, ziy@nvidia.com,
 ryan.roberts@arm.com, 21cnbao@gmail.com, mhocko@suse.com, zokeefe@google.com,
 shy828301@gmail.com, xiehuan09@gmail.com, libang.li@antgroup.com,
 wangkefeng.wang@huawei.com, songmuchun@bytedance.com, peterx@redhat.com,
 minchan@kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
On Thu, Jun 6, 2024 at 12:16 AM David Hildenbrand wrote:
>
> On 05.06.24 17:43, Lance Yang wrote:
> > On Wed, Jun 5, 2024 at 11:03 PM David Hildenbrand wrote:
> >>
> >> On 05.06.24 16:57, Lance Yang wrote:
> >>> On Wed, Jun 5, 2024 at 10:39 PM David Hildenbrand wrote:
> >>>>
> >>>> On 05.06.24 16:28, David Hildenbrand wrote:
> >>>>> On 05.06.24 16:20, Lance Yang wrote:
> >>>>>> Hi David,
> >>>>>>
> >>>>>> On Wed, Jun 5, 2024 at 8:46 PM David Hildenbrand wrote:
> >>>>>>>
> >>>>>>> On 21.05.24 06:02, Lance Yang wrote:
> >>>>>>>> In preparation for supporting try_to_unmap_one() to unmap PMD-mapped
> >>>>>>>> folios, start the pagewalk first, then call split_huge_pmd_address() to
> >>>>>>>> split the folio.
> >>>>>>>>
> >>>>>>>> Since TTU_SPLIT_HUGE_PMD will no longer be performed immediately, we might
> >>>>>>>> encounter a PMD-mapped THP missing the mlock in the VM_LOCKED range during
> >>>>>>>> the page walk. It's probably necessary to mlock this THP to prevent it from
> >>>>>>>> being picked up during page reclaim.
> >>>>>>>>
> >>>>>>>> Suggested-by: David Hildenbrand
> >>>>>>>> Suggested-by: Baolin Wang
> >>>>>>>> Signed-off-by: Lance Yang
> >>>>>>>> ---
> >>>>>>>
> >>>>>>> [...] again, sorry for the late review.
> >>>>>>
> >>>>>> No worries at all, thanks for taking the time to review!
> >>>>>>
> >>>>>>>
> >>>>>>>> diff --git a/mm/rmap.c b/mm/rmap.c
> >>>>>>>> index ddffa30c79fb..08a93347f283 100644
> >>>>>>>> --- a/mm/rmap.c
> >>>>>>>> +++ b/mm/rmap.c
> >>>>>>>> @@ -1640,9 +1640,6 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>>>>>>>        if (flags & TTU_SYNC)
> >>>>>>>>                pvmw.flags = PVMW_SYNC;
> >>>>>>>>
> >>>>>>>> -      if (flags & TTU_SPLIT_HUGE_PMD)
> >>>>>>>> -              split_huge_pmd_address(vma, address, false, folio);
> >>>>>>>> -
> >>>>>>>>        /*
> >>>>>>>>         * For THP, we have to assume the worse case ie pmd for invalidation.
> >>>>>>>>         * For hugetlb, it could be much worse if we need to do pud
> >>>>>>>> @@ -1668,20 +1665,35 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> >>>>>>>>        mmu_notifier_invalidate_range_start(&range);
> >>>>>>>>
> >>>>>>>>        while (page_vma_mapped_walk(&pvmw)) {
> >>>>>>>> -              /* Unexpected PMD-mapped THP? */
> >>>>>>>> -              VM_BUG_ON_FOLIO(!pvmw.pte, folio);
> >>>>>>>> -
> >>>>>>>>                /*
> >>>>>>>>                 * If the folio is in an mlock()d vma, we must not swap it out.
> >>>>>>>>                 */
> >>>>>>>>                if (!(flags & TTU_IGNORE_MLOCK) &&
> >>>>>>>>                    (vma->vm_flags & VM_LOCKED)) {
> >>>>>>>>                        /* Restore the mlock which got missed */
> >>>>>>>> -                      if (!folio_test_large(folio))
> >>>>>>>> +                      if (!folio_test_large(folio) ||
> >>>>>>>> +                          (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
> >>>>>>>>                                mlock_vma_folio(folio, vma);
> >>>>>>>
> >>>>>>> Can you elaborate why you think this would be required? If we would have
> >>>>>>> performed the split_huge_pmd_address() beforehand, we would still be
> >>>>>>> left with a large folio, no?
> >>>>>>
> >>>>>> Yep, there would still be a large folio, but it wouldn't be PMD-mapped.
> >>>>>>
> >>>>>> After Weifeng's series[1], the kernel supports mlock for PTE-mapped large
> >>>>>> folios, but there are a few scenarios where we don't mlock a large folio, such
> >>>>>> as when it crosses a VM_LOCKED VMA boundary.
> >>>>>>
> >>>>>> -                      if (!folio_test_large(folio))
> >>>>>> +                      if (!folio_test_large(folio) ||
> >>>>>> +                          (!pvmw.pte && (flags & TTU_SPLIT_HUGE_PMD)))
> >>>>>>
> >>>>>> And this check is just future-proofing and likely unnecessary. If we encounter a
> >>>>>> PMD-mapped THP missing the mlock for some reason, we can mlock this
> >>>>>> THP to prevent it from being picked up during page reclaim, since it is fully
> >>>>>> mapped and doesn't cross the VMA boundary, IIUC.
> >>>>>>
> >>>>>> What do you think?
> >>>>>> I would appreciate any suggestions regarding this check ;)
> >>>>>
> >>>>> Reading this patch only, I wonder if this change makes sense in the
> >>>>> context here.
> >>>>>
> >>>>> Before this patch, we would have PTE-mapped the PMD-mapped THP before
> >>>>> reaching this call and skipped it due to "!folio_test_large(folio)".
> >>>>>
> >>>>> After this patch, we either
> >>>>>
> >>>>> a) PTE-remap the THP after this check, but retry and end up here again,
> >>>>> whereby we would skip it due to "!folio_test_large(folio)".
> >>>>>
> >>>>> b) Discard the PMD-mapped THP due to lazyfree directly. Can that
> >>>>> co-exist with mlock, and what would be the problem here with mlock?
> >>>>>
> >>>
> >>> Thanks a lot for clarifying!
> >>>
> >>>>> So if the check is required in this patch, we really have to understand
> >>>>> why. If not, we should better drop it from this patch.
> >>>>>
> >>>>> At least my opinion, still struggling to understand why it would be
> >>>>> required (I have 0 knowledge about mlock interaction with large folios :) ).
> >>>>>
> >>>>
> >>>> Looking at that series, in folio_referenced_one(), we do
> >>>>
> >>>>                if (!folio_test_large(folio) || !pvmw.pte) {
> >>>>                        /* Restore the mlock which got missed */
> >>>>                        mlock_vma_folio(folio, vma);
> >>>>                        page_vma_mapped_walk_done(&pvmw);
> >>>>                        pra->vm_flags |= VM_LOCKED;
> >>>>                        return false; /* To break the loop */
> >>>>                }
> >>>>
> >>>> I wonder if we want that here as well now: in the case of lazyfree we
> >>>> would not back off, right?
> >>>>
> >>>> But I'm not sure if lazyfree in mlocked areas is even possible.
> >>>>
> >>>> Adding the "!pvmw.pte" would be much clearer to me than the flag check.
> >>>
> >>> Hmm... How about we drop it from this patch for now, and add it back if needed
> >>> in the future?
> >>
> >> If we can rule out that MADV_FREE + mlock() keeps working as expected in
> >> the PMD-mapped case, we're good.
> >>
> >> Can we rule that out? (especially for MADV_FREE followed by mlock())
> >
> > Perhaps we don't need to worry about that.
> >
> > IIUC, without that check, MADV_FREE + mlock() still works as expected in
> > the PMD-mapped case, since if we encounter a large folio in a VM_LOCKED
> > VMA range, we will stop the page walk immediately.
>
> Can you point me at the code (especially considering patch #3?)

Yep, please see my other mail ;)

Thanks,
Lance

>
> --
> Cheers,
>
> David / dhildenb
>