From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 10 Jul 2024 09:07:09 -0700
Subject: Re: [PATCH v3 04/16] mm/mmap: Extract the gathering of vmas from do_vmi_align_munmap()
To: "Liam R. Howlett"
Cc: linux-mm@kvack.org, Andrew Morton, Vlastimil Babka, Lorenzo Stoakes,
 Matthew Wilcox, sidhartha.kumar@oracle.com, "Paul E. McKenney",
 Bert Karwatzki, Jiri Olsa, linux-kernel@vger.kernel.org, Kees Cook
In-Reply-To: <20240704182718.2653918-5-Liam.Howlett@oracle.com>
References: <20240704182718.2653918-1-Liam.Howlett@oracle.com>
 <20240704182718.2653918-5-Liam.Howlett@oracle.com>

On Thu, Jul 4, 2024 at 11:27 AM Liam R.
Howlett wrote:
>
> Create vmi_gather_munmap_vmas() to handle the gathering of vmas into a
> detached maple tree for removal later.  Part of the gathering is the
> splitting of vmas that span the boundary.
>
> Signed-off-by: Liam R. Howlett
> ---
>  mm/mmap.c | 82 +++++++++++++++++++++++++++++++++++++++----------------
>  1 file changed, 58 insertions(+), 24 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 411798f46932..8dc8ffbf9d8d 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2656,32 +2656,29 @@ vmi_complete_munmap_vmas(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  }
>
>  /*
> - * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
> + * vmi_gather_munmap_vmas() - Put all VMAs within a range into a maple tree
> + * for removal at a later date.  Handles splitting first and last if necessary
> + * and marking the vmas as isolated.
> + *
>   * @vmi: The vma iterator
>   * @vma: The starting vm_area_struct
>   * @mm: The mm_struct
>   * @start: The aligned start address to munmap.
>   * @end: The aligned end address to munmap.
>   * @uf: The userfaultfd list_head
> - * @unlock: Set to true to drop the mmap_lock.  unlocking only happens on
> - * success.
> + * @mas_detach: The maple state tracking the detached tree
>   *
> - * Return: 0 on success and drops the lock if so directed, error and leaves the
> - * lock held otherwise.
> + * Return: 0 on success
>   */
>  static int
> -do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> +vmi_gather_munmap_vmas(struct vma_iterator *vmi, struct vm_area_struct *vma,
>                 struct mm_struct *mm, unsigned long start,
> -               unsigned long end, struct list_head *uf, bool unlock)
> +               unsigned long end, struct list_head *uf,
> +               struct ma_state *mas_detach, unsigned long *locked_vm)
>  {
>         struct vm_area_struct *next = NULL;
> -       struct maple_tree mt_detach;
>         int count = 0;
>         int error = -ENOMEM;
> -       unsigned long locked_vm = 0;
> -       MA_STATE(mas_detach, &mt_detach, 0, 0);
> -       mt_init_flags(&mt_detach, vmi->mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
> -       mt_on_stack(mt_detach);
>
>         /*
>          * If we need to split any vma, do it now to save pain later.
> @@ -2720,15 +2717,14 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>                         goto end_split_failed;
>                 }
>                 vma_start_write(next);
> -               mas_set(&mas_detach, count);
> -               error = mas_store_gfp(&mas_detach, next, GFP_KERNEL);
> +               mas_set(mas_detach, count++);
> +               if (next->vm_flags & VM_LOCKED)
> +                       *locked_vm += vma_pages(next);

Uh, this was confusing. You moved the locked_vm/count accounting before
mas_store_gfp(), so if mas_store_gfp() fails they end up off by one: we
incremented them but failed to insert the element. Only later did I
realize that if mas_store_gfp() fails we never use these counters, so
the code is still correct, but I'm wondering whether this move was
necessary. We wouldn't use the wrong values, but why make them wrong in
the first place? In later patches you account for more things here, and
all of that also happens before mas_store_gfp(). Would moving it all
after mas_store_gfp(), so the counters stay correct, be a problem?
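
Something like the following is what I have in mind (an untested sketch
against this patch, same names, just with the accounting moved after the
store succeeds):

                vma_start_write(next);
                mas_set(mas_detach, count);
                error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
                if (error)
                        goto munmap_gather_failed;
                vma_mark_detached(next, true);
                /* account only once the element is actually in mas_detach */
                count++;
                if (next->vm_flags & VM_LOCKED)
                        *locked_vm += vma_pages(next);
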
> +
> +               error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
>                 if (error)
>                         goto munmap_gather_failed;
>                 vma_mark_detached(next, true);
> -               if (next->vm_flags & VM_LOCKED)
> -                       locked_vm += vma_pages(next);
> -
> -               count++;
>                 if (unlikely(uf)) {
>                         /*
>                          * If userfaultfd_unmap_prep returns an error the vmas
> @@ -2753,7 +2749,7 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
>         /* Make sure no VMAs are about to be lost. */
>         {
> -               MA_STATE(test, &mt_detach, 0, 0);
> +               MA_STATE(test, mas_detach->tree, 0, 0);
>                 struct vm_area_struct *vma_mas, *vma_test;
>                 int test_count = 0;
>
> @@ -2773,6 +2769,48 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>         while (vma_iter_addr(vmi) > start)
>                 vma_iter_prev_range(vmi);
>
> +       return 0;
> +
> +userfaultfd_error:
> +munmap_gather_failed:
> +end_split_failed:
> +       abort_munmap_vmas(mas_detach);
> +start_split_failed:
> +map_count_exceeded:
> +       return error;
> +}
> +
> +/*
> + * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
> + * @vmi: The vma iterator
> + * @vma: The starting vm_area_struct
> + * @mm: The mm_struct
> + * @start: The aligned start address to munmap.
> + * @end: The aligned end address to munmap.
> + * @uf: The userfaultfd list_head
> + * @unlock: Set to true to drop the mmap_lock.  unlocking only happens on
> + * success.
> + *
> + * Return: 0 on success and drops the lock if so directed, error and leaves the
> + * lock held otherwise.
> + */
> +static int
> +do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> +               struct mm_struct *mm, unsigned long start,
> +               unsigned long end, struct list_head *uf, bool unlock)
> +{
> +       struct maple_tree mt_detach;
> +       MA_STATE(mas_detach, &mt_detach, 0, 0);
> +       mt_init_flags(&mt_detach, vmi->mas.tree->ma_flags & MT_FLAGS_LOCK_MASK);
> +       mt_on_stack(mt_detach);
> +       int error;
> +       unsigned long locked_vm = 0;
> +
> +       error = vmi_gather_munmap_vmas(vmi, vma, mm, start, end, uf,
> +                                      &mas_detach, &locked_vm);
> +       if (error)
> +               goto gather_failed;
> +
>         error = vma_iter_clear_gfp(vmi, start, end, GFP_KERNEL);
>         if (error)
>                 goto clear_tree_failed;
> @@ -2783,12 +2821,8 @@ do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>         return 0;
>
>  clear_tree_failed:
> -userfaultfd_error:
> -munmap_gather_failed:
> -end_split_failed:
>         abort_munmap_vmas(&mas_detach);
> -start_split_failed:
> -map_count_exceeded:
> +gather_failed:
>         validate_mm(mm);
>         return error;
>  }
> --
> 2.43.0
>