From mboxrd@z Thu Jan  1 00:00:00 1970
From: Suren Baghdasaryan <surenb@google.com>
Date: Tue, 7 Jan 2025 09:12:49 -0800
Subject: Re: [PATCH v7 04/17] mm: modify vma_iter_store{_gfp} to indicate if it's storing a new vma
To: "Liam R. Howlett", Suren Baghdasaryan, akpm@linux-foundation.org, peterz@infradead.org, willy@infradead.org, lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com, hughd@google.com, lokeshgidra@google.com, minchan@google.com, jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com, klarasmodin@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20241226170710.1159679-1-surenb@google.com> <20241226170710.1159679-5-surenb@google.com>
Howlett" , Suren Baghdasaryan , akpm@linux-foundation.org, peterz@infradead.org, willy@infradead.org, lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com, mgorman@techsingularity.net, david@redhat.com, peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com, hdanton@sina.com, hughd@google.com, lokeshgidra@google.com, minchan@google.com, jannh@google.com, shakeel.butt@linux.dev, souravpanda@google.com, pasha.tatashin@soleen.com, klarasmodin@gmail.com, corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Rspamd-Queue-Id: C18CE180006 X-Stat-Signature: s1sqkaux6ea171czg5tkx9dsdbxqxgz8 X-Rspam-User: X-Rspamd-Server: rspam11 X-HE-Tag: 1736269981-168147 X-HE-Meta: U2FsdGVkX1/WqTfZyDmxZurcPEOgw4jXa627JbutsMscEk1M3ua9n6ZhOcy2r9IfxhjiUlZegZlHHYvQIcn8dZiyJnffRb0qAnJnqWLwOma1Z2hgOWMqlLfsGyyxWYrF9dUiqxFd9qmuTHmIcCdyuBP8MuNuT0F0P/swEHdW5v4lCohszHUld7rKTzHfuq9E3TkUzaDfLN5aROBgCUNlJ1K99Ty3VilCHRlTdPvWPbQy7Tuaf6lu/360xnCFnBgGgkuYm8pE7vVRu4fyzlTDzCs/SC6uoSPjZ4Mur+s2c3iL1jlZo/FKAGnRflE2lMSqAV1c1P1H1lZ+f62ZHy3qY2+dFZa2hlEHa5c2S92/IsWckB6ZpeDau+EigPCACYLfyp2lP7QlsWnbtWmEe9kA+zd49/Oqytr7f4iJ6tqP51tblRlECv34el1xHxvqrC8ROWvLp5rXzuqhPt9VcLA2Lk5/N357ATOScIdwZFqnrFCDTkqoxW2XU2wb/pMCPuwmJRTvPdnW84kJ+ekvbc536u/AtXYB0FngywnslRFsMvWhm3BJSukUO78Q/ViLUksAalX1Rl+vnYhloFCWugmxG/B2jZPzEKqV6j6KmOQyqsEzodTEABtqSjKkixpa7YxxKydmLY2Gt9UxytIbs9nRffuu8EENWjS/WKis5AzzwK0BIckJWLM1BZjNYSmWNV5Xjix9Wn3uPJ5ppdyT/WtYqWNGXEXhcCZgtvGSWLXq5Ui6ZTtekkAQejbeZ3sFUY4Gpit/EYwtygY3CfbkKxQAi9Yuqsw8BmZaWpQHb75lV8WHow8G2BvzPn+zUAZ5Ddj/0OdNx+SPmkBW2Tz8Bbqv0dT9xeiV5ismtccTCx6O9yYmCRnLo7Bo7OvWXWHNUj+qSin/ent/DFzDfxLMmy9M4VjDqNtDdt8YVoJL0NjijAym3juR3lWjwKCb8ylU9GUg8MrjcvuhA8cyQMDhMKT gIxVyb4B tEhKCogZjnW6Z+npdtz3edpdz6oIBiKb5nCR2nTQEFMBraZXlCPRt1hPnFZaqFqEZu8POFvAtcjI4Y+jyf3541uQbur1rsP+avjY7smF/7ks5Hbkvqhh9+h8npRWXATCHSGrHvs654aclxh71RtP7YgQMlsg0VyvqHsYJHEqwO218i4UGNKe/sUELk9Gq0AbzE3E4jYxfqFLXsIpaJwAczGQfK3MvLqOz0BdfSXQQBSBFFu0hGrNXMbMr+dbYyZ54pbmwnxVNSDWc/ysNGjjYw7hRDL5O9rPZQcLa X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: On Tue, Jan 7, 2025 at 8:50=E2=80=AFAM Liam R. Howlett wrote: > > * Suren Baghdasaryan [241226 12:07]: > > vma_iter_store() functions can be used both when adding a new vma and > > when updating an existing one. However for existing ones we do not need > > to mark them attached as they are already marked that way. Add a parame= ter > > to distinguish the usage and skip vma_mark_attached() when not needed. > > I really don't like boolean flags - especially to such a small function. > > The passing of flags complicates things and is not self documenting. Can > we make a new vma_iter_store_detach() that just calls vma_iter_store() > then does the detach? Sure, I'll do that. Thanks for the feedback! 

> >
> >
> > Signed-off-by: Suren Baghdasaryan
> > ---
> >  include/linux/mm.h | 12 ++++++++++++
> >  mm/nommu.c         |  4 ++--
> >  mm/vma.c           | 16 ++++++++--------
> >  mm/vma.h           | 13 +++++++++----
> >  4 files changed, 31 insertions(+), 14 deletions(-)
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 081178b0eec4..c50edfedd99d 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -821,6 +821,16 @@ static inline void vma_assert_locked(struct vm_area_struct *vma)
> >                  vma_assert_write_locked(vma);
> >  }
> >
> > +static inline void vma_assert_attached(struct vm_area_struct *vma)
> > +{
> > +        VM_BUG_ON_VMA(vma->detached, vma);
> > +}
> > +
> > +static inline void vma_assert_detached(struct vm_area_struct *vma)
> > +{
> > +        VM_BUG_ON_VMA(!vma->detached, vma);
> > +}
> > +
> >  static inline void vma_mark_attached(struct vm_area_struct *vma)
> >  {
> >          vma->detached = false;
> > @@ -866,6 +876,8 @@ static inline void vma_end_read(struct vm_area_struct *vma) {}
> >  static inline void vma_start_write(struct vm_area_struct *vma) {}
> >  static inline void vma_assert_write_locked(struct vm_area_struct *vma)
> >                  { mmap_assert_write_locked(vma->vm_mm); }
> > +static inline void vma_assert_attached(struct vm_area_struct *vma) {}
> > +static inline void vma_assert_detached(struct vm_area_struct *vma) {}
> >  static inline void vma_mark_attached(struct vm_area_struct *vma) {}
> >  static inline void vma_mark_detached(struct vm_area_struct *vma) {}
> >
> > diff --git a/mm/nommu.c b/mm/nommu.c
> > index 9cb6e99215e2..72c8c505836c 100644
> > --- a/mm/nommu.c
> > +++ b/mm/nommu.c
> > @@ -1191,7 +1191,7 @@ unsigned long do_mmap(struct file *file,
> >          setup_vma_to_mm(vma, current->mm);
> >          current->mm->map_count++;
> >          /* add the VMA to the tree */
> > -        vma_iter_store(&vmi, vma);
> > +        vma_iter_store(&vmi, vma, true);
> >
> >          /* we flush the region from the icache only when the first executable
> >           * mapping of it is made */
> > @@ -1356,7 +1356,7 @@ static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >
> >          setup_vma_to_mm(vma, mm);
> >          setup_vma_to_mm(new, mm);
> > -        vma_iter_store(vmi, new);
> > +        vma_iter_store(vmi, new, true);
> >          mm->map_count++;
> >          return 0;
> >
> > diff --git a/mm/vma.c b/mm/vma.c
> > index 476146c25283..ce113dd8c471 100644
> > --- a/mm/vma.c
> > +++ b/mm/vma.c
> > @@ -306,7 +306,7 @@ static void vma_complete(struct vma_prepare *vp, struct vma_iterator *vmi,
> >                   * us to insert it before dropping the locks
> >                   * (it may either follow vma or precede it).
> >                   */
> > -                vma_iter_store(vmi, vp->insert);
> > +                vma_iter_store(vmi, vp->insert, true);
> >                  mm->map_count++;
> >          }
> >
> > @@ -660,14 +660,14 @@ static int commit_merge(struct vma_merge_struct *vmg,
> >          vma_set_range(vmg->vma, vmg->start, vmg->end, vmg->pgoff);
> >
> >          if (expanded)
> > -                vma_iter_store(vmg->vmi, vmg->vma);
> > +                vma_iter_store(vmg->vmi, vmg->vma, false);
> >
> >          if (adj_start) {
> >                  adjust->vm_start += adj_start;
> >                  adjust->vm_pgoff += PHYS_PFN(adj_start);
> >                  if (adj_start < 0) {
> >                          WARN_ON(expanded);
> > -                        vma_iter_store(vmg->vmi, adjust);
> > +                        vma_iter_store(vmg->vmi, adjust, false);
> >                  }
> >          }
> >
> > @@ -1689,7 +1689,7 @@ int vma_link(struct mm_struct *mm, struct vm_area_struct *vma)
> >                  return -ENOMEM;
> >
> >          vma_start_write(vma);
> > -        vma_iter_store(&vmi, vma);
> > +        vma_iter_store(&vmi, vma, true);
> >          vma_link_file(vma);
> >          mm->map_count++;
> >          validate_mm(mm);
> > @@ -2368,7 +2368,7 @@ static int __mmap_new_vma(struct mmap_state *map, struct vm_area_struct **vmap)
> >
> >          /* Lock the VMA since it is modified after insertion into VMA tree */
> >          vma_start_write(vma);
> > -        vma_iter_store(vmi, vma);
> > +        vma_iter_store(vmi, vma, true);
> >          map->mm->map_count++;
> >          vma_link_file(vma);
> >
> > @@ -2542,7 +2542,7 @@ int do_brk_flags(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >          vm_flags_init(vma, flags);
> >          vma->vm_page_prot = vm_get_page_prot(flags);
> >          vma_start_write(vma);
> > -        if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL))
> > +        if (vma_iter_store_gfp(vmi, vma, GFP_KERNEL, true))
> >                  goto mas_store_fail;
> >
> >          mm->map_count++;
> > @@ -2785,7 +2785,7 @@ int expand_upwards(struct vm_area_struct *vma, unsigned long address)
> >                                  anon_vma_interval_tree_pre_update_vma(vma);
> >                          vma->vm_end = address;
> >                          /* Overwrite old entry in mtree. */
> > -                        vma_iter_store(&vmi, vma);
> > +                        vma_iter_store(&vmi, vma, false);
> >                          anon_vma_interval_tree_post_update_vma(vma);
> >
> >                          perf_event_mmap(vma);
> > @@ -2865,7 +2865,7 @@ int expand_downwards(struct vm_area_struct *vma, unsigned long address)
> >                  vma->vm_start = address;
> >                  vma->vm_pgoff -= grow;
> >                  /* Overwrite old entry in mtree. */
> > -                vma_iter_store(&vmi, vma);
> > +                vma_iter_store(&vmi, vma, false);
> >                  anon_vma_interval_tree_post_update_vma(vma);
> >
> >                  perf_event_mmap(vma);
> > diff --git a/mm/vma.h b/mm/vma.h
> > index 24636a2b0acf..18c9e49b1eae 100644
> > --- a/mm/vma.h
> > +++ b/mm/vma.h
> > @@ -145,7 +145,7 @@ __must_check int vma_shrink(struct vma_iterator *vmi,
> >                  unsigned long start, unsigned long end, pgoff_t pgoff);
> >
> >  static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
> > -                        struct vm_area_struct *vma, gfp_t gfp)
> > +                        struct vm_area_struct *vma, gfp_t gfp, bool new_vma)
> >
> >  {
> >          if (vmi->mas.status != ma_start &&
> > @@ -157,7 +157,10 @@ static inline int vma_iter_store_gfp(struct vma_iterator *vmi,
> >          if (unlikely(mas_is_err(&vmi->mas)))
> >                  return -ENOMEM;
> >
> > -        vma_mark_attached(vma);
> > +        if (new_vma)
> > +                vma_mark_attached(vma);
> > +        vma_assert_attached(vma);
> > +
> >          return 0;
> >  }
> >
> > @@ -366,7 +369,7 @@ static inline struct vm_area_struct *vma_iter_load(struct vma_iterator *vmi)
> >
> >  /* Store a VMA with preallocated memory */
> >  static inline void vma_iter_store(struct vma_iterator *vmi,
> > -                                  struct vm_area_struct *vma)
> > +                                  struct vm_area_struct *vma, bool new_vma)
> >  {
> >
> >  #if defined(CONFIG_DEBUG_VM_MAPLE_TREE)
> > @@ -390,7 +393,9 @@ static inline void vma_iter_store(struct vma_iterator *vmi,
> >
> >          __mas_set_range(&vmi->mas, vma->vm_start, vma->vm_end - 1);
> >          mas_store_prealloc(&vmi->mas, vma);
> > -        vma_mark_attached(vma);
> > +        if (new_vma)
> > +                vma_mark_attached(vma);
> > +        vma_assert_attached(vma);
> >  }
> >
> >  static inline unsigned long vma_iter_addr(struct vma_iterator *vmi)
> > --
> > 2.47.1.613.gc27f4b7a9f-goog
> >