From: Suren Baghdasaryan <surenb@google.com>
Date: Fri, 9 Sep 2022 09:28:13 -0700
Subject: Re: [RFC PATCH RESEND 14/28] mm: mark VMAs as locked before isolating them
To: Laurent Dufour
Cc: akpm@linux-foundation.org, michel@lespinasse.org, jglisse@google.com,
 mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org, mgorman@suse.de,
 dave@stgolabs.net, willy@infradead.org, liam.howlett@oracle.com,
 peterz@infradead.org, laurent.dufour@fr.ibm.com, paulmck@kernel.org,
 luto@kernel.org, songliubraving@fb.com, peterx@redhat.com, david@redhat.com,
 dhowells@redhat.com, hughd@google.com, bigeasy@linutronix.de,
 kent.overstreet@linux.dev, rientjes@google.com, axelrasmussen@google.com,
 joelaf@google.com, minchan@google.com, kernel-team@android.com,
 linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org,
 linuxppc-dev@lists.ozlabs.org, x86@kernel.org, linux-kernel@vger.kernel.org
In-Reply-To: <7fcc871c-fcc2-e993-fe88-f0da49ff227a@linux.ibm.com>
References: <20220901173516.702122-1-surenb@google.com>
 <20220901173516.702122-15-surenb@google.com>
 <7fcc871c-fcc2-e993-fe88-f0da49ff227a@linux.ibm.com>
On Fri, Sep 9, 2022 at 6:35 AM Laurent Dufour wrote:
>
> On 01/09/2022 at 19:35, Suren Baghdasaryan wrote:
> > Mark VMAs as locked before isolating them and clear their tree node so
> > that isolated VMAs are easily identifiable. In the later patches page
> > fault handlers will try locking the found VMA and will check whether
> > the VMA was isolated. Locking VMAs before isolating them ensures that
> > page fault handlers don't operate on isolated VMAs.
>
> Found another place where the VMA should probably be marked locked:
>
> *** drivers/gpu/drm/drm_vma_manager.c:
> drm_vma_node_revoke[338]       rb_erase(&entry->vm_rb, &node->vm_files);

Thanks! I'll add the necessary locking.

> There are 2 other entries in nommu.c, but I guess this is not supported,
> is it?

Yes, the PER_VMA_LOCK config depends on MMU, but for completeness we could
add locking there as well (it will be compiled out).

>
> > Signed-off-by: Suren Baghdasaryan
> > ---
> >  mm/mmap.c  | 2 ++
> >  mm/nommu.c | 2 ++
> >  2 files changed, 4 insertions(+)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 094678b4434b..b0d78bdc0de0 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -421,12 +421,14 @@ static inline void vma_rb_insert(struct vm_area_struct *vma,
> >
> >  static void __vma_rb_erase(struct vm_area_struct *vma, struct rb_root *root)
> >  {
> > +	vma_mark_locked(vma);
> >  	/*
> >  	 * Note rb_erase_augmented is a fairly large inline function,
> >  	 * so make sure we instantiate it only once with our desired
> >  	 * augmented rbtree callbacks.
> >  	 */
> >  	rb_erase_augmented(&vma->vm_rb, root, &vma_gap_callbacks);
> > +	RB_CLEAR_NODE(&vma->vm_rb);
> >  }
> >
> >  static __always_inline void vma_rb_erase_ignore(struct vm_area_struct *vma,
> > diff --git a/mm/nommu.c b/mm/nommu.c
> > index e819cbc21b39..ff9933e57501 100644
> > --- a/mm/nommu.c
> > +++ b/mm/nommu.c
> > @@ -622,6 +622,7 @@ static void delete_vma_from_mm(struct vm_area_struct *vma)
> >  	struct mm_struct *mm = vma->vm_mm;
> >  	struct task_struct *curr = current;
> >
> > +	vma_mark_locked(vma);
> >  	mm->map_count--;
> >  	for (i = 0; i < VMACACHE_SIZE; i++) {
> >  		/* if the vma is cached, invalidate the entire cache */
> > @@ -644,6 +645,7 @@ static void delete_vma_from_mm(struct vm_area_struct *vma)
> >
> >  	/* remove from the MM's tree and list */
> >  	rb_erase(&vma->vm_rb, &mm->mm_rb);
> > +	RB_CLEAR_NODE(&vma->vm_rb);
> >
> >  	__vma_unlink_list(mm, vma);
> >  }
>