From: Andrii Nakryiko
Date: Tue, 28 May 2024 13:36:42 -0700
Subject: Re: [PATCH v2 4/9] fs/procfs: use per-VMA RCU-protected locking in PROCMAP_QUERY API
To: "Liam R. Howlett", Andrii Nakryiko, linux-fsdevel@vger.kernel.org,
    brauner@kernel.org, viro@zeniv.linux.org.uk, akpm@linux-foundation.org,
    linux-kernel@vger.kernel.org, bpf@vger.kernel.org, gregkh@linuxfoundation.org,
    linux-mm@kvack.org, surenb@google.com, rppt@kernel.org
References: <20240524041032.1048094-1-andrii@kernel.org> <20240524041032.1048094-5-andrii@kernel.org>

On Fri, May 24, 2024 at 12:48 PM Liam R. Howlett wrote:
>
> * Andrii Nakryiko [240524 00:10]:
> > Attempt to use RCU-protected per-VMA lock when looking up requested VMA
> > as much as possible, only falling back to mmap_lock if per-VMA lock
> > failed. This is done so that querying of VMAs doesn't interfere with
> > other critical tasks, like page fault handling.
> >
> > This has been suggested by mm folks, and we make use of a newly added
> > internal API that works like find_vma(), but tries to use per-VMA lock.
>
> Thanks for doing this.
>
> >
> > Signed-off-by: Andrii Nakryiko
> > ---
> >  fs/proc/task_mmu.c | 42 ++++++++++++++++++++++++++++++++++--------
> >  1 file changed, 34 insertions(+), 8 deletions(-)
> >
> > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > index 8ad547efd38d..2b14d06d1def 100644
> > --- a/fs/proc/task_mmu.c
> > +++ b/fs/proc/task_mmu.c
> > @@ -389,12 +389,30 @@ static int pid_maps_open(struct inode *inode, struct file *file)
> >  )
> >
> >  static struct vm_area_struct *query_matching_vma(struct mm_struct *mm,
> > -                                                 unsigned long addr, u32 flags)
> > +                                                 unsigned long addr, u32 flags,
> > +                                                 bool *mm_locked)
> >  {
> >       struct vm_area_struct *vma;
> > +     bool mmap_locked;
> > +
> > +     *mm_locked = mmap_locked = false;
> >
> >  next_vma:
> > -     vma = find_vma(mm, addr);
> > +     if (!mmap_locked) {
> > +             /* if we haven't yet acquired mmap_lock, try to use less disruptive per-VMA */
> > +             vma = find_and_lock_vma_rcu(mm, addr);
> > +             if (IS_ERR(vma)) {
>
> There is a chance that find_and_lock_vma_rcu() will return NULL when
> there should never be a NULL.
>
> If you follow the MAP_FIXED call to mmap(), you'll land in mmap_region()
> which does two operations: munmap(), then the mmap(). Since this was
> behind a lock, it was fine. Now that we're transitioning to rcu
> readers, it's less ideal. We have a race where we will see that gap.
> In this implementation we may return NULL if the MAP_FIXED is at the end
> of the address space.
>
> It might also cause issues if we are searching for a specific address
> and we will skip a VMA that is currently being inserted by MAP_FIXED.
>
> The page fault handler doesn't have this issue as it looks for a
> specific address then falls back to the lock if one is not found.
>
> This problem needs to be fixed prior to shifting the existing proc maps
> file to using rcu read locks as well. We have a solution that isn't
> upstream or on the ML, but is being tested and will go upstream.

Ok, any ETA for that? Can it be retrofitted into
find_and_lock_vma_rcu() once the fix lands?

It's not ideal, but I think it's acceptable (for now) for this new API
to have this race, given it seems quite unlikely to be hit in practice.
Worst case, we can leave the per-VMA RCU-protected bits out until we
have this solution in place, and then add it back when ready.
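
To illustrate that interim fallback idea (just a sketch, not part of
this patch, and the wrapper name below is made up), we could treat both
an error and a NULL from the lockless lookup as inconclusive and redo
the lookup under mmap_lock, so the transient MAP_FIXED gap can't
surface as a spurious -ENOENT:

static struct vm_area_struct *query_find_vma(struct mm_struct *mm,
                                             unsigned long addr,
                                             bool *mm_locked)
{
        struct vm_area_struct *vma;

        *mm_locked = false;
        vma = find_and_lock_vma_rcu(mm, addr);
        if (vma && !IS_ERR(vma))
                return vma;             /* per-VMA read lock is held */

        /* error or possibly-transient NULL: retry under mmap_lock */
        if (mmap_read_lock_killable(mm))
                return ERR_PTR(-EINTR);
        *mm_locked = true;
        return find_vma(mm, addr);      /* NULL here really means no VMA */
}

The cost is taking mmap_lock for the (presumably rare) "no VMA at or
after addr" case, which seems acceptable for a query API.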
>
> > +                     /* failed to take per-VMA lock, fallback to mmap_lock */
> > +                     if (mmap_read_lock_killable(mm))
> > +                             return ERR_PTR(-EINTR);
> > +
> > +                     *mm_locked = mmap_locked = true;
> > +                     vma = find_vma(mm, addr);
>
> If you lock the vma here then drop the mmap lock, then you should be
> able to simplify the code by avoiding the passing of the mmap_locked
> variable around.
>
> It also means we don't need to do an unlock_vma() call, which indicates
> we are going to end the vma read but actually may be unlocking the mm.
>
> This is exactly why I think we need a common pattern and infrastructure
> to do this sort of walking.
>
> Please have a look at userfaultfd patches here [1]. Note that
> vma_start_read() cannot be used in the mmap_read_lock() critical
> section.

Ok, so you'd like me to do something like below, right?

vma = find_vma(mm, addr);
if (vma)
        down_read(&vma->vm_lock->lock);
mmap_read_unlock(mm);

... and for the rest of the logic always assume we are holding the
per-VMA lock ...

The problem here is that I think we can't assume per-VMA lock, because
it's gated by CONFIG_PER_VMA_LOCK, so I think we'll have to deal with
this mmap_locked flag either way. Or am I missing anything?

I don't think the flag makes things that much worse, tbh, but I'm
happy to accommodate any better solution that would work regardless of
CONFIG_PER_VMA_LOCK.

>
> > +             }
> > +     } else {
> > +             /* if we have mmap_lock, get through the search as fast as possible */
> > +             vma = find_vma(mm, addr);
>
> I think the only way we get here is if we are contending on the mmap
> lock. This is actually where we should try to avoid holding the lock?
>
> > +     }
> >
> >       /* no VMA found */
> >       if (!vma)
> > @@ -428,18 +446,25 @@ static struct vm_area_struct *query_matching_vma(struct mm_struct *mm,
> >  skip_vma:
> >       /*
> >        * If the user needs closest matching VMA, keep iterating.
> > +      * But before we proceed we might need to unlock current VMA.
> >        */
> >       addr = vma->vm_end;
> > +     if (!mmap_locked)
> > +             vma_end_read(vma);
> >       if (flags & PROCMAP_QUERY_COVERING_OR_NEXT_VMA)
> >               goto next_vma;
> >  no_vma:
> > -     mmap_read_unlock(mm);
> > +     if (mmap_locked)
> > +             mmap_read_unlock(mm);
> >       return ERR_PTR(-ENOENT);
> >  }
> >
> > -static void unlock_vma(struct vm_area_struct *vma)
> > +static void unlock_vma(struct vm_area_struct *vma, bool mm_locked)
>
> Confusing function name, since it may not be doing anything with the
> vma lock.

Would "unlock_vma_or_mm()" be ok?
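
To make the naming concrete, something like the below is roughly what
I'd end up with (again, just an untested sketch, and the lock-side
helper name is invented), keeping the mm_locked flag so the code still
builds and behaves the same without CONFIG_PER_VMA_LOCK:

static struct vm_area_struct *lock_vma_or_mm(struct mm_struct *mm,
                                             unsigned long addr,
                                             bool *mm_locked)
{
        struct vm_area_struct *vma;

        if (mmap_read_lock_killable(mm))
                return ERR_PTR(-EINTR);

        vma = find_vma(mm, addr);
#ifdef CONFIG_PER_VMA_LOCK
        if (vma) {
                /* as suggested: take the per-VMA read lock while still
                 * holding mmap_lock, then drop mmap_lock
                 */
                down_read(&vma->vm_lock->lock);
                mmap_read_unlock(mm);
                *mm_locked = false;
                return vma;
        }
#endif
        /* no VMA found, or no per-VMA locks in this config */
        *mm_locked = true;
        return vma;
}

static void unlock_vma_or_mm(struct vm_area_struct *vma, bool mm_locked)
{
        if (mm_locked)
                mmap_read_unlock(vma->vm_mm);
        else
                vma_end_read(vma);
}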
>
> >  {
> > -     mmap_read_unlock(vma->vm_mm);
> > +     if (mm_locked)
> > +             mmap_read_unlock(vma->vm_mm);
> > +     else
> > +             vma_end_read(vma);
> >  }
> >
> >  static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg)
> > @@ -447,6 +472,7 @@ static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg)
> >       struct procmap_query karg;
> >       struct vm_area_struct *vma;
> >       struct mm_struct *mm;
> > +     bool mm_locked;
> >       const char *name = NULL;
> >       char *name_buf = NULL;
> >       __u64 usize;
> > @@ -475,7 +501,7 @@ static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg)
> >       if (!mm || !mmget_not_zero(mm))
> >               return -ESRCH;
> >
> > -     vma = query_matching_vma(mm, karg.query_addr, karg.query_flags);
> > +     vma = query_matching_vma(mm, karg.query_addr, karg.query_flags, &mm_locked);
> >       if (IS_ERR(vma)) {
> >               mmput(mm);
> >               return PTR_ERR(vma);
> > @@ -542,7 +568,7 @@ static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg)
> >       }
> >
> >       /* unlock vma/mm_struct and put mm_struct before copying data to user */
> > -     unlock_vma(vma);
> > +     unlock_vma(vma, mm_locked);
> >       mmput(mm);
> >
> >       if (karg.vma_name_size && copy_to_user((void __user *)karg.vma_name_addr,
> > @@ -558,7 +584,7 @@ static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg)
> >       return 0;
> >
> >  out:
> > -     unlock_vma(vma);
> > +     unlock_vma(vma, mm_locked);
> >       mmput(mm);
> >       kfree(name_buf);
> >       return err;
> > --
> > 2.43.0
> >
>
> [1]. https://lore.kernel.org/linux-mm/20240215182756.3448972-5-lokeshgidra@google.com/
>
> Thanks,
> Liam