Date: Mon, 15 May 2023 07:54:58 +0100
From: Lorenzo Stoakes <lstoakes@gmail.com>
To: Mike Rapoport
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
	Peter Xu, Mark Rutland, "Liam R. Howlett", Alexander Viro,
	Christian Brauner
Subject: Re: [PATCH for 6.4-rcX] mm: userfaultfd: avoid passing an invalid range to vma_merge()
References: <20230514172731.134188-1-lstoakes@gmail.com>

On Mon, May 15, 2023 at 09:40:50AM +0300, Mike Rapoport wrote:
> On Sun, May 14, 2023 at 06:27:31PM +0100, Lorenzo Stoakes wrote:
> > The userfaultfd_[un]register() functions will knowingly pass an invalid
> > address range to vma_merge(), then rely on it failing to merge to
> > indicate that the VMA should be split into a valid one.
> >
> > This is not something that should be relied upon, as vma_merge()
> > implicitly assumes in cases 5-8 that curr->vm_start == addr. This is
> > now enforced since commit b0729ae0ae67 ("mm/mmap/vma_merge: explicitly
> > assign res, vma, extend invariants") with an explicit VM_WARN_ON()
> > check.
> >
> > Since commit 29417d292bd0 ("mm/mmap/vma_merge: always check
> > invariants") this check is performed unconditionally, which caused this
> > assert to arise in tests performed by Mark [1].
> >
> > This patch fixes the issue by performing the split operations before
> > attempting to merge VMAs in both instances. The problematic operation
> > is splitting the start of the VMA since we were clamping to the end of
> > the VMA in any case, however it is useful to group both of the split
> > operations together to avoid egregious goto's and to abstract the code
> > between the functions.
> >
> > As well as fixing the repro described in [1] this also continues to
> > pass uffd unit tests.
> >
> > [1]: https://lore.kernel.org/all/ZFunF7DmMdK05MoF@FVFF77S0Q05N.cambridge.arm.com
> >
> > Reported-by: Mark Rutland
> > Closes: https://lore.kernel.org/all/ZFunF7DmMdK05MoF@FVFF77S0Q05N.cambridge.arm.com/
> > Signed-off-by: Lorenzo Stoakes
> > ---
> >  fs/userfaultfd.c | 111 +++++++++++++++++++++++++++--------------------
> >  1 file changed, 63 insertions(+), 48 deletions(-)
> >
> > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> > index 0fd96d6e39ce..4453e7040157 100644
> > --- a/fs/userfaultfd.c
> > +++ b/fs/userfaultfd.c
> > @@ -1319,6 +1319,35 @@ static __always_inline int validate_range(struct mm_struct *mm,
> >  	return 0;
> >  }
> >
> > +static int split_range(struct vma_iterator *vmi,
> > +		       struct vm_area_struct *vma,
> > +		       unsigned long start,
> > +		       unsigned long end,
> > +		       bool *can_merge)
>
> Maybe clamp_range()?
> I'd also prefer to fill lines with parameters, rather than have each on a
> separate line.

Sure on both. You know I very nearly called it clamp_range() to start with
but then thought perhaps it wasn't clear that it'd split the VMAs, but
naming is... hard :) Will fix both on next respin.

>
> > +{
> > +	int ret;
> > +	bool merge = true;
> > +
> > +	/* The range must always be clamped to the start of a VMA. */
> > +	if (vma->vm_start < start) {
> > +		ret = split_vma(vmi, vma, start, 1);
> > +		if (ret)
> > +			return ret;
> > +
> > +		merge = false;
> > +	}
> > +
> > +	/* It must also be clamped to the end of a VMA. */
> > +	if (vma->vm_end > end) {
> > +		ret = split_vma(vmi, vma, end, 0);
> > +		if (ret)
> > +			return ret;
> > +	}
> > +
> > +	*can_merge = merge;
> > +	return 0;
> > +}
> > +
> >  static int userfaultfd_register(struct userfaultfd_ctx *ctx,
> >  				unsigned long arg)
> >  {
> > @@ -1330,7 +1359,7 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
> >  	unsigned long vm_flags, new_flags;
> >  	bool found;
> >  	bool basic_ioctls;
> > -	unsigned long start, end, vma_end;
> > +	unsigned long start, end;
> >  	struct vma_iterator vmi;
> >
> >  	user_uffdio_register = (struct uffdio_register __user *) arg;
> > @@ -1462,6 +1491,8 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
> >
> >  	ret = 0;
> >  	for_each_vma_range(vmi, vma, end) {
> > +		bool can_merge;
> > +
> >  		cond_resched();
> >
> >  		BUG_ON(!vma_can_userfault(vma, vm_flags));
> > @@ -1477,32 +1508,22 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
> >  		    (vma->vm_flags & vm_flags) == vm_flags)
> >  			goto skip;
> >
> > -		if (vma->vm_start > start)
> > -			start = vma->vm_start;
>
> I don't think this can be removed. Consider a request to register uffd for
> a range that spans two disjoint VMAs. Then on the second iteration start
> will be equal to vm_end of the first VMA, so it should be clamped to
> vm_start of the second VMA.
>

All references to start from here on in are replaced with references to
vma->vm_start, so this is implicit in the logic. In effect the existing
code was clamping to the range anyway, this patch actually helps clarify
that I feel.
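To make that concrete, here is a small userspace model of the invariant
split_range() establishes before the merge is attempted. This is an
illustrative sketch only, not kernel code: `struct vma` and `clamp_model()`
are made-up names standing in for vm_area_struct and the split logic. After
the (modelled) splits, the VMA under consideration is exactly its
intersection with [start, end), and a merge is only attempted when no
low-side split was needed:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for vm_area_struct: just the two bounds. */
struct vma {
	unsigned long vm_start;
	unsigned long vm_end;
};

/*
 * Models the effect of split_range(): clamp the VMA to [start, end) and
 * report whether a merge may be attempted.  A low-side split (the kernel's
 * split_vma(..., start, 1)) suppresses the merge attempt, mirroring the
 * 'merge = false' path above.
 */
static bool clamp_model(struct vma *vma, unsigned long start,
			unsigned long end)
{
	bool can_merge = true;

	if (vma->vm_start < start) {	/* kernel: split_vma(..., start, 1) */
		vma->vm_start = start;
		can_merge = false;
	}
	if (vma->vm_end > end)		/* kernel: split_vma(..., end, 0) */
		vma->vm_end = end;

	return can_merge;
}
```

Note the disjoint-VMA case from above: for the second of two disjoint VMAs,
vm_start already exceeds start, so the first branch never fires and the
range is implicitly clamped to vma->vm_start with no explicit
'start = vma->vm_start' needed.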
> > -		vma_end = min(end, vma->vm_end);
> > +		ret = split_range(&vmi, vma, start, end, &can_merge);
> > +		if (ret)
> > +			break;
> >
> >  		new_flags = (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags;
> > -		prev = vma_merge(&vmi, mm, prev, start, vma_end, new_flags,
> > -				 vma->anon_vma, vma->vm_file, vma->vm_pgoff,
> > -				 vma_policy(vma),
> > -				 ((struct vm_userfaultfd_ctx){ ctx }),
> > -				 anon_vma_name(vma));
> > -		if (prev) {
> > +		if (can_merge) {
> > +			prev = vma_merge(&vmi, mm, prev, vma->vm_start, vma->vm_end, new_flags,
> > +					 vma->anon_vma, vma->vm_file, vma->vm_pgoff,
> > +					 vma_policy(vma),
> > +					 ((struct vm_userfaultfd_ctx){ ctx }),
> > +					 anon_vma_name(vma));
> > +
> >  			/* vma_merge() invalidated the mas */
> > -			vma = prev;
> > -			goto next;
> > -		}
> > -		if (vma->vm_start < start) {
> > -			ret = split_vma(&vmi, vma, start, 1);
> > -			if (ret)
> > -				break;
> > -		}
> > -		if (vma->vm_end > end) {
> > -			ret = split_vma(&vmi, vma, end, 0);
> > -			if (ret)
> > -				break;
> > +			if (prev)
> > +				vma = prev;
> >  		}
> > -	next:
> >  		/*
> >  		 * In the vma_merge() successful mprotect-like case 8:
> >  		 * the next vma was merged into the current one and
> > @@ -1560,7 +1581,7 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
> >  	struct uffdio_range uffdio_unregister;
> >  	unsigned long new_flags;
> >  	bool found;
> > -	unsigned long start, end, vma_end;
> > +	unsigned long start, end;
> >  	const void __user *buf = (void __user *)arg;
> >  	struct vma_iterator vmi;
> >
> > @@ -1627,6 +1648,8 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
> >  	prev = vma_prev(&vmi);
> >  	ret = 0;
> >  	for_each_vma_range(vmi, vma, end) {
> > +		bool can_merge;
> > +
> >  		cond_resched();
> >
> >  		BUG_ON(!vma_can_userfault(vma, vma->vm_flags));
> > @@ -1640,9 +1663,9 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
> >
> >  		WARN_ON(!(vma->vm_flags & VM_MAYWRITE));
> >
> > -		if (vma->vm_start > start)
> > -			start = vma->vm_start;
>
> Ditto

Same comment as above, we no longer refer to start, only vma->vm_start,
after this point.

> > -		vma_end = min(end, vma->vm_end);
> > +		ret = split_range(&vmi, vma, start, end, &can_merge);
> > +		if (ret)
> > +			break;
> >
> >  		if (userfaultfd_missing(vma)) {
> >  			/*
> > @@ -1652,35 +1675,27 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
> >  			 * UFFDIO_WAKE explicitly.
> >  			 */
> >  			struct userfaultfd_wake_range range;
> > -			range.start = start;
> > -			range.len = vma_end - start;
> > +			range.start = vma->vm_start;
> > +			range.len = vma->vm_end - vma->vm_start;
> >  			wake_userfault(vma->vm_userfaultfd_ctx.ctx, &range);
> >  		}
> >
> >  		/* Reset ptes for the whole vma range if wr-protected */
> >  		if (userfaultfd_wp(vma))
> > -			uffd_wp_range(vma, start, vma_end - start, false);
> > +			uffd_wp_range(vma, vma->vm_start,
> > +				      vma->vm_end - vma->vm_start, false);
> >
> >  		new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS;
> > -		prev = vma_merge(&vmi, mm, prev, start, vma_end, new_flags,
> > -				 vma->anon_vma, vma->vm_file, vma->vm_pgoff,
> > -				 vma_policy(vma),
> > -				 NULL_VM_UFFD_CTX, anon_vma_name(vma));
> > -		if (prev) {
> > -			vma = prev;
> > -			goto next;
> > -		}
> > -		if (vma->vm_start < start) {
> > -			ret = split_vma(&vmi, vma, start, 1);
> > -			if (ret)
> > -				break;
> > -		}
> > -		if (vma->vm_end > end) {
> > -			ret = split_vma(&vmi, vma, end, 0);
> > -			if (ret)
> > -				break;
> > +		if (can_merge) {
> > +			prev = vma_merge(&vmi, mm, prev, vma->vm_start,
> > +					 vma->vm_end, new_flags, vma->anon_vma,
> > +					 vma->vm_file, vma->vm_pgoff,
> > +					 vma_policy(vma),
> > +					 NULL_VM_UFFD_CTX, anon_vma_name(vma));
> > +			/* vma_merge() invalidated the mas */
> > +			if (prev)
> > +				vma = prev;
> >  		}
> > -	next:
> >  		/*
> >  		 * In the vma_merge() successful mprotect-like case 8:
> >  		 * the next vma was merged into the current one and
> > --
> > 2.40.1
>
> --
> Sincerely yours,
> Mike.