From: Suren Baghdasaryan <surenb@google.com>
Date: Tue, 25 Jul 2023 07:48:17 -0700
Subject: Re: [PATCH v3 02/10] mm: Allow per-VMA locks on file-backed VMAs
To: Matthew Wilcox <willy@infradead.org>
Cc: Conor Dooley <conor.dooley@microchip.com>, Andrew Morton, linux-mm, linux-fsdevel, Punit Agrawal, Arjun Roy <arjunroy@google.com>, Eric Dumazet <edumazet@google.com>
References: <20230724185410.1124082-1-willy@infradead.org> <20230724185410.1124082-3-willy@infradead.org> <20230725-anaconda-that-ac3f79880af1@wendy>
On Tue, Jul 25, 2023, 7:31 AM Matthew Wilcox wrote:
> On Tue, Jul 25, 2023 at 07:15:08AM -0700, Suren Baghdasaryan wrote:
> > On Tue, Jul 25, 2023 at 5:58 AM Conor Dooley <conor.dooley@microchip.com> wrote:
> > >
> > > Hey,
> > >
> > > On Mon, Jul 24, 2023 at 07:54:02PM +0100, Matthew Wilcox (Oracle) wrote:
> > > > Remove the TCP layering violation by allowing per-VMA locks on all VMAs.
> > > > The fault path will immediately fail in handle_mm_fault().  There may be
> > > > a small performance reduction from this patch as a little unnecessary work
> > > > will be done on each page fault.  See later patches for the improvement.
> > > >
> > > > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > > > Reviewed-by: Suren Baghdasaryan <surenb@google.com>
> > > > Cc: Arjun Roy <arjunroy@google.com>
> > > > Cc: Eric Dumazet <edumazet@google.com>
> > >
> > > Unless my bisection has gone awry, this is causing boot failures for me
> > > in today's linux-next w/ a splat like so.
> >
> > This patch requires [1] to work correctly. It follows the rule
> > introduced in [1] that anyone returning VM_FAULT_RETRY should also do
> > vma_end_read(). [1] is merged into mm-unstable but has not reached
> > linux-next yet, it seems.
>
> No, it's in linux-next, but you didn't fix riscv ...
>
> Andrew, can you add this fix to Suren's patch?
> "mm: drop per-VMA lock when returning VM_FAULT_RETRY or VM_FAULT_COMPLETED"

Oops. Not sure how I missed riscv. Yes, please, the fix below is correct.

> diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
> index 046732fcb48c..6115d7514972 100644
> --- a/arch/riscv/mm/fault.c
> +++ b/arch/riscv/mm/fault.c
> @@ -296,7 +296,8 @@ void handle_page_fault(struct pt_regs *regs)
>         }
>
>         fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
> -       vma_end_read(vma);
> +       if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
> +               vma_end_read(vma);
>
>         if (!(fault & VM_FAULT_RETRY)) {
>                 count_vm_vma_lock_event(VMA_LOCK_SUCCESS);


On Tue, Jul 25, 2023, 7:31 AM Matthew Wilcox <willy@infradead.org> wrote:
On Tue, Jul 25, 2023 at 07:15:08AM -0700,= Suren Baghdasaryan wrote:
> On Tue, Jul 25, 2023 at 5:58=E2=80=AFAM Conor Dooley <conor= .dooley@microchip.com> wrote:
> >
> > Hey,
> >
> > On Mon, Jul 24, 2023 at 07:54:02PM +0100, Matthew Wilcox (Oracle)= wrote:
> > > Remove the TCP layering violation by allowing per-VMA locks = on all VMAs.
> > > The fault path will immediately fail in handle_mm_fault().= =C2=A0 There may be
> > > a small performance reduction from this patch as a little un= necessary work
> > > will be done on each page fault.=C2=A0 See later patches for= the improvement.
> > >
> > > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.= org>
> > > Reviewed-by: Suren Baghdasaryan <surenb@google.com>=
> > > Cc: Arjun Roy <arjunroy@google.com>
> > > Cc: Eric Dumazet <edumazet@google.com>
> >
> > Unless my bisection has gone awry, this is causing boot failures = for me
> > in today's linux-next w/ a splat like so.
>
> This patch requires [1] to work correctly. It follows the rule
> introduced in [1] that anyone returning VM_FAULT_RETRY should also do<= br> > vma_end_read(). [1] is merged into mm-unstable but has not reached
> linux-next yet, it seems.

No, it's in linux-next, but you didn't fix riscv ...

Andrew, can you add this fix to Suren's patch?
"mm: drop per-VMA lock when returning VM_FAULT_RETRY or VM_FAULT_COMPL= ETED"


Oops. Not s= ure how I missed riscv. Yes, please, the fix below is correct.


diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 046732fcb48c..6115d7514972 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -296,7 +296,8 @@ void handle_page_fault(struct pt_regs *regs)
=C2=A0 =C2=A0 =C2=A0 =C2=A0 }

=C2=A0 =C2=A0 =C2=A0 =C2=A0 fault =3D handle_mm_fault(vma, addr, flags | FA= ULT_FLAG_VMA_LOCK, regs);
-=C2=A0 =C2=A0 =C2=A0 =C2=A0vma_end_read(vma);
+=C2=A0 =C2=A0 =C2=A0 =C2=A0if (!(fault & (VM_FAULT_RETRY | VM_FAULT_CO= MPLETED)))
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0vma_end_read(vma);<= br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 if (!(fault & VM_FAULT_RETRY)) {
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 count_vm_vma_lock_e= vent(VMA_LOCK_SUCCESS);
--000000000000df29f2060150d23d--