From: Suren Baghdasaryan
Date: Tue, 2 May 2023 09:41:23 -0700
Subject: Re: [PATCH 2/3] mm: drop VMA lock before waiting for migration
To: Matthew Wilcox
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, mhocko@suse.com,
	josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com,
	laurent.dufour@fr.ibm.com, michel@lespinasse.org,
	liam.howlett@oracle.com, jglisse@google.com, vbabka@suse.cz,
	minchan@google.com, dave@stgolabs.net, punit.agrawal@bytedance.com,
	lstoakes@gmail.com, hdanton@sina.com, apopple@nvidia.com,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20230501175025.36233-1-surenb@google.com>
	<20230501175025.36233-2-surenb@google.com>

On Tue, May 2, 2023 at 7:28 AM Matthew Wilcox wrote:
>
> On Mon, May 01, 2023 at 10:50:24AM -0700, Suren Baghdasaryan wrote:
> > migration_entry_wait does not need the VMA lock, therefore it can be
> > dropped before waiting. Introduce VM_FAULT_VMA_UNLOCKED to indicate
> > that the VMA lock was dropped while in handle_mm_fault().
> > Note that once the VMA lock is dropped, the VMA reference can't be
> > used, as there are no guarantees it was not freed.
>
> How about we introduce:
>
> void vmf_end_read(struct vm_fault *vmf)
> {
> 	if (!vmf->vma)
> 		return;
> 	vma_end_read(vmf->vma);
> 	vmf->vma = NULL;
> }
>
> Now we don't need a new flag, and calling vmf_end_read() is idempotent.
>
> Oh, argh, we create the vmf too late.  We really need to hoist the
> creation of vm_fault to the callers of handle_mm_fault().

Yeah, unfortunately vmf does not propagate all the way up to
do_user_addr_fault(), which needs to know that we dropped the lock.
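
For reference, the change under discussion amounts to roughly the
following in the do_swap_page() migration-entry path. This is a
simplified sketch based on the commit message quoted above, not the
literal diff from the series; VM_FAULT_VMA_UNLOCKED and
FAULT_FLAG_VMA_LOCK are names from the patch, while the surrounding
control flow is reconstructed:

	if (is_migration_entry(entry)) {
		/* Save mm: the VMA must not be touched once the
		 * per-VMA lock has been dropped.
		 */
		struct mm_struct *mm = vma->vm_mm;

		if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
			/* The wait does not need the VMA lock; drop
			 * it and tell the caller we did so.
			 */
			vma_end_read(vma);
			ret |= VM_FAULT_VMA_UNLOCKED;
		}
		migration_entry_wait(mm, vmf->pmd, vmf->address);
	}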
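
On the arch side, consuming the new flag would then look something like
this in do_user_addr_fault() (again a sketch reconstructed from the
discussion, not a quote from the series): the caller may only release
the per-VMA lock if handle_mm_fault() did not already drop it.

	fault = handle_mm_fault(vma, address,
				flags | FAULT_FLAG_VMA_LOCK, regs);
	/* The fault path may have dropped the VMA lock itself. */
	if (!(fault & VM_FAULT_VMA_UNLOCKED))
		vma_end_read(vma);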
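
And if vm_fault creation were hoisted to the arch callers as Matthew
suggests, the flag could go away entirely: the caller would just invoke
the idempotent helper unconditionally. Everything below is hypothetical
plumbing to illustrate that idea; in particular, handle_mm_fault() does
not currently take a struct vm_fault:

	struct vm_fault vmf = {
		.vma	 = vma,
		.address = address,
		.flags	 = flags | FAULT_FLAG_VMA_LOCK,
	};

	fault = handle_mm_fault_vmf(&vmf, regs);	/* hypothetical */
	/* Safe even if the fault path already dropped the lock:
	 * vmf_end_read() is a no-op once vmf->vma is NULL.
	 */
	vmf_end_read(&vmf);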