Date: Tue, 2 May 2023 15:28:46 +0100
From: Matthew Wilcox
To: Suren Baghdasaryan
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, mhocko@suse.com,
	josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com,
	laurent.dufour@fr.ibm.com, michel@lespinasse.org,
	liam.howlett@oracle.com, jglisse@google.com, vbabka@suse.cz,
	minchan@google.com, dave@stgolabs.net, punit.agrawal@bytedance.com,
	lstoakes@gmail.com, hdanton@sina.com, apopple@nvidia.com,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH 2/3] mm: drop VMA lock before waiting for migration
References: <20230501175025.36233-1-surenb@google.com>
	<20230501175025.36233-2-surenb@google.com>
In-Reply-To: <20230501175025.36233-2-surenb@google.com>

On Mon, May 01, 2023 at 10:50:24AM -0700, Suren Baghdasaryan wrote:
> migration_entry_wait does not need VMA lock, therefore it can be dropped
> before waiting. Introduce VM_FAULT_VMA_UNLOCKED to indicate that VMA
> lock was dropped while in handle_mm_fault().
> Note that once VMA lock is dropped, the VMA reference can't be used as
> there are no guarantees it was not freed.

How about we introduce:

void vmf_end_read(struct vm_fault *vmf)
{
	if (!vmf->vma)
		return;
	vma_end_read(vmf->vma);
	vmf->vma = NULL;
}

Now we don't need a new flag, and calling vmf_end_read() is idempotent.

Oh, argh, we create the vmf too late.  We really need to hoist the
creation of vm_fault to the callers of handle_mm_fault().
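
For illustration, a rough sketch of what that hoisting could look like.
Nothing below matches the current tree: the vmf-taking
handle_mm_fault() signature and the caller-side names (vma, address,
flags) are all hypothetical stand-ins for an arch fault handler.

/*
 * Hypothetical caller side, with vm_fault creation hoisted out of
 * handle_mm_fault().  Because the caller owns the vmf, the fault path
 * can record "VMA lock dropped" simply by NULLing vmf->vma inside
 * vmf_end_read(), and no VM_FAULT_VMA_UNLOCKED return flag is needed.
 */
struct vm_fault vmf = {
	.vma		= vma,
	.address	= address & PAGE_MASK,
	.flags		= flags,
};
vm_fault_t ret;

ret = handle_mm_fault(&vmf);	/* hypothetical vmf-taking variant */

/*
 * Idempotent cleanup: a no-op if the fault path already dropped the
 * lock (vmf.vma is NULL then), otherwise drops it here.
 */
vmf_end_read(&vmf);

The caller can also test vmf.vma after the fault to learn whether the
lock was dropped mid-fault, e.g. before deciding how to retry.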