From: Suren Baghdasaryan <surenb@google.com>
Date: Mon, 12 Jun 2023 11:44:33 -0700
Subject: Re: [PATCH v2 4/6] mm: drop VMA lock before waiting for migration
To: Peter Xu
Cc: akpm@linux-foundation.org, willy@infradead.org, hannes@cmpxchg.org, mhocko@suse.com, josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com, michel@lespinasse.org, liam.howlett@oracle.com, jglisse@google.com, vbabka@suse.cz, minchan@google.com, dave@stgolabs.net, punit.agrawal@bytedance.com, lstoakes@gmail.com, hdanton@sina.com, apopple@nvidia.com, ying.huang@intel.com, david@redhat.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, pasha.tatashin@soleen.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20230609005158.2421285-1-surenb@google.com> <20230609005158.2421285-5-surenb@google.com>
On Mon, Jun 12, 2023 at 11:34 AM Peter Xu wrote:
>
> On Mon, Jun 12, 2023 at 09:07:38AM -0700, Suren Baghdasaryan wrote:
> > On Mon, Jun 12, 2023 at 6:36 AM Peter Xu wrote:
> > >
> > > On Fri, Jun 09, 2023 at 06:29:43PM -0700, Suren Baghdasaryan wrote:
> > > > On Fri, Jun 9, 2023 at 3:30 PM Suren Baghdasaryan wrote:
> > > > >
> > > > > On Fri, Jun 9, 2023 at 1:42 PM Peter Xu wrote:
> > > > > >
> > > > > > On Thu, Jun 08, 2023 at 05:51:56PM -0700, Suren Baghdasaryan wrote:
> > > > > > > migration_entry_wait does not need VMA lock, therefore it can be dropped
> > > > > > > before waiting. Introduce VM_FAULT_VMA_UNLOCKED to indicate that VMA
> > > > > > > lock was dropped while in handle_mm_fault().
> > > > > > > Note that once VMA lock is dropped, the VMA reference can't be used as
> > > > > > > there are no guarantees it was not freed.
> > > > > >
> > > > > > Then vma lock behaves differently from mmap read lock, am I right? Can we
> > > > > > still make them match on behaviors, or there's reason not to do so?
> > > > >
> > > > > I think we could match their behavior by also dropping mmap_lock here
> > > > > when fault is handled under mmap_lock (!(fault->flags &
> > > > > FAULT_FLAG_VMA_LOCK)).
> > > > > I missed the fact that VM_FAULT_COMPLETED can be used to skip dropping
> > > > > mmap_lock in do_page_fault(), so indeed, I might be able to use
> > > > > VM_FAULT_COMPLETED to skip vma_end_read(vma) for per-vma locks as well
> > > > > instead of introducing FAULT_FLAG_VMA_LOCK. I think that was your idea
> > > > > of reusing existing flags?
> > > >
> > > > Sorry, I meant VM_FAULT_VMA_UNLOCKED, not FAULT_FLAG_VMA_LOCK in the
> > > > above reply.
> > > >
> > > > I took a closer look into using VM_FAULT_COMPLETED instead of
> > > > VM_FAULT_VMA_UNLOCKED but when we fall back from per-vma lock to
> > > > mmap_lock we need to retry with an indication that the per-vma lock
> > > > was dropped. Returning (VM_FAULT_RETRY | VM_FAULT_COMPLETE) to
> > > > indicate such state seems strange to me ("retry" and "complete" seem
> > >
> > > Not relevant to this migration patch, but for the whole idea I was thinking
> > > whether it should just work if we simply:
> > >
> > >         fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
> > > -       vma_end_read(vma);
> > > +       if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
> > > +               vma_end_read(vma);
> > >
> > > ?
> >
> > Today when we can't handle a page fault under per-vma locks we return
> > VM_FAULT_RETRY, in which case per-vma lock is dropped and the fault is
>
> Oh I see what I missed. I think it may not be a good idea to reuse
> VM_FAULT_RETRY just for that, because it makes VM_FAULT_RETRY even more
> complicated - now it adds one more case where the lock is not released,
> that's when PER_VMA even if !NOWAIT.
>
> > retried under mmap_lock. The condition you suggest above would not
> > drop per-vma lock for VM_FAULT_RETRY case and would break the current
> > fallback mechanism.
> > However your suggestion gave me an idea. I could indicate that per-vma
> > lock got dropped using vmf structure (like Matthew suggested before)
> > and once handle_pte_fault(vmf) returns I could check if it returned
> > VM_FAULT_RETRY but per-vma lock is still held. If that happens I can
> > call vma_end_read() before returning from __handle_mm_fault(). That
> > way any time handle_mm_fault() returns VM_FAULT_RETRY per-vma lock
> > will be already released, so your condition in do_page_fault() will
> > work correctly. That would eliminate the need for a new
> > VM_FAULT_VMA_UNLOCKED flag. WDYT?
>
> Sounds good.
>
> So probably that's the major pain for now with the legacy fallback (I'll
> have commented if I noticed it with the initial vma lock support..). I
> assume that'll go away as long as we recover the VM_FAULT_RETRY semantics
> to before.

I think so. With that change, getting VM_FAULT_RETRY in do_page_fault()
will guarantee that the per-vma lock was dropped. Is that what you mean?

> --
> Peter Xu