From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 28 Jun 2023 09:01:40 -0700
Subject: Re: [PATCH v4 5/6] mm: handle swap page faults under per-VMA lock
To: Peter Xu
Cc: akpm@linux-foundation.org, willy@infradead.org, hannes@cmpxchg.org, mhocko@suse.com, josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com, michel@lespinasse.org, liam.howlett@oracle.com, jglisse@google.com, vbabka@suse.cz, minchan@google.com, dave@stgolabs.net, punit.agrawal@bytedance.com, lstoakes@gmail.com, hdanton@sina.com, apopple@nvidia.com, ying.huang@intel.com, david@redhat.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, pasha.tatashin@soleen.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20230628071800.544800-1-surenb@google.com> <20230628071800.544800-6-surenb@google.com>

On Wed, Jun 28, 2023 at 6:43 AM Peter Xu wrote:
>
> On Wed, Jun 28, 2023 at 12:17:59AM -0700, Suren Baghdasaryan wrote:
> > When page fault is handled under per-VMA lock protection, all swap page
> > faults are retried with mmap_lock because folio_lock_or_retry has to drop
> > and reacquire mmap_lock if folio could not be immediately locked.
> > Follow the same pattern as mmap_lock to drop per-VMA lock when waiting
> > for folio and retrying once folio is available.
> >
> > With this obstacle removed, enable do_swap_page to operate under
> > per-VMA lock protection. Drivers implementing ops->migrate_to_ram might
> > still rely on mmap_lock, therefore we have to fall back to mmap_lock in
> > that particular case.
> >
> > Note that the only time do_swap_page calls synchronous swap_readpage
> > is when SWP_SYNCHRONOUS_IO is set, which is only set for
> > QUEUE_FLAG_SYNCHRONOUS devices: brd, zram and nvdimms (both btt and
> > pmem). Therefore we don't sleep in this path, and there's no need to
> > drop the mmap or per-VMA lock.
> >
> > Signed-off-by: Suren Baghdasaryan
>
> Acked-by: Peter Xu
>
> One nit below:
>
> > ---
> >  mm/filemap.c | 25 ++++++++++++++++---------
> >  mm/memory.c  | 16 ++++++++++------
> >  2 files changed, 26 insertions(+), 15 deletions(-)
> >
> > diff --git a/mm/filemap.c b/mm/filemap.c
> > index 52bcf12dcdbf..7ee078e1a0d2 100644
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -1699,31 +1699,38 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
> >          return ret;
> >  }
> >
> > +static void release_fault_lock(struct vm_fault *vmf)
> > +{
> > +        if (vmf->flags & FAULT_FLAG_VMA_LOCK)
> > +                vma_end_read(vmf->vma);
> > +        else
> > +                mmap_read_unlock(vmf->vma->vm_mm);
> > +}
> > +
> >  /*
> >   * Return values:
> >   * 0 - folio is locked.
> >   * VM_FAULT_RETRY - folio is not locked.
> > - *     mmap_lock has been released (mmap_read_unlock(), unless flags had both
> > - *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
> > - *     which case mmap_lock is still held.
> > + *     mmap_lock or per-VMA lock has been released (mmap_read_unlock() or
> > + *     vma_end_read()), unless flags had both FAULT_FLAG_ALLOW_RETRY and
> > + *     FAULT_FLAG_RETRY_NOWAIT set, in which case the lock is still held.
> >   *
> >   * If neither ALLOW_RETRY nor KILLABLE are set, will always return 0
> > - * with the folio locked and the mmap_lock unperturbed.
> > + * with the folio locked and the mmap_lock/per-VMA lock is left unperturbed.
> >   */
> >  vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
> >  {
> > -        struct mm_struct *mm = vmf->vma->vm_mm;
> >          unsigned int flags = vmf->flags;
> >
> >          if (fault_flag_allow_retry_first(flags)) {
> >                  /*
> > -                 * CAUTION! In this case, mmap_lock is not released
> > -                 * even though return VM_FAULT_RETRY.
> > +                 * CAUTION! In this case, mmap_lock/per-VMA lock is not
> > +                 * released even though returning VM_FAULT_RETRY.
> >                   */
> >                  if (flags & FAULT_FLAG_RETRY_NOWAIT)
> >                          return VM_FAULT_RETRY;
> >
> > -                mmap_read_unlock(mm);
> > +                release_fault_lock(vmf);
> >                  if (flags & FAULT_FLAG_KILLABLE)
> >                          folio_wait_locked_killable(folio);
> >                  else
> > @@ -1735,7 +1742,7 @@ vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
> >
> >                  ret = __folio_lock_killable(folio);
> >                  if (ret) {
> > -                        mmap_read_unlock(mm);
> > +                        release_fault_lock(vmf);
> >                          return VM_FAULT_RETRY;
> >                  }
> >          } else {
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 345080052003..76c7907e7286 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3712,12 +3712,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >          if (!pte_unmap_same(vmf))
> >                  goto out;
> >
> > -        if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> > -                ret = VM_FAULT_RETRY;
> > -                vma_end_read(vma);
> > -                goto out;
> > -        }
> > -
> >          entry = pte_to_swp_entry(vmf->orig_pte);
> >          if (unlikely(non_swap_entry(entry))) {
> >                  if (is_migration_entry(entry)) {
> > @@ -3727,6 +3721,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >                          vmf->page = pfn_swap_entry_to_page(entry);
> >                          ret = remove_device_exclusive_entry(vmf);
> >                  } else if (is_device_private_entry(entry)) {
> > +                        if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> > +                                /*
> > +                                 * migrate_to_ram is not yet ready to operate
> > +                                 * under VMA lock.
> > +                                 */
> > +                                vma_end_read(vma);
> > +                                ret |= VM_FAULT_RETRY;
>
> Here IIUC ret==0 is guaranteed, so maybe "ret = VM_FAULT_RETRY" is slightly
> clearer.

Ack.

>
> > +                                goto out;
> > +                        }
> > +
> >                          vmf->page = pfn_swap_entry_to_page(entry);
> >                          vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
> >                                          vmf->address, &vmf->ptl);
> > --
> > 2.41.0.162.gfafddb0af9-goog
> >
>
> --
> Peter Xu
>
> --
> To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@android.com.
>
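
For readers following the thread, below is a minimal, self-contained userspace sketch of the locking pattern the patch introduces: the fault path drops whichever lock it currently holds (per-VMA lock or mmap_lock) via a release_fault_lock()-style helper before it would sleep waiting for the folio, returns a retry indication, and the caller retries the fault under mmap_lock. This is not kernel code; pthread rwlocks stand in for mmap_lock and the per-VMA lock, and the fault_ctx structure, handle_fault() helper and FAULT_* constants are hypothetical names invented for illustration.

/*
 * Simplified userspace model of the "drop the fault lock and retry" pattern
 * discussed above.  Assumptions: pthread rwlocks model mmap_lock and the
 * per-VMA lock; fault_ctx/handle_fault() are illustrative, not kernel APIs.
 * Build with: cc -pthread model.c
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define FAULT_FLAG_VMA_LOCK 0x1 /* fault is running under the per-VMA lock */
#define FAULT_RETRY         0x2 /* caller must retry; lock already dropped  */

struct fault_ctx {
        unsigned int flags;
        pthread_rwlock_t *mmap_lock;
        pthread_rwlock_t *vma_lock;
};

/* Analogue of release_fault_lock(): drop whichever lock this fault holds. */
static void release_fault_lock_model(struct fault_ctx *ctx)
{
        if (ctx->flags & FAULT_FLAG_VMA_LOCK)
                pthread_rwlock_unlock(ctx->vma_lock);
        else
                pthread_rwlock_unlock(ctx->mmap_lock);
}

/* Stand-in for a swap fault that must sleep waiting for a contended folio. */
static int handle_fault(struct fault_ctx *ctx, bool folio_contended)
{
        if (folio_contended) {
                /* Drop the lock before sleeping, then ask the caller to retry. */
                release_fault_lock_model(ctx);
                return FAULT_RETRY;
        }
        return 0; /* fault completed; caller still holds its lock */
}

int main(void)
{
        pthread_rwlock_t mmap_lock = PTHREAD_RWLOCK_INITIALIZER;
        pthread_rwlock_t vma_lock = PTHREAD_RWLOCK_INITIALIZER;
        struct fault_ctx ctx = {
                .flags = FAULT_FLAG_VMA_LOCK,
                .mmap_lock = &mmap_lock,
                .vma_lock = &vma_lock,
        };

        /* First attempt under the per-VMA lock: folio is contended, so retry. */
        pthread_rwlock_rdlock(&vma_lock);
        if (handle_fault(&ctx, true) == FAULT_RETRY) {
                /* Retry under mmap_lock, the slower but always-safe path. */
                ctx.flags &= ~FAULT_FLAG_VMA_LOCK;
                pthread_rwlock_rdlock(&mmap_lock);
                if (handle_fault(&ctx, false) == 0) {
                        printf("fault completed on mmap_lock retry\n");
                        pthread_rwlock_unlock(&mmap_lock);
                }
        }
        return 0;
}

The point of centralizing the "which lock do I hold" decision in one helper, as the patch does with release_fault_lock(), is that callers such as __folio_lock_or_retry() stay unchanged whether the fault entered under mmap_lock or the per-VMA lock.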