From: Suren Baghdasaryan
Date: Mon, 17 Apr 2023 16:50:04 -0700
Subject: Re: [PATCH 1/1] mm: handle swap page faults if the faulting page can be locked
To: Alistair Popple
Cc: Matthew Wilcox, akpm@linux-foundation.org, hannes@cmpxchg.org, mhocko@suse.com, josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com, michel@lespinasse.org, liam.howlett@oracle.com, jglisse@google.com, vbabka@suse.cz, minchan@google.com, dave@stgolabs.net, punit.agrawal@bytedance.com, lstoakes@gmail.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com
In-Reply-To: <877cuaulrm.fsf@nvidia.com>
References: <20230414180043.1839745-1-surenb@google.com> <87sfczuxkc.fsf@nvidia.com> <877cuaulrm.fsf@nvidia.com>
On Mon, Apr 17, 2023 at 4:33 PM Alistair Popple wrote:
>
> Suren Baghdasaryan writes:
>
> > On Sun, Apr 16, 2023 at 6:06 PM Alistair Popple wrote:
> >>
> >> Matthew Wilcox writes:
> >>
> >> > On Fri, Apr 14, 2023 at 11:00:43AM -0700, Suren Baghdasaryan wrote:
> >> >> When page fault is handled under VMA lock protection, all swap page
> >> >> faults are retried with mmap_lock because folio_lock_or_retry
> >> >> implementation has to drop and reacquire mmap_lock if folio could
> >> >> not be immediately locked.
> >> >> Instead of retrying all swapped page faults, retry only when folio
> >> >> locking fails.
> >> >
> >> > Reviewed-by: Matthew Wilcox (Oracle)
> >> >
> >> > Let's just review what can now be handled under the VMA lock instead of
> >> > the mmap_lock, in case somebody knows better than me that it's not safe.
> >> >
> >> >  - We can call migration_entry_wait().  This will wait for PG_locked to
> >> >    become clear (in migration_entry_wait_on_locked()).  As previously
> >> >    discussed offline, I think this is safe to do while holding the VMA
> >> >    locked.
> >>
> >> Do we even need to be holding the VMA locked while in
> >> migration_entry_wait()? My understanding is we're just waiting for
> >> PG_locked to be cleared so we can return with a reasonable chance the
> >> migration entry is gone. If for example it has been unmapped or
> >> protections downgraded we will simply refault.
> >
> > If we drop the VMA lock before migration_entry_wait() then we would need
> > to lock_vma_under_rcu() again after the wait. In which case it might be
> > simpler to retry the fault with some special return code to indicate
> > that the VMA lock is not held anymore and we want to retry without taking
> > mmap_lock. I think it's similar to the last options Matthew suggested
> > earlier. In which case we can reuse the same retry mechanism for both
> > cases, here and in __folio_lock_or_retry.
>
> Good point. Agree there is no reason to re-take the VMA lock after the
> wait, although in this case we shouldn't need to retry the fault
> (ie. return VM_FAULT_RETRY). Just skip calling vma_end_read() on the way
> out to userspace.

Actually, __collapse_huge_page_swapin(), which calls do_swap_page(), can
use the VMA reference again inside its loop unless we return VM_FAULT_RETRY
or VM_FAULT_ERROR. That is not safe since we dropped the VMA lock and the
stability of the VMA is not guaranteed at that point. So we do need to
return VM_FAULT_RETRY, maybe with another bit indicating that the retry
does not need to fall back to mmap_lock. Something like
"return VM_FAULT_RETRY | VM_FAULT_USE_VMA_LOCK" (rough sketch at the end
of this mail).

>
> >>
> >> >  - We can call remove_device_exclusive_entry().  That calls
> >> >    folio_lock_or_retry(), which will fail if it can't get the VMA lock.
> >>
> >> Looks ok to me.
> >>
> >> >  - We can call pgmap->ops->migrate_to_ram().  Perhaps somebody familiar
> >> >    with Nouveau and amdkfd could comment on how safe this is?
> >>
> >> Currently this won't work because drivers assume mmap_lock is held during
> >> pgmap->ops->migrate_to_ram(). Primarily this is because
> >> migrate_vma_setup()/migrate_vma_pages() is used to handle the fault and
> >> that asserts mmap_lock is taken in walk_page_range() and also
> >> migrate_vma_insert_page().
> >>
> >> So I don't think we can call that case without mmap_lock.
> >>
> >> At a glance it seems it should be relatively easy to move to using
> >> lock_vma_under_rcu(). Drivers will need updating as well though because
> >> migrate_vma_setup() is called outside of fault handling paths so drivers
> >> will currently take mmap_lock rather than the vma lock when looking up the
> >> vma. See for example nouveau_svmm_bind().
> >
> > Thanks for the pointers, Alistair! It does look like we need to be
> > more careful with the migrate_to_ram() path. For now I can fall back to
> > retrying with mmap_lock for this case, like we do with all cases
> > today. Afterwards this path can be made ready for working under the VMA
> > lock and we can remove that retry. Does that sound good?
>
> Sounds good to me. Fixing that shouldn't be too difficult but will need
> changes to at least Nouveau and amdkfd (and hmm-tests obviously). Happy
> to look at doing that if/when this change makes it in.

Thanks.

>
> >>
> >> >  - I believe we can't call handle_pte_marker() because we exclude UFFD
> >> >    VMAs earlier.
> >> >  - We can call swap_readpage() if we allocate a new folio.  I haven't
> >> >    traced through all this code to tell if it's OK.
> >> >
> >> > So ... I believe this is all OK, but we're definitely now willing to
> >> > wait for I/O from the swap device while holding the VMA lock when we
> >> > weren't before.  And maybe we should make a bigger deal of it in the
> >> > changelog.
> >> >
> >> > And maybe we shouldn't just be failing the folio_lock_or_retry(),
> >> > maybe we should be waiting for the folio lock with the VMA locked.
> >> >
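
To make the VM_FAULT_USE_VMA_LOCK idea a bit more concrete, here is a
rough, untested sketch of what the migration-entry branch of do_swap_page()
could look like. VM_FAULT_USE_VMA_LOCK is only a placeholder name for the
new bit (it does not exist yet), and the fault handlers would also have to
learn to retry via lock_vma_under_rcu() rather than mmap_lock when they
see it:

	if (is_migration_entry(entry)) {
		if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
			struct mm_struct *mm = vma->vm_mm;

			/* Drop the VMA lock before sleeping on the entry. */
			vma_end_read(vma);
			migration_entry_wait(mm, vmf->pmd, vmf->address);
			/*
			 * The VMA lock is gone, so force callers such as
			 * __collapse_huge_page_swapin() to stop using the
			 * VMA, while telling the fault handler it can retry
			 * under the VMA lock instead of mmap_lock.
			 */
			return VM_FAULT_RETRY | VM_FAULT_USE_VMA_LOCK;
		}
		/* Existing behaviour: wait with the VMA lock still held. */
		migration_entry_wait(vma->vm_mm, vmf->pmd, vmf->address);
	}

The mm pointer is snapshotted above only so we do not touch the VMA again
after dropping its lock; vmf itself lives on the fault handler's stack, so
vmf->pmd and vmf->address stay usable.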
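
And the interim fallback for the migrate_to_ram() case I mentioned could be
as simple as bailing out of the device-private branch of do_swap_page()
when the fault is handled under the VMA lock, so those faults keep going
through the mmap_lock path exactly as they do today (again just a sketch,
reusing the existing FAULT_FLAG_VMA_LOCK / vma_end_read() machinery):

	} else if (is_device_private_entry(entry)) {
		if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
			/*
			 * migrate_to_ram() implementations (Nouveau, amdkfd)
			 * still assume mmap_lock is held, so retry this
			 * fault under mmap_lock for now.
			 */
			vma_end_read(vma);
			ret = VM_FAULT_RETRY;
			goto out;
		}
		/* existing migrate_to_ram() handling continues here */

Once migrate_vma_setup()/migrate_vma_pages() and the drivers can run under
the VMA lock, this check simply goes away.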