From: Suren Baghdasaryan <surenb@google.com>
Date: Mon, 17 Apr 2023 18:07:14 -0700
Subject: Re: [PATCH 1/1] mm: handle swap page faults if the faulting page can be locked
To: Alistair Popple
Cc: Matthew Wilcox, akpm@linux-foundation.org, hannes@cmpxchg.org,
	mhocko@suse.com, josef@toxicpanda.com, jack@suse.cz,
	ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com,
	michel@lespinasse.org, liam.howlett@oracle.com, jglisse@google.com,
	vbabka@suse.cz, minchan@google.com, dave@stgolabs.net,
	punit.agrawal@bytedance.com, lstoakes@gmail.com,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20230414180043.1839745-1-surenb@google.com> <87sfczuxkc.fsf@nvidia.com> <877cuaulrm.fsf@nvidia.com>
Content-Type: text/plain; charset="UTF-8"
On Mon, Apr 17, 2023 at 4:50 PM Suren Baghdasaryan wrote:
>
> On Mon, Apr 17, 2023 at 4:33 PM Alistair Popple wrote:
> >
> >
> > Suren Baghdasaryan writes:
> >
> > > On Sun, Apr 16, 2023 at 6:06 PM Alistair Popple wrote:
> > >>
> > >>
> > >> Matthew Wilcox writes:
> > >>
> > >> > On Fri, Apr 14, 2023 at 11:00:43AM -0700, Suren Baghdasaryan wrote:
> > >> >> When a page fault is handled under VMA lock protection, all swap page
> > >> >> faults are retried with mmap_lock because the folio_lock_or_retry
> > >> >> implementation has to drop and reacquire mmap_lock if the folio could
> > >> >> not be immediately locked.
> > >> >> Instead of retrying all swap page faults, retry only when folio
> > >> >> locking fails.
> > >> >
> > >> > Reviewed-by: Matthew Wilcox (Oracle)
> > >> >
> > >> > Let's just review what can now be handled under the VMA lock instead of
> > >> > the mmap_lock, in case somebody knows better than me that it's not safe.
> > >> >
> > >> > - We can call migration_entry_wait().  This will wait for PG_locked to
> > >> >   become clear (in migration_entry_wait_on_locked()).  As previously
> > >> >   discussed offline, I think this is safe to do while holding the VMA
> > >> >   locked.
> > >>
> > >> Do we even need to be holding the VMA locked while in
> > >> migration_entry_wait()? My understanding is we're just waiting for
> > >> PG_locked to be cleared so we can return with a reasonable chance the
> > >> migration entry is gone. If for example it has been unmapped or
> > >> protections downgraded we will simply refault.
> > >
> > > If we drop the VMA lock before migration_entry_wait() then we would need
> > > to call lock_vma_under_rcu() again after the wait. In which case it might
> > > be simpler to retry the fault with some special return code to indicate
> > > that the VMA lock is not held anymore and we want to retry without taking
> > > mmap_lock. I think it's similar to the last option Matthew suggested
> > > earlier. In which case we can reuse the same retry mechanism for both
> > > cases, here and in __folio_lock_or_retry.
> >
> > Good point. Agree there is no reason to re-take the VMA lock after the
> > wait, although in this case we shouldn't need to retry the fault
> > (i.e. return VM_FAULT_RETRY). Just skip calling vma_end_read() on the way
> > out to userspace.
>
> Actually, __collapse_huge_page_swapin(), which calls do_swap_page(), can
> use the VMA reference again inside its loop unless we return
> VM_FAULT_RETRY or VM_FAULT_ERROR. That is not safe since we dropped
> the VMA lock and the stability of the VMA is not guaranteed at that
> point. So we do need to return VM_FAULT_RETRY, maybe with another bit
> indicating that the retry does not need to fall back to mmap_lock.
> Something like "return VM_FAULT_RETRY | VM_FAULT_USE_VMA_LOCK".

False alarm. __collapse_huge_page_swapin() is always called under
mmap_lock protection. I'll go over the code once more to make sure
nothing else would use the VMA after we drop the VMA lock in the page
fault path.
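For reference, a rough sketch of the retry convention quoted above.
VM_FAULT_USE_VMA_LOCK is a made-up name from this thread, not an
existing flag, and the sketch glosses over the real do_swap_page()
details:

	/*
	 * Sketch only: VM_FAULT_USE_VMA_LOCK is hypothetical.  The
	 * idea is that a handler which had to drop the per-VMA lock
	 * before sleeping asks the arch fault path to retry the
	 * fault, but without falling back to taking mmap_lock.
	 */
	swp_entry_t entry = pte_to_swp_entry(vmf->orig_pte);

	if (unlikely(non_swap_entry(entry)) && is_migration_entry(entry)) {
		struct mm_struct *mm = vmf->vma->vm_mm;

		/* Drop the per-VMA lock; the VMA may go away once we sleep. */
		vma_end_read(vmf->vma);
		migration_entry_wait(mm, vmf->pmd, vmf->address);
		/* No lock is held now, so request a VMA-lock-only retry. */
		return VM_FAULT_RETRY | VM_FAULT_USE_VMA_LOCK;
	}

The attraction is that lock_vma_under_rcu() would then be the only
lock taken on the retry, shared between this path and
__folio_lock_or_retry().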
> >
> > >>
> > >> > - We can call remove_device_exclusive_entry().  That calls
> > >> >   folio_lock_or_retry(), which will fail if it can't get the VMA lock.
> > >>
> > >> Looks ok to me.
> > >>
> > >> > - We can call pgmap->ops->migrate_to_ram().  Perhaps somebody familiar
> > >> >   with Nouveau and amdkfd could comment on how safe this is?
> > >>
> > >> Currently this won't work because drivers assume mmap_lock is held during
> > >> pgmap->ops->migrate_to_ram(). Primarily this is because
> > >> migrate_vma_setup()/migrate_vma_pages() is used to handle the fault and
> > >> that asserts mmap_lock is taken in walk_page_range() and also in
> > >> migrate_vma_insert_page().
> > >>
> > >> So I don't think we can call that case without mmap_lock.
> > >>
> > >> At a glance it seems it should be relatively easy to move to using
> > >> lock_vma_under_rcu(). Drivers will need updating as well though because
> > >> migrate_vma_setup() is called outside of fault handling paths, so drivers
> > >> will currently take mmap_lock rather than the VMA lock when looking up
> > >> the VMA. See for example nouveau_svmm_bind().
> > >
> > > Thanks for the pointers, Alistair! It does look like we need to be
> > > more careful with the migrate_to_ram() path. For now I can fall back to
> > > retrying with mmap_lock for this case, like we do with all cases
> > > today. Afterwards this path can be made ready for working under the VMA
> > > lock and we can remove that retry. Does that sound good?
> >
> > Sounds good to me. Fixing that shouldn't be too difficult but will need
> > changes to at least Nouveau and amdkfd (and hmm-tests obviously). Happy
> > to look at doing that if/when this change makes it in. Thanks.
>
> > >>
> > >> > - I believe we can't call handle_pte_marker() because we exclude UFFD
> > >> >   VMAs earlier.
> > >> > - We can call swap_readpage() if we allocate a new folio.  I haven't
> > >> >   traced through all this code to tell if it's OK.
> > >> >
> > >> > So ... I believe this is all OK, but we're definitely now willing to
> > >> > wait for I/O from the swap device while holding the VMA lock when we
> > >> > weren't before.  And maybe we should make a bigger deal of it in the
> > >> > changelog.
> > >> >
> > >> > And maybe we shouldn't just be failing the folio_lock_or_retry(),
> > >> > maybe we should be waiting for the folio lock with the VMA locked.
> > >>
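To make that last alternative concrete, a rough sketch (not the actual
mm/filemap.c code; the name and signature are simplified for
illustration) of waiting for the folio lock while keeping the VMA lock
held:

	/*
	 * Sketch only: instead of failing folio_lock_or_retry() when
	 * the fault is handled under the per-VMA lock, sleep on the
	 * folio lock with the VMA lock still held.  This is exactly
	 * where we would end up waiting for swap I/O with the VMA
	 * read-locked, which is the trade-off noted above.
	 */
	static vm_fault_t folio_lock_fault_sketch(struct folio *folio,
						  struct vm_fault *vmf)
	{
		if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
			/* May sleep for I/O, but never touches mmap_lock. */
			folio_lock(folio);
			return 0;
		}
		/* Existing behaviour: drop mmap_lock and retry the fault. */
		return VM_FAULT_RETRY;
	}

Whether blocking on swap I/O with the VMA read-locked is acceptable is
the open question raised above, so the fail-and-retry behaviour may
still be the safer default.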