From mboxrd@z Thu Jan  1 00:00:00 1970
From: Suren Baghdasaryan
Date: Mon, 1 May 2023 22:04:56 -0700
Subject: Re: [PATCH 1/3] mm: handle swap page faults under VMA lock if page is uncontended
To: Matthew Wilcox
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, mhocko@suse.com, josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com, michel@lespinasse.org, liam.howlett@oracle.com, jglisse@google.com, vbabka@suse.cz, minchan@google.com, dave@stgolabs.net, punit.agrawal@bytedance.com, lstoakes@gmail.com, hdanton@sina.com, apopple@nvidia.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20230501175025.36233-1-surenb@google.com>
Content-Type: text/plain; charset="UTF-8"
charset="UTF-8" Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000002, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Mon, May 1, 2023 at 8:22=E2=80=AFPM Matthew Wilcox = wrote: > > On Mon, May 01, 2023 at 07:30:13PM -0700, Suren Baghdasaryan wrote: > > On Mon, May 1, 2023 at 7:02=E2=80=AFPM Matthew Wilcox wrote: > > > > > > On Mon, May 01, 2023 at 10:50:23AM -0700, Suren Baghdasaryan wrote: > > > > +++ b/mm/memory.c > > > > @@ -3711,11 +3711,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf= ) > > > > if (!pte_unmap_same(vmf)) > > > > goto out; > > > > > > > > - if (vmf->flags & FAULT_FLAG_VMA_LOCK) { > > > > - ret =3D VM_FAULT_RETRY; > > > > - goto out; > > > > - } > > > > - > > > > entry =3D pte_to_swp_entry(vmf->orig_pte); > > > > if (unlikely(non_swap_entry(entry))) { > > > > if (is_migration_entry(entry)) { > > > > > > You're missing the necessary fallback in the (!folio) case. > > > swap_readpage() is synchronous and will sleep. > > > > True, but is it unsafe to do that under VMA lock and has to be done > > under mmap_lock? > > ... you were the one arguing that we didn't want to wait for I/O with > the VMA lock held? Well, that discussion was about waiting in folio_lock_or_retry() with the lock being held. I argued against it because currently we drop mmap_lock lock before waiting, so if we don't drop VMA lock we would be changing the current behavior which might introduce new regressions. In the case of swap_readpage and swapin_readahead we already wait with mmap_lock held, so waiting with VMA lock held does not introduce new problems (unless there is a need to hold mmap_lock). That said, you are absolutely correct that this situation can be improved by dropping the lock in these cases too. I just didn't want to attack everything at once. I believe after we agree on the approach implemented in https://lore.kernel.org/all/20230501175025.36233-3-surenb@go= ogle.com for dropping the VMA lock before waiting, these cases can be added easier. Does that make sense?