From: Suren Baghdasaryan
Date: Fri, 5 May 2023 15:30:04 -0700
Subject: Re: [PATCH 1/3] mm: handle swap page faults under VMA lock if page is uncontended
References: <20230501175025.36233-1-surenb@google.com> <87wn1nbcbg.fsf@yhuang6-desk2.ccr.corp.intel.com>
In-Reply-To: <87wn1nbcbg.fsf@yhuang6-desk2.ccr.corp.intel.com>
To: "Huang, Ying"
Cc: Yosry Ahmed, Matthew Wilcox, akpm@linux-foundation.org, hannes@cmpxchg.org, mhocko@suse.com, josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com, michel@lespinasse.org, liam.howlett@oracle.com, jglisse@google.com, vbabka@suse.cz, minchan@google.com, dave@stgolabs.net, punit.agrawal@bytedance.com, lstoakes@gmail.com, hdanton@sina.com, apopple@nvidia.com, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com, Ming Lei

On Thu, May 4, 2023 at 10:03 PM Huang, Ying wrote:
>
> Yosry Ahmed writes:
>
> > On Wed, May 3, 2023 at 12:57 PM Suren Baghdasaryan wrote:
> >>
> >> On Wed, May 3, 2023 at 1:34 AM Yosry Ahmed wrote:
> >> >
> >> > On Tue, May 2, 2023 at 4:05 PM Suren Baghdasaryan wrote:
> >> > >
> >> > > On Tue, May 2, 2023 at 3:31 PM Matthew Wilcox wrote:
> >> > > >
> >> > > > On Tue, May 02, 2023 at 09:36:03AM -0700, Suren Baghdasaryan wrote:
> >> > > > > On Tue, May 2, 2023 at 8:03 AM Matthew Wilcox wrote:
> >> > > > > >
> >> > > > > > On Mon, May 01, 2023 at 10:04:56PM -0700, Suren Baghdasaryan wrote:
> >> > > > > > > On Mon, May 1, 2023 at 8:22 PM Matthew Wilcox wrote:
> >> > > > > > > >
> >> > > > > > > > On Mon, May 01, 2023 at 07:30:13PM -0700, Suren Baghdasaryan wrote:
> >> > > > > > > > > On Mon, May 1, 2023 at 7:02 PM Matthew Wilcox wrote:
> >> > > > > > > > > >
> >> > > > > > > > > > On Mon, May 01, 2023 at 10:50:23AM -0700, Suren Baghdasaryan wrote:
> >> > > > > > > > > > > +++ b/mm/memory.c
> >> > > > > > > > > > > @@ -3711,11 +3711,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >> > > > > > > > > > >         if (!pte_unmap_same(vmf))
> >> > > > > > > > > > >                 goto out;
> >> > > > > > > > > > >
> >> > > > > > > > > > > -       if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
> >> > > > > > > > > > > -               ret = VM_FAULT_RETRY;
> >> > > > > > > > > > > -               goto out;
> >> > > > > > > > > > > -       }
> >> > > > > > > > > > > -
> >> > > > > > > > > > >         entry = pte_to_swp_entry(vmf->orig_pte);
> >> > > > > > > > > > >         if (unlikely(non_swap_entry(entry))) {
> >> > > > > > > > > > >                 if (is_migration_entry(entry)) {
> >> > > > > > > > > >
> >> > > > > > > > > > You're missing the necessary fallback in the (!folio) case.
> >> > > > > > > > > > swap_readpage() is synchronous and will sleep.
> >> > > > > > > > >
> >> > > > > > > > > True, but is it unsafe to do that under VMA lock and has to be done
> >> > > > > > > > > under mmap_lock?
> >> > > > > > > >
> >> > > > > > > > ... you were the one arguing that we didn't want to wait for I/O with
> >> > > > > > > > the VMA lock held?
> >> > > > > > >
> >> > > > > > > Well, that discussion was about waiting in folio_lock_or_retry() with
> >> > > > > > > the lock being held. I argued against it because currently we drop
> >> > > > > > > mmap_lock before waiting, so if we don't drop VMA lock we would
> >> > > > > > > be changing the current behavior which might introduce new
> >> > > > > > > regressions. In the case of swap_readpage and swapin_readahead we
> >> > > > > > > already wait with mmap_lock held, so waiting with VMA lock held does
> >> > > > > > > not introduce new problems (unless there is a need to hold mmap_lock).
> >> > > > > > >
> >> > > > > > > That said, you are absolutely correct that this situation can be
> >> > > > > > > improved by dropping the lock in these cases too. I just didn't want
> >> > > > > > > to attack everything at once. I believe after we agree on the approach
> >> > > > > > > implemented in https://lore.kernel.org/all/20230501175025.36233-3-surenb@google.com
> >> > > > > > > for dropping the VMA lock before waiting, these cases can be added
> >> > > > > > > more easily. Does that make sense?
> >> > > > > >
> >> > > > > > OK, I looked at this path some more, and I think we're fine.  This
> >> > > > > > patch is only called for SWP_SYNCHRONOUS_IO which is only set for
> >> > > > > > QUEUE_FLAG_SYNCHRONOUS devices, which are brd, zram and nvdimms
> >> > > > > > (both btt and pmem).  So the answer is that we don't sleep in this
> >> > > > > > path, and there's no need to drop the lock.
> >> > > > >
> >> > > > > Yes but swapin_readahead does sleep, so I'll have to handle that case
> >> > > > > too after this.
> >> > > >
> >> > > > Sleeping is OK, we do that in pXd_alloc()!  Do we block on I/O anywhere
> >> > > > in swapin_readahead()?  It all looks like async I/O to me.
> >> > >
> >> > > Hmm. I thought that we have synchronous I/O in the following paths:
> >> > > swapin_readahead()->swap_cluster_readahead()->swap_readpage()
> >> > > swapin_readahead()->swap_vma_readahead()->swap_readpage()
> >> > > but just noticed that in both cases swap_readpage() is called with the
> >> > > synchronous parameter being false. So you are probably right here...
> >> >
> >> > In both swap_cluster_readahead() and swap_vma_readahead() it looks
> >> > like if the readahead window is 1 (aka we are not reading ahead), then
> >> > we jump to directly calling read_swap_cache_async() passing do_poll =
> >> > true, which means we may end up calling swap_readpage() passing
> >> > synchronous = true.
> >> >
> >> > I am not familiar with readahead heuristics, so I am not sure how
> >> > common this is, but it's something to think about.
> >>
> >> Uh, you are correct. If this branch is common, we could use the same
> >> "drop the lock and retry" pattern inside read_swap_cache_async(). That
> >> would be quite easy to implement.
> >> Thanks for checking on it!
> >
> >
> > I am honestly not sure how common this is.
> >
> > +Ying who might have a better idea.
>
> Checked the code and related history.  It seems that we can just pass
> "synchronous = false" to swap_readpage() in read_swap_cache_async().
> "synchronous = true" was introduced in commit 23955622ff8d ("swap: add
> block io poll in swapin path") to reduce swap read latency for block
> devices that can be polled.  But in commit 9650b453a3d4 ("block: ignore
> RWF_HIPRI hint for sync dio"), the polling is deleted.  So, we don't
> need to pass "synchronous = true" to swap_readpage() during
> swapin_readahead(), because we will wait for the IO to complete in
> folio_lock_or_retry().

Thanks for investigating, Ying! It sounds like we can make some
simplifications here. I'll double-check and if I don't find anything
else, will change to "synchronous = false" in the next version of the
patchset.

>
> Best Regards,
> Huang, Ying
>
> >>
> >>
> >> >
> >> > > Does that mean swapin_readahead() might return a page which does not
> >> > > have its content swapped-in yet?
> >> > >
>
> --
> To unsubscribe from this group and stop receiving emails from it, send an email to kernel-team+unsubscribe@android.com.
>
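
P.S. For anyone skimming the archive: the "drop the lock and retry" pattern
referred to above has roughly the following shape. This is an illustrative
sketch only, not the actual hunk from the linked series; it reuses only
existing names (FAULT_FLAG_VMA_LOCK, vma_end_read(), VM_FAULT_RETRY), and the
exact placement and cleanup in the real patches may differ.

        /*
         * If the fault is being handled under the per-VMA lock and we are
         * about to block (e.g. on swap I/O), release the VMA lock first and
         * ask the caller to retry; the retried fault will then be handled
         * under mmap_lock.
         */
        if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
                vma_end_read(vmf->vma);
                ret = VM_FAULT_RETRY;
                goto out;
        }

Compared with the unconditional early bail-out removed by this patch, the
sketch releases the lock with vma_end_read() before returning and would only
be reached on paths that actually need to block; where exactly that release
ends up living is what the linked patch for dropping the VMA lock before
waiting is sorting out.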