From: Barry Song <21cnbao@gmail.com>
Date: Sat, 5 Oct 2024 00:03:17 +0800
Subject: Re: [PATCH] mm: avoid unconditional one-tick sleep when swapcache_prepare fails
To: Chris Li
Cc: "Huang, Ying", akpm@linux-foundation.org, david@redhat.com, hannes@cmpxchg.org, hughd@google.com, kaleshsingh@google.com, kasong@tencent.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, liyangouwen1@oppo.com, mhocko@suse.com, minchan@kernel.org, sj@kernel.org, stable@vger.kernel.org, surenb@google.com, v-songbaohua@oppo.com, willy@infradead.org, yosryahmed@google.com, yuzhao@google.com
On Fri, Oct 4, 2024 at 7:03 AM Chris Li wrote:
>
> On Wed, Oct 2, 2024 at 5:35 PM Huang, Ying wrote:
> >
> > Barry Song <21cnbao@gmail.com> writes:
> >
> > > On Wed, Oct 2, 2024 at 8:43 AM Huang, Ying wrote:
> > >>
> > >> Barry Song <21cnbao@gmail.com> writes:
> > >>
> > >> > On Tue, Oct 1, 2024 at 7:43 AM Huang, Ying wrote:
> > >> >>
> > >> >> Barry Song <21cnbao@gmail.com> writes:
> > >> >>
> > >> >> > On Sun, Sep 29, 2024 at 3:43 PM Huang, Ying wrote:
> > >> >> >>
> > >> >> >> Hi, Barry,
> > >> >> >>
> > >> >> >> Barry Song <21cnbao@gmail.com> writes:
> > >> >> >>
> > >> >> >> > From: Barry Song
> > >> >> >> >
> > >> >> >> > Commit 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> > >> >> >> > introduced an unconditional one-tick sleep when `swapcache_prepare()`
> > >> >> >> > fails, which has led to reports of UI stuttering on latency-sensitive
> > >> >> >> > Android devices. To address this, we can use a waitqueue to wake up
> > >> >> >> > tasks that fail `swapcache_prepare()` sooner, instead of always
> > >> >> >> > sleeping for a full tick. While tasks may occasionally be woken by an
> > >> >> >> > unrelated `do_swap_page()`, this method is preferable to the two
> > >> >> >> > alternatives: rapid re-entry into page faults, which can cause
> > >> >> >> > livelocks, and multiple-millisecond sleeps, which visibly degrade the
> > >> >> >> > user experience.
> > >> >> >>
> > >> >> >> In general, I think that this works. Why not extend the solution to
> > >> >> >> cover schedule_timeout_uninterruptible() in __read_swap_cache_async()
> > >> >> >> too? We can call wake_up() when we clear SWAP_HAS_CACHE. To avoid
> > >> >> >
> > >> >> > Hi Ying,
> > >> >> > Thanks for your comments.
> > >> >> > I feel extending the solution to __read_swap_cache_async() should be
> > >> >> > done in a separate patch. On phones, I've never encountered any issues
> > >> >> > reported on that path, so it might be better suited for an optimization
> > >> >> > rather than a hotfix?
> > >> >>
> > >> >> Yes. It's fine to do that in another patch as an optimization.
> > >> >
> > >> > Ok. I'll prepare a separate patch for optimizing that path.
> > >>
> > >> Thanks!
> > >>
> > >> >>
> > >> >> >> overhead to call wake_up() when there's no task waiting, we can use an
> > >> >> >> atomic to count waiting tasks.
> > >> >> >
> > >> >> > I'm not sure it's worth adding the complexity, as wake_up() on an empty
> > >> >> > waitqueue should have a very low cost on its own?
> > >> >>
> > >> >> wake_up() needs to call spin_lock_irqsave() unconditionally on a global
> > >> >> shared lock. On systems with many CPUs (such as servers), this may cause
> > >> >> severe lock contention. Even the cache ping-pong may hurt performance
> > >> >> much.
> > >> >
> > >> > I understand that cache synchronization was a significant issue before
> > >> > qspinlock, but it seems to be less of a concern after its implementation.
> > >>
> > >> Unfortunately, qspinlock cannot eliminate the cache ping-pong issue, as
> > >> discussed in the following thread.
> > >>
> > >> https://lore.kernel.org/lkml/20220510192708.GQ76023@worktop.programming.kicks-ass.net/
> > >>
> > >> > However, using a global atomic variable would still trigger cache
> > >> > broadcasts, correct?
> > >>
> > >> We can only change the atomic variable to non-zero when
> > >> swapcache_prepare() returns non-zero, and call wake_up() when the atomic
> > >> variable is non-zero. Because swapcache_prepare() returns 0 most times,
> > >> the atomic variable is 0 most times. If we don't change the value of the
> > >> atomic variable, cache ping-pong will not be triggered.
> > >
> > > Yes, this can be implemented by adding another atomic variable.
> >
> > Just realized that we don't need another atomic variable for this; just
> > using waitqueue_active() before wake_up() should be enough.
> >
> > >>
> > >> Hi, Kairui,
> > >>
> > >> Do you have some test cases to test parallel zram swap-in? If so, they
> > >> can be used to verify whether cache ping-pong is an issue and whether it
> > >> can be fixed via a global atomic variable.
> > >>
> > >
> > > Yes. Kairui, please run a test on your machine with lots of cores before
> > > and after adding a global atomic variable as suggested by Ying. I am
> > > sorry I don't have a server machine.
> > >
> > > If it turns out you find that cache ping-pong can be an issue, another
> > > approach would be a waitqueue hash:
> >
> > Yes, a waitqueue hash may help reduce lock contention. And we can have
> > both waitqueue_active() and the waitqueue hash if necessary. As the
> > first step, waitqueue_active() appears simpler.
>
> Interesting. I just took a look at waitqueue_active(): it requires
> smp_mb() if used without holding the lock.
> Quote from the comment of waitqueue_active():
> * Also note that this 'optimization' trades a spin_lock() for an smp_mb(),
> * which (when the lock is uncontended) are of roughly equal cost.
>

Probably not a problem in our case, for two reasons:
1. we don't have a wait condition here;
2. a false positive/negative wake_up() won't cause a problem here. We used
to always sleep at least 4ms on embedded systems; if we can kill 99% of
those sleeps, it is all good.

Ideally, we could combine the waitqueue hash with waitqueue_active(), but
Kairui's test shows that performance is still acceptable even if we do
neither of the above. So we can probably keep things simple and just add
an if (waitqueue_active()) check before wake_up().
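To make that concrete, the wake-up side would look roughly like this (an
untested sketch on top of the diff quoted below, where swapcache_wq is the
hashed waitqueue from that diff):

	if (need_clear_cache) {
		swapcache_clear(si, entry, nr_pages);
		/*
		 * waitqueue_active() without holding the waitqueue lock is
		 * racy, but a missed or spurious wakeup is harmless here:
		 * waiters sleep for at most one tick anyway. This only
		 * skips taking the lock in the common no-waiter case.
		 */
		if (waitqueue_active(swapcache_wq))
			wake_up(swapcache_wq);
	}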
> Chris
>
> >
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index 2366578015ad..aae0e532d8b6 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -4192,6 +4192,23 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
> > >  }
> > >  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> > >
> > > +/*
> > > + * Alleviating the 'thundering herd' phenomenon using a waitqueue hash
> > > + * when multiple do_swap_page() operations occur simultaneously.
> > > + */
> > > +#define SWAPCACHE_WAIT_TABLE_BITS 5
> > > +#define SWAPCACHE_WAIT_TABLE_SIZE (1 << SWAPCACHE_WAIT_TABLE_BITS)
> > > +static wait_queue_head_t swapcache_wqs[SWAPCACHE_WAIT_TABLE_SIZE];
> > > +
> > > +static int __init swapcache_wqs_init(void)
> > > +{
> > > +	for (int i = 0; i < SWAPCACHE_WAIT_TABLE_SIZE; i++)
> > > +		init_waitqueue_head(&swapcache_wqs[i]);
> > > +
> > > +	return 0;
> > > +}
> > > +late_initcall(swapcache_wqs_init);
> > > +
> > >  /*
> > >   * We enter with non-exclusive mmap_lock (to exclude vma changes,
> > >   * but allow concurrent faults), and pte mapped but not yet locked.
> > > @@ -4204,6 +4221,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > >  {
> > >  	struct vm_area_struct *vma = vmf->vma;
> > >  	struct folio *swapcache, *folio = NULL;
> > > +	DECLARE_WAITQUEUE(wait, current);
> > > +	wait_queue_head_t *swapcache_wq;
> > >  	struct page *page;
> > >  	struct swap_info_struct *si = NULL;
> > >  	rmap_t rmap_flags = RMAP_NONE;
> > > @@ -4297,12 +4316,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > >  			 * undetectable as pte_same() returns true due
> > >  			 * to entry reuse.
> > >  			 */
> > > +			swapcache_wq = &swapcache_wqs[hash_long(vmf->address & PMD_MASK,
> > > +								SWAPCACHE_WAIT_TABLE_BITS)];
> > >  			if (swapcache_prepare(entry, nr_pages)) {
> > >  				/*
> > >  				 * Relax a bit to prevent rapid
> > >  				 * repeated page faults.
> > >  				 */
> > > +				add_wait_queue(swapcache_wq, &wait);
> > >  				schedule_timeout_uninterruptible(1);
> > > +				remove_wait_queue(swapcache_wq, &wait);
> > >  				goto out_page;
> > >  			}
> > >  			need_clear_cache = true;
> > > @@ -4609,8 +4632,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > >  	pte_unmap_unlock(vmf->pte, vmf->ptl);
> > >  out:
> > >  	/* Clear the swap cache pin for direct swapin after PTL unlock */
> > > -	if (need_clear_cache)
> > > +	if (need_clear_cache) {
> > >  		swapcache_clear(si, entry, nr_pages);
> > > +		wake_up(swapcache_wq);
> > > +	}
> > >  	if (si)
> > >  		put_swap_device(si);
> > >  	return ret;
> > > @@ -4625,8 +4650,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > >  		folio_unlock(swapcache);
> > >  		folio_put(swapcache);
> > >  	}
> > > -	if (need_clear_cache)
> > > +	if (need_clear_cache) {
> > >  		swapcache_clear(si, entry, nr_pages);
> > > +		wake_up(swapcache_wq);
> > > +	}
> > >  	if (si)
> > >  		put_swap_device(si);
> > >  	return ret;
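One note on the hash key above, in case it looks arbitrary: racing faults
on the same large folio all land inside one PMD-sized, naturally aligned
virtual region, so hashing vmf->address & PMD_MASK sends them to the same
bucket. Collisions are harmless in both directions: an unrelated fault that
hashes to the same bucket causes at most a spurious early wakeup, and a
waiter that never gets a wakeup still falls back to the one-tick timeout,
i.e. today's behaviour. A tiny userspace illustration of the bucketing (the
addresses and the simplified hash are invented for the demo; the kernel
side uses hash_long()):

	#include <stdint.h>
	#include <stdio.h>

	#define DEMO_PMD_MASK	(~(((uint64_t)1 << 21) - 1))	/* 2MiB regions, 4KiB pages */
	#define DEMO_TABLE_BITS	5

	/* crude stand-in for the kernel's hash_long() */
	static uint64_t demo_hash(uint64_t v)
	{
		return (v * 0x61C8864680B583EBULL) >> (64 - DEMO_TABLE_BITS);
	}

	int main(void)
	{
		/* two threads faulting on different subpages of one large folio */
		uint64_t a = 0x7f0a00201000ULL;
		uint64_t b = 0x7f0a003ff000ULL;

		/* same 2MiB region -> same key -> same waitqueue bucket */
		printf("same bucket: %d\n",
		       demo_hash(a & DEMO_PMD_MASK) == demo_hash(b & DEMO_PMD_MASK));
		return 0;
	}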
> >
> > --
> > Best Regards,
> > Huang, Ying

Thanks
Barry