From mboxrd@z Thu Jan  1 00:00:00 1970
From: Chris Li <chrisl@kernel.org>
Date: Thu, 3 Oct 2024 15:53:08 -0700
Subject: Re: [PATCH] mm: avoid unconditional one-tick sleep when swapcache_prepare fails
To: Barry Song <21cnbao@gmail.com>
Cc: ying.huang@intel.com, akpm@linux-foundation.org, david@redhat.com,
	hannes@cmpxchg.org, hughd@google.com, kaleshsingh@google.com,
	kasong@tencent.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	liyangouwen1@oppo.com, mhocko@suse.com, minchan@kernel.org,
	sj@kernel.org, stable@vger.kernel.org, surenb@google.com,
	v-songbaohua@oppo.com, willy@infradead.org, yosryahmed@google.com,
	yuzhao@google.com
In-Reply-To: <20241002015754.969-1-21cnbao@gmail.com>
References: <87y137nxqs.fsf@yhuang6-desk2.ccr.corp.intel.com> <20241002015754.969-1-21cnbao@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

On Tue, Oct 1, 2024 at 6:58 PM Barry Song <21cnbao@gmail.com> wrote:
>
> On Wed, Oct 2, 2024 at 8:43 AM Huang, Ying wrote:
> >
> > Barry Song <21cnbao@gmail.com> writes:
> >
> > > On Tue, Oct 1, 2024 at 7:43 AM Huang, Ying wrote:
> > >>
> > >> Barry Song <21cnbao@gmail.com> writes:
> > >>
> > >> > On Sun, Sep 29, 2024 at 3:43 PM Huang, Ying wrote:
> > >> >>
> > >> >> Hi, Barry,
> > >> >>
> > >> >> Barry Song <21cnbao@gmail.com> writes:
> > >> >>
> > >> >> > From: Barry Song
> > >> >> >
> > >> >> > Commit 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> > >> >> > introduced an unconditional one-tick sleep when `swapcache_prepare()`
> > >> >> > fails, which has led to reports of UI stuttering on latency-sensitive
> > >> >> > Android devices. To address this, we can use a waitqueue to wake up
> > >> >> > tasks that fail `swapcache_prepare()` sooner, instead of always
> > >> >> > sleeping for a full tick. While tasks may occasionally be woken by an
> > >> >> > unrelated `do_swap_page()`, this method is preferable to the two
> > >> >> > alternatives: rapid re-entry into page faults, which can cause
> > >> >> > livelocks, and multiple millisecond sleeps, which visibly degrade
> > >> >> > user experience.
> > >> >>
> > >> >> In general, I think that this works. Why not extend the solution to
> > >> >> cover schedule_timeout_uninterruptible() in __read_swap_cache_async()
> > >> >> too? We can call wake_up() when we clear SWAP_HAS_CACHE. To avoid
> > >> >
> > >> > Hi Ying,
> > >> > Thanks for your comments.
> > >> > I feel extending the solution to __read_swap_cache_async() should be
> > >> > done in a separate patch. On phones, I've never encountered any issues
> > >> > reported on that path, so it might be better suited for an optimization
> > >> > rather than a hotfix?
> > >>
> > >> Yes. It's fine to do that in another patch as an optimization.
> > >
> > > Ok. I'll prepare a separate patch for optimizing that path.
> >
> > Thanks!
> >
> > >>
> > >> >> the overhead of calling wake_up() when there's no task waiting, we
> > >> >> can use an atomic to count waiting tasks.
> > >> >
> > >> > I'm not sure it's worth adding the complexity, as wake_up() on an
> > >> > empty waitqueue should have a very low cost on its own?
> > >>
> > >> wake_up() needs to call spin_lock_irqsave() unconditionally on a global
> > >> shared lock. On systems with many CPUs (such as servers), this may cause
> > >> severe lock contention. Even the cache ping-pong may hurt performance
> > >> significantly.
> > >
> > > I understand that cache synchronization was a significant issue before
> > > qspinlock, but it seems to be less of a concern after its implementation.
> >
> > Unfortunately, qspinlock cannot eliminate the cache ping-pong issue, as
> > discussed in the following thread.
> >
> > https://lore.kernel.org/lkml/20220510192708.GQ76023@worktop.programming.kicks-ass.net/
> >
> > > However, using a global atomic variable would still trigger cache
> > > broadcasts, correct?
> >
> > We can change the atomic variable to non-zero only when
> > swapcache_prepare() returns non-zero, and call wake_up() only when the
> > atomic variable is non-zero. Because swapcache_prepare() returns 0 most
> > of the time, the atomic variable is 0 most of the time. If we don't
> > change the value of the atomic variable, cache ping-pong will not be
> > triggered.
>
> Yes, this can be implemented by adding another atomic variable.
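To make sure we are talking about the same mechanism, here is a rough,
untested sketch of how I read Ying's suggestion; the helper and counter
names (swapcache_wq_waiters and friends) are made up for illustration:

#include <linux/atomic.h>
#include <linux/sched.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(swapcache_wq);
static atomic_t swapcache_wq_waiters = ATOMIC_INIT(0);

/* Loser of the swapcache_prepare() race: sleep, but stay wakeable. */
static void swapcache_wait_for_cache_clear(void)
{
        DECLARE_WAITQUEUE(wait, current);

        atomic_inc(&swapcache_wq_waiters);
        add_wait_queue(&swapcache_wq, &wait);
        /* The sleep stays bounded by one tick if no wakeup ever arrives. */
        schedule_timeout_uninterruptible(1);
        remove_wait_queue(&swapcache_wq, &wait);
        atomic_dec(&swapcache_wq_waiters);
}

/* Winner: called right after swapcache_clear() drops SWAP_HAS_CACHE. */
static void swapcache_wake_waiters(void)
{
        /*
         * Reading the atomic keeps the common, uncontended path off the
         * wait-queue spinlock, so no shared cache line is written.
         */
        if (atomic_read(&swapcache_wq_waiters))
                wake_up(&swapcache_wq);
}

A waiter that increments the counter just after the winner reads zero only
falls back to the full one-tick sleep the current code already does, so
that race looks harmless.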
> >
> > Hi, Kairui,
> >
> > Do you have some test cases to test parallel zram swap-in? If so, they
> > can be used to verify whether cache ping-pong is an issue and whether
> > it can be fixed via a global atomic variable.
> >
>
> Yes, Kairui, please run a test on your machine with lots of cores before
> and after adding a global atomic variable as suggested by Ying. I am
> sorry I don't have a server machine.
>
> If it turns out that cache ping-pong is an issue, another approach would
> be a waitqueue hash:
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 2366578015ad..aae0e532d8b6 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -4192,6 +4192,23 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
>  }
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> +/*
> + * Alleviating the 'thundering herd' phenomenon using a waitqueue hash
> + * when multiple do_swap_page() operations occur simultaneously.
> + */
> +#define SWAPCACHE_WAIT_TABLE_BITS 5
> +#define SWAPCACHE_WAIT_TABLE_SIZE (1 << SWAPCACHE_WAIT_TABLE_BITS)
> +static wait_queue_head_t swapcache_wqs[SWAPCACHE_WAIT_TABLE_SIZE];
> +
> +static int __init swapcache_wqs_init(void)
> +{
> +        for (int i = 0; i < SWAPCACHE_WAIT_TABLE_SIZE; i++)
> +                init_waitqueue_head(&swapcache_wqs[i]);
> +
> +        return 0;
> +}
> +late_initcall(swapcache_wqs_init);
> +
>  /*
>   * We enter with non-exclusive mmap_lock (to exclude vma changes,
>   * but allow concurrent faults), and pte mapped but not yet locked.
> @@ -4204,6 +4221,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  {
>          struct vm_area_struct *vma = vmf->vma;
>          struct folio *swapcache, *folio = NULL;
> +        DECLARE_WAITQUEUE(wait, current);
> +        wait_queue_head_t *swapcache_wq;
>          struct page *page;
>          struct swap_info_struct *si = NULL;
>          rmap_t rmap_flags = RMAP_NONE;
> @@ -4297,12 +4316,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>                           * undetectable as pte_same() returns true due
>                           * to entry reuse.
>                           */
> +                        swapcache_wq = &swapcache_wqs[hash_long(vmf->address & PMD_MASK,
> +                                                  SWAPCACHE_WAIT_TABLE_BITS)];

It is better to hash against the swap entry value rather than the fault
address: the same swap entry can be mapped at different places in the page
table. I am not sure that is triggerable in the SYNC_IO page fault path,
but hashing against the swap entry is more obviously correct.

Chris

>                          if (swapcache_prepare(entry, nr_pages)) {
>                                  /*
>                                   * Relax a bit to prevent rapid
>                                   * repeated page faults.
>                                   */
> +                                add_wait_queue(swapcache_wq, &wait);
>                                  schedule_timeout_uninterruptible(1);
> +                                remove_wait_queue(swapcache_wq, &wait);
>                                  goto out_page;
>                          }
>                          need_clear_cache = true;
> @@ -4609,8 +4632,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>          pte_unmap_unlock(vmf->pte, vmf->ptl);
>  out:
>          /* Clear the swap cache pin for direct swapin after PTL unlock */
> -        if (need_clear_cache)
> +        if (need_clear_cache) {
>                  swapcache_clear(si, entry, nr_pages);
> +                wake_up(swapcache_wq);
> +        }
>          if (si)
>                  put_swap_device(si);
>          return ret;
> @@ -4625,8 +4650,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>                  folio_unlock(swapcache);
>                  folio_put(swapcache);
>          }
> -        if (need_clear_cache)
> +        if (need_clear_cache) {
>                  swapcache_clear(si, entry, nr_pages);
> +                wake_up(swapcache_wq);
> +        }
>          if (si)
>                  put_swap_device(si);
>          return ret;
> --
> 2.34.1
>
> > --
> > Best Regards,
> > Huang, Ying
>
> Thanks
> Barry
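PS: concretely, the hashing change I mean is a one-line change on top of
your diff; a rough, untested sketch (swp_entry_t already carries the raw
value in entry.val):

-                        swapcache_wq = &swapcache_wqs[hash_long(vmf->address & PMD_MASK,
-                                                  SWAPCACHE_WAIT_TABLE_BITS)];
+                        swapcache_wq = &swapcache_wqs[hash_long(entry.val,
+                                                  SWAPCACHE_WAIT_TABLE_BITS)];

That way, concurrent faults on the same swap entry always land on the same
waitqueue, regardless of where the entry happens to be mapped in the page
table.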