From: Suren Baghdasaryan <surenb@google.com>
Date: Thu, 17 Mar 2022 16:04:11 -0700
Subject: Re: [RFC 1/1] mm: page_alloc: replace mm_percpu_wq with kthreads in drain_all_pages
To: Michal Hocko
Cc: Andrew Morton, Johannes Weiner, Petr Mladek, Peter Zijlstra, Roman Gushchin, Shakeel Butt, Minchan Kim, Tim Murray, linux-mm, LKML, kernel-team
References: <20220225012819.1807147-1-surenb@google.com>
On Mon, Mar 7, 2022 at 9:24 AM Suren Baghdasaryan <surenb@google.com> wrote:
>
> On Mon, Mar 7, 2022 at 9:04 AM 'Michal Hocko' via kernel-team wrote:
> >
> > On Thu 24-02-22 17:28:19, Suren Baghdasaryan wrote:
> > > Sending as an RFC to confirm whether this is the right direction and to
> > > clarify whether other tasks currently executed on mm_percpu_wq should
> > > also be moved to kthreads. The patch seems stable in testing, but I want
> > > to collect more performance data before submitting a non-RFC version.
> > >
> > > Currently drain_all_pages uses mm_percpu_wq to drain pages from the pcp
> > > lists during direct reclaim. Tasks on a workqueue can be delayed by
> > > other tasks in workqueues sharing the same per-cpu worker pool. This
> > > results in sizable delays in drain_all_pages when the cpus are highly
> > > contended.
> >
> > This is not about cpus being highly contended. It is about too much work
> > on the WQ context.
>
> Ack.
>
> > > Memory management operations designed to relieve memory pressure should
> > > not be blocked by other tasks, especially if the task in direct reclaim
> > > has higher priority than the blocking tasks.
> >
> > Agreed here.
> >
> > > Replace the usage of mm_percpu_wq with per-cpu low-priority FIFO
> > > kthreads to execute draining tasks.
> >
> > This looks like a natural thing to do when WQ context is not suitable,
> > but I am not sure the additional resources are really justified. Large
> > machines with a lot of cpus would create a lot of kernel threads. Can we
> > do better than that?
> >
> > Would it be possible to have fewer workers (e.g. 1, or one per NUMA node)
> > that perform the work on a dedicated cpu by changing their affinity? Or
> > would that introduce an unacceptable overhead?
>
> Not sure, but I can try implementing per-node kthreads and measure the
> performance of the reclaim path, comparing with the current and with the
> per-cpu approach.

Just to update on this RFC: in my testing I don't yet see a meaningful
improvement from using the kthreads. This might be due to my test setup,
so I'll keep exploring. I will post the next version only if I get
demonstrable improvements. Thanks!

> > Or would it be possible to update the existing WQ code to use the
> > rescuer well before the WQ is completely clogged?
> > --
> > Michal Hocko
> > SUSE Labs
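[Editor's note] The scheme debated above can be illustrated with a small userspace analogy: one dedicated drain worker per CPU, each woken only by drain requests, so a request never queues behind unrelated work the way it can on a shared workqueue worker pool. This is a minimal Python sketch of the idea, not the kernel patch; the names `DrainWorker` and `drain_all_pages` only loosely mirror the kernel's, and the page counts are invented for illustration.

```python
import queue
import threading

class DrainWorker:
    """One dedicated worker per 'CPU', analogous to a per-cpu kthread.

    Its queue carries only drain requests, so a drain is never stuck
    behind unrelated items the way it can be on a shared worker pool.
    """
    def __init__(self, cpu):
        self.cpu = cpu
        self.pcp_pages = 0          # pages sitting on this CPU's pcp list
        self.drained = 0            # total pages this worker has freed
        self._wake = queue.Queue()  # drain requests only
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            req = self._wake.get()
            if req is None:              # shutdown sentinel
                return
            self.drained += self.pcp_pages   # "free" the batched pages
            self.pcp_pages = 0
            req.set()                    # signal completion to the requester

    def request_drain(self):
        """Queue a drain request; returns an event set when it completes."""
        done = threading.Event()
        self._wake.put(done)
        return done

    def stop(self):
        self._wake.put(None)
        self._thread.join()

def drain_all_pages(workers):
    """Loose analogue of drain_all_pages(): wake every per-CPU worker,
    then wait for all of them to finish draining."""
    events = [w.request_drain() for w in workers]
    for ev in events:
        ev.wait()
    return sum(w.drained for w in workers)
```

A drain then looks like: populate each worker's `pcp_pages`, call `drain_all_pages(workers)`, and every "CPU" drains concurrently on its own dedicated thread. Michal's sizing concern maps directly onto how many `DrainWorker` objects get created: one per CPU versus one per node.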