Date: Wed, 19 Feb 2020 20:35:29 +0100
From: Michal Hocko
To: Sultan Alsawaf
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mel Gorman, Johannes Weiner
Subject: Re: [PATCH] mm: Stop kswapd early when nothing's waiting for it to free pages
Message-ID: <20200219193529.GD11847@dhcp22.suse.cz>
References: <20200219182522.1960-1-sultan@kerneltoast.com>
In-Reply-To: <20200219182522.1960-1-sultan@kerneltoast.com>

[Cc Mel and Johannes]

On Wed 19-02-20 10:25:22, Sultan Alsawaf wrote:
> From: Sultan Alsawaf
> 
> Keeping kswapd running when all the failed allocations that invoked it
> are satisfied incurs a high overhead due to unnecessary page eviction
> and writeback, as well as spurious VM pressure events to various
> registered shrinkers. When kswapd doesn't need to work to make an
> allocation succeed anymore, stop it prematurely to save resources.

I do not think this patch is correct. kswapd is supposed to balance a
node and get it up to the high watermark. The number of contexts which
woke it up is not really relevant, if for no other reason than that each
allocation request might be of a different size. Could you be more
specific about the problem you are trying to address, please?
> Signed-off-by: Sultan Alsawaf
> ---
>  include/linux/mmzone.h |  2 ++
>  mm/page_alloc.c        | 17 ++++++++++++++---
>  mm/vmscan.c            |  3 ++-
>  3 files changed, 18 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 462f6873905a..49c922abfb90 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -20,6 +20,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
> 
>  /* Free memory management - zoned buddy allocator. */
> @@ -735,6 +736,7 @@ typedef struct pglist_data {
>  	unsigned long node_spanned_pages; /* total size of physical page
>  					     range, including holes */
>  	int node_id;
> +	refcount_t kswapd_waiters;
>  	wait_queue_head_t kswapd_wait;
>  	wait_queue_head_t pfmemalloc_wait;
>  	struct task_struct *kswapd;	/* Protected by
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3c4eb750a199..2d4caacfd2fc 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4401,6 +4401,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	int no_progress_loops;
>  	unsigned int cpuset_mems_cookie;
>  	int reserve_flags;
> +	pg_data_t *pgdat = ac->preferred_zoneref->zone->zone_pgdat;
> +	bool woke_kswapd = false;
> 
>  	/*
>  	 * We also sanity check to catch abuse of atomic reserves being used by
> @@ -4434,8 +4436,13 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	if (!ac->preferred_zoneref->zone)
>  		goto nopage;
> 
> -	if (alloc_flags & ALLOC_KSWAPD)
> +	if (alloc_flags & ALLOC_KSWAPD) {
> +		if (!woke_kswapd) {
> +			refcount_inc(&pgdat->kswapd_waiters);
> +			woke_kswapd = true;
> +		}
>  		wake_all_kswapds(order, gfp_mask, ac);
> +	}
> 
>  	/*
>  	 * The adjusted alloc_flags might result in immediate success, so try
> @@ -4640,9 +4647,12 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  		goto retry;
>  	}
>  fail:
> -	warn_alloc(gfp_mask, ac->nodemask,
> -			"page allocation failure: order:%u", order);
>  got_pg:
> +	if (woke_kswapd)
> +		refcount_dec(&pgdat->kswapd_waiters);
> +	if (!page)
> +		warn_alloc(gfp_mask, ac->nodemask,
> +				"page allocation failure: order:%u", order);
>  	return page;
>  }
> 
> @@ -6711,6 +6721,7 @@ static void __meminit pgdat_init_internals(struct pglist_data *pgdat)
>  	pgdat_page_ext_init(pgdat);
>  	spin_lock_init(&pgdat->lru_lock);
>  	lruvec_init(&pgdat->__lruvec);
> +	pgdat->kswapd_waiters = (refcount_t)REFCOUNT_INIT(0);
>  }
> 
>  static void __meminit zone_init_internals(struct zone *zone, enum zone_type idx, int nid,
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c05eb9efec07..e795add372d1 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -3694,7 +3694,8 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)
>  		__fs_reclaim_release();
>  		ret = try_to_freeze();
>  		__fs_reclaim_acquire();
> -		if (ret || kthread_should_stop())
> +		if (ret || kthread_should_stop() ||
> +		    !refcount_read(&pgdat->kswapd_waiters))
>  			break;
> 
>  		/*
> -- 
> 2.25.1

-- 
Michal Hocko
SUSE Labs