From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 23 Feb 2022 11:42:02 -0800
Subject: Re: [PATCH 1/1] mm: count time in drain_all_pages during direct reclaim as memory pressure
To: Johannes Weiner
Cc: Minchan Kim, akpm@linux-foundation.org, mhocko@suse.com, peterz@infradead.org, guro@fb.com, shakeelb@google.com, timmurray@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20220219174940.2570901-1-surenb@google.com>
On Wed, Feb 23, 2022 at 11:06 AM Suren Baghdasaryan wrote:
>
> On Wed, Feb 23, 2022 at 10:54 AM Johannes Weiner wrote:
> >
> > On Sun, Feb 20, 2022 at 08:52:38AM -0800, Suren Baghdasaryan wrote:
> > > On Sat, Feb 19, 2022 at 4:40 PM Minchan Kim wrote:
> > > >
> > > > On Sat, Feb 19, 2022 at 09:49:40AM -0800, Suren Baghdasaryan wrote:
> > > > > When page allocation in the direct reclaim path fails, the system
> > > > > makes one attempt to shrink the per-cpu page lists and free pages
> > > > > from the high alloc reserves. Draining per-cpu pages into the buddy
> > > > > allocator can be a very slow operation because it is done using
> > > > > workqueues, and the task in direct reclaim waits for all of them
> > > > > to finish before
> > > >
> > > > Yes, drain_all_pages is seriously slow (100ms - 150ms on Android),
> > > > especially when the CPUs are fully packed. It was also spotted in
> > > > CMA allocation even when there was no memory pressure.
> > >
> > > Thanks for the input, Minchan!
> > > In my tests I've seen 50-60ms delays in a single drain_all_pages, but
> > > I can imagine there are cases worse than these.
> > >
> > > > > proceeding. Currently this time is not accounted as a psi memory
> > > > > stall.
> > > >
> > > > Good spot.
> > > >
> > > > > While testing mobile devices under extreme memory pressure, when
> > > > > allocations were failing during direct reclaim, we noticed that
> > > > > the psi events which would be expected under such conditions were
> > > > > not triggered. After profiling these cases, it was determined that
> > > > > the reason for the missing psi events was that a big chunk of the
> > > > > time spent in direct reclaim is not accounted as a memory stall,
> > > > > so psi would not reach the levels at which an event is generated.
> > > > > Further investigation revealed that the bulk of that unaccounted
> > > > > time was spent inside the drain_all_pages call.
> > > > >
> > > > > Annotate drain_all_pages and unreserve_highatomic_pageblock during
> > > > > page allocation failure in the direct reclaim path so that the
> > > > > delays caused by these calls are accounted as a memory stall.
> > > > >
> > > > > Reported-by: Tim Murray
> > > > > Signed-off-by: Suren Baghdasaryan
> > > > > ---
> > > > >  mm/page_alloc.c | 4 ++++
> > > > >  1 file changed, 4 insertions(+)
> > > > >
> > > > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > > > index 3589febc6d31..7fd0d392b39b 100644
> > > > > --- a/mm/page_alloc.c
> > > > > +++ b/mm/page_alloc.c
> > > > > @@ -4639,8 +4639,12 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> > > > >  	 * Shrink them and try again
> > > > >  	 */
> > > > >  	if (!page && !drained) {
> > > > > +		unsigned long pflags;
> > > > > +
> > > > > +		psi_memstall_enter(&pflags);
> > > > >  		unreserve_highatomic_pageblock(ac, false);
> > > > >  		drain_all_pages(NULL);
> > > > > +		psi_memstall_leave(&pflags);
> > > >
> > > > Instead of annotating the specific drain_all_pages, how about
> > > > moving the annotation from __perform_reclaim to
> > > > __alloc_pages_direct_reclaim?
> > >
> > > I'm fine with that approach too. Let's wait for Johannes' input
> > > before I make any changes.
> >
> > I think the change makes sense, even if the workqueue fix speeds up
> > the drain. I agree with Minchan about moving the annotation upward.
> >
> > With it moved, please feel free to add
> > Acked-by: Johannes Weiner
>
> Thanks Johannes!
> I'll move psi_memstall_enter/psi_memstall_leave from __perform_reclaim
> into __alloc_pages_direct_reclaim to cover it completely. After that I
> will continue with fixing the workqueue issue.

Posted v2 at
https://lore.kernel.org/all/20220223194018.1296629-1-surenb@google.com/
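For readers following the thread: the move agreed on above amounts to
opening a single psi stall-accounting window around the whole of
__alloc_pages_direct_reclaim, so the initial reclaim pass, the retry
through get_page_from_freelist, and the drain are all charged as one
contiguous memory stall. Below is a minimal sketch of that shape. The
psi_memstall_enter/psi_memstall_leave placement is what the thread
agrees on; the surrounding function body (the get_page_from_freelist
retry loop and did_some_progress bookkeeping) is a reconstruction of
the mm/page_alloc.c the quoted diff applies to and may differ in
detail from the v2 posting at the lore link, which is authoritative:

static inline struct page *
__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
		unsigned int alloc_flags, const struct alloc_context *ac,
		unsigned long *did_some_progress)
{
	struct page *page = NULL;
	unsigned long pflags;
	bool drained = false;

	/* One stall window covering reclaim, the retry, and the drain. */
	psi_memstall_enter(&pflags);
	*did_some_progress = __perform_reclaim(gfp_mask, order, ac);
	if (unlikely(!(*did_some_progress)))
		goto out;

retry:
	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);

	/*
	 * If an allocation failed after direct reclaim, it could be
	 * because pages are pinned on the per-cpu lists or in high
	 * alloc reserves. Shrink them and try again.
	 */
	if (!page && !drained) {
		unreserve_highatomic_pageblock(ac, false);
		drain_all_pages(NULL);	/* now inside the stall window */
		drained = true;
		goto retry;
	}
out:
	psi_memstall_leave(&pflags);

	return page;
}

With this shape, the 100-150ms drain_all_pages delays Minchan measured
are accounted to psi the same way the reclaim work itself is, so
pressure events can fire under the conditions the original report
describes.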