From: Suren Baghdasaryan
Date: Thu, 24 Feb 2022 08:28:38 -0800
Subject: Re: [PATCH v3 1/1] mm: count time in drain_all_pages during direct reclaim as memory pressure
To: Michal Hocko
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, pmladek@suse.com, peterz@infradead.org, guro@fb.com, shakeelb@google.com, minchan@kernel.org, timmurray@google.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@android.com
References: <20220223194812.1299646-1-surenb@google.com>
charset="UTF-8" Authentication-Results: imf12.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=c80cR3l+; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf12.hostedemail.com: domain of surenb@google.com designates 209.85.128.175 as permitted sender) smtp.mailfrom=surenb@google.com X-Rspam-User: X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: A1DE34000F X-Stat-Signature: g6t1ktgzjbiixanx34mdinpk9jrxgwjn X-HE-Tag: 1645720130-696310 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: On Thu, Feb 24, 2022 at 12:53 AM 'Michal Hocko' via kernel-team wrote: > > On Wed 23-02-22 11:48:12, Suren Baghdasaryan wrote: > > When page allocation in direct reclaim path fails, the system will > > make one attempt to shrink per-cpu page lists and free pages from > > high alloc reserves. Draining per-cpu pages into buddy allocator can > > be a very slow operation because it's done using workqueues and the > > task in direct reclaim waits for all of them to finish before > > proceeding. Currently this time is not accounted as psi memory stall. > > > > While testing mobile devices under extreme memory pressure, when > > allocations are failing during direct reclaim, we notices that psi > > events which would be expected in such conditions were not triggered. > > After profiling these cases it was determined that the reason for > > missing psi events was that a big chunk of time spent in direct > > reclaim is not accounted as memory stall, therefore psi would not > > reach the levels at which an event is generated. Further investigation > > revealed that the bulk of that unaccounted time was spent inside > > drain_all_pages call. > > > > A typical captured case when drain_all_pages path gets activated: > > > > __alloc_pages_slowpath took 44.644.613ns > > __perform_reclaim took 751.668ns (1.7%) > > drain_all_pages took 43.887.167ns (98.3%) > > Although the draining is done in the slow path these numbers suggest > that we should really reconsider the use of WQ both for draining and > other purposes (like vmstats). Yep, I'm testing the kthread_create_worker_on_cpu approach suggested by Petr. Will post it later today if nothing regresses. > > > PSI in this case records the time spent in __perform_reclaim but > > ignores drain_all_pages, IOW it misses 98.3% of the time spent in > > __alloc_pages_slowpath. > > > > Annotate __alloc_pages_direct_reclaim in its entirety so that delays > > from handling page allocation failure in the direct reclaim path are > > accounted as memory stall. > > > > Reported-by: Tim Murray > > Signed-off-by: Suren Baghdasaryan > > Acked-by: Johannes Weiner > > Acked-by: Michal Hocko > > Thanks! 
>
> > PSI in this case records the time spent in __perform_reclaim but
> > ignores drain_all_pages, IOW it misses 98.3% of the time spent in
> > __alloc_pages_slowpath.
> >
> > Annotate __alloc_pages_direct_reclaim in its entirety so that delays
> > from handling page allocation failure in the direct reclaim path are
> > accounted as memory stall.
> >
> > Reported-by: Tim Murray
> > Signed-off-by: Suren Baghdasaryan
> > Acked-by: Johannes Weiner
>
> Acked-by: Michal Hocko
>
> Thanks!
>
> > ---
> > changes in v3:
> > - Moved psi_memstall_leave after the "out" label
> >
> >  mm/page_alloc.c | 10 ++++++----
> >  1 file changed, 6 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 3589febc6d31..029bceb79861 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4595,13 +4595,12 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
> >  					const struct alloc_context *ac)
> >  {
> >  	unsigned int noreclaim_flag;
> > -	unsigned long pflags, progress;
> > +	unsigned long progress;
> >
> >  	cond_resched();
> >
> >  	/* We now go into synchronous reclaim */
> >  	cpuset_memory_pressure_bump();
> > -	psi_memstall_enter(&pflags);
> >  	fs_reclaim_acquire(gfp_mask);
> >  	noreclaim_flag = memalloc_noreclaim_save();
> >
> > @@ -4610,7 +4609,6 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order,
> >
> >  	memalloc_noreclaim_restore(noreclaim_flag);
> >  	fs_reclaim_release(gfp_mask);
> > -	psi_memstall_leave(&pflags);
> >
> >  	cond_resched();
> >
> > @@ -4624,11 +4622,13 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> >  					unsigned long *did_some_progress)
> >  {
> >  	struct page *page = NULL;
> > +	unsigned long pflags;
> >  	bool drained = false;
> >
> > +	psi_memstall_enter(&pflags);
> >  	*did_some_progress = __perform_reclaim(gfp_mask, order, ac);
> >  	if (unlikely(!(*did_some_progress)))
> > -		return NULL;
> > +		goto out;
> >
> > retry:
> >  	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);
> > @@ -4644,6 +4644,8 @@ __alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
> >  		drained = true;
> >  		goto retry;
> >  	}
> > +out:
> > +	psi_memstall_leave(&pflags);
> >
> >  	return page;
> >  }
> > --
> > 2.35.1.473.g83b2b277ed-goog
>
> --
> Michal Hocko
> SUSE Labs
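For reference, with the patch applied __alloc_pages_direct_reclaim reads
roughly as below. The hunks above are authoritative; the context between
them (the unreserve_highatomic_pageblock()/drain_all_pages() retry block
and the exact function signature) is reconstructed here from the
surrounding mm/page_alloc.c code, so take this as a sketch of the end
result rather than a verbatim copy:

/* Sketch of __alloc_pages_direct_reclaim() after the patch. */
static inline struct page *
__alloc_pages_direct_reclaim(gfp_t gfp_mask, unsigned int order,
		unsigned int alloc_flags, const struct alloc_context *ac,
		unsigned long *did_some_progress)
{
	struct page *page = NULL;
	unsigned long pflags;
	bool drained = false;

	/* The whole reclaim + drain sequence now counts as a memory stall. */
	psi_memstall_enter(&pflags);
	*did_some_progress = __perform_reclaim(gfp_mask, order, ac);
	if (unlikely(!(*did_some_progress)))
		goto out;

retry:
	page = get_page_from_freelist(gfp_mask, order, alloc_flags, ac);

	/*
	 * If the allocation failed after direct reclaim, pages may still be
	 * pinned on the per-cpu lists or in high alloc reserves.
	 * Shrink them and try again.
	 */
	if (!page && !drained) {
		unreserve_highatomic_pageblock(ac, false);
		drain_all_pages(NULL);
		drained = true;
		goto retry;
	}
out:
	psi_memstall_leave(&pflags);

	return page;
}

Both __perform_reclaim() and the drain_all_pages() retry now sit between
psi_memstall_enter() and psi_memstall_leave(), so the time that psi
previously missed is accounted as a memory stall.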