Date: Thu, 25 May 2023 09:54:07 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
Cc: Andrew Morton, Suren Baghdasaryan, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Zhaoyang Huang, ke.wang@unisoc.com
Subject: Re: [PATCH] mm: deduct the number of pages reclaimed by madvise from workingset
Message-ID: <20230525135407.GA31865@cmpxchg.org>
References: <1684919574-28368-1-git-send-email-zhaoyang.huang@unisoc.com>
In-Reply-To: <1684919574-28368-1-git-send-email-zhaoyang.huang@unisoc.com>

On Wed, May 24, 2023 at 05:12:54PM +0800, zhaoyang.huang wrote:
> From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>
> The pages reclaimed by madvise_pageout() are deactivated and dropped
> from the LRU forcefully, which leaves subsequently refaulting pages
> with a larger refault distance than they should have. This can hurt
> the accuracy of thrashing detection when madvise_pageout() is used as
> a common way of reclaiming memory, as Android does now.

This alludes to, but doesn't explain, a real world usecase.

Yes, madvise_pageout() will record non-resident entries today. This
means refault and thrash detection is on for user-driven reclaim. So
why is that undesirable?

Today we measure and report the cost of reclaim and memory pressure
for physical memory shortages, cgroup limits, and user-driven cgroup
reclaim. Why should we not do the same for MADV_PAGEOUT?

If the userspace code that drives pageout has a bug and the result is
extreme thrashing, wouldn't you want to know that? Please explain the
idea here better.
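For context, the reclaim pass in question is the one a process drives
on its own address space from userspace, roughly like this (minimal,
self-contained sketch; the mapping and its size are arbitrary
examples):

	#include <errno.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	#ifndef MADV_PAGEOUT
	#define MADV_PAGEOUT 21	/* <asm-generic/mman-common.h>, Linux 5.4+ */
	#endif

	int main(void)
	{
		size_t len = 64 << 20;	/* arbitrary 64M example region */
		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf == MAP_FAILED)
			return 1;
		memset(buf, 'x', len);	/* fault the pages in */

		/*
		 * Userspace declares the range cold and asks the kernel
		 * to reclaim it; this reaches madvise_pageout() and,
		 * from there, reclaim_pages().
		 */
		if (madvise(buf, len, MADV_PAGEOUT))
			fprintf(stderr, "MADV_PAGEOUT: %s\n", strerror(errno));

		return 0;
	}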
> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> ---
>  include/linux/swap.h | 2 +-
>  mm/madvise.c         | 4 ++--
>  mm/vmscan.c          | 8 +++++++-
>  3 files changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 2787b84..0312142 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -428,7 +428,7 @@ extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem,
>  extern int vm_swappiness;
>  long remove_mapping(struct address_space *mapping, struct folio *folio);
>
> -extern unsigned long reclaim_pages(struct list_head *page_list);
> +extern unsigned long reclaim_pages(struct mm_struct *mm, struct list_head *page_list);
>  #ifdef CONFIG_NUMA
>  extern int node_reclaim_mode;
>  extern int sysctl_min_unmapped_ratio;
> diff --git a/mm/madvise.c b/mm/madvise.c
> index b6ea204..61c8d7b 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -420,7 +420,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  huge_unlock:
>  	spin_unlock(ptl);
>  	if (pageout)
> -		reclaim_pages(&page_list);
> +		reclaim_pages(mm, &page_list);
>  	return 0;
>  }
>
> @@ -516,7 +516,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  	arch_leave_lazy_mmu_mode();
>  	pte_unmap_unlock(orig_pte, ptl);
>  	if (pageout)
> -		reclaim_pages(&page_list);
> +		reclaim_pages(mm, &page_list);
>  	cond_resched();
>
>  	return 0;
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 20facec..048c10b 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2741,12 +2741,14 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
>  	return nr_reclaimed;
>  }
>
> -unsigned long reclaim_pages(struct list_head *folio_list)
> +unsigned long reclaim_pages(struct mm_struct *mm, struct list_head *folio_list)
>  {
>  	int nid;
>  	unsigned int nr_reclaimed = 0;
>  	LIST_HEAD(node_folio_list);
>  	unsigned int noreclaim_flag;
> +	struct lruvec *lruvec;
> +	struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
>
>  	if (list_empty(folio_list))
>  		return nr_reclaimed;
> @@ -2764,10 +2766,14 @@ unsigned long reclaim_pages(struct list_head *folio_list)
>  		}
>
>  		nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
> +		lruvec = &memcg->nodeinfo[nid]->lruvec;
> +		workingset_age_nonresident(lruvec, -nr_reclaimed);
>  		nid = folio_nid(lru_to_folio(folio_list));
>  	} while (!list_empty(folio_list));
>
>  	nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
> +	lruvec = &memcg->nodeinfo[nid]->lruvec;
> +	workingset_age_nonresident(lruvec, -nr_reclaimed);

The task might have moved cgroups in between; who knows what kind of
artifacts it will introduce if you wind back the wrong clock.

If there are reclaim passes that shouldn't participate in non-resident
tracking, that should be plumbed through the stack to
__remove_mapping() (which already has that bool reclaimed param to not
record entries).
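To illustrate that direction: an untested sketch only, where the
"no_shadow" flag name is made up and the hunk contexts are
approximations, not a real patch:

	--- a/mm/vmscan.c
	+++ b/mm/vmscan.c
	@@ struct scan_control {
	 	/* Can pages be swapped as part of reclaim? */
	 	unsigned int may_swap:1;
	 
	+	/* Don't record shadow entries for evicted folios, so this
	+	 * pass doesn't participate in refault/thrash detection */
	+	unsigned int no_shadow:1;
	+
	@@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
	 	struct scan_control sc = {
	 		.gfp_mask = GFP_KERNEL,
	 		.may_writepage = 1,
	 		.may_unmap = 1,
	 		.may_swap = 1,
	+		.no_shadow = 1,
	 	};
	@@ static unsigned int shrink_folio_list(struct list_head *folio_list,
	-		} else if (!mapping || !__remove_mapping(mapping, folio, true,
	+		} else if (!mapping || !__remove_mapping(mapping, folio,
	+							 !sc->no_shadow,
	 							 sc->target_mem_cgroup))

That keeps the decision with each reclaim pass, instead of winding a
clock back after the fact for a cgroup the task may no longer be in.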