Date: Fri, 23 Feb 2024 14:09:41 -0800
From: Minchan Kim <minchan.kim@gmail.com>
To: Barry Song <21cnbao@gmail.com>
Cc: sj@kernel.org, akpm@linux-foundation.org, damon@lists.linux.dev,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, mhocko@suse.com,
	hannes@cmpxchg.org, Barry Song
Subject: Re: [PATCH RFC] mm: madvise: pageout: ignore references rather than clearing young
References: <20240223041550.77157-1-21cnbao@gmail.com>
In-Reply-To: <20240223041550.77157-1-21cnbao@gmail.com>
Hi Barry,

On Fri, Feb 23, 2024 at 05:15:50PM +1300, Barry Song wrote:
> From: Barry Song
>
> While doing MADV_PAGEOUT, the current code will clear PTE young
> so that vmscan won't read young flags to allow the reclamation
> of madvised folios to go ahead.

Isn't that good for accelerating the reclaim? vmscan checks whether the
page was accessed recently via the young bit in the pte, and if it was,
it doesn't reclaim the page.
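For readers following along, the check being described lives in
shrink_folio_list() in mm/vmscan.c. Roughly (a simplified sketch, not
the literal kernel source):

```
/* Sketch of the reference check in shrink_folio_list(): */
if (ignore_references)
	references = FOLIOREF_RECLAIM;
else
	references = folio_check_references(folio, sc);	/* rmap walk: counts PTEs with the young bit */

switch (references) {
case FOLIOREF_ACTIVATE:		/* recently referenced: move back to the active list */
	goto activate_locked;
case FOLIOREF_KEEP:		/* keep on the inactive list for now */
	goto keep_locked;
case FOLIOREF_RECLAIM:
case FOLIOREF_RECLAIM_CLEAN:
	;			/* fall through and try to reclaim the folio */
}
```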
Since we have already cleared the young bit in the pte in
madvise_pageout, vmscan is likely to reclaim the page, as it wouldn't
see any referenced_ptes from folio_check_references. Could you clarify
if I'm missing something here?

> It seems we can do it by directly ignoring references, thus we
> can remove tlb flush in madvise and rmap overhead in vmscan.
>
> Regarding the side effect, in the original code, if a parallel
> thread runs side by side to access the madvised memory with the
> thread doing madvise, folios will get a chance to be re-activated
> by vmscan. But with the patch, they will still be reclaimed. But
> this behaviour doing PAGEOUT and doing access at the same time is
> quite silly like DoS. So probably, we don't need to care.
>
> A microbench as below has shown 6% decrement on the latency of
> MADV_PAGEOUT,
>
> #define PGSIZE 4096
> main()
> {
> 	int i;
> #define SIZE 512*1024*1024
> 	volatile long *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
> 			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>
> 	for (i = 0; i < SIZE/sizeof(long); i += PGSIZE / sizeof(long))
> 		p[i] = 0x11;
>
> 	madvise(p, SIZE, MADV_PAGEOUT);
> }
>
> w/o patch                     w/ patch
> root@10:~# time ./a.out       root@10:~# time ./a.out
> real	0m49.634s              real	0m46.334s
> user	0m0.637s               user	0m0.648s
> sys	0m47.434s              sys	0m44.265s
>
> Signed-off-by: Barry Song
> ---
>  mm/damon/paddr.c |  2 +-
>  mm/internal.h    |  2 +-
>  mm/madvise.c     |  8 ++++----
>  mm/vmscan.c      | 12 +++++++-----
>  4 files changed, 13 insertions(+), 11 deletions(-)
>
> diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
> index 081e2a325778..5e6dc312072c 100644
> --- a/mm/damon/paddr.c
> +++ b/mm/damon/paddr.c
> @@ -249,7 +249,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
>  put_folio:
>  		folio_put(folio);
>  	}
> -	applied = reclaim_pages(&folio_list);
> +	applied = reclaim_pages(&folio_list, false);
>  	cond_resched();
>  	return applied * PAGE_SIZE;
>  }
> diff --git a/mm/internal.h b/mm/internal.h
> index 93e229112045..36c11ea41f47 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -868,7 +868,7 @@ extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long,
>  		unsigned long, unsigned long);
>
>  extern void set_pageblock_order(void);
> -unsigned long reclaim_pages(struct list_head *folio_list);
> +unsigned long reclaim_pages(struct list_head *folio_list, bool ignore_references);
>  unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>  					struct list_head *folio_list);
>  /* The ALLOC_WMARK bits are used as an index to zone->watermark */
> diff --git a/mm/madvise.c b/mm/madvise.c
> index abde3edb04f0..44a498c94158 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -386,7 +386,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  			return 0;
>  		}
>
> -		if (pmd_young(orig_pmd)) {
> +		if (!pageout && pmd_young(orig_pmd)) {
>  			pmdp_invalidate(vma, addr, pmd);
>  			orig_pmd = pmd_mkold(orig_pmd);
>
> @@ -410,7 +410,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  huge_unlock:
>  		spin_unlock(ptl);
>  		if (pageout)
> -			reclaim_pages(&folio_list);
> +			reclaim_pages(&folio_list, true);
>  		return 0;
>  	}
>
> @@ -490,7 +490,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>
>  		VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
>
> -		if (pte_young(ptent)) {
> +		if (!pageout && pte_young(ptent)) {
>  			ptent = ptep_get_and_clear_full(mm, addr, pte,
>  							tlb->fullmm);
>  			ptent = pte_mkold(ptent);
> @@ -524,7 +524,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
>  		pte_unmap_unlock(start_pte, ptl);
>  	}
>  	if (pageout)
> -		reclaim_pages(&folio_list);
> +		reclaim_pages(&folio_list, true);
>  	cond_resched();
>
>  	return 0;
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 402c290fbf5a..ba2f37f46a73 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -2102,7 +2102,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
>  }
>
>  static unsigned int reclaim_folio_list(struct list_head *folio_list,
> -				       struct pglist_data *pgdat)
> +				       struct pglist_data *pgdat,
> +				       bool ignore_references)
>  {
>  	struct reclaim_stat dummy_stat;
>  	unsigned int nr_reclaimed;
> @@ -2115,7 +2116,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
>  		.no_demotion = 1,
>  	};
>
> -	nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &dummy_stat, false);
> +	nr_reclaimed = shrink_folio_list(folio_list, pgdat, &sc, &dummy_stat, ignore_references);
>  	while (!list_empty(folio_list)) {
>  		folio = lru_to_folio(folio_list);
>  		list_del(&folio->lru);
> @@ -2125,7 +2126,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
>  	return nr_reclaimed;
>  }
>
> -unsigned long reclaim_pages(struct list_head *folio_list)
> +unsigned long reclaim_pages(struct list_head *folio_list, bool ignore_references)
>  {
>  	int nid;
>  	unsigned int nr_reclaimed = 0;
> @@ -2147,11 +2148,12 @@ unsigned long reclaim_pages(struct list_head *folio_list)
>  			continue;
>  		}
>
> -		nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
> +		nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid),
> +						   ignore_references);
>  		nid = folio_nid(lru_to_folio(folio_list));
>  	} while (!list_empty(folio_list));
>
> -	nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
> +	nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid), ignore_references);
>
>  	memalloc_noreclaim_restore(noreclaim_flag);
>
> --
> 2.34.1
>