From: sj@kernel.org
To: Baolin Wang
Cc: sj@kernel.org, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/damon: Make the sampling more accurate
Date: Fri, 18 Mar 2022 09:40:41 +0000
Message-Id: <20220318094041.26315-1-sj@kernel.org>
In-Reply-To: <1647595393-103185-1-git-send-email-baolin.wang@linux.alibaba.com>

Hi Baolin,

On Fri, 18 Mar 2022 17:23:13 +0800 Baolin Wang wrote:

> When I tried to sample physical addresses with DAMON to migrate pages
> on a tiered memory system, I found that it mistakenly demotes some cold
> regions.  Currently we choose a physical address in the region at
> random, but if its corresponding page is not an online LRU page, we
> ignore the access status in this sampling cycle, so the region is
> effectively treated as not accessed.  A region that includes some
> non-LRU pages will therefore be treated as a cold region with high
> probability, and may be merged with adjacent cold regions, even though
> some of its pages may be accessed and we miss them.
>
> So instead of ignoring the access status of the region when we do not
> find a valid page at the current sampling address, we can use the last
> valid sampling address to make the sampling more accurate, and then we
> can make a better decision.

Well... Offlined pages are also a valid part of the memory region, so
treating those as not accessed, and making the memory region containing
the offlined pages look colder, seems legitimate to me.  IOW, this
approach could make memory regions containing many non-online-LRU pages
look hot.

If I'm missing some points, please let me know.

Thanks,
SJ

>
> Signed-off-by: Baolin Wang
> ---
>  include/linux/damon.h |  2 ++
>  mm/damon/core.c       |  2 ++
>  mm/damon/paddr.c      | 15 ++++++++++++---
>  3 files changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/damon.h b/include/linux/damon.h
> index f23cbfa..3311e15 100644
> --- a/include/linux/damon.h
> +++ b/include/linux/damon.h
> @@ -38,6 +38,7 @@ struct damon_addr_range {
>   * struct damon_region - Represents a monitoring target region.
>   * @ar:			The address range of the region.
>   * @sampling_addr:	Address of the sample for the next access check.
> + * @last_sampling_addr:	Last valid address of the sampling.
>   * @nr_accesses:	Access frequency of this region.
>   * @list:		List head for siblings.
>   * @age:		Age of this region.
> @@ -50,6 +51,7 @@ struct damon_addr_range {
>  struct damon_region {
>  	struct damon_addr_range ar;
>  	unsigned long sampling_addr;
> +	unsigned long last_sampling_addr;
>  	unsigned int nr_accesses;
>  	struct list_head list;
>
> diff --git a/mm/damon/core.c b/mm/damon/core.c
> index c1e0fed..957704f 100644
> --- a/mm/damon/core.c
> +++ b/mm/damon/core.c
> @@ -108,6 +108,7 @@ struct damon_region *damon_new_region(unsigned long start, unsigned long end)
>  	region->ar.start = start;
>  	region->ar.end = end;
>  	region->nr_accesses = 0;
> +	region->last_sampling_addr = 0;
>  	INIT_LIST_HEAD(&region->list);
>
>  	region->age = 0;
> @@ -848,6 +849,7 @@ static void damon_split_region_at(struct damon_ctx *ctx,
>  		return;
>
>  	r->ar.end = new->ar.start;
> +	r->last_sampling_addr = 0;
>
>  	new->age = r->age;
>  	new->last_nr_accesses = r->last_nr_accesses;
> diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
> index 21474ae..5f15068 100644
> --- a/mm/damon/paddr.c
> +++ b/mm/damon/paddr.c
> @@ -31,10 +31,9 @@ static bool __damon_pa_mkold(struct folio *folio, struct vm_area_struct *vma,
>  	return true;
>  }
>
> -static void damon_pa_mkold(unsigned long paddr)
> +static void damon_pa_mkold(struct page *page)
>  {
>  	struct folio *folio;
> -	struct page *page = damon_get_page(PHYS_PFN(paddr));
>  	struct rmap_walk_control rwc = {
>  		.rmap_one = __damon_pa_mkold,
>  		.anon_lock = folio_lock_anon_vma_read,
> @@ -66,9 +65,19 @@ static void damon_pa_mkold(unsigned long paddr)
>  static void __damon_pa_prepare_access_check(struct damon_ctx *ctx,
>  		struct damon_region *r)
>  {
> +	struct page *page;
> +
>  	r->sampling_addr = damon_rand(r->ar.start, r->ar.end);
>
> -	damon_pa_mkold(r->sampling_addr);
> +	page = damon_get_page(PHYS_PFN(r->sampling_addr));
> +	if (page) {
> +		r->last_sampling_addr = r->sampling_addr;
> +	} else if (r->last_sampling_addr) {
> +		r->sampling_addr = r->last_sampling_addr;
> +		page = damon_get_page(PHYS_PFN(r->last_sampling_addr));
> +	}
> +
> +	damon_pa_mkold(page);
>  }
>
>  static void damon_pa_prepare_access_checks(struct damon_ctx *ctx)
> --
> 1.8.3.1