From: SeongJae Park
To: Andrew Morton
Cc: SeongJae Park, damon@lists.linux.dev, kernel-team@meta.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 2/8] mm/damon/core: implement intervals auto-tuning
Date: Mon, 3 Mar 2025 14:17:20 -0800
Message-Id: <20250303221726.484227-3-sj@kernel.org>
X-Mailer: git-send-email 2.39.5
In-Reply-To: <20250303221726.484227-1-sj@kernel.org>
References: <20250303221726.484227-1-sj@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Implement the DAMON sampling and aggregation intervals auto-tuning
mechanism, as briefly described for 'struct damon_intervals_goal'.  The
core part that decides the direction and the amount of the changes is
implemented by reusing the feedback loop function that is already used
for DAMOS quotas auto-tuning.  Unlike the DAMOS quotas auto-tuning use
case, however, limit the maximum decrease of a single adjustment to 50%
of the current value.  This is because rapid reductions of the
intervals bring no real benefit, while they could unnecessarily
increase the monitoring overhead.

Signed-off-by: SeongJae Park
---
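For illustration, below is a rough user-space sketch of the adjustment
arithmetic described above; it is not part of the patch.  The concrete
numbers, the interval bounds, and the min_ul()/max_ul() helpers are
made up, and the pre-rescale adaptation_bp is simply assumed here; in
the kernel it comes from damon_feed_loop_next_input(), and the bounds
come from the user-given intervals goal.

#include <stdio.h>

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

static unsigned long max_ul(unsigned long a, unsigned long b)
{
	return a > b ? a : b;
}

int main(void)
{
	/* hypothetical current setup: 5 ms sampling, 20 samples per aggregation */
	unsigned long sample_us = 5000, aggr_samples = 20;
	/* hypothetical user-given bounds for the sampling interval */
	unsigned long min_sample_us = 1000, max_sample_us = 10000000;
	/* assume the feedback loop asked to scale the intervals to 60% (6,000 bp) */
	unsigned long adaptation_bp = 6000;

	/* rescale [1, 10,000] to [5,000, 10,000]: never shrink below 50% */
	if (adaptation_bp <= 10000)
		adaptation_bp = 5000 + adaptation_bp / 2;

	/* apply the scale and clamp to the allowed sampling interval range */
	sample_us = min_ul(max_sample_us, sample_us * adaptation_bp / 10000);
	sample_us = max_ul(min_sample_us, sample_us);

	printf("new sampling interval:    %lu us\n", sample_us);
	printf("new aggregation interval: %lu us\n", sample_us * aggr_samples);
	return 0;
}

With these inputs, the raw request to scale the intervals to 60% is
softened to 80% (5,000 us -> 4,000 us), and even the smallest possible
feedback output cannot shrink the intervals below half of their current
values in a single adjustment.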
 include/linux/damon.h | 16 +++++++++
 mm/damon/core.c       | 76 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 92 insertions(+)

diff --git a/include/linux/damon.h b/include/linux/damon.h
index 5f2609f24761..b3e2c793c1f4 100644
--- a/include/linux/damon.h
+++ b/include/linux/damon.h
@@ -713,6 +713,17 @@ struct damon_attrs {
 	struct damon_intervals_goal intervals_goal;
 	unsigned long min_nr_regions;
 	unsigned long max_nr_regions;
+/* private: internal use only */
+	/*
+	 * @aggr_interval to @sample_interval ratio.
+	 * Core-external components call damon_set_attrs() with a &damon_attrs
+	 * having this field unset.  In that case, damon_set_attrs() sets it
+	 * for the resulting &damon_attrs.
+	 * Core-internal components such as kdamond_tune_intervals() call
+	 * damon_set_attrs() with a &damon_attrs having this field set.  In
+	 * that case, damon_set_attrs() keeps it as is.
+	 */
+	unsigned long aggr_samples;
 };
 
 /**
@@ -761,6 +772,11 @@ struct damon_ctx {
 	 * update
 	 */
 	unsigned long next_ops_update_sis;
+	/*
+	 * Number of sample intervals that should be passed before the next
+	 * intervals tuning.
+	 */
+	unsigned long next_intervals_tune_sis;
 	/* for waiting until the execution of the kdamond_fn is started */
 	struct completion kdamond_started;
 	/* for scheme quotas prioritization */
diff --git a/mm/damon/core.c b/mm/damon/core.c
index ad3b5c065cb8..9d37d3664030 100644
--- a/mm/damon/core.c
+++ b/mm/damon/core.c
@@ -664,6 +664,10 @@ int damon_set_attrs(struct damon_ctx *ctx, struct damon_attrs *attrs)
 	if (attrs->sample_interval > attrs->aggr_interval)
 		return -EINVAL;
 
+	/* Calls from core-external components don't set this. */
+	if (!attrs->aggr_samples)
+		attrs->aggr_samples = attrs->aggr_interval / sample_interval;
+
 	ctx->next_aggregation_sis = ctx->passed_sample_intervals +
 		attrs->aggr_interval / sample_interval;
 	ctx->next_ops_update_sis = ctx->passed_sample_intervals +
@@ -1301,6 +1305,65 @@ static void kdamond_reset_aggregated(struct damon_ctx *c)
 	}
 }
 
+static unsigned long damon_get_intervals_score(struct damon_ctx *c)
+{
+	struct damon_target *t;
+	struct damon_region *r;
+	unsigned long sz_region, max_access_events = 0, access_events = 0;
+	unsigned long target_access_events;
+	unsigned long goal_bp = c->attrs.intervals_goal.access_bp;
+
+	damon_for_each_target(t, c) {
+		damon_for_each_region(r, t) {
+			sz_region = damon_sz_region(r);
+			max_access_events += sz_region * c->attrs.aggr_samples;
+			access_events += sz_region * r->nr_accesses;
+		}
+	}
+	target_access_events = max_access_events * goal_bp / 10000;
+	return access_events * 10000 / target_access_events;
+}
+
+static unsigned long damon_feed_loop_next_input(unsigned long last_input,
+		unsigned long score);
+
+static unsigned long damon_get_intervals_adaptation_bp(struct damon_ctx *c)
+{
+	unsigned long score_bp, adaptation_bp;
+
+	score_bp = damon_get_intervals_score(c);
+	adaptation_bp = damon_feed_loop_next_input(100000000, score_bp) /
+		10000;
+	/*
+	 * adaptation_bp ranges from 1 to 20,000.  Avoid too rapid reduction
+	 * of the intervals by rescaling [1, 10,000] to [5,000, 10,000].
+	 */
+	if (adaptation_bp <= 10000)
+		adaptation_bp = 5000 + adaptation_bp / 2;
+	return adaptation_bp;
+}
+
+static void kdamond_tune_intervals(struct damon_ctx *c)
+{
+	unsigned long adaptation_bp;
+	struct damon_attrs new_attrs;
+	struct damon_intervals_goal *goal;
+
+	adaptation_bp = damon_get_intervals_adaptation_bp(c);
+	if (adaptation_bp == 10000)
+		return;
+
+	new_attrs = c->attrs;
+	goal = &c->attrs.intervals_goal;
+	new_attrs.sample_interval = min(goal->max_sample_us,
+			c->attrs.sample_interval * adaptation_bp / 10000);
+	new_attrs.sample_interval = max(goal->min_sample_us,
+			new_attrs.sample_interval);
+	new_attrs.aggr_interval = new_attrs.sample_interval *
+		c->attrs.aggr_samples;
+	damon_set_attrs(c, &new_attrs);
+}
+
 static void damon_split_region_at(struct damon_target *t,
 		struct damon_region *r, unsigned long sz_r);
 
@@ -2209,6 +2272,8 @@ static void kdamond_init_intervals_sis(struct damon_ctx *ctx)
 	ctx->next_aggregation_sis = ctx->attrs.aggr_interval / sample_interval;
 	ctx->next_ops_update_sis = ctx->attrs.ops_update_interval /
 		sample_interval;
+	ctx->next_intervals_tune_sis = ctx->next_aggregation_sis *
+		ctx->attrs.intervals_goal.aggrs;
 
 	damon_for_each_scheme(scheme, ctx) {
 		apply_interval = scheme->apply_interval_us ?
@@ -2293,6 +2358,17 @@ static int kdamond_fn(void *data)
 		sample_interval = ctx->attrs.sample_interval ?
 			ctx->attrs.sample_interval : 1;
 		if (ctx->passed_sample_intervals >= next_aggregation_sis) {
+			if (ctx->attrs.intervals_goal.aggrs &&
+					ctx->passed_sample_intervals >=
+					ctx->next_intervals_tune_sis) {
+				ctx->next_intervals_tune_sis +=
+					ctx->attrs.aggr_samples *
+					ctx->attrs.intervals_goal.aggrs;
+				kdamond_tune_intervals(ctx);
+				sample_interval = ctx->attrs.sample_interval ?
+					ctx->attrs.sample_interval : 1;
+
+			}
 			ctx->next_aggregation_sis = next_aggregation_sis +
 				ctx->attrs.aggr_interval / sample_interval;
-- 
2.39.5
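As one more illustration, the score that feeds the loop is computed by
damon_get_intervals_score() above as the observed access events,
expressed in basis points of the target, where the target is the
intervals goal's access_bp portion of the theoretical maximum events.
Below is a minimal stand-alone sketch of that ratio; the region sizes,
the nr_accesses counters, and the 400 bp goal are made-up numbers, not
kernel code or data.

#include <stdio.h>

int main(void)
{
	/* hypothetical snapshot: two regions, 20 samples per aggregation */
	unsigned long long aggr_samples = 20;
	unsigned long long sz[2] = { 4ULL << 20, 16ULL << 20 };	/* bytes */
	unsigned long long nr_accesses[2] = { 3, 0 };
	/* hypothetical goal: 4% (400 bp) of the maximum observable events */
	unsigned long long goal_access_bp = 400;
	unsigned long long max_events = 0, events = 0, target, score_bp;
	int i;

	for (i = 0; i < 2; i++) {
		/* each byte can be counted as accessed at most once per sample */
		max_events += sz[i] * aggr_samples;
		events += sz[i] * nr_accesses[i];
	}
	target = max_events * goal_access_bp / 10000;
	score_bp = events * 10000 / target;	/* 10,000 bp == exactly on goal */
	printf("intervals score: %llu bp\n", score_bp);	/* prints 7500 */
	return 0;
}

A score of 10,000 bp means the access_bp goal is met exactly; the 7,500
bp here tells the feedback loop that fewer access events were observed
than targeted, and the loop output becomes the adaptation_bp that
kdamond_tune_intervals() applies as shown in the patch.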