From: Alex Rusuf
To: sj@kernel.org
Cc: damon@lists.linux.dev, linux-kernel@vger.kernel.org, linux-mm@kvack.org, yorha.op@gmail.com
Subject: Re: [PATCH v2 1/2] mm/damon/core: add 'struct kdamond' abstraction layer
Date: Sun, 2 Jun 2024 19:15:57 +0300
Message-ID: <20240602161557.927788-1-yorha.op@gmail.com>
In-Reply-To: <20240531193609.71485-1-sj@kernel.org>
References: <20240531193609.71485-1-sj@kernel.org>
Hi SJ,

> Hi Alex,
>
> On Fri, 31 May 2024 15:23:19 +0300 Alex Rusuf wrote:
>
> > In current implementation kdamond tracks only 1
> > context, that is kdamond _is_ damon_ctx what makes
> > it very difficult to
> > implement multiple contexts.
> >
> > This patch adds another level of abstraction, that is
> > 'struct kdamond' - structure which represents kdamond itself.
> > It holds references to all contexts organized in list.
> >
> > Few fields like ->kdamond_started and ->kdamond_lock
> > (just ->lock for 'struct kdamond') also has been moved
> > to 'struct kdamond', because they have nothing to do
> > with the context itself, but with the whole kdamond
> > daemon.
> >
> > Signed-off-by: Alex Rusuf
> > ---
> >  include/linux/damon.h     |  73 ++++++++---
> >  mm/damon/core.c           | 249 ++++++++++++++++++++++-------------
> >  mm/damon/lru_sort.c       |  31 +++--
> >  mm/damon/modules-common.c |  36 +++--
> >  mm/damon/modules-common.h |   3 +-
> >  mm/damon/reclaim.c        |  30 +++--
> >  mm/damon/sysfs.c          | 268 ++++++++++++++++++++++++--------------
> >  7 files changed, 463 insertions(+), 227 deletions(-)
> >
> > diff --git a/include/linux/damon.h b/include/linux/damon.h
> > index 886d07294..7cb9979a0 100644
> > --- a/include/linux/damon.h
> > +++ b/include/linux/damon.h
> > @@ -568,29 +568,49 @@ struct damon_attrs {
> >  	unsigned long max_nr_regions;
> >  };
> >
> > +/**
> > + * struct kdamond - Represents a background daemon that is responsible
> > + * for executing each context.
> > + *
> > + * @lock:	Kdamond's global lock, serializes accesses to any field.
> > + * @self:	Kernel thread which is actually being executed.
> > + * @contexts:	Head of contexts (&damon_ctx) list.
> > + * @nr_ctxs:	Number of contexts being monitored.
> > + *
> > + * Each DAMON's background daemon has this structure. Once
> > + * configured, daemon can be started by calling damon_start().
> > + *
> > + * Monitoring can be explicitly stopped by calling damon_stop(). Once
> > + * daemon is terminated @self is set to NULL, so users can know if
> > + * monitoring is stopped by reading @self pointer. Access to @self
> > + * must also be protected by @lock.
> > + */
> > +struct kdamond {
> > +	struct mutex lock;
> > +	struct task_struct *self;
>
> I'm not sure if 'self' is a good name. I'd prefer more clear name such as
> 'task', but this is not a strong opinion.
>
> > +	struct list_head contexts;
> > +	size_t nr_ctxs;
> > +
> > +/* private: */
> > +	/* for waiting until the execution of the kdamond_fn is started */
> > +	struct completion kdamond_started;
> > +};
> > +
> >  /**
> >   * struct damon_ctx - Represents a context for each monitoring. This is the
> >   * main interface that allows users to set the attributes and get the results
> >   * of the monitoring.
> >   *
> >   * @attrs:	Monitoring attributes for accuracy/overhead control.
> > - * @kdamond:	Kernel thread who does the monitoring.
> > - * @kdamond_lock:	Mutex for the synchronizations with @kdamond.
> > + * @kdamond:	Back reference to daemon who is the owner of this context.
>
> Why do we need this?

This is mostly for DebugFS, but I guess it can be removed. It was
brought in to keep compatibility with previous interfaces while
introducing the kdamond abstraction.

> > + * @list:	List head of siblings.
> >   *
> >   * For each monitoring context, one kernel thread for the monitoring is
> >   * created. The pointer to the thread is stored in @kdamond.
> >   *
> >   * Once started, the monitoring thread runs until explicitly required to be
> >   * terminated or every monitoring target is invalid. The validity of the
> > - * targets is checked via the &damon_operations.target_valid of @ops. The
> > - * termination can also be explicitly requested by calling damon_stop().
> > - * The thread sets @kdamond to NULL when it terminates. Therefore, users can
> > - * know whether the monitoring is ongoing or terminated by reading @kdamond.
> > - * Reads and writes to @kdamond from outside of the monitoring thread must
> > - * be protected by @kdamond_lock.
> > - *
> > - * Note that the monitoring thread protects only @kdamond via @kdamond_lock.
> > - * Accesses to other fields must be protected by themselves.
> > + * targets is checked via the &damon_operations.target_valid of @ops.
> >   *
> >   * @ops:	Set of monitoring operations for given use cases.
> >   * @callback:	Set of callbacks for monitoring events notifications.
> > @@ -614,12 +634,11 @@ struct damon_ctx {
> >  							 * update
> >  							 */
> >  	unsigned long next_ops_update_sis;
> > -	/* for waiting until the execution of the kdamond_fn is started */
> > -	struct completion kdamond_started;
> > +	unsigned long sz_limit;
>
> What's this?

I forgot to add a comment there; this field is just saved like
"next_ops_update_sis" and is used in the kdamond main loop.

>
> >
> >  /* public: */
> > -	struct task_struct *kdamond;
> > -	struct mutex kdamond_lock;
> > +	struct kdamond *kdamond;
> > +	struct list_head list;
> >
> >  	struct damon_operations ops;
> >  	struct damon_callback callback;
> > @@ -653,6 +672,15 @@ static inline unsigned long damon_sz_region(struct damon_region *r)
> >  	return r->ar.end - r->ar.start;
> >  }
> >
> > +static inline struct damon_target *damon_first_target(struct damon_ctx *ctx)
> > +{
> > +	return list_first_entry(&ctx->adaptive_targets, struct damon_target, list);
> > +}
> > +
> > +static inline struct damon_ctx *damon_first_ctx(struct kdamond *kdamond)
> > +{
> > +	return list_first_entry(&kdamond->contexts, struct damon_ctx, list);
> > +}
> >
> >  #define damon_for_each_region(r, t) \
> >  	list_for_each_entry(r, &t->regions_list, list)
> > @@ -675,6 +703,12 @@ static inline unsigned long damon_sz_region(struct damon_region *r)
> >  #define damon_for_each_scheme_safe(s, next, ctx) \
> >  	list_for_each_entry_safe(s, next, &(ctx)->schemes, list)
> >
> > +#define damon_for_each_context(c, kdamond) \
> > +	list_for_each_entry(c, &(kdamond)->contexts, list)
> > +
> > +#define damon_for_each_context_safe(c, next, kdamond) \
> > +	list_for_each_entry_safe(c, next, &(kdamond)->contexts, list)
> > +
> >  #define damos_for_each_quota_goal(goal, quota) \
> > 	list_for_each_entry(goal, &quota->goals, list)
> >
> > @@ -736,7 +770,12 @@ void damon_destroy_target(struct damon_target *t);
> >  unsigned int damon_nr_regions(struct damon_target *t);
> >
> >  struct damon_ctx *damon_new_ctx(void);
> > +void damon_add_ctx(struct kdamond *kdamond, struct damon_ctx *ctx);
> > +struct kdamond *damon_new_kdamond(void);
> >  void damon_destroy_ctx(struct damon_ctx *ctx);
> > +void damon_destroy_ctxs(struct kdamond *kdamond);
>
> Why do we need this? Wouldn't damon_destroy_kdamond() do that?

You're right; it looks like only damon_destroy_kdamond() uses this
function. It will be embedded, thank you!

> > +void damon_destroy_kdamond(struct kdamond *kdamond);
> > +bool damon_kdamond_running(struct kdamond *kdamond);
>
> Let's put kdamond control functions outside of functions for damon_ctx control,
> like we do for damon_target and damon_ctx.

Yeah, I didn't notice this, thank you!

>
> >  int damon_set_attrs(struct damon_ctx *ctx, struct damon_attrs *attrs);
> >  void damon_set_schemes(struct damon_ctx *ctx,
> > 			struct damos **schemes, ssize_t nr_schemes);
> > @@ -758,8 +797,8 @@ static inline unsigned int damon_max_nr_accesses(const struct damon_attrs *attrs
> >  }
> >
> >
> > -int damon_start(struct damon_ctx **ctxs, int nr_ctxs, bool exclusive);
> > -int damon_stop(struct damon_ctx **ctxs, int nr_ctxs);
> > +int damon_start(struct kdamond *kdamond, bool exclusive);
> > +int damon_stop(struct kdamond *kdamond);
> >
> >  int damon_set_region_biggest_system_ram_default(struct damon_target *t,
> > 			unsigned long *start, unsigned long *end);
> > diff --git a/mm/damon/core.c b/mm/damon/core.c
> > index 6d503c1c1..cfc9c803d 100644
> > --- a/mm/damon/core.c
> > +++ b/mm/damon/core.c
> > @@ -24,7 +24,7 @@
> >  #endif
> >
> >  static DEFINE_MUTEX(damon_lock);
> > -static int nr_running_ctxs;
> > +static int nr_running_kdamonds;
> >  static bool running_exclusive_ctxs;
> >
> >  static DEFINE_MUTEX(damon_ops_lock);
> > @@ -488,8 +488,6 @@ struct damon_ctx
*damon_new_ctx(void)
> >  	if (!ctx)
> >  		return NULL;
> >
> > -	init_completion(&ctx->kdamond_started);
> > -
> >  	ctx->attrs.sample_interval = 5 * 1000;
> >  	ctx->attrs.aggr_interval = 100 * 1000;
> >  	ctx->attrs.ops_update_interval = 60 * 1000 * 1000;
> > @@ -499,17 +497,41 @@ struct damon_ctx *damon_new_ctx(void)
> >  	ctx->next_aggregation_sis = 0;
> >  	ctx->next_ops_update_sis = 0;
> >
> > -	mutex_init(&ctx->kdamond_lock);
> > -
> >  	ctx->attrs.min_nr_regions = 10;
> >  	ctx->attrs.max_nr_regions = 1000;
> >
> >  	INIT_LIST_HEAD(&ctx->adaptive_targets);
> >  	INIT_LIST_HEAD(&ctx->schemes);
> > +	INIT_LIST_HEAD(&ctx->list);
> >
> >  	return ctx;
> >  }
> >
> > +/**
> > + * Adds newly allocated and configured @ctx to @kdamond.
>
> There's no guarantee if @ctx is newly allocated and configured. We can simply
> say "Adds a DAMON context to a given DAMON thread".
>
> And this may cause kerneldoc compilation warning. I think the format should be
> fixed, and parameters should be commented.

True, I'll fix that, thank you!

> > + */
> > +void damon_add_ctx(struct kdamond *kdamond, struct damon_ctx *ctx)
> > +{
> > +	list_add_tail(&ctx->list, &kdamond->contexts);
> > +	++kdamond->nr_ctxs;
> > +}
> > +
> > +struct kdamond *damon_new_kdamond(void)
> > +{
> > +	struct kdamond *kdamond;
> > +
> > +	kdamond = kzalloc(sizeof(*kdamond), GFP_KERNEL);
> > +	if (!kdamond)
> > +		return NULL;
> > +
> > +	init_completion(&kdamond->kdamond_started);
> > +	mutex_init(&kdamond->lock);
> > +
> > +	INIT_LIST_HEAD(&kdamond->contexts);
>
> Among the four fields of the struct, three are manually set here, and the final
> one is set as zero by 'kzalloc()'. I'd prefer using 'kmalloc()' and manually
> set all fields.

No problem, I'll change this, thank you!
>
> > +
> > +	return kdamond;
> > +}
> > +
> >  static void damon_destroy_targets(struct damon_ctx *ctx)
> >  {
> >  	struct damon_target *t, *next_t;
> > @@ -523,6 +545,11 @@ static void damon_destroy_targets(struct damon_ctx *ctx)
> >  		damon_destroy_target(t);
> >  }
> >
> > +static inline void damon_del_ctx(struct damon_ctx *ctx)
> > +{
> > +	list_del(&ctx->list);
> > +}
> > +
> >  void damon_destroy_ctx(struct damon_ctx *ctx)
> >  {
> >  	struct damos *s, *next_s;
> > @@ -532,9 +559,27 @@ void damon_destroy_ctx(struct damon_ctx *ctx)
> >  	damon_for_each_scheme_safe(s, next_s, ctx)
> >  		damon_destroy_scheme(s);
> >
> > +	damon_del_ctx(ctx);
> >  	kfree(ctx);
> >  }
> >
> > +void damon_destroy_ctxs(struct kdamond *kdamond)
> > +{
> > +	struct damon_ctx *c, *next;
> > +
> > +	damon_for_each_context_safe(c, next, kdamond) {
> > +		damon_destroy_ctx(c);
> > +		--kdamond->nr_ctxs;
> > +	}
> > +}
>
> Is there explicit user of this function? If not, I'd prefer making this
> 'static', or simply embed in damon_destroy_kdamond() below.

As I mentioned above, there are no explicit users, so I'll embed it
into damon_destroy_kdamond().

> > +
> > +void damon_destroy_kdamond(struct kdamond *kdamond)
> > +{
> > +	damon_destroy_ctxs(kdamond);
> > +	mutex_destroy(&kdamond->lock);
> > +	kfree(kdamond);
> > +}
> > +
> >  static unsigned int damon_age_for_new_attrs(unsigned int age,
> > 		struct damon_attrs *old_attrs, struct damon_attrs *new_attrs)
> >  {
> > @@ -667,13 +712,27 @@ void damon_set_schemes(struct damon_ctx *ctx, struct damos **schemes,
> >   */
> >  int damon_nr_running_ctxs(void)
> >  {
> > -	int nr_ctxs;
> > +	int nr_kdamonds;
> >
> >  	mutex_lock(&damon_lock);
> > -	nr_ctxs = nr_running_ctxs;
> > +	nr_kdamonds = nr_running_kdamonds;
> >  	mutex_unlock(&damon_lock);
> >
> > -	return nr_ctxs;
> > +	return nr_kdamonds;
>
> Shouldn't we change the name of this function, too?

Yeah, this makes sense, thank you!
> > +}
> > +
> > +/**
> > + * damon_kdamond_running() - Return true if kdamond is running
> > + * false otherwise.
> > + */
> > +bool damon_kdamond_running(struct kdamond *kdamond)
> > +{
> > +	bool running;
> > +
> > +	mutex_lock(&kdamond->lock);
> > +	running = kdamond->self != NULL;
> > +	mutex_unlock(&kdamond->lock);
> > +	return running;
> > }
>
> Seems this function is used by only DAMON sysfs interface. Don't implement
> unnecessary public function. Implement it in mm/damon/sysfs.c please.

The DebugFS interface also uses this.

>
> >
> >  /* Returns the size upper limit for each monitoring region */
> > @@ -700,38 +759,37 @@ static int kdamond_fn(void *data);
> >
> >  /*
> >   * __damon_start() - Starts monitoring with given context.
>
> s/context/kdamond/? Maybe we can simply say "starts given kdamond".

You're right, thank you!

> > - * @ctx:	monitoring context
> > + * @kdamond:	daemon to be started
> >   *
> >   * This function should be called while damon_lock is hold.
> >   *
> >   * Return: 0 on success, negative error code otherwise.
> >   */
> > -static int __damon_start(struct damon_ctx *ctx)
> > +static int __damon_start(struct kdamond *kdamond)
> >  {
> >  	int err = -EBUSY;
> >
> > -	mutex_lock(&ctx->kdamond_lock);
> > -	if (!ctx->kdamond) {
> > +	mutex_lock(&kdamond->lock);
> > +	if (!kdamond->self) {
> >  		err = 0;
> > -		reinit_completion(&ctx->kdamond_started);
> > -		ctx->kdamond = kthread_run(kdamond_fn, ctx, "kdamond.%d",
> > -				nr_running_ctxs);
> > -		if (IS_ERR(ctx->kdamond)) {
> > -			err = PTR_ERR(ctx->kdamond);
> > -			ctx->kdamond = NULL;
> > +		reinit_completion(&kdamond->kdamond_started);
> > +		kdamond->self = kthread_run(kdamond_fn, kdamond, "kdamond.%d",
> > +				nr_running_kdamonds);
> > +		if (IS_ERR(kdamond->self)) {
> > +			err = PTR_ERR(kdamond->self);
> > +			kdamond->self = NULL;
> >  		} else {
> > -			wait_for_completion(&ctx->kdamond_started);
> > +			wait_for_completion(&kdamond->kdamond_started);
> >  		}
> >  	}
> > -	mutex_unlock(&ctx->kdamond_lock);
> > +	mutex_unlock(&kdamond->lock);
> >
> >  	return err;
> >  }
> >
> >  /**
> >   * damon_start() - Starts the monitorings for a given group of contexts.
> > - * @ctxs:	an array of the pointers for contexts to start monitoring
> > - * @nr_ctxs:	size of @ctxs
> > + * @kdamond:	a daemon that contains list of monitoring contexts
> >   * @exclusive:	exclusiveness of this contexts group
> >   *
> >   * This function starts a group of monitoring threads for a group of monitoring
> > @@ -743,74 +801,59 @@ static int __damon_start(struct damon_ctx *ctx)
> >   *
> >   * Return: 0 on success, negative error code otherwise.
> >   */
> > -int damon_start(struct damon_ctx **ctxs, int nr_ctxs, bool exclusive)
> > +int damon_start(struct kdamond *kdamond, bool exclusive)
> >  {
> > -	int i;
> >  	int err = 0;
> >
> > +	BUG_ON(!kdamond);
> > +	BUG_ON(!kdamond->nr_ctxs);
>
> checkpatch complains as below.
> WARNING: Do not crash the kernel unless it is absolutely unavoidable--use WARN_ON_ONCE() plus recovery code (if feasible) instead of BUG() or variants
> #432: FILE: mm/damon/core.c:809:
> +	BUG_ON(!kdamond->nr_ctxs);
>
> I don't think we really need above BUG_ON() checks.

Actually we can just return an error here. I guess I did this for
better debugging, so it can be removed.

> > +
> > +	if (kdamond->nr_ctxs != 1)
> > +		return -EINVAL;
>
> Why do we need this? To avoid DAMON sysfs interface change? I think
> mm/damon/sysfs.c does this check on its own, so this may not needed?

You're right, I didn't notice this, thank you!

> > +
> >  	mutex_lock(&damon_lock);
> > -	if ((exclusive && nr_running_ctxs) ||
> > +	if ((exclusive && nr_running_kdamonds) ||
> >  			(!exclusive && running_exclusive_ctxs)) {
> >  		mutex_unlock(&damon_lock);
> >  		return -EBUSY;
> >  	}
> >
> > -	for (i = 0; i < nr_ctxs; i++) {
> > -		err = __damon_start(ctxs[i]);
> > -		if (err)
> > -			break;
> > -		nr_running_ctxs++;
> > -	}
> > -	if (exclusive && nr_running_ctxs)
> > +	err = __damon_start(kdamond);
> > +	if (err)
> > +		return err;
> > +	nr_running_kdamonds++;
> > +
> > +	if (exclusive && nr_running_kdamonds)
> >  		running_exclusive_ctxs = true;
> >  	mutex_unlock(&damon_lock);
> >
> >  	return err;
> >  }
> >
> > -/*
> > - * __damon_stop() - Stops monitoring of a given context.
> > - * @ctx:	monitoring context
> > +/**
> > + * damon_stop() - Stops the monitorings for a given group of contexts.
>
> Let's simply say "Stops a given DAMON thread"
>
> > + * @kdamond:	a daemon (that contains list of monitoring contexts)
> > + *		to be stopped.
> >   *
> >   * Return: 0 on success, negative error code otherwise.
> >   */
> > -static int __damon_stop(struct damon_ctx *ctx)
> > +int damon_stop(struct kdamond *kdamond)
> >  {
> >  	struct task_struct *tsk;
> >
> > -	mutex_lock(&ctx->kdamond_lock);
> > -	tsk = ctx->kdamond;
> > +	mutex_lock(&kdamond->lock);
> > +	tsk = kdamond->self;
> >  	if (tsk) {
> >  		get_task_struct(tsk);
> > -		mutex_unlock(&ctx->kdamond_lock);
> > +		mutex_unlock(&kdamond->lock);
> >  		kthread_stop_put(tsk);
> >  		return 0;
> >  	}
> > -	mutex_unlock(&ctx->kdamond_lock);
> > +	mutex_unlock(&kdamond->lock);
> >
> >  	return -EPERM;
> >  }
> >
> > -/**
> > - * damon_stop() - Stops the monitorings for a given group of contexts.
> > - * @ctxs:	an array of the pointers for contexts to stop monitoring
> > - * @nr_ctxs:	size of @ctxs
> > - *
> > - * Return: 0 on success, negative error code otherwise.
> > - */
> > -int damon_stop(struct damon_ctx **ctxs, int nr_ctxs)
> > -{
> > -	int i, err = 0;
> > -
> > -	for (i = 0; i < nr_ctxs; i++) {
> > -		/* nr_running_ctxs is decremented in kdamond_fn */
> > -		err = __damon_stop(ctxs[i]);
> > -		if (err)
> > -			break;
> > -	}
> > -	return err;
> > -}
> > -
> >  /*
> >   * Reset the aggregated monitoring results ('nr_accesses' of each region).
> >   */
> > @@ -1582,29 +1625,68 @@ static void kdamond_init_intervals_sis(struct damon_ctx *ctx)
> >  	}
> >  }
> >
> > +static bool kdamond_init_ctx(struct damon_ctx *ctx)
> > +{
> > +	if (ctx->ops.init)
> > +		ctx->ops.init(ctx);
> > +	if (ctx->callback.before_start && ctx->callback.before_start(ctx))
> > +		return false;
> > +
> > +	kdamond_init_intervals_sis(ctx);
> > +	ctx->sz_limit = damon_region_sz_limit(ctx);
> > +
> > +	return true;
> > +}
>
> I think 'int' return type for error code is better. We can just return
> before_start() return value in case of error, otherwise zero.

That's true; before_start() also returns 'int', so this looks better, thanks!
>
> > +
> > +static bool kdamond_init_ctxs(struct kdamond *kdamond)
> > +{
> > +	struct damon_ctx *c;
> > +
> > +	damon_for_each_context(c, kdamond)
> > +		if (!kdamond_init_ctx(c))
> > +			return false;
> > +	return true;
> > +}
> > +
> > +static void kdamond_finish_ctx(struct damon_ctx *ctx)
> > +{
> > +	struct damon_target *t;
> > +	struct damon_region *r, *next;
> > +
> > +	damon_for_each_target(t, ctx) {
> > +		damon_for_each_region_safe(r, next, t)
> > +			damon_destroy_region(r, t);
> > +	}
> > +
> > +	if (ctx->callback.before_terminate)
> > +		ctx->callback.before_terminate(ctx);
> > +	if (ctx->ops.cleanup)
> > +		ctx->ops.cleanup(ctx);
> > +}
> > +
> > +static void kdamond_finish_ctxs(struct kdamond *kdamond)
> > +{
> > +	struct damon_ctx *c;
> > +
> > +	damon_for_each_context(c, kdamond)
> > +		kdamond_finish_ctx(c);
> > +}
> > +
> >  /*
> >   * The monitoring daemon that runs as a kernel thread
> >   */
> >  static int kdamond_fn(void *data)
> >  {
> > -	struct damon_ctx *ctx = data;
> > -	struct damon_target *t;
> > -	struct damon_region *r, *next;
> > +	struct kdamond *kdamond = data;
> > +	struct damon_ctx *ctx = damon_first_ctx(kdamond);
> >  	unsigned int max_nr_accesses = 0;
> > -	unsigned long sz_limit = 0;
> >
> >  	pr_debug("kdamond (%d) starts\n", current->pid);
> >
> > -	complete(&ctx->kdamond_started);
> > -	kdamond_init_intervals_sis(ctx);
> > -
> > -	if (ctx->ops.init)
> > -		ctx->ops.init(ctx);
> > -	if (ctx->callback.before_start && ctx->callback.before_start(ctx))
> > +	complete(&kdamond->kdamond_started);
> > +	if (!kdamond_init_ctxs(kdamond))
> >  		goto done;
> >
> > -	sz_limit = damon_region_sz_limit(ctx);
> > -
> >  	while (!kdamond_need_stop(ctx)) {
> >  		/*
> >  		 * ctx->attrs and ctx->next_{aggregation,ops_update}_sis could
> > @@ -1616,6 +1698,7 @@ static int kdamond_fn(void *data)
> >  		unsigned long next_aggregation_sis = ctx->next_aggregation_sis;
> >  		unsigned long next_ops_update_sis = ctx->next_ops_update_sis;
> >  		unsigned long sample_interval =
ctx->attrs.sample_interval;
> > +		unsigned long sz_limit = ctx->sz_limit;
> >
> >  		if (kdamond_wait_activation(ctx))
> >  			break;
> > @@ -1666,28 +1749,20 @@ static int kdamond_fn(void *data)
> >  					sample_interval;
> >  			if (ctx->ops.update)
> >  				ctx->ops.update(ctx);
> > -			sz_limit = damon_region_sz_limit(ctx);
> > +			ctx->sz_limit = damon_region_sz_limit(ctx);
> >  		}
> >  	}
> >  done:
> > -	damon_for_each_target(t, ctx) {
> > -		damon_for_each_region_safe(r, next, t)
> > -			damon_destroy_region(r, t);
> > -	}
> > -
> > -	if (ctx->callback.before_terminate)
> > -		ctx->callback.before_terminate(ctx);
> > -	if (ctx->ops.cleanup)
> > -		ctx->ops.cleanup(ctx);
> > +	kdamond_finish_ctxs(kdamond);
> >
> >  	pr_debug("kdamond (%d) finishes\n", current->pid);
> > -	mutex_lock(&ctx->kdamond_lock);
> > -	ctx->kdamond = NULL;
> > -	mutex_unlock(&ctx->kdamond_lock);
> > +	mutex_lock(&kdamond->lock);
> > +	kdamond->self = NULL;
> > +	mutex_unlock(&kdamond->lock);
> >
> >  	mutex_lock(&damon_lock);
> > -	nr_running_ctxs--;
> > -	if (!nr_running_ctxs && running_exclusive_ctxs)
> > +	nr_running_kdamonds--;
> > +	if (!nr_running_kdamonds && running_exclusive_ctxs)
> >  		running_exclusive_ctxs = false;
> >  	mutex_unlock(&damon_lock);
> >
> > diff --git a/mm/damon/lru_sort.c b/mm/damon/lru_sort.c
> > index 3de2916a6..76c20098a 100644
> > --- a/mm/damon/lru_sort.c
> > +++ b/mm/damon/lru_sort.c
> > @@ -142,8 +142,18 @@ static struct damos_access_pattern damon_lru_sort_stub_pattern = {
> >  	.max_age_region = UINT_MAX,
> >  };
> >
> > -static struct damon_ctx *ctx;
> > -static struct damon_target *target;
> > +static struct kdamond *kdamond;
> > +
> > +static inline struct damon_ctx *damon_lru_sort_ctx(void)
> > +{
> > +	return damon_first_ctx(kdamond);
> > +}
> > +
> > +static inline struct damon_target *damon_lru_sort_target(void)
> > +{
> > +	return damon_first_target(
> > +			damon_lru_sort_ctx());
> > +}
>
> Do we really need above two functions?
> Can't we simply add 'kdamond' global
> variable but still uses 'ctx' and 'target' as before, except interface changed
> function calls?

We can; I just tried to provide a more 'readable' interface, since I
thought that using raw pointers from kdamond would be less readable.

>
> >
> >  static struct damos *damon_lru_sort_new_scheme(
> >  		struct damos_access_pattern *pattern, enum damos_action action)
> > @@ -201,6 +211,7 @@ static int damon_lru_sort_apply_parameters(void)
> >  	struct damos *scheme, *hot_scheme, *cold_scheme;
> >  	struct damos *old_hot_scheme = NULL, *old_cold_scheme = NULL;
> >  	unsigned int hot_thres, cold_thres;
> > +	struct damon_ctx *ctx = damon_lru_sort_ctx();
> >  	int err = 0;
> >
> >  	err = damon_set_attrs(ctx, &damon_lru_sort_mon_attrs);
> > @@ -237,7 +248,8 @@ static int damon_lru_sort_apply_parameters(void)
> >  	damon_set_schemes(ctx, &hot_scheme, 1);
> >  	damon_add_scheme(ctx, cold_scheme);
> >
> > -	return damon_set_region_biggest_system_ram_default(target,
> > +	return damon_set_region_biggest_system_ram_default(
> > +					damon_lru_sort_target(),
> >  					&monitor_region_start,
> >  					&monitor_region_end);
> >  }
> > @@ -247,7 +259,7 @@ static int damon_lru_sort_turn(bool on)
> >  	int err;
> >
> >  	if (!on) {
> > -		err = damon_stop(&ctx, 1);
> > +		err = damon_stop(kdamond);
> >  		if (!err)
> >  			kdamond_pid = -1;
> >  		return err;
> > @@ -257,10 +269,11 @@ static int damon_lru_sort_turn(bool on)
> >  	if (err)
> >  		return err;
> >
> > -	err = damon_start(&ctx, 1, true);
> > +	err = damon_start(kdamond, true);
> >  	if (err)
> >  		return err;
> > -	kdamond_pid = ctx->kdamond->pid;
> > +
> > +	kdamond_pid = kdamond->self->pid;
> >  	return 0;
> >  }
> >
> > @@ -279,7 +292,7 @@ static int damon_lru_sort_enabled_store(const char *val,
> >  		return 0;
> >
> >  	/* Called before init function. The function will handle this.
> >  	 */
> > -	if (!ctx)
> > +	if (!kdamond)
> >  		goto set_param_out;
> >
> >  	err = damon_lru_sort_turn(enable);
> > @@ -334,11 +347,13 @@ static int damon_lru_sort_after_wmarks_check(struct damon_ctx *c)
> >
> >  static int __init damon_lru_sort_init(void)
> >  {
> > -	int err = damon_modules_new_paddr_ctx_target(&ctx, &target);
> > +	struct damon_ctx *ctx;
> > +	int err = damon_modules_new_paddr_kdamond(&kdamond);
>
> I think we can allocate kdamond struct and set ctx/target here, and remove
> unnecessary functions and changes.

I see, will change that, thank you!

>
> >
> >  	if (err)
> >  		return err;
> >
> > +	ctx = damon_lru_sort_ctx();
> >  	ctx->callback.after_wmarks_check = damon_lru_sort_after_wmarks_check;
> >  	ctx->callback.after_aggregation = damon_lru_sort_after_aggregation;
> >
> > diff --git a/mm/damon/modules-common.c b/mm/damon/modules-common.c
> > index 7cf96574c..436bb7948 100644
> > --- a/mm/damon/modules-common.c
> > +++ b/mm/damon/modules-common.c
> > @@ -9,13 +9,7 @@
> >
> >  #include "modules-common.h"
> >
> > -/*
> > - * Allocate, set, and return a DAMON context for the physical address space.
> > - * @ctxp:	Pointer to save the point to the newly created context
> > - * @targetp:	Pointer to save the point to the newly created target
> > - */
> > -int damon_modules_new_paddr_ctx_target(struct damon_ctx **ctxp,
> > -		struct damon_target **targetp)
> > +static int __damon_modules_new_paddr_kdamond(struct kdamond *kdamond)
> >  {
> >  	struct damon_ctx *ctx;
> >  	struct damon_target *target;
> > @@ -34,9 +28,33 @@ int damon_modules_new_paddr_ctx_target(struct damon_ctx **ctxp,
> >  		damon_destroy_ctx(ctx);
> >  		return -ENOMEM;
> >  	}
> > +
> >  	damon_add_target(ctx, target);
> > +	damon_add_ctx(kdamond, ctx);
> > +
> > +	return 0;
> > +}
> > +
> > +/*
> > + * Allocate, set, and return a DAMON daemon for the physical address space.
> > + * @kdamondp: Pointer to save the point to the newly created kdamond > > + */ > > +int damon_modules_new_paddr_kdamond(struct kdamond **kdamondp) > > +{ > > + int err; > > + struct kdamond *kdamond; > > + > > + kdamond = damon_new_kdamond(); > > + if (!kdamond) > > + return -ENOMEM; > > + > > + err = __damon_modules_new_paddr_kdamond(kdamond); > > + if (err) { > > + damon_destroy_kdamond(kdamond); > > + return err; > > + } > > + kdamond->nr_ctxs = 1; > > > > - *ctxp = ctx; > > - *targetp = target; > > + *kdamondp = kdamond; > > return 0; > > } > > diff --git a/mm/damon/modules-common.h b/mm/damon/modules-common.h > > index f49cdb417..5fc5b8ae3 100644 > > --- a/mm/damon/modules-common.h > > +++ b/mm/damon/modules-common.h > > @@ -45,5 +45,4 @@ > > module_param_named(nr_##qt_exceed_name, stat.qt_exceeds, ulong, \ > > 0400); > > > > -int damon_modules_new_paddr_ctx_target(struct damon_ctx **ctxp, > > - struct damon_target **targetp); > > +int damon_modules_new_paddr_kdamond(struct kdamond **kdamondp); > > diff --git a/mm/damon/reclaim.c b/mm/damon/reclaim.c > > index 9bd341d62..f6540ef1a 100644 > > --- a/mm/damon/reclaim.c > > +++ b/mm/damon/reclaim.c > > @@ -150,8 +150,18 @@ static struct damos_stat damon_reclaim_stat; > > DEFINE_DAMON_MODULES_DAMOS_STATS_PARAMS(damon_reclaim_stat, > > reclaim_tried_regions, reclaimed_regions, quota_exceeds); > > > > -static struct damon_ctx *ctx; > > -static struct damon_target *target; > > +static struct kdamond *kdamond; > > + > > +static inline struct damon_ctx *damon_reclaim_ctx(void) > > +{ > > + return damon_first_ctx(kdamond); > > +} > > + > > +static inline struct damon_target *damon_reclaim_target(void) > > +{ > > + return damon_first_target( > > + damon_reclaim_ctx()); > > +} > > Ditto. 
> >
> >  static struct damos *damon_reclaim_new_scheme(void)
> >  {
> > @@ -197,6 +207,7 @@ static int damon_reclaim_apply_parameters(void)
> >  	struct damos *scheme, *old_scheme;
> >  	struct damos_quota_goal *goal;
> >  	struct damos_filter *filter;
> > +	struct damon_ctx *ctx = damon_reclaim_ctx();
> >  	int err = 0;
> >
> >  	err = damon_set_attrs(ctx, &damon_reclaim_mon_attrs);
> > @@ -244,7 +255,8 @@ static int damon_reclaim_apply_parameters(void)
> >  	}
> >  	damon_set_schemes(ctx, &scheme, 1);
> >
> > -	return damon_set_region_biggest_system_ram_default(target,
> > +	return damon_set_region_biggest_system_ram_default(
> > +			damon_reclaim_target(),
> >  			&monitor_region_start,
> >  			&monitor_region_end);
> >  }
> > @@ -254,7 +266,7 @@ static int damon_reclaim_turn(bool on)
> >  	int err;
> >
> >  	if (!on) {
> > -		err = damon_stop(&ctx, 1);
> > +		err = damon_stop(kdamond);
> >  		if (!err)
> >  			kdamond_pid = -1;
> >  		return err;
> > @@ -264,10 +276,10 @@ static int damon_reclaim_turn(bool on)
> >  	if (err)
> >  		return err;
> >
> > -	err = damon_start(&ctx, 1, true);
> > +	err = damon_start(kdamond, true);
> >  	if (err)
> >  		return err;
> > -	kdamond_pid = ctx->kdamond->pid;
> > +	kdamond_pid = kdamond->self->pid;
> >  	return 0;
> >  }
> >
> > @@ -286,7 +298,7 @@ static int damon_reclaim_enabled_store(const char *val,
> >  		return 0;
> >
> >  	/* Called before init function. The function will handle this. */
> > -	if (!ctx)
> > +	if (!kdamond)
> >  		goto set_param_out;
> >
> >  	err = damon_reclaim_turn(enable);
> > @@ -337,11 +349,13 @@ static int damon_reclaim_after_wmarks_check(struct damon_ctx *c)
> >
> >  static int __init damon_reclaim_init(void)
> >  {
> > -	int err = damon_modules_new_paddr_ctx_target(&ctx, &target);
> > +	struct damon_ctx *ctx;
> > +	int err = damon_modules_new_paddr_kdamond(&kdamond);
> >
> >  	if (err)
> >  		return err;
> >
> > +	ctx = damon_reclaim_ctx();
> >  	ctx->callback.after_wmarks_check = damon_reclaim_after_wmarks_check;
> >  	ctx->callback.after_aggregation = damon_reclaim_after_aggregation;
> >
> > diff --git a/mm/damon/sysfs.c b/mm/damon/sysfs.c
> > index 6fee383bc..bfdb979e6 100644
> > --- a/mm/damon/sysfs.c
> > +++ b/mm/damon/sysfs.c
>
> Ditto. Many of below changes could be separated into another patch in my
> opinion.
>
> > @@ -939,7 +939,7 @@ static const struct kobj_type damon_sysfs_contexts_ktype = {
> >  struct damon_sysfs_kdamond {
> >  	struct kobject kobj;
> >  	struct damon_sysfs_contexts *contexts;
> > -	struct damon_ctx *damon_ctx;
> > +	struct kdamond *kdamond;
> >  };
> >
> >  static struct damon_sysfs_kdamond *damon_sysfs_kdamond_alloc(void)
> > @@ -974,16 +974,6 @@ static void damon_sysfs_kdamond_rm_dirs(struct damon_sysfs_kdamond *kdamond)
> >  	kobject_put(&kdamond->contexts->kobj);
> >  }
> >
> > -static bool damon_sysfs_ctx_running(struct damon_ctx *ctx)
> > -{
> > -	bool running;
> > -
> > -	mutex_lock(&ctx->kdamond_lock);
> > -	running = ctx->kdamond != NULL;
> > -	mutex_unlock(&ctx->kdamond_lock);
> > -	return running;
> > -}
>
> Don't add replacement of this in mm/damon/core.c. Modify this here.

DebugFS also uses this.

> > -
> >  /*
> >   * enum damon_sysfs_cmd - Commands for a specific kdamond.
> >   */
> > @@ -1065,15 +1055,15 @@ static struct damon_sysfs_cmd_request damon_sysfs_cmd_request;
> >  static ssize_t state_show(struct kobject *kobj, struct kobj_attribute *attr,
> >  		char *buf)
> >  {
> > -	struct damon_sysfs_kdamond *kdamond = container_of(kobj,
> > +	struct damon_sysfs_kdamond *sys_kdamond = container_of(kobj,
> >  			struct damon_sysfs_kdamond, kobj);
>
> This makes reviewing bit difficult. Let's keep current name for now.
>
> > -	struct damon_ctx *ctx = kdamond->damon_ctx;
> > +	struct kdamond *kdamond = sys_kdamond->kdamond;
>
> Keep 'kdamond' variable name as is, and use shorter name for this variable,
> say, kd, or do kdamond->kdamond.

That usually makes the code harder to read: these are two different
abstractions of kdamond, but their names would be almost the same, so I don't
think this is a good idea.

> >  	bool running;
> >
> > -	if (!ctx)
> > +	if (!kdamond)
>
> "if (!kd)" or "if (!kdamond->kdamond)"

And this is an example: I can't tell which one belongs to the sysfs
abstraction and which represents the core's kdamond. With a "sys_" (or
sometimes "sysfs_") prefix, it is much easier to read.

> >  		running = false;
> >  	else
> > -		running = damon_sysfs_ctx_running(ctx);
> > +		running = damon_kdamond_running(kdamond);
> >
> >  	return sysfs_emit(buf, "%s\n", running ?
> >  			damon_sysfs_cmd_strs[DAMON_SYSFS_CMD_ON] :
> > @@ -1242,13 +1232,15 @@ static bool damon_sysfs_schemes_regions_updating;
> >  static void damon_sysfs_before_terminate(struct damon_ctx *ctx)
> >  {
> >  	struct damon_target *t, *next;
> > -	struct damon_sysfs_kdamond *kdamond;
> > +	struct damon_sysfs_kdamond *sys_kdamond;
> > +	struct kdamond *kdamond;
> >  	enum damon_sysfs_cmd cmd;
> >
> >  	/* damon_sysfs_schemes_update_regions_stop() might not yet called */
> > -	kdamond = damon_sysfs_cmd_request.kdamond;
> > +	kdamond = ctx->kdamond;
> > +	sys_kdamond = damon_sysfs_cmd_request.kdamond;
> >  	cmd = damon_sysfs_cmd_request.cmd;
> > -	if (kdamond && ctx == kdamond->damon_ctx &&
> > +	if (sys_kdamond && kdamond == sys_kdamond->kdamond &&
> >  			(cmd == DAMON_SYSFS_CMD_UPDATE_SCHEMES_TRIED_REGIONS ||
> >  			cmd == DAMON_SYSFS_CMD_UPDATE_SCHEMES_TRIED_BYTES) &&
> >  			damon_sysfs_schemes_regions_updating) {
>
> Again, changed names makes me reviewing difficult.
>
> From this point, I think below changes could be significantly changed in next
> version of this patchset if you agree to above comments. So I'm stop reviewing
> this patch from here for now. Please let me know if you have different opinion
> about it.

Thank you very much for your comments; I really appreciate that you took the
time to review this. I'll try to improve the patch set in the next versions!

>
> Thanks,
> SJ
>
> [...]

BR,
Alex