Date: Thu, 5 Dec 2024 10:27:19 +0800
From: kernel test robot <lkp@intel.com>
To: Maarten Lankhorst, linux-kernel@vger.kernel.org, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Tejun Heo, Zefan Li, Johannes Weiner, Andrew Morton, Friedrich Vock, Maxime Ripard
Cc: oe-kbuild-all@lists.linux.dev, Linux Memory Management List, cgroups@vger.kernel.org, Maarten Lankhorst
Subject: Re: [PATCH v2.1 1/1] kernel/cgroup: Add "dmem" memory accounting cgroup
Message-ID: <202412051039.P06riwrP-lkp@intel.com>
References: <20241204143112.1250983-1-dev@lankhorst.se>
In-Reply-To: <20241204143112.1250983-1-dev@lankhorst.se>

Hi Maarten,

kernel test robot noticed the following build errors:

[auto build test ERROR on tj-cgroup/for-next]
[also build test ERROR on akpm-mm/mm-everything linus/master v6.13-rc1 next-20241204]
[cannot apply to drm-misc/drm-misc-next drm-tip/drm-tip]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Maarten-Lankhorst/kernel-cgroup-Add-dmem-memory-accounting-cgroup/20241204-233207
base:   https://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git for-next
patch link:    https://lore.kernel.org/r/20241204143112.1250983-1-dev%40lankhorst.se
patch subject: [PATCH v2.1 1/1] kernel/cgroup: Add "dmem" memory accounting cgroup
config: um-randconfig-r061-20241205 (https://download.01.org/0day-ci/archive/20241205/202412051039.P06riwrP-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241205/202412051039.P06riwrP-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202412051039.P06riwrP-lkp@intel.com/

All errors (new ones prefixed by >>):

   /usr/bin/ld: kernel/cgroup/dmem.o: in function `set_resource_min':
>> kernel/cgroup/dmem.c:115: undefined reference to `page_counter_set_min'
   /usr/bin/ld: kernel/cgroup/dmem.o: in function `set_resource_low':
>> kernel/cgroup/dmem.c:121: undefined reference to `page_counter_set_low'
   /usr/bin/ld: kernel/cgroup/dmem.o: in function `set_resource_max':
>> kernel/cgroup/dmem.c:127: undefined reference to `page_counter_set_max'
   /usr/bin/ld: kernel/cgroup/dmem.o: in function `reset_all_resource_limits':
>> kernel/cgroup/dmem.c:115: undefined reference to `page_counter_set_min'
>> /usr/bin/ld: kernel/cgroup/dmem.c:121: undefined reference to `page_counter_set_low'
>> /usr/bin/ld: kernel/cgroup/dmem.c:127: undefined reference to `page_counter_set_max'
   /usr/bin/ld: kernel/cgroup/dmem.o: in function `dmem_cgroup_uncharge':
>> kernel/cgroup/dmem.c:607: undefined reference to `page_counter_uncharge'
   /usr/bin/ld: kernel/cgroup/dmem.o: in function `dmem_cgroup_calculate_protection':
>> kernel/cgroup/dmem.c:275: undefined reference to `page_counter_calculate_protection'
   /usr/bin/ld: kernel/cgroup/dmem.o: in function `dmem_cgroup_try_charge':
>> kernel/cgroup/dmem.c:657: undefined reference to `page_counter_try_charge'
   collect2: error: ld returned 1 exit status

Kconfig warnings: (for reference only)

WARNING: unmet direct dependencies detected for GET_FREE_REGION
	Depends on [n]: SPARSEMEM [=n]
	Selected by [y]:
	- RESOURCE_KUNIT_TEST [=y] && RUNTIME_TESTING_MENU [=y] && KUNIT [=y]

vim +115 kernel/cgroup/dmem.c

   111	
   112	static void
   113	set_resource_min(struct dmem_cgroup_pool_state *pool, u64 val)
   114	{
 > 115		page_counter_set_min(&pool->cnt, val);
   116	}
   117	
   118	static void
   119	set_resource_low(struct dmem_cgroup_pool_state *pool, u64 val)
   120	{
 > 121		page_counter_set_low(&pool->cnt, val);
   122	}
   123	
   124	static void
   125	set_resource_max(struct dmem_cgroup_pool_state *pool, u64 val)
   126	{
 > 127		page_counter_set_max(&pool->cnt, val);
   128	}
   129	
   130	static u64 get_resource_low(struct dmem_cgroup_pool_state *pool)
   131	{
   132		return pool ? READ_ONCE(pool->cnt.low) : 0;
   133	}
   134	
   135	static u64 get_resource_min(struct dmem_cgroup_pool_state *pool)
   136	{
   137		return pool ? READ_ONCE(pool->cnt.min) : 0;
   138	}
   139	
   140	static u64 get_resource_max(struct dmem_cgroup_pool_state *pool)
   141	{
   142		return pool ? READ_ONCE(pool->cnt.max) : PAGE_COUNTER_MAX;
   143	}
   144	
   145	static u64 get_resource_current(struct dmem_cgroup_pool_state *pool)
   146	{
   147		return pool ? page_counter_read(&pool->cnt) : 0;
   148	}
   149	
   150	static void reset_all_resource_limits(struct dmem_cgroup_pool_state *rpool)
   151	{
   152		set_resource_min(rpool, 0);
   153		set_resource_low(rpool, 0);
   154		set_resource_max(rpool, PAGE_COUNTER_MAX);
   155	}
   156	
   157	static void dmemcs_offline(struct cgroup_subsys_state *css)
   158	{
   159		struct dmemcg_state *dmemcs = css_to_dmemcs(css);
   160		struct dmem_cgroup_pool_state *pool;
   161	
   162		rcu_read_lock();
   163		list_for_each_entry_rcu(pool, &dmemcs->pools, css_node)
   164			reset_all_resource_limits(pool);
   165		rcu_read_unlock();
   166	}
   167	
   168	static void dmemcs_free(struct cgroup_subsys_state *css)
   169	{
   170		struct dmemcg_state *dmemcs = css_to_dmemcs(css);
   171		struct dmem_cgroup_pool_state *pool, *next;
   172	
   173		spin_lock(&dmemcg_lock);
   174		list_for_each_entry_safe(pool, next, &dmemcs->pools, css_node) {
   175			/*
   176			 * The pool is dead and all references are 0,
   177			 * no need for RCU protection with list_del_rcu or freeing.
   178			 */
   179			list_del(&pool->css_node);
   180			free_cg_pool(pool);
   181		}
   182		spin_unlock(&dmemcg_lock);
   183	
   184		kfree(dmemcs);
   185	}
   186	
   187	static struct cgroup_subsys_state *
   188	dmemcs_alloc(struct cgroup_subsys_state *parent_css)
   189	{
   190		struct dmemcg_state *dmemcs = kzalloc(sizeof(*dmemcs), GFP_KERNEL);
   191		if (!dmemcs)
   192			return ERR_PTR(-ENOMEM);
   193	
   194		INIT_LIST_HEAD(&dmemcs->pools);
   195		return &dmemcs->css;
   196	}
   197	
   198	static struct dmem_cgroup_pool_state *
   199	find_cg_pool_locked(struct dmemcg_state *dmemcs, struct dmem_cgroup_region *region)
   200	{
   201		struct dmem_cgroup_pool_state *pool;
   202	
   203		list_for_each_entry_rcu(pool, &dmemcs->pools, css_node, spin_is_locked(&dmemcg_lock))
   204			if (pool->region == region)
   205				return pool;
   206	
   207		return NULL;
   208	}
   209	
   210	static struct dmem_cgroup_pool_state *pool_parent(struct dmem_cgroup_pool_state *pool)
   211	{
   212		if (!pool->cnt.parent)
   213			return NULL;
   214	
   215		return container_of(pool->cnt.parent, typeof(*pool), cnt);
   216	}
   217	
   218	static void
   219	dmem_cgroup_calculate_protection(struct dmem_cgroup_pool_state *limit_pool,
   220					 struct dmem_cgroup_pool_state *test_pool)
   221	{
   222		struct page_counter *climit;
   223		struct cgroup_subsys_state *css, *next_css;
   224		struct dmemcg_state *dmemcg_iter;
   225		struct dmem_cgroup_pool_state *pool, *parent_pool;
   226		bool found_descendant;
   227	
   228		climit = &limit_pool->cnt;
   229	
   230		rcu_read_lock();
   231		parent_pool = pool = limit_pool;
   232		css = &limit_pool->cs->css;
   233	
   234		/*
   235		 * This logic is roughly equivalent to css_foreach_descendant_pre,
   236		 * except we also track the parent pool to find out which pool we need
   237		 * to calculate protection values for.
   238		 *
   239		 * We can stop the traversal once we find test_pool among the
   240		 * descendants since we don't really care about any others.
   241		 */
   242		while (pool != test_pool) {
   243			next_css = css_next_child(NULL, css);
   244			if (next_css) {
   245				parent_pool = pool;
   246			} else {
   247				while (css != &limit_pool->cs->css) {
   248					next_css = css_next_child(css, css->parent);
   249					if (next_css)
   250						break;
   251					css = css->parent;
   252					parent_pool = pool_parent(parent_pool);
   253				}
   254				/*
   255				 * We can only hit this when test_pool is not a
   256				 * descendant of limit_pool.
   257				 */
   258				if (WARN_ON_ONCE(css == &limit_pool->cs->css))
   259					break;
   260			}
   261			css = next_css;
   262	
   263			found_descendant = false;
   264			dmemcg_iter = container_of(css, struct dmemcg_state, css);
   265	
   266			list_for_each_entry_rcu(pool, &dmemcg_iter->pools, css_node) {
   267				if (pool_parent(pool) == parent_pool) {
   268					found_descendant = true;
   269					break;
   270				}
   271			}
   272			if (!found_descendant)
   273				continue;
   274	
 > 275			page_counter_calculate_protection(
   276				climit, &pool->cnt, true);
   277		}
   278		rcu_read_unlock();
   279	}

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
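[Editorial note on the failure mode, not part of the robot's report: every undefined reference above is to the page_counter API, and mm/page_counter.o is only built when CONFIG_PAGE_COUNTER is set (it is normally selected by MEMCG or HUGETLB, which this UML randconfig evidently has disabled). The usual remedy is for the new controller's Kconfig entry to select PAGE_COUNTER itself, the same way MEMCG does. The fragment below is a sketch of that fix; the option name CGROUP_DMEM and its prompt text are assumptions inferred from the patch subject, not the author's confirmed patch.]

```Kconfig
# Hypothetical fragment for init/Kconfig (option name CGROUP_DMEM assumed).
# "select PAGE_COUNTER" ensures mm/page_counter.o is linked even when
# neither MEMCG nor HUGETLB is enabled, which is what this randconfig hit.
config CGROUP_DMEM
	bool "Device memory controller (DMEM)"
	select PAGE_COUNTER
	help
	  This option enables the "dmem" cgroup controller, which accounts
	  and limits device memory regions (e.g. GPU VRAM) per cgroup.
```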