From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4ae9d042-c151-4b3a-b434-a178a0360b3c@lankhorst.se>
Date: Thu, 5 Dec 2024 13:07:29 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2.1 1/1] kernel/cgroup: Add "dmem" memory accounting cgroup
To: kernel test robot, linux-kernel@vger.kernel.org, intel-xe@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Tejun Heo, Zefan Li, Johannes Weiner, Andrew Morton, Friedrich Vock, Maxime Ripard
Cc: oe-kbuild-all@lists.linux.dev, Linux Memory Management List, cgroups@vger.kernel.org, Maarten Lankhorst
References: <20241204143112.1250983-1-dev@lankhorst.se> <202412051039.P06riwrP-lkp@intel.com>
Content-Language: en-US
From: Maarten Lankhorst
In-Reply-To: <202412051039.P06riwrP-lkp@intel.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hey,

This is missing a "select PAGE_COUNTER" in init/Kconfig. I thought I had fixed it, but I must have forgotten to commit that change while developing between two machines.

Cheers,
~Maarten

On 2024-12-05 03:27, kernel test robot wrote:
> Hi Maarten,
>
> kernel test robot noticed the following build errors:
>
> [auto build test ERROR on tj-cgroup/for-next]
> [also build test ERROR on akpm-mm/mm-everything linus/master v6.13-rc1 next-20241204]
> [cannot apply to drm-misc/drm-misc-next drm-tip/drm-tip]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Maarten-Lankhorst/kernel-cgroup-Add-dmem-memory-accounting-cgroup/20241204-233207
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git for-next
> patch link:    https://lore.kernel.org/r/20241204143112.1250983-1-dev%40lankhorst.se
> patch subject: [PATCH v2.1 1/1] kernel/cgroup: Add "dmem" memory accounting cgroup
> config: um-randconfig-r061-20241205 (https://download.01.org/0day-ci/archive/20241205/202412051039.P06riwrP-lkp@intel.com/config)
> compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20241205/202412051039.P06riwrP-lkp@intel.com/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot
> | Closes: https://lore.kernel.org/oe-kbuild-all/202412051039.P06riwrP-lkp@intel.com/
>
> All errors (new ones prefixed by >>):
>
> /usr/bin/ld: kernel/cgroup/dmem.o: in function `set_resource_min':
>>> kernel/cgroup/dmem.c:115: undefined reference to `page_counter_set_min'
> /usr/bin/ld: kernel/cgroup/dmem.o: in function `set_resource_low':
>>> kernel/cgroup/dmem.c:121: undefined reference to `page_counter_set_low'
> /usr/bin/ld: kernel/cgroup/dmem.o: in function `set_resource_max':
>>> kernel/cgroup/dmem.c:127: undefined reference to `page_counter_set_max'
> /usr/bin/ld: kernel/cgroup/dmem.o: in function `reset_all_resource_limits':
>>> kernel/cgroup/dmem.c:115: undefined reference to `page_counter_set_min'
>>> /usr/bin/ld: kernel/cgroup/dmem.c:121: undefined reference to `page_counter_set_low'
>>> /usr/bin/ld: kernel/cgroup/dmem.c:127: undefined reference to `page_counter_set_max'
> /usr/bin/ld: kernel/cgroup/dmem.o: in function `dmem_cgroup_uncharge':
>>> kernel/cgroup/dmem.c:607: undefined reference to `page_counter_uncharge'
> /usr/bin/ld: kernel/cgroup/dmem.o: in function `dmem_cgroup_calculate_protection':
>>> kernel/cgroup/dmem.c:275: undefined reference to `page_counter_calculate_protection'
> /usr/bin/ld: kernel/cgroup/dmem.o: in function `dmem_cgroup_try_charge':
>>> kernel/cgroup/dmem.c:657: undefined reference to `page_counter_try_charge'
> collect2: error: ld returned 1 exit status
>
> Kconfig warnings: (for reference only)
> WARNING: unmet direct dependencies detected for GET_FREE_REGION
> Depends on [n]: SPARSEMEM [=n]
> Selected by [y]:
> - RESOURCE_KUNIT_TEST [=y] && RUNTIME_TESTING_MENU [=y] && KUNIT [=y]
>
>
> vim +115 kernel/cgroup/dmem.c
>
> 111
> 112  static void
> 113  set_resource_min(struct dmem_cgroup_pool_state *pool, u64 val)
> 114  {
> > 115      page_counter_set_min(&pool->cnt, val);
> 116  }
> 117
> 118  static void
> 119  set_resource_low(struct dmem_cgroup_pool_state *pool, u64 val)
> 120  {
> > 121      page_counter_set_low(&pool->cnt, val);
> 122  }
> 123
> 124  static void
> 125  set_resource_max(struct dmem_cgroup_pool_state *pool, u64 val)
> 126  {
> > 127      page_counter_set_max(&pool->cnt, val);
> 128  }
> 129
> 130  static u64 get_resource_low(struct dmem_cgroup_pool_state *pool)
> 131  {
> 132      return pool ? READ_ONCE(pool->cnt.low) : 0;
> 133  }
> 134
> 135  static u64 get_resource_min(struct dmem_cgroup_pool_state *pool)
> 136  {
> 137      return pool ? READ_ONCE(pool->cnt.min) : 0;
> 138  }
> 139
> 140  static u64 get_resource_max(struct dmem_cgroup_pool_state *pool)
> 141  {
> 142      return pool ? READ_ONCE(pool->cnt.max) : PAGE_COUNTER_MAX;
> 143  }
> 144
> 145  static u64 get_resource_current(struct dmem_cgroup_pool_state *pool)
> 146  {
> 147      return pool ? page_counter_read(&pool->cnt) : 0;
> 148  }
> 149
> 150  static void reset_all_resource_limits(struct dmem_cgroup_pool_state *rpool)
> 151  {
> 152      set_resource_min(rpool, 0);
> 153      set_resource_low(rpool, 0);
> 154      set_resource_max(rpool, PAGE_COUNTER_MAX);
> 155  }
> 156
> 157  static void dmemcs_offline(struct cgroup_subsys_state *css)
> 158  {
> 159      struct dmemcg_state *dmemcs = css_to_dmemcs(css);
> 160      struct dmem_cgroup_pool_state *pool;
> 161
> 162      rcu_read_lock();
> 163      list_for_each_entry_rcu(pool, &dmemcs->pools, css_node)
> 164          reset_all_resource_limits(pool);
> 165      rcu_read_unlock();
> 166  }
> 167
> 168  static void dmemcs_free(struct cgroup_subsys_state *css)
> 169  {
> 170      struct dmemcg_state *dmemcs = css_to_dmemcs(css);
> 171      struct dmem_cgroup_pool_state *pool, *next;
> 172
> 173      spin_lock(&dmemcg_lock);
> 174      list_for_each_entry_safe(pool, next, &dmemcs->pools, css_node) {
> 175          /*
> 176           * The pool is dead and all references are 0,
> 177           * no need for RCU protection with list_del_rcu or freeing.
> 178           */
> 179          list_del(&pool->css_node);
> 180          free_cg_pool(pool);
> 181      }
> 182      spin_unlock(&dmemcg_lock);
> 183
> 184      kfree(dmemcs);
> 185  }
> 186
> 187  static struct cgroup_subsys_state *
> 188  dmemcs_alloc(struct cgroup_subsys_state *parent_css)
> 189  {
> 190      struct dmemcg_state *dmemcs = kzalloc(sizeof(*dmemcs), GFP_KERNEL);
> 191      if (!dmemcs)
> 192          return ERR_PTR(-ENOMEM);
> 193
> 194      INIT_LIST_HEAD(&dmemcs->pools);
> 195      return &dmemcs->css;
> 196  }
> 197
> 198  static struct dmem_cgroup_pool_state *
> 199  find_cg_pool_locked(struct dmemcg_state *dmemcs, struct dmem_cgroup_region *region)
> 200  {
> 201      struct dmem_cgroup_pool_state *pool;
> 202
> 203      list_for_each_entry_rcu(pool, &dmemcs->pools, css_node, spin_is_locked(&dmemcg_lock))
> 204          if (pool->region == region)
> 205              return pool;
> 206
> 207      return NULL;
> 208  }
> 209
> 210  static struct dmem_cgroup_pool_state *pool_parent(struct dmem_cgroup_pool_state *pool)
> 211  {
> 212      if (!pool->cnt.parent)
> 213          return NULL;
> 214
> 215      return container_of(pool->cnt.parent, typeof(*pool), cnt);
> 216  }
> 217
> 218  static void
> 219  dmem_cgroup_calculate_protection(struct dmem_cgroup_pool_state *limit_pool,
> 220                                   struct dmem_cgroup_pool_state *test_pool)
> 221  {
> 222      struct page_counter *climit;
> 223      struct cgroup_subsys_state *css, *next_css;
> 224      struct dmemcg_state *dmemcg_iter;
> 225      struct dmem_cgroup_pool_state *pool, *parent_pool;
> 226      bool found_descendant;
> 227
> 228      climit = &limit_pool->cnt;
> 229
> 230      rcu_read_lock();
> 231      parent_pool = pool = limit_pool;
> 232      css = &limit_pool->cs->css;
> 233
> 234      /*
> 235       * This logic is roughly equivalent to css_foreach_descendant_pre,
> 236       * except we also track the parent pool to find out which pool we need
> 237       * to calculate protection values for.
> 238       *
> 239       * We can stop the traversal once we find test_pool among the
> 240       * descendants since we don't really care about any others.
> 241       */
> 242      while (pool != test_pool) {
> 243          next_css = css_next_child(NULL, css);
> 244          if (next_css) {
> 245              parent_pool = pool;
> 246          } else {
> 247              while (css != &limit_pool->cs->css) {
> 248                  next_css = css_next_child(css, css->parent);
> 249                  if (next_css)
> 250                      break;
> 251                  css = css->parent;
> 252                  parent_pool = pool_parent(parent_pool);
> 253              }
> 254              /*
> 255               * We can only hit this when test_pool is not a
> 256               * descendant of limit_pool.
> 257               */
> 258              if (WARN_ON_ONCE(css == &limit_pool->cs->css))
> 259                  break;
> 260          }
> 261          css = next_css;
> 262
> 263          found_descendant = false;
> 264          dmemcg_iter = container_of(css, struct dmemcg_state, css);
> 265
> 266          list_for_each_entry_rcu(pool, &dmemcg_iter->pools, css_node) {
> 267              if (pool_parent(pool) == parent_pool) {
> 268                  found_descendant = true;
> 269                  break;
> 270              }
> 271          }
> 272          if (!found_descendant)
> 273              continue;
> 274
> > 275          page_counter_calculate_protection(
> 276                  climit, &pool->cnt, true);
> 277      }
> 278      rcu_read_unlock();
> 279  }
> 280
>
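For reference, the missing-dependency fix mentioned at the top is a one-line select on the controller's Kconfig entry, since dmem uses the page_counter API and PAGE_COUNTER is not otherwise enabled on this randconfig. A sketch of the intended change (the CGROUP_DMEM symbol name and the surrounding context lines follow the patch and are abbreviated here):

```diff
--- a/init/Kconfig
+++ b/init/Kconfig
@@ config CGROUP_DMEM
 config CGROUP_DMEM
 	bool "Device memory controller (DMEM)"
+	select PAGE_COUNTER
 	help
 	  ...
```

With the select in place, mm/page_counter.o is built and the page_counter_set_min/low/max, page_counter_try_charge, page_counter_uncharge and page_counter_calculate_protection references quoted above resolve at link time.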