From: Kefeng Wang <wangkefeng.wang@huawei.com>
Date: Wed, 17 Jan 2024 09:02:14 +0800
Subject: Re: [PATCH] mm: memory: move mem_cgroup_charge() into alloc_anon_folio()
To: kernel test robot, Andrew Morton
Cc: Linux Memory Management List, Matthew Wilcox, David Hildenbrand
Message-ID: <3089af79-c5fd-4c13-a1e1-cb9f67d4ea4f@huawei.com>
In-Reply-To: <202401170535.2TfJ7u74-lkp@intel.com>
References: <20240116071302.2282230-1-wangkefeng.wang@huawei.com> <202401170535.2TfJ7u74-lkp@intel.com>

On 2024/1/17 5:26, kernel test robot wrote:
> Hi Kefeng,
>
> kernel test robot noticed the following build errors:
>
> [auto build test ERROR on akpm-mm/mm-everything]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Kefeng-Wang/mm-memory-move-mem_cgroup_charge-into-alloc_anon_folio/20240116-151640
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
> patch link:    https://lore.kernel.org/r/20240116071302.2282230-1-wangkefeng.wang%40huawei.com
> patch subject: [PATCH] mm: memory: move mem_cgroup_charge() into alloc_anon_folio()
> config: x86_64-allnoconfig
> (https://download.01.org/0day-ci/archive/20240117/202401170535.2TfJ7u74-lkp@intel.com/config)
> compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240117/202401170535.2TfJ7u74-lkp@intel.com/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot
> | Closes: https://lore.kernel.org/oe-kbuild-all/202401170535.2TfJ7u74-lkp@intel.com/
>
> All errors/warnings (new ones prefixed by >>):

Thanks, will fix the build error with !CONFIG_TRANSPARENT_HUGEPAGE.

>    mm/memory.c: In function 'alloc_anon_folio':
> >> mm/memory.c:4223:31: error: 'vma' undeclared (first use in this function); did you mean 'vmf'?
>     4223 |         return folio_prealloc(vma->vm_mm, vma, vmf->address, true);
>          |                               ^~~
>          |                               vmf
>    mm/memory.c:4223:31: note: each undeclared identifier is reported only once for each function it appears in
> >> mm/memory.c:4224:1: warning: control reaches end of non-void function [-Wreturn-type]
>     4224 | }
>          | ^
>
>
> vim +4223 mm/memory.c
>
>   4153
>   4154  static struct folio *alloc_anon_folio(struct vm_fault *vmf)
>   4155  {
>   4156  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>   4157          struct vm_area_struct *vma = vmf->vma;
>   4158          unsigned long orders;
>   4159          struct folio *folio;
>   4160          unsigned long addr;
>   4161          pte_t *pte;
>   4162          gfp_t gfp;
>   4163          int order;
>   4164
>   4165          /*
>   4166           * If uffd is active for the vma we need per-page fault fidelity to
>   4167           * maintain the uffd semantics.
>   4168           */
>   4169          if (unlikely(userfaultfd_armed(vma)))
>   4170                  goto fallback;
>   4171
>   4172          /*
>   4173           * Get a list of all the (large) orders below PMD_ORDER that are enabled
>   4174           * for this vma. Then filter out the orders that can't be allocated over
>   4175           * the faulting address and still be fully contained in the vma.
>   4176           */
>   4177          orders = thp_vma_allowable_orders(vma, vma->vm_flags, false, true, true,
>   4178                                            BIT(PMD_ORDER) - 1);
>   4179          orders = thp_vma_suitable_orders(vma, vmf->address, orders);
>   4180
>   4181          if (!orders)
>   4182                  goto fallback;
>   4183
>   4184          pte = pte_offset_map(vmf->pmd, vmf->address & PMD_MASK);
>   4185          if (!pte)
>   4186                  return ERR_PTR(-EAGAIN);
>   4187
>   4188          /*
>   4189           * Find the highest order where the aligned range is completely
>   4190           * pte_none(). Note that all remaining orders will be completely
>   4191           * pte_none().
>   4192           */
>   4193          order = highest_order(orders);
>   4194          while (orders) {
>   4195                  addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
>   4196                  if (pte_range_none(pte + pte_index(addr), 1 << order))
>   4197                          break;
>   4198                  order = next_order(&orders, order);
>   4199          }
>   4200
>   4201          pte_unmap(pte);
>   4202
>   4203          /* Try allocating the highest of the remaining orders. */
>   4204          gfp = vma_thp_gfp_mask(vma);
>   4205          while (orders) {
>   4206                  addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
>   4207                  folio = vma_alloc_folio(gfp, order, vma, addr, true);
>   4208                  if (folio) {
>   4209                          if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
>   4210                                  folio_put(folio);
>   4211                                  goto next;
>   4212                          }
>   4213                          folio_throttle_swaprate(folio, gfp);
>   4214                          clear_huge_page(&folio->page, vmf->address, 1 << order);
>   4215                          return folio;
>   4216                  }
>   4217  next:
>   4218                  order = next_order(&orders, order);
>   4219          }
>   4220
>   4221  fallback:
>   4222  #endif
> > 4223          return folio_prealloc(vma->vm_mm, vma, vmf->address, true);
> > 4224  }
>   4225
>
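The local `vma` is only declared inside the #ifdef CONFIG_TRANSPARENT_HUGEPAGE block, so the fallback return cannot use it on !THP builds. A sketch of the fix I have in mind (untested, against the quoted code) is to take the vma from vmf directly in the fallback path:

```diff
 fallback:
 #endif
-	return folio_prealloc(vma->vm_mm, vma, vmf->address, true);
+	return folio_prealloc(vmf->vma->vm_mm, vmf->vma, vmf->address, true);
 }
```

That also resolves the -Wreturn-type warning, since the function then always returns on the !CONFIG_TRANSPARENT_HUGEPAGE path.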