From: kernel test robot <lkp@intel.com>
To: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>,
Andrew Morton <akpm@linux-foundation.org>,
Roman Gushchin <guro@fb.com>,
linux-kernel@vger.kernel.org,
Zhaoyang Huang <huangzhaoyang@gmail.com>,
ke.wang@unisoc.com
Cc: oe-kbuild-all@lists.linux.dev,
Linux Memory Management List <linux-mm@kvack.org>
Subject: Re: [PATCHv2] mm: optimization on page allocation when CMA enabled
Date: Fri, 5 May 2023 00:48:49 +0800
Message-ID: <202305050012.G279ml2k-lkp@intel.com>
In-Reply-To: <1683194994-3070-1-git-send-email-zhaoyang.huang@unisoc.com>
Hi zhaoyang.huang,

kernel test robot noticed the following build errors:
[auto build test ERROR on akpm-mm/mm-everything]
url: https://github.com/intel-lab-lkp/linux/commits/zhaoyang-huang/mm-optimization-on-page-allocation-when-CMA-enabled/20230504-181335
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/1683194994-3070-1-git-send-email-zhaoyang.huang%40unisoc.com
patch subject: [PATCHv2] mm: optimization on page allocation when CMA enabled
config: x86_64-rhel-8.3 (https://download.01.org/0day-ci/archive/20230505/202305050012.G279ml2k-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-12) 11.3.0
reproduce (this is a W=1 build):
        # https://github.com/intel-lab-lkp/linux/commit/46cd0a3d98d6b43cd59be9d9e743266fc7f61168
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review zhaoyang-huang/mm-optimization-on-page-allocation-when-CMA-enabled/20230504-181335
        git checkout 46cd0a3d98d6b43cd59be9d9e743266fc7f61168
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        make W=1 O=build_dir ARCH=x86_64 olddefconfig
        make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash
If you fix the issue, kindly add the following tags where applicable:
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202305050012.G279ml2k-lkp@intel.com/
All errors (new ones prefixed by >>):
   mm/page_alloc.c: In function '__rmqueue':
>> mm/page_alloc.c:2323:42: error: implicit declaration of function '__if_use_cma_first' [-Werror=implicit-function-declaration]
    2323 |                        bool cma_first = __if_use_cma_first(zone, order, alloc_flags);
         |                                         ^~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors
vim +/__if_use_cma_first +2323 mm/page_alloc.c
  2277
  2278  #ifdef CONFIG_CMA
  2279  static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
  2280  {
  2281          unsigned long cma_proportion = 0;
  2282          unsigned long cma_free_proportion = 0;
  2283          unsigned long watermark = 0;
  2284          long count = 0;
  2285          bool cma_first = false;
  2286
  2287          watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
  2288          /*check if GFP_MOVABLE pass previous watermark check via the help of CMA*/
  2289          if (!zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA)))
  2290                  /* WMARK_LOW failed lead to using cma first, this helps U&R stay
  2291                   * around low when being drained by GFP_MOVABLE
  2292                   */
  2293                  cma_first = true;
  2294          else {
  2295                  /*check proportion when zone_watermark_ok*/
  2296                  count = atomic_long_read(&zone->managed_pages);
  2297                  cma_proportion = zone->cma_pages * 100 / count;
  2298                  cma_free_proportion = zone_page_state(zone, NR_FREE_CMA_PAGES) * 100
  2299                          / zone_page_state(zone, NR_FREE_PAGES);
  2300                  cma_first = (cma_free_proportion >= cma_proportion * 2
  2301                          || cma_free_proportion >= 50);
  2302          }
  2303          return cma_first;
  2304  }
  2305  #endif
  2306  /*
  2307   * Do the hard work of removing an element from the buddy allocator.
  2308   * Call me with the zone->lock already held.
  2309   */
  2310  static __always_inline struct page *
  2311  __rmqueue(struct zone *zone, unsigned int order, int migratetype,
  2312                                                  unsigned int alloc_flags)
  2313  {
  2314          struct page *page;
  2315
  2316          if (IS_ENABLED(CONFIG_CMA)) {
  2317                  /*
  2318                   * Balance movable allocations between regular and CMA areas by
  2319                   * allocating from CMA when over half of the zone's free memory
  2320                   * is in the CMA area.
  2321                   */
  2322                  if (migratetype == MIGRATE_MOVABLE) {
> 2323                          bool cma_first = __if_use_cma_first(zone, order, alloc_flags);
  2324
  2325                          page = cma_first ? __rmqueue_cma_fallback(zone, order) : NULL;
  2326                          if (page)
  2327                                  return page;
  2328                  }
  2329          }
  2330  retry:
  2331          page = __rmqueue_smallest(zone, order, migratetype);
  2332          if (unlikely(!page)) {
  2333                  if (alloc_flags & ALLOC_CMA)
  2334                          page = __rmqueue_cma_fallback(zone, order);
  2335
  2336                  if (!page && __rmqueue_fallback(zone, order, migratetype,
  2337                                                  alloc_flags))
  2338                          goto retry;
  2339          }
  2340          return page;
  2341  }
  2342
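The error shows up on configs without CONFIG_CMA: __if_use_cma_first() at line 2279 is only compiled inside the #ifdef CONFIG_CMA block, while its caller at line 2323 sits under IS_ENABLED(CONFIG_CMA), which compiles the call in every config and only lets the optimizer discard it later, so a !CONFIG_CMA build sees a call to an undeclared function. A minimal sketch of one possible fix (not part of the posted patch) is a !CONFIG_CMA stub, mirroring the existing stub for __rmqueue_cma_fallback() in mm/page_alloc.c:

    #ifdef CONFIG_CMA
    static bool __if_use_cma_first(struct zone *zone, unsigned int order,
                                   unsigned int alloc_flags)
    {
            /* ... body as posted above ... */
    }
    #else
    /* Without CMA, never steer movable allocations to CMA pageblocks. */
    static inline bool __if_use_cma_first(struct zone *zone, unsigned int order,
                                          unsigned int alloc_flags)
    {
            return false;
    }
    #endif

With the stub in place, the IS_ENABLED(CONFIG_CMA) branch in __rmqueue() still compiles on every config, and the dead call is dropped by the optimizer on !CONFIG_CMA builds.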
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests