Date: Mon, 26 Sep 2016 10:58:50 +0200
From: Michal Hocko
To: Xishi Qiu
Cc: Mel Gorman, Johannes Weiner, Vlastimil Babka, LKML, Linux MM, Yisheng Xie
Subject: Re: [RFC] mm: a question about high-order check in __zone_watermark_ok()
Message-ID: <20160926085850.GB28550@dhcp22.suse.cz>
In-Reply-To: <57E8E0BD.2070603@huawei.com>
References: <57E8E0BD.2070603@huawei.com>

On Mon 26-09-16 16:47:57, Xishi Qiu wrote:
> Commit 97a16fc82a7c5b0cfce95c05dfb9561e306ca1b1
> ("mm, page_alloc: only enforce watermarks for order-0 allocations")
> rewrote the high-order check in __zone_watermark_ok(), but I think it
> also quietly fixed a bug. Please see the following.
>
> Before this patch, the high-order check was:
>
> __zone_watermark_ok()
> 	...
> 	for (o = 0; o < order; o++) {
> 		/* At the next order, this order's pages become unavailable */
> 		free_pages -= z->free_area[o].nr_free << o;
>
> 		/* Require fewer higher order pages to be free */
> 		min >>= 1;
>
> 		if (free_pages <= min)
> 			return false;
> 	}
> 	...
>
> If we have CMA memory and allocate a high-order movable page, this is
> correct.
>
> But if we allocate a high-order unmovable page (e.g. a kernel stack in
> dup_task_struct()) and there are plenty of high-order CMA pages but few
> high-order unmovable pages, the check still returns *true*, yet the
> allocation will eventually *fail*, because we cannot fall back from
> MIGRATE_UNMOVABLE to MIGRATE_CMA, right?

AFAIR the CMA wmark check was always tricky and the above commit has made
the situation at least a bit clearer. Anyway, IIRC

#ifdef CONFIG_CMA
	/* If allocation can't use CMA areas don't use free CMA pages */
	if (!(alloc_flags & ALLOC_CMA))
		free_cma = zone_page_state(z, NR_FREE_CMA_PAGES);
#endif

	if (free_pages - free_cma <= min + z->lowmem_reserve[classzone_idx])
		return false;

should reduce the problem, because a lot of CMA pages should just get us
below the wmark + reserve boundary.
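
Moreover, the high-order part of the check after that commit no longer
counts raw free pages at all; it walks the per-migratetype free lists,
so a request without ALLOC_CMA simply never sees MIGRATE_CMA pages.
Paraphrasing the post-97a16fc8 loop in mm/page_alloc.c from memory (a
sketch, so the details might differ from your tree; the descriptive
comments are mine):

	/* For a high-order request, check that a suitable page is free */
	for (o = order; o < MAX_ORDER; o++) {
		struct free_area *area = &z->free_area[o];
		int mt;

		if (!area->nr_free)
			continue;

		/* Atomic and other "harder" allocations may take any block */
		if (alloc_harder)
			return true;

		/* Regular requests can only use the normal migratetypes */
		for (mt = 0; mt < MIGRATE_PCPTYPES; mt++) {
			if (!list_empty(&area->free_list[mt]))
				return true;
		}

#ifdef CONFIG_CMA
		/* CMA pageblocks only help requests with ALLOC_CMA set */
		if ((alloc_flags & ALLOC_CMA) &&
		    !list_empty(&area->free_list[MIGRATE_CMA]))
			return true;
#endif
	}
	return false;

So an unmovable request whose only high-order pages sit on MIGRATE_CMA
free lists now fails the watermark check instead of being told the
memory is there, which is exactly the case you describe.

-- 
Michal Hocko
SUSE Labs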