From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 03 Jun 2020 15:58:42 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, anshuman.khandual@arm.com, cai@lca.pw, guro@fb.com, js1304@gmail.com, linux-mm@kvack.org, mgorman@techsingularity.net, minchan@kernel.org, mm-commits@vger.kernel.org, riel@surriel.com, torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 037/131] mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations
Message-ID: <20200603225842.xxpO4hJHh%akpm@linux-foundation.org>
In-Reply-To: <20200603155549.e041363450869eaae4c7f05b@linux-foundation.org>
From: Roman Gushchin
Subject: mm,page_alloc,cma: conditionally prefer cma pageblocks for movable allocations

Currently a CMA area is barely used by the page allocator: it is used
only as a fallback for movable allocations, and kswapd tries hard to make
sure that the fallback path isn't taken.

This results in the system evicting memory and pushing data into swap
while lots of CMA memory is still available, despite the fact that
alloc_contig_range is perfectly capable of moving any movable allocations
out of the way of a contiguous allocation.

To make effective use of the CMA area, alter the rules: if the zone has
more free CMA pages than half of its total free pages, allocate from CMA
pageblocks first and fall back to movable pageblocks on failure.

[guro@fb.com: ifdef the cma-specific code]
  Link: http://lkml.kernel.org/r/20200311225832.GA178154@carbon.DHCP.thefacebook.com
Link: http://lkml.kernel.org/r/20200306150102.3e77354b@imladris.surriel.com
Signed-off-by: Roman Gushchin
Signed-off-by: Rik van Riel
Co-developed-by: Rik van Riel
Acked-by: Vlastimil Babka
Acked-by: Minchan Kim
Cc: Qian Cai
Cc: Mel Gorman
Cc: Anshuman Khandual
Cc: Joonsoo Kim
Signed-off-by: Andrew Morton
---

 mm/page_alloc.c |   14 ++++++++++++++
 1 file changed, 14 insertions(+)

--- a/mm/page_alloc.c~mmpage_alloccma-conditionally-prefer-cma-pageblocks-for-movable-allocations
+++ a/mm/page_alloc.c
@@ -2752,6 +2752,20 @@ __rmqueue(struct zone *zone, unsigned in
 {
 	struct page *page;
 
+#ifdef CONFIG_CMA
+	/*
+	 * Balance movable allocations between regular and CMA areas by
+	 * allocating from CMA when over half of the zone's free memory
+	 * is in the CMA area.
+	 */
+	if (migratetype == MIGRATE_MOVABLE &&
+	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
+	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		page = __rmqueue_cma_fallback(zone, order);
+		if (page)
+			return page;
+	}
+#endif
 retry:
 	page = __rmqueue_smallest(zone, order, migratetype);
 	if (unlikely(!page)) {
_
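
For readers who want to see the threshold in isolation, here is a minimal,
hypothetical userspace C sketch of the same rule.  None of the names below
exist in the kernel (struct zone_counts and prefer_cma_first are invented
for this example), and the numbers are made up for illustration; the real
code reads the per-zone vmstat counters via zone_page_state().

/*
 * Hypothetical, userspace-only sketch of the selection rule added above.
 * Not kernel code: struct zone_counts stands in for the per-zone
 * NR_FREE_PAGES / NR_FREE_CMA_PAGES vmstat counters.
 */
#include <stdbool.h>
#include <stdio.h>

struct zone_counts {
	unsigned long free_pages;	/* all free pages in the zone */
	unsigned long free_cma_pages;	/* free pages in CMA pageblocks */
};

/* Prefer CMA once more than half of the zone's free memory is in CMA. */
static bool prefer_cma_first(const struct zone_counts *zc)
{
	return zc->free_cma_pages > zc->free_pages / 2;
}

int main(void)
{
	/* Hypothetical zone: 1000 free pages, 600 of them in CMA. */
	struct zone_counts zc = { .free_pages = 1000, .free_cma_pages = 600 };

	printf("movable allocation tries CMA first: %s\n",
	       prefer_cma_first(&zc) ? "yes" : "no");
	return 0;
}

With 600 of 1000 free pages sitting in CMA pageblocks the check fires,
which corresponds to __rmqueue() trying __rmqueue_cma_fallback() before
the regular MOVABLE free lists in the hunk above; at 500 or fewer free
CMA pages it stays on the normal path.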