From: Anshuman Khandual <anshuman.khandual@arm.com>
To: Sukadev Bhattiprolu, Andrew Morton
Cc: Rik van Riel, Roman Gushchin, Vlastimil Babka, Joonsoo Kim,
 Minchan Kim, Chris Goldsworthy, Georgi Djakov,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm,page_alloc,cma: configurable CMA utilization
Date: Tue, 31 Jan 2023 12:53:25 +0530
Message-ID: <4f088ff9-d88b-e35b-e8b5-712874b2be8c@arm.com>
In-Reply-To: <20230131071052.GB19285@hu-sbhattip-lv.qualcomm.com>
References: <20230131071052.GB19285@hu-sbhattip-lv.qualcomm.com>
On 1/31/23 12:40, Sukadev Bhattiprolu wrote:
>
> Commit 16867664936e ("mm,page_alloc,cma: conditionally prefer cma
> pageblocks for movable allocations") added support to use CMA pages
> when more than 50% of the total free pages in the zone are free CMA
> pages.
>
> However, with multiplatform kernels a single binary is used across
> different targets of varying memory sizes. A low-memory target using
> one such kernel would incur allocation failures even when sufficient
> memory is available in the CMA region. On these targets we would want
> to utilize a higher percentage of the CMA region and reduce the
> allocation failures, even if it means that a subsequent cma_alloc()
> would take longer.
>
> Make the percentage of CMA utilization a configurable parameter to
> allow for such use cases.
>
> Signed-off-by: Sukadev Bhattiprolu
> ---
> Note: There was a mention that making this percentage configurable
> should be a last resort (https://lkml.org/lkml/2020/3/12/751). But,
> as explained above, multiplatform kernels for targets of varying
> memory sizes would need this to be configurable.
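
To make the arithmetic concrete, here is an illustrative user-space
sketch (not part of the patch; the helper just mirrors the semantics
of zone_can_use_cma_pages() added below, and the numbers are made up):

#include <stdbool.h>
#include <stdio.h>

/* ratio = 2 reproduces today's "CMA > 50% of free pages" rule */
static bool zone_can_use_cma_pages(unsigned long cma_free,
				   unsigned long zone_free, int ratio)
{
	return cma_free > zone_free / ratio;
}

int main(void)
{
	/* e.g. 1000 free pages in the zone, 300 of them in CMA */
	unsigned long zone_free = 1000, cma_free = 300;

	/* ratio=2: 300 <= 500, so the CMA fallback is not preferred yet */
	printf("ratio=2: %d\n", zone_can_use_cma_pages(cma_free, zone_free, 2));
	/* ratio=4: 300 > 250, so raising the ratio opens up CMA sooner */
	printf("ratio=4: %d\n", zone_can_use_cma_pages(cma_free, zone_free, 4));
	return 0;
}

In other words, a larger ratio lowers the threshold at which movable
allocations start being served from the CMA area.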
> ---
>  include/linux/mm.h | 1 +
>  kernel/sysctl.c    | 8 ++++++++
>  mm/page_alloc.c    | 18 +++++++++++++++---
>  mm/util.c          | 2 ++
>  4 files changed, 26 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 8f857163ac89..e4e5d508e9eb 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -203,6 +203,7 @@ extern unsigned long sysctl_admin_reserve_kbytes;
>
>  extern int sysctl_overcommit_memory;
>  extern int sysctl_overcommit_ratio;
> +extern int sysctl_cma_utilization_ratio;
>  extern unsigned long sysctl_overcommit_kbytes;
>
>  int overcommit_ratio_handler(struct ctl_table *, int, void *, size_t *,
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index 137d4abe3eda..2dce6a908aa6 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -2445,6 +2445,14 @@ static struct ctl_table vm_table[] = {
>  		.extra2		= SYSCTL_ONE,
>  	},
>  #endif
> +	{
> +		.procname	= "cma_utilization_ratio",
> +		.data		= &sysctl_cma_utilization_ratio,
> +		.maxlen		= sizeof(sysctl_cma_utilization_ratio),
> +		.mode		= 0644,
> +		.proc_handler	= proc_dointvec_minmax,
> +		.extra1		= SYSCTL_ONE,
> +	},
>  	{ }
>  };
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0745aedebb37..b72db3824687 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3071,6 +3071,20 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
>
>  }
>
> +static __always_inline bool zone_can_use_cma_pages(struct zone *zone)
> +{
> +	unsigned long cma_free_pages;
> +	unsigned long zone_free_pages;
> +
> +	cma_free_pages = zone_page_state(zone, NR_FREE_CMA_PAGES);
> +	zone_free_pages = zone_page_state(zone, NR_FREE_PAGES);
> +
> +	if (cma_free_pages > zone_free_pages / sysctl_cma_utilization_ratio)
> +		return true;
> +
> +	return false;
> +}
> +
>  /*
>   * Do the hard work of removing an element from the buddy allocator.
>   * Call me with the zone->lock already held.
> @@ -3087,9 +3101,7 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
>  	 * allocating from CMA when over half of the zone's free memory
>  	 * is in the CMA area.
>  	 */
> -	if (alloc_flags & ALLOC_CMA &&
> -	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
> -	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
> +	if (alloc_flags & ALLOC_CMA && zone_can_use_cma_pages(zone)) {
>  		page = __rmqueue_cma_fallback(zone, order);
>  		if (page)
>  			return page;
> diff --git a/mm/util.c b/mm/util.c
> index b56c92fb910f..4de81f04b249 100644
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -781,6 +781,8 @@ void folio_copy(struct folio *dst, struct folio *src)
>  }
>
>  int sysctl_overcommit_memory __read_mostly = OVERCOMMIT_GUESS;
> +
> +int sysctl_cma_utilization_ratio __read_mostly = 2;

Make '2' here a macro, e.g. CMA_UTILIZATION_DEFAULT? Also, it might be
a good opportunity to comment on why the default value is '2', i.e.
50% (a possible sketch follows below).

>  int sysctl_overcommit_ratio __read_mostly = 50;
>  unsigned long sysctl_overcommit_kbytes __read_mostly;
>  int sysctl_max_map_count __read_mostly = DEFAULT_MAX_MAP_COUNT;
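
For instance, something along these lines (a sketch only; the macro
name is the one suggested above, and the comment wording is just a
placeholder):

/*
 * Require free CMA pages to exceed 1/2 (i.e. 50%) of the zone's free
 * pages before __rmqueue() prefers the CMA fallback, preserving the
 * default behaviour introduced by commit 16867664936e.
 */
#define CMA_UTILIZATION_DEFAULT	2

int sysctl_cma_utilization_ratio __read_mostly = CMA_UTILIZATION_DEFAULT;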