From: Roman Gushchin <roman.gushchin@linux.dev>
To: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
Cc: Andrew Morton, Roman Gushchin, Minchan Kim, Joonsoo Kim,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Zhaoyang Huang, ke.wang@unisoc.com
Subject: Re: [PATCHv3] mm: optimization on page allocation when CMA enabled
Date: Tue, 9 May 2023 09:30:08 -0700
In-Reply-To: <1683355547-10524-1-git-send-email-zhaoyang.huang@unisoc.com>
References: <1683355547-10524-1-git-send-email-zhaoyang.huang@unisoc.com>

On Sat, May 06, 2023 at 02:45:47PM +0800, zhaoyang.huang wrote:
> From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>
> Let us look at the series of scenarios below, with WMARK_LOW=25MB,
> WMARK_MIN=5MB (managed pages 1.9GB). We can see that the current
> 'fixed 1/2 ratio' only starts to use CMA at C, by which point U&R has
> already been pushed below WMARK_LOW (this should be deemed as against
> the current memory policy; that is, UNMOVABLE & RECLAIMABLE should
> either stay around WMARK_LOW when there is no allocation pressure, or
> reclaim via entering the slowpath):
>
>            -- Free_pages
>           |
>           |
>            -- WMARK_LOW
>           |
>            -- Free_CMA
>           |
>           |
>            --
>
> Free_CMA/Free_pages(MB)   A(12/30)   B(12/25)   C(12/20)
> fixed 1/2 ratio              N          N          Y
> this commit                  Y          Y          Y
>
> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> ---
> v2: do proportion check when zone_watermark_ok, update commit message
> v3: update coding style and simplify the logic when zone_watermark_ok

We're getting closer, thank you for working on it.

> ---
> ---
>  mm/page_alloc.c | 46 ++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 40 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 0745aed..7aca49d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3071,6 +3071,41 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>
>  }
>
> +#ifdef CONFIG_CMA
> +/*
> + * GFP_MOVABLE allocation could drain UNMOVABLE & RECLAIMABLE page blocks via
> + * the help of CMA which makes GFP_KERNEL failed. Checking if zone_watermark_ok
> + * again without ALLOC_CMA to see if to use CMA first.
> + */
> +static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)

Can you, please, rename it to use_cma_first()?

> +{
> +	unsigned long watermark;
> +	bool cma_first = false;
> +
> +	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
> +	/* check if GFP_MOVABLE pass previous zone_watermark_ok via the help of CMA */
> +	if (!zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA)))

Please, add {} for both "if" and "else" parts. Also, please, invert the
order to make it easier to follow:

	if (zone_watermark_ok(...)) {
		...
	} else {
		...
	}
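Putting the rename and the inversion together, the helper might end up
looking something like this (an untested sketch that only rearranges the
quoted code; the comment wording is a placeholder):

	static bool use_cma_first(struct zone *zone, unsigned int order,
				  unsigned int alloc_flags)
	{
		unsigned long watermark;
		bool cma_first = false;

		watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
		/* Would the watermark be met even without CMA pages? */
		if (zone_watermark_ok(zone, order, watermark, 0,
				      alloc_flags & ~ALLOC_CMA)) {
			/*
			 * Watermarks are satisfied without CMA: balance
			 * movable allocations by using CMA only when more
			 * than half of the zone's free memory is in the
			 * CMA area.
			 */
			cma_first = zone_page_state(zone, NR_FREE_CMA_PAGES) >
				    zone_page_state(zone, NR_FREE_PAGES) / 2;
		} else {
			/*
			 * The watermark check fails without CMA pages, so
			 * UNMOVABLE & RECLAIMABLE blocks are running low:
			 * use CMA first to keep them around the watermark.
			 */
			cma_first = true;
		}
		return cma_first;
	}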
> +		/*
> +		 * watermark failed means UNMOVABLE & RECLAIMBLE is not enough
> +		 * now, we should use cma first to keep them stay around the
> +		 * corresponding watermark
> +		 */
> +		cma_first = true;
> +	else
> +		/*
> +		 * remain previous fixed 1/2 logic when watermark ok as we have

Nobody knows what was previously here once your change is applied.
Please, do not refer to the previous state, add a comment about the
current state. You can probably (partially) move the comment from
__rmqueue().

> +		 * above protection now
> +		 */
> +		cma_first = (zone_page_state(zone, NR_FREE_CMA_PAGES) >
> +			zone_page_state(zone, NR_FREE_PAGES) / 2);
> +	return cma_first;
> +}
> +#else
> +static bool __if_use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
> +{
> +	return false;
> +}
> +#endif
>  /*
>   * Do the hard work of removing an element from the buddy allocator.
>   * Call me with the zone->lock already held.
> @@ -3084,13 +3119,12 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  	if (IS_ENABLED(CONFIG_CMA)) {
>  		/*
>  		 * Balance movable allocations between regular and CMA areas by
> -		 * allocating from CMA when over half of the zone's free memory
> -		 * is in the CMA area.
> +		 * allocating from CMA base on judging zone_watermark_ok again
> +		 * to see if the latest check got pass via the help of CMA
>  		 */
> -		if (alloc_flags & ALLOC_CMA &&
> -		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
> -		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
> -			page = __rmqueue_cma_fallback(zone, order);
> +		if (migratetype == MIGRATE_MOVABLE) {
> +			page = __if_use_cma_first(zone, order, alloc_flags) ?
> +				__rmqueue_cma_fallback(zone, order) : NULL;

Can you put it like

	if (migratetype == MIGRATE_MOVABLE && use_cma_first(...)) {
		page = ...
		if (page)
			return page;
	}

to avoid using a ternary operator without a good reason?
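For reference, the __rmqueue() hunk might then read something like this
(again an untested sketch along the lines of the suggestion above):

	if (IS_ENABLED(CONFIG_CMA)) {
		/*
		 * Let use_cma_first() decide whether a movable allocation
		 * should be served from the CMA area before falling back
		 * to the regular per-migratetype free lists.
		 */
		if (migratetype == MIGRATE_MOVABLE &&
		    use_cma_first(zone, order, alloc_flags)) {
			page = __rmqueue_cma_fallback(zone, order);
			if (page)
				return page;
		}
	}

Thanks!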