From mboxrd@z Thu Jan 1 00:00:00 1970
From: Barry Song <21cnbao@gmail.com>
Date: Sun, 24 Apr 2022 16:34:01 +1200
Subject: Re: [PATCH] mm/page_alloc: give priority to free cma-pages from pcplist to buddy
To: 李培锋(wink), Christoph Hellwig, Marek Szyprowski, robin.murphy@arm.com
Cc: Andrew Morton, peifeng55@gmail.com, Linux-MM, LKML, 张诗明(Simon Zhang)
References: <20220424032734.1542-1-lipeifeng@oppo.com>
In-Reply-To: <20220424032734.1542-1-lipeifeng@oppo.com>
Content-Type: text/plain; charset="UTF-8"
On Sun, Apr 24, 2022 at 3:28 PM <lipeifeng@oppo.com> wrote:
>
> From: lipeifeng <lipeifeng@oppo.com>
>
> CMA pages are used as a fallback for movable allocations in many
> scenarios. When CMA pages are freed to the pcplist, give priority to
> freeing them from the pcplist back to buddy, so that they are not
> handed out again as movable pages while enough free movable pages
> remain. Keeping more CMA pages in buddy decreases page migration
> when cma_alloc() runs.
>
> Signed-off-by: lipeifeng <lipeifeng@oppo.com>
> ---

+ Christoph, Marek, Robin as it is cma-related.

>  mm/page_alloc.c | 28 ++++++++++++++++++++++++----
>  1 file changed, 24 insertions(+), 4 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3589feb..69369ed 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3372,7 +3372,7 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone)
>  }
>
>  static void free_unref_page_commit(struct page *page, unsigned long pfn,
> -				   int migratetype, unsigned int order)
> +				   int migratetype, unsigned int order, bool fast_free)
>  {
>  	struct zone *zone = page_zone(page);
>  	struct per_cpu_pages *pcp;
> @@ -3382,7 +3382,10 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn,
>  	__count_vm_event(PGFREE);
>  	pcp = this_cpu_ptr(zone->per_cpu_pageset);
>  	pindex = order_to_pindex(migratetype, order);
> -	list_add(&page->lru, &pcp->lists[pindex]);
> +	if (fast_free)
> +		list_add_tail(&page->lru, &pcp->lists[pindex]);
> +	else
> +		list_add(&page->lru, &pcp->lists[pindex]);

Ok. This is interesting. We used to have a separate cma pcp list, but
now MIGRATE_CMA is an outsider, so cma pages are placed in the
MIGRATE_MOVABLE list:

enum migratetype {
	MIGRATE_UNMOVABLE,
	MIGRATE_MOVABLE,
	MIGRATE_RECLAIMABLE,
	MIGRATE_PCPTYPES,	/* the number of types on the pcp lists */
	MIGRATE_HIGHATOMIC = MIGRATE_PCPTYPES,
#ifdef CONFIG_CMA
	...
	MIGRATE_CMA,
#endif
#ifdef CONFIG_MEMORY_ISOLATION
	MIGRATE_ISOLATE,	/* can't allocate from here */
#endif
	MIGRATE_TYPES
};

#define NR_PCP_LISTS (MIGRATE_PCPTYPES * (PAGE_ALLOC_COSTLY_ORDER + 1 + NR_PCP_THP))

>  	pcp->count += 1 << order;
>  	high = nr_pcp_high(pcp, zone);
>  	if (pcp->count >= high) {
> @@ -3400,6 +3403,7 @@ void free_unref_page(struct page *page, unsigned int order)
>  	unsigned long flags;
>  	unsigned long pfn = page_to_pfn(page);
>  	int migratetype;
> +	bool fast_free = false;
>
>  	if (!free_unref_page_prepare(page, pfn, order))
>  		return;
> @@ -3419,9 +3423,15 @@ void free_unref_page(struct page *page, unsigned int order)
>  		}
>  		migratetype = MIGRATE_MOVABLE;
>  	}
> +	/*
> +	 * Give priority to free cma-pages to buddy in order to
> +	 * decrease pages migration when cma_alloc.
> +	 */
> +	if (migratetype == MIGRATE_CMA)
> +		fast_free = true;
>
>  	local_lock_irqsave(&pagesets.lock, flags);
> -	free_unref_page_commit(page, pfn, migratetype, order);
> +	free_unref_page_commit(page, pfn, migratetype, order, fast_free);
>  	local_unlock_irqrestore(&pagesets.lock, flags);
>  }
>
> @@ -3459,6 +3469,8 @@ void free_unref_page_list(struct list_head *list)
>
>  	local_lock_irqsave(&pagesets.lock, flags);
>  	list_for_each_entry_safe(page, next, list, lru) {
> +		bool fast_free = false;
> +
>  		pfn = page_private(page);
>  		set_page_private(page, 0);
>
> @@ -3467,11 +3479,19 @@ void free_unref_page_list(struct list_head *list)
>  		 * to the MIGRATE_MOVABLE pcp list.
>  		 */
>  		migratetype = get_pcppage_migratetype(page);
> +
> +		/*
> +		 * Give priority to free cma-pages to buddy in order to
> +		 * decrease pages migration when cma_alloc.
> +		 */
> +		if (migratetype == MIGRATE_CMA)
> +			fast_free = true;
> +
>  		if (unlikely(migratetype >= MIGRATE_PCPTYPES))
>  			migratetype = MIGRATE_MOVABLE;
>
>  		trace_mm_page_free_batched(page);
> -		free_unref_page_commit(page, pfn, migratetype, 0);
> +		free_unref_page_commit(page, pfn, migratetype, 0, fast_free);

I'd call get_pcppage_migratetype() again inside free_unref_page_commit()
rather than adding a parameter that has to be threaded through a couple
of functions.

>
>  		/*
>  		 * Guard against excessive IRQ disabled times when we get
> --
> 2.7.4
>

Thanks
Barry
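[Editorial note: the patch hinges on a head/tail asymmetry of the pcp
lists: allocations take pages from the head of a pcp list, while the
over-high drain to buddy takes them from the tail (as free_pcppages_bulk()
did around this kernel version). So list_add_tail() makes CMA pages the
first to leave for buddy and the last to be reused. The sketch below
models that with a minimal userspace doubly linked list; toy_page,
toy_free_to_pcp() and friends are invented names, not kernel code.]

```c
#include <assert.h>
#include <stddef.h>

/* Minimal userspace stand-ins for the <linux/list.h> helpers. */
struct list_head { struct list_head *next, *prev; };

static void init_list(struct list_head *h) { h->next = h->prev = h; }

static void __list_add(struct list_head *n, struct list_head *prev,
		       struct list_head *next)
{
	next->prev = n;
	n->next = next;
	n->prev = prev;
	prev->next = n;
}

/* Head insert: this page is the first one handed back out. */
static void list_add(struct list_head *n, struct list_head *h)
{
	__list_add(n, h, h->next);
}

/* Tail insert: this page is the first one drained to buddy. */
static void list_add_tail(struct list_head *n, struct list_head *h)
{
	__list_add(n, h->prev, h);
}

static void list_del(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
}

struct toy_page {
	int id;
	int is_cma;
	struct list_head lru;
};

#define lru_to_page(l) \
	((struct toy_page *)((char *)(l) - offsetof(struct toy_page, lru)))

/* Free to the pcp list the way the patch does: CMA pages to the tail. */
static void toy_free_to_pcp(struct toy_page *p, struct list_head *pcp)
{
	if (p->is_cma)
		list_add_tail(&p->lru, pcp);
	else
		list_add(&p->lru, pcp);
}

/* Allocation side: take from the head, like __rmqueue_pcplist(). */
static struct toy_page *toy_alloc_from_pcp(struct list_head *pcp)
{
	struct list_head *n = pcp->next;

	list_del(n);
	return lru_to_page(n);
}

/* Drain side: take from the tail, like free_pcppages_bulk(). */
static struct toy_page *toy_drain_to_buddy(struct list_head *pcp)
{
	struct list_head *n = pcp->prev;

	list_del(n);
	return lru_to_page(n);
}
```

Freeing movable pages 1 and 3 and CMA pages 2 and 4, in that order,
leaves the list as 3, 1, 2, 4 from the head: the next allocation reuses
movable page 3, while the next drain sends CMA page 4 to buddy, which is
the behaviour the commit message asks for.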