From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v4 1/3] mm/free_pcppages_bulk: update pcp->count inside
From: Vlastimil Babka
Date: Mon, 12 Mar 2018 14:22:28 +0100
References: <20180301062845.26038-1-aaron.lu@intel.com>
 <20180301062845.26038-2-aaron.lu@intel.com>
In-Reply-To: <20180301062845.26038-2-aaron.lu@intel.com>
To: Aaron Lu, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, Huang Ying, Dave Hansen, Kemi Wang, Tim Chen,
 Andi Kleen, Michal Hocko, Mel Gorman, Matthew Wilcox, David Rientjes

On 03/01/2018 07:28 AM, Aaron Lu wrote:
> Matthew Wilcox found that all callers of free_pcppages_bulk() currently
> update pcp->count immediately after so it's natural to do it inside
> free_pcppages_bulk().
>
> No functionality or performance change is expected from this patch.

Well, it's N decrements instead of one decrement by N / assignment of
zero. But I assume the difference is negligible anyway, right?

> Suggested-by: Matthew Wilcox
> Signed-off-by: Aaron Lu

Acked-by: Vlastimil Babka

> ---
>  mm/page_alloc.c | 10 +++-------
>  1 file changed, 3 insertions(+), 7 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index cb416723538f..faa33eac1635 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1148,6 +1148,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  		page = list_last_entry(list, struct page, lru);
>  		/* must delete as __free_one_page list manipulates */
>  		list_del(&page->lru);
> +		pcp->count--;
>
>  		mt = get_pcppage_migratetype(page);
>  		/* MIGRATE_ISOLATE page should not go to pcplists */
> @@ -2416,10 +2417,8 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
>  	local_irq_save(flags);
>  	batch = READ_ONCE(pcp->batch);
>  	to_drain = min(pcp->count, batch);
> -	if (to_drain > 0) {
> +	if (to_drain > 0)
>  		free_pcppages_bulk(zone, to_drain, pcp);
> -		pcp->count -= to_drain;
> -	}
>  	local_irq_restore(flags);
>  }
>  #endif
> @@ -2441,10 +2440,8 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
>  	pset = per_cpu_ptr(zone->pageset, cpu);
>
>  	pcp = &pset->pcp;
> -	if (pcp->count) {
> +	if (pcp->count)
>  		free_pcppages_bulk(zone, pcp->count, pcp);
> -		pcp->count = 0;
> -	}
>  	local_irq_restore(flags);
>  }
>
> @@ -2668,7 +2665,6 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
>  	if (pcp->count >= pcp->high) {
>  		unsigned long batch = READ_ONCE(pcp->batch);
>  		free_pcppages_bulk(zone, batch, pcp);
> -		pcp->count -= batch;
>  	}
>  }
>
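
[Editor's note: the following is an illustrative, simplified user-space C
sketch of the two counting strategies Vlastimil contrasts above: decrementing
pcp->count once per freed page inside the loop (as the patch does) versus
subtracting the whole batch once after the call (as the old callers did).
The struct and function names are hypothetical stand-ins, not the kernel's
per_cpu_pages or free_pcppages_bulk().]

    #include <stdio.h>

    /* Simplified stand-in for the kernel's per-cpu page counter. */
    struct pcp_stub {
            int count;
    };

    /* Variant A: decrement once per freed page, inside the loop. */
    static void free_bulk_decrement(struct pcp_stub *pcp, int to_free)
    {
            for (int i = 0; i < to_free; i++) {
                    /* ... free one page ... */
                    pcp->count--;
            }
    }

    /* Variant B: one bulk subtraction after the loop. */
    static void free_bulk_subtract(struct pcp_stub *pcp, int to_free)
    {
            for (int i = 0; i < to_free; i++) {
                    /* ... free one page ... */
            }
            pcp->count -= to_free;
    }

    int main(void)
    {
            struct pcp_stub a = { .count = 32 }, b = { .count = 32 };

            free_bulk_decrement(&a, 8);
            free_bulk_subtract(&b, 8);

            /*
             * Both end at the same count; the only difference is N
             * decrements versus one subtraction, which is the cost the
             * review expects to be negligible.
             */
            printf("%d %d\n", a.count, b.count);
            return 0;
    }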