From: Muchun Song
Date: Fri, 4 Dec 2020 23:45:45 +0800
Subject: Re: [External] Re: [PATCH v2] mm/page_alloc: speeding up the iteration of max_order
To: Vlastimil Babka
Cc: Andrew Morton, Linux Memory Management List, LKML

On Fri, Dec 4, 2020 at 11:28 PM Vlastimil Babka wrote:
>
> On 12/4/20 1:56 PM, Muchun Song wrote:
> > When we free a page whose order is very close to MAX_ORDER and greater
> > than pageblock_order, it wastes some CPU cycles to increase max_order
> > to MAX_ORDER one by one and to check the pageblock migratetype of that
> > page repeatedly, especially when MAX_ORDER is much larger than
> > pageblock_order.
>
> I would add:
>
> We also should not be checking the migratetype of the buddy when
> "order == MAX_ORDER - 1", as the buddy pfn may be invalid, so adjust the
> condition. With the new check, we don't need the max_order check anymore,
> so we replace it.
>
> Also adjust the max_order initialization so that it's lower by one than
> previously, which hopefully makes the code clearer.

Got it. Thanks.

> > Signed-off-by: Muchun Song
>
> Fixes: d9dddbf55667 ("mm/page_alloc: prevent merging between isolated
> and other pageblocks")
> Acked-by: Vlastimil Babka
>
Thanks!

> > ---
> > Changes in v2:
> >  - Rework the code as suggested by Vlastimil. Thanks.
> >
> >  mm/page_alloc.c | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index f91df593bf71..56e603eea1dd 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1002,7 +1002,7 @@ static inline void __free_one_page(struct page *page,
> >  	struct page *buddy;
> >  	bool to_tail;
> >
> > -	max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
> > +	max_order = min_t(unsigned int, MAX_ORDER - 1, pageblock_order);
> >
> >  	VM_BUG_ON(!zone_is_initialized(zone));
> >  	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
> > @@ -1015,7 +1015,7 @@ static inline void __free_one_page(struct page *page,
> >  	VM_BUG_ON_PAGE(bad_range(zone, page), page);
> >
> >  continue_merging:
> > -	while (order < max_order - 1) {
> > +	while (order < max_order) {
> >  		if (compaction_capture(capc, page, order, migratetype)) {
> >  			__mod_zone_freepage_state(zone, -(1 << order),
> >  						  migratetype);
> > @@ -1041,7 +1041,7 @@ static inline void __free_one_page(struct page *page,
> >  		pfn = combined_pfn;
> >  		order++;
> >  	}
> > -	if (max_order < MAX_ORDER) {
> > +	if (order < MAX_ORDER - 1) {
> >  		/* If we are here, it means order is >= pageblock_order.
> >  		 * We want to prevent merge between freepages on isolate
> >  		 * pageblock and normal pageblock. Without this, pageblock
> > @@ -1062,7 +1062,7 @@ static inline void __free_one_page(struct page *page,
> >  					is_migrate_isolate(buddy_mt)))
> >  				goto done_merging;
> >  		}
> > -		max_order++;
> > +		max_order = order + 1;
> >  		goto continue_merging;
> >  	}
> >

--
Yours,
Muchun