Date: Fri, 11 Jun 2021 17:00:45 -0700
From: Andrew Morton
To: chengkaitao
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, smcdef@gmail.com, Joonsoo Kim
Subject: Re: [PATCH] mm: delete duplicate order checking, when stealing whole pageblock
Message-Id: <20210611170045.b79a238fa3fc4bc9e4cd1140@linux-foundation.org>
In-Reply-To: <20210611063834.11871-1-chengkaitao@didiglobal.com>
References: <20210611063834.11871-1-chengkaitao@didiglobal.com>

On Fri, 11 Jun 2021 14:38:34 +0800 chengkaitao wrote:

> From: chengkaitao
>
> 1. We already have (order >= pageblock_order / 2) here, so we don't need
>    (order >= pageblock_order)
> 2. Make function can_steal_fallback inline
>
> ...
>
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2619,18 +2619,8 @@ static void change_pageblock_range(struct page *pageblock_page,
>   * is worse than movable allocations stealing from unmovable and reclaimable
>   * pageblocks.
>   */
> -static bool can_steal_fallback(unsigned int order, int start_mt)
> +static inline bool can_steal_fallback(unsigned int order, int start_mt)
>  {
> -	/*
> -	 * Leaving this order check is intended, although there is
> -	 * relaxed order check in next check. The reason is that
> -	 * we can actually steal whole pageblock if this condition met,
> -	 * but, below check doesn't guarantee it and that is just heuristic
> -	 * so could be changed anytime.
> -	 */
> -	if (order >= pageblock_order)
> -		return true;
> -
> 	if (order >= pageblock_order / 2 ||
> 			start_mt == MIGRATE_RECLAIMABLE ||
> 			start_mt == MIGRATE_UNMOVABLE ||

Well, that redundant check was put there deliberately, as the comment
explains.  The reasoning is perhaps a little dubious, but it seems that
the compiler has optimized away the redundant check anyway (your patch
doesn't alter code size).